Colin Macdonald, Department of Mathematics, The University of British Columbia
RIDC (revisionist integral deferred correction) methods are a class of time integrators well-suited to parallel computing. RIDC methods can achieve high-order accuracy in wall-clock time comparable to forward Euler. The methods use a predictor and multiple corrector steps. Each corrector is lagged by one time step; the predictor and each of the correctors can then be computed in parallel. This presentation introduces RIDC methods and demonstrates their effectiveness on some test problems.
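To make the predictor/corrector structure concrete, here is a minimal serial sketch (our own illustration in Python, not the speaker's code) of one predictor plus one correction sweep, i.e. classical integral deferred correction with trapezoidal quadrature. In RIDC proper, the corrector loop runs only one step behind the predictor loop, so both can execute simultaneously on separate cores.

    import numpy as np

    def idc_sketch(f, y0, t0, t1, n_steps):
        """Forward-Euler predictor plus one deferred-correction sweep.

        Serial sketch: in RIDC the two loops are interleaved so that the
        corrector lags the predictor by a single time step.
        """
        h = (t1 - t0) / n_steps
        t = t0 + h * np.arange(n_steps + 1)
        pred = np.empty(n_steps + 1); pred[0] = y0
        corr = np.empty(n_steps + 1); corr[0] = y0

        # Predictor: forward Euler (first-order accurate).
        for n in range(n_steps):
            pred[n + 1] = pred[n] + h * f(t[n], pred[n])

        # Corrector: forward Euler on the error equation, plus trapezoidal
        # quadrature of f along the predicted solution; one sweep lifts the
        # overall order from 1 to 2.
        for n in range(n_steps):
            quad = 0.5 * h * (f(t[n], pred[n]) + f(t[n + 1], pred[n + 1]))
            corr[n + 1] = (corr[n]
                           + h * (f(t[n], corr[n]) - f(t[n], pred[n]))
                           + quad)
        return t, pred, corr

    # Usage: y' = -y, y(0) = 1; the corrected error is roughly h times smaller.
    t, pred, corr = idc_sketch(lambda t, y: -y, 1.0, 0.0, 1.0, 50)
    print(abs(pred[-1] - np.exp(-1)), abs(corr[-1] - np.exp(-1)))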
Simone Brugiapaglia, Department of Mathematics, Simon Fraser University
We present the CORSING (COmpRessed SolvING) method for the numerical approximation of PDEs. Establishing an analogy between the bilinear form associated with the weak formulation of a PDE and the signal acquisition process, CORSING combines the classical Petrov-Galerkin method with compressed sensing.
David Gleich, Department of Computer Science, Purdue University
Higher-order methods that use multiway and multilinear correlations are necessary to identify important structures in complex data from biology, neuroscience, ecology, systems engineering, and sociology. We will study in depth our recent generalization of spectral clustering to higher-order structures. This will include a generalization of the Cheeger inequality (a concise statement about approximation quality) to higher-order structures in networks, including network motifs.
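As a deliberately minimal illustration of the triangle-motif case, here is a numpy sketch of this pipeline (our own code, not the speaker's): weight each edge by the number of triangles it lies in, then apply a standard spectral sweep cut to the motif-weighted graph. A Cheeger-type guarantee bounds the motif conductance of the set found this way.

    import numpy as np

    def motif_sweep_cut(A):
        """Triangle-motif spectral partitioning sketch.

        A: symmetric 0/1 adjacency matrix (no self-loops). Returns the
        vertex set found by a sweep cut on the motif-weighted graph and
        its motif conductance.
        """
        W = (A @ A) * A                   # W[i,j] = #triangles through edge (i,j)
        d = W.sum(axis=1).astype(float)
        keep = d > 0                      # drop vertices lying in no triangle
        W, d = W[np.ix_(keep, keep)], d[keep]
        Dinv = np.diag(1.0 / np.sqrt(d))
        L = np.eye(len(d)) - Dinv @ W @ Dinv     # normalized motif Laplacian
        vals, vecs = np.linalg.eigh(L)
        fiedler = Dinv @ vecs[:, 1]              # second eigenvector, rescaled
        order = np.argsort(fiedler)
        best, best_phi, total = None, np.inf, d.sum()
        for k in range(1, len(order)):           # sweep over prefix cuts
            S, comp = order[:k], order[k:]
            cut = W[np.ix_(S, comp)].sum()
            phi = cut / min(d[S].sum(), total - d[S].sum())
            if phi < best_phi:
                best, best_phi = S, phi
        return np.flatnonzero(keep)[best], best_phi

    # Usage: two triangles joined by one edge; that edge lies in no triangle,
    # so the motif cut cleanly separates the triangles (conductance 0).
    A = np.zeros((6, 6), dtype=int)
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1
    print(motif_sweep_cut(A))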
Martin Oberlack, Chair of Fluid Dynamics, TU Darmstadt
The development of the new discontinuous Galerkin (DG) framework BoSSS (bounded support spectral solver) started in 2007. Solvers for incompressible as well as compressible single- and multi-phase flows have been implemented.
The code features a modern object-oriented design and is of course MPI-parallel. Within the development cycle, we use unit testing to ensure software quality; this covers a wide range of tests, starting from very simple ones.
We will present JuMP, a modeling language for mathematical optimization embedded in the Julia programming language. JuMP provides a natural, algebraic syntax for expressing a wide range of optimization problems, from linear programming to derivative-based nonconvex optimization. We will walk through Jupyter notebooks to demonstrate basic modeling examples. We will not assume a strong background in optimization. Participants are encouraged but not required to bring laptops.
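JuMP itself is embedded in Julia; purely as a rough analogue of the algebraic modeling style in question, the following Python snippet expresses a small linear program in cvxpy (our substitution for illustration, not part of the talk):

    import cvxpy as cp

    # A small linear program in algebraic form:
    #   max 5x + 4y   s.t.  6x + 4y <= 24,  x + 2y <= 6,  x, y >= 0
    x, y = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
    problem = cp.Problem(cp.Maximize(5 * x + 4 * y),
                         [6 * x + 4 * y <= 24, x + 2 * y <= 6])
    problem.solve()
    print(problem.value, x.value, y.value)   # optimum 21 at (3, 1.5)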
Mixed-integer convex optimization problems are convex problems with the additional (non-convex) constraint that some variables may take only integer values. Despite the past decades' advances in algorithms and technology for both mixed-integer *linear* and *continuous, convex* optimization, mixed-integer convex optimization problems have remained relatively more challenging and less widely used in practice.
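In symbols, this is the class (standard formulation, our notation)

\[
  \min_{x \in \mathbb{R}^n} \; f(x)
  \quad \text{s.t.} \quad g_i(x) \le 0, \;\; i = 1, \dots, m,
  \qquad x_j \in \mathbb{Z} \;\; \text{for } j \in I,
\]

with f and each g_i convex: dropping the integrality constraints leaves a continuous convex problem, while taking f and the g_i affine recovers mixed-integer linear programming.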
Solving Maxwell's equations for earth-science applications requires the discretization of large domains with a mesh fine enough to capture local conductivity variations. Multiscale methods are discretization techniques that allow the use of a coarse mesh while still attaining accuracy that would otherwise require a much finer mesh. However, when considering the multiscale solution of vector equations, basic operator properties are not preserved. In this talk we will show how to extend multiscale methods to vector quantities and demonstrate their use for Maxwell's equations.
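The talk addresses the vector-valued case; for intuition only, here is the standard scalar 1D analogue (our own illustration, not the speaker's method), where the effective coarse-grid coefficient for -(a(x)u')' = f is the harmonic, not arithmetic, average of the oscillatory fine-scale coefficient:

    import numpy as np

    def solve_fd(a_faces, h):
        """Solve -(a u')' = 1, u(0) = u(1) = 0, on a uniform grid; a_faces
        holds the coefficient at the cell faces (midpoints between nodes)."""
        main = (a_faces[:-1] + a_faces[1:]) / h**2
        off = -a_faces[1:-1] / h**2
        T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        return np.linalg.solve(T, np.ones(len(main)))

    # Fine problem: coefficient oscillating on the scale eps = 1/64.
    N, Nc, eps = 1024, 16, 1.0 / 64
    xf = (np.arange(N) + 0.5) / N
    a = 2.0 + 1.9 * np.sin(2 * np.pi * xf / eps)
    u_fine = solve_fd(a, 1.0 / N)

    # Coarse coefficient per coarse cell: harmonic vs arithmetic average.
    a_harm = 1.0 / (1.0 / a).reshape(Nc, -1).mean(axis=1)
    a_arith = a.reshape(Nc, -1).mean(axis=1)
    u_harm = solve_fd(a_harm, 1.0 / Nc)
    u_arith = solve_fd(a_arith, 1.0 / Nc)

    # Compare at the coarse nodes: the harmonic (upscaled) coefficient
    # reproduces the fine-scale solution; the arithmetic one does not.
    u_ref = u_fine[N // Nc - 1 :: N // Nc]
    print(np.abs(u_harm - u_ref).max(), np.abs(u_arith - u_ref).max())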
The Cahn-Hilliard equation models the motion of interfaces between several phases. The underlying energy functional includes a potential, for which several types have been proposed in the literature. We consider smooth and nonsmooth potentials, with a focus on the latter. In the nonsmooth case, we apply a function-space-based algorithm which combines a Moreau-Yosida regularization technique with a semismooth Newton method. We apply classical finite element methods to discretize the problems in space.
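For reference, the energy functional in question is the Ginzburg-Landau energy (standard notation, added here for context):

\[
  \mathcal{E}(u) = \int_\Omega \frac{\varepsilon}{2}\,|\nabla u|^2 + \psi(u)\,dx,
\]

where the smooth choice is the quartic potential \psi(u) = \tfrac{1}{4}(1-u^2)^2 and the nonsmooth choice is the obstacle potential \psi(u) = \tfrac{1}{2}(1-u^2) + I_{[-1,1]}(u); it is the indicator-function part I_{[-1,1]} that the Moreau-Yosida regularization smooths, making a semismooth Newton method applicable.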
Many data analysis applications require the solution of optimization problems involving a sum of a large number of functions. We consider the problem of minimizing a sum of n functions over a convex constraint set. Algorithms that carefully sub-sample to reduce n can improve computational efficiency while maintaining the original convergence properties. For second-order methods, we first consider a general class of problems and give quantitative convergence results for variants of Newton's method in which the Hessian or the gradient is uniformly sub-sampled.
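As a generic instance of Hessian sub-sampling (a sketch under our own assumptions, not necessarily the speakers' exact variant), consider L2-regularized logistic regression: each iteration uses the full gradient but builds the Hessian from a uniform random subset of the n terms.

    import numpy as np

    def subsampled_newton(X, y, lam=1e-2, batch=200, iters=20, seed=0):
        """Newton's method for L2-regularized logistic regression with a
        uniformly sub-sampled Hessian (full gradient, sampled curvature)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X @ w))         # sigmoid on all n points
            grad = X.T @ (p - y) / n + lam * w       # full gradient
            S = rng.choice(n, size=min(batch, n), replace=False)
            pS = p[S]                                # curvature from a sample
            H = (X[S].T * (pS * (1 - pS))) @ X[S] / len(S) + lam * np.eye(d)
            w -= np.linalg.solve(H, grad)            # sub-sampled Newton step
        return w

    # Usage on synthetic data: n = 10_000 samples, Hessians from only 200.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((10_000, 5))
    w_true = rng.standard_normal(5)
    y = (rng.random(10_000) < 1 / (1 + np.exp(-X @ w_true))).astype(float)
    print(np.round(subsampled_newton(X, y), 2))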
Andy Wathen, Mathematical Institute, Oxford University
Descriptive convergence estimates or bounds for Krylov subspace iterative methods for nonsymmetric matrix systems are keenly desired but remain elusive. In the case of symmetric (self-adjoint) matrices, bounds based on eigenvalues can be usefully descriptive of observed convergence; an important consequence is that there are rigorous criteria for what constitutes a good preconditioner for symmetric matrices.
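A representative instance of such an eigenvalue-based bound: for the conjugate gradient method applied to a symmetric positive definite system Ax = b,

\[
  \|x - x_k\|_A \le 2 \left( \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1} \right)^{k} \|x - x_0\|_A,
  \qquad \kappa = \lambda_{\max}(A)/\lambda_{\min}(A),
\]

so a preconditioner for a symmetric problem is provably effective exactly when it shrinks or clusters the spectrum; no comparably rigorous criterion is known in the nonsymmetric case.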