Past SCAIM Seminars

Tue, 2016-11-29 12:30 - 14:00
Simone Brugiapaglia, Department of Mathematics, Simon Fraser University
We present the CORSING (COmpRessed SolvING) method for the numerical approximation of PDEs. Establishing an analogy between the bilinear form associated with the weak formulation of a PDE and the signal acquisition process, CORSING combines the classical Petrov-Galerkin method with compressed sensing.
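In schematic form (a sketch inferred from the abstract, with notation assumed here rather than taken from the talk): the trial space is kept large while only a few test functions are used, and the resulting underdetermined system is solved by a standard sparse recovery step (e.g., l1-minimization):

    \text{find } u = \sum_{j=1}^{N} u_j \varphi_j \ \text{ such that } \ a(u, \psi_i) = f(\psi_i), \quad i = 1, \dots, m, \quad m \ll N,
    \text{then recover } \hat{u} \in \arg\min_u \|u\|_1 \ \text{ subject to } \ A u = g, \quad A_{ij} = a(\varphi_j, \psi_i), \ g_i = f(\psi_i).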
Fri, 2016-11-25 12:30 - 14:00
David Gleich, Department of Computer Science, Purdue University
Higher-order methods that use multiway and multilinear correlations are necessary to identify important structures in complex data from biology, neuroscience, ecology, systems engineering, and sociology. We will study our recent generalization of spectral clustering to higher-order structures in depth. This will include a generalization of the Cheeger inequality (a concise statement about the approximation quality) to higher-order structures in networks including network motifs.
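As a sketch of the kind of statement involved (notation assumed here, following the usual conductance setup): for a motif M one defines the motif conductance of a vertex set S, and the higher-order Cheeger inequality guarantees that a spectral sweep returns a set within a square-root factor of the optimal motif conductance:

    \phi_M(S) = \frac{\mathrm{cut}_M(S)}{\min(\mathrm{vol}_M(S), \mathrm{vol}_M(\bar{S}))}, \qquad \phi_M(S_{\mathrm{sweep}}) \le C \, \sqrt{\min_S \phi_M(S)},

for a modest constant C.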
Tue, 2016-11-15 12:30 - 14:00
Martin Oberlack, Chair of Fluid Dynamics, TU Darmstadt
We present the development of the new discontinuous Galerkin (DG) framework BoSSS (bounded support spectral solver), which started in 2007. Solvers for incompressible as well as compressible single- and multi-phase flows have been implemented. The code features a modern object-oriented design and is MPI-parallel. Within the development cycle, we use unit testing to ensure software quality; this covers a wide range of tests, starting from very simple ones.
Wed, 2016-11-02 12:30 - 14:00
Miles Lubin, Operations Research MIT
We will present JuMP, a modeling language for mathematical optimization embedded in the Julia programming language. JuMP provides a natural, algebraic syntax for expressing a wide range of optimization problems, from linear programming to derivative-based nonconvex optimization. We will walk through Jupyter notebooks to demonstrate basic modeling examples. We will not assume a strong background in optimization. Participants are encouraged but not required to bring laptops.
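By way of a concrete taste, here is a minimal JuMP model of the kind walked through in such notebooks. This is a sketch in current JuMP syntax (which postdates the 2016 talk), using the open-source HiGHS solver as an assumed backend:

    using JuMP, HiGHS

    model = Model(HiGHS.Optimizer)
    @variable(model, 0 <= x <= 4)         # decision variables with bounds
    @variable(model, y >= 0)
    @constraint(model, 2x + y <= 10)      # constraints in natural algebraic syntax
    @objective(model, Max, 5x + 3y)       # linear objective
    optimize!(model)
    println("x = ", value(x), ", y = ", value(y))

Swapping the solver or adding, say, integrality or nonlinear terms changes only a few lines, which is the portability a modeling language is meant to provide.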
Tue, 2016-11-01 12:30 - 14:00
Miles Lubin, Sloan School of Management, MIT
Mixed-integer convex optimization problems are convex problems with the additional (non-convex) constraint that some variables may take only integer values. Despite the past decades' advances in algorithms and technology for both mixed-integer *linear* and *continuous, convex* optimization, mixed-integer convex optimization problems have remained comparatively challenging and less widely used in practice.
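Concretely, the problem class is

    \min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad g_j(x) \le 0, \ j = 1, \dots, m, \qquad x_i \in \mathbb{Z} \ \text{ for } i \in I,

with f and the g_j convex: dropping the integrality constraints gives a continuous convex problem, while restricting f and the g_j to be affine gives a mixed-integer linear program, the two well-developed special cases mentioned above.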
Tue, 2016-10-25 12:30 - 14:00
Eldad Haber, EOAS UBC
Solving Maxwell's equations for earth science applications requires the discretization of large domains with a mesh fine enough to capture local conductivity variations. Multiscale methods are discretization techniques that allow the use of a coarse mesh while achieving accuracy comparable to that of a much finer mesh. However, when considering the multiscale solution of vector equations, basic operator properties are not preserved. In this talk we show how to extend multiscale methods to vector quantities and demonstrate their use for Maxwell's equations.
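For orientation, a common quasi-static frequency-domain form of the problem (an assumed model statement, not necessarily the talk's exact formulation) is

    \nabla \times (\mu^{-1} \nabla \times E) + i \omega \sigma E = s,

where the conductivity \sigma(x) varies on fine spatial scales; it is this variation that forces the fine mesh a multiscale method tries to avoid.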
Tue, 2016-10-04 12:30 - 14:00
Jessica Bosch, Computer Science, UBC
The Cahn-Hilliard equation models the motion of interfaces between several phases. The underlying energy functional includes a potential, for which different types have been proposed in the literature. We consider smooth and nonsmooth potentials, with a focus on the latter. In the nonsmooth case, we apply a function-space-based algorithm that combines a Moreau-Yosida regularization technique with a semismooth Newton method. We apply classical finite element methods to discretize the problems in space.
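To fix notation (standard forms, assumed here rather than quoted from the talk): in one common scaling the energy functional is

    E(u) = \int_\Omega \Big( \frac{\varepsilon}{2} |\nabla u|^2 + \frac{1}{\varepsilon} \psi(u) \Big) \, dx,

with, for example, the smooth polynomial potential \psi(u) = \frac{1}{4}(1 - u^2)^2 or the nonsmooth obstacle potential \psi(u) = \frac{1}{2}(1 - u^2) + I_{[-1,1]}(u), whose indicator term enforces u \in [-1,1] and is what makes the Moreau-Yosida regularization and semismooth Newton machinery natural.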
Tue, 2016-09-20 12:30 - 14:00
Fred Roosta, Statistics UC Berkeley
Many data analysis applications require the solution of optimization problems involving a sum of a large number of functions. We consider the problem of minimizing a sum of n functions over a convex constraint set. Algorithms that carefully sub-sample to reduce n can improve computational efficiency while maintaining the original convergence properties. For second-order methods, we first consider a general class of problems and give quantitative convergence results for variants of Newton's method in which the Hessian or the gradient is uniformly sub-sampled.
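A minimal sketch of the Hessian sub-sampling idea for minimizing (1/n) \sum_i f_i(x) (illustrative only: the function names, uniform sampling with replacement, and fixed sample size are assumptions, not the speaker's implementation):

    using LinearAlgebra

    # One variant: exact gradient, uniformly sub-sampled Hessian.
    # grad(x) returns the full gradient; hess_i(i, x) the Hessian of f_i.
    function subsampled_newton(grad, hess_i, x, n; s = 32, iters = 10)
        for _ in 1:iters
            S = rand(1:n, s)                        # uniform sample of s indices
            H = sum(hess_i(i, x) for i in S) / s    # cheap Hessian estimate
            x = x - H \ grad(x)                     # Newton step with sampled Hessian
        end
        return x
    end

A companion variant sub-samples the gradient as well; the abstract's point is that with careful sampling such variants retain the convergence properties of the exact method.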
Tue, 2016-09-06 12:30 - 14:00
Andy Wathen, Mathematical Institute, Oxford University
Descriptive convergence estimates or bounds for Krylov subspace iterative methods for nonsymmetric matrix systems are keenly desired but remain elusive. In the case of symmetric (self-adjoint) matrices, bounds based on eigenvalues can be usefully descriptive of observed convergence; an important consequence is that there are rigorous criteria for what constitutes a good preconditioner for symmetric matrices.
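For contrast, the symmetric case admits the classical polynomial min-max bound: for conjugate gradients on a symmetric positive definite system Ax = b,

    \frac{\|x - x_k\|_A}{\|x - x_0\|_A} \le \min_{p \in \Pi_k, \, p(0) = 1} \ \max_{\lambda \in \Lambda(A)} |p(\lambda)| \le 2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^{k},

where \kappa is the condition number of A. Clustering the eigenvalues, the goal of preconditioning, therefore provably accelerates convergence; no bound of comparable descriptive power is available in general for nonsymmetric systems.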
Tue, 2016-04-05 12:30 - 14:00
Julie Nutini, UBC Computer Science
There has been significant recent work on the theory and application of randomized coordinate descent algorithms, beginning with the work of Nesterov, who showed that a random-coordinate selection rule achieves the same convergence rate as the Gauss-Southwell selection rule. This result suggests that we should never use the Gauss-Southwell rule, as it is typically much more expensive than random selection.
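The two rules differ only in how a coordinate is picked; a minimal sketch on a convex quadratic f(x) = \frac{1}{2} x^T A x - b^T x (illustrative code with exact coordinate minimization, an assumption made here for simplicity):

    # Coordinate descent on f(x) = 0.5*x'*A*x - b'*x, A symmetric positive definite.
    function coordinate_descent(A, b; rule = :gauss_southwell, iters = 100)
        n = length(b)
        x = zeros(n)
        g = A * x - b                       # gradient, maintained incrementally
        for _ in 1:iters
            if rule == :gauss_southwell
                i = argmax(abs.(g))         # pick coordinate with largest |gradient|
            else
                i = rand(1:n)               # pick a coordinate uniformly at random
            end
            d = g[i] / A[i, i]              # exact minimization along coordinate i
            x[i] -= d
            g .-= d .* A[:, i]              # rank-one gradient update
        end
        return x
    end

The Gauss-Southwell rule requires scanning the full gradient at each step, which is the extra expense the abstract refers to.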