Reduced Order Models and Randomization for Fast Nonlinear Inversion and Optimization

Eric de Sturler, Department of Mathematics, Virginia Tech
SCAIM Seminar, April 9, 2019, 12:30 pm, ESB 4133
In many large-scale inverse and optimization problems, the objective function to be minimized is composed of many terms, where each term or group of terms requires an expensive simulation. In the cases discussed here, each such computation amounts to solving a large linear system. As a result, in a realistic setting each optimization step may require hundreds or thousands of large linear solves, which creates an overwhelming computational burden.
In an inverse problem, we typically try to infer parameters describing (the interior of) a medium from measurements on the surface. The optimization minimizes the misfit: the difference between the actual measurements and the measurements predicted by the underlying physical model for a given set of parameters. In topology optimization, we try to find the distribution of a limited amount of material(s) that defines a structure with, for example, maximum stiffness under a large number of loading conditions.
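To make the structure of such problems concrete, a generic misfit-minimization formulation is sketched below; the symbols p, A(p), u_i, b_i, Q, d_i, and m are illustrative placeholders, not notation taken from the talk.

\[
\min_{p} \; \tfrac{1}{2} \sum_{i=1}^{m} \bigl\| Q\, u_i(p) - d_i \bigr\|_2^2
\quad \text{subject to} \quad A(p)\, u_i(p) = b_i, \qquad i = 1, \dots, m .
\]

Here p collects the medium parameters, A(p) is the large discretized forward operator, the b_i are the sources or loading conditions, Q restricts the computed field to the measurement locations, and the d_i are the measured data. Every evaluation of the objective (and of its gradient) therefore requires on the order of m large linear solves, which is the source of the cost described above.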
We discuss several techniques that make these optimizations much cheaper. Model reduction can drastically reduce the size of the systems to be solved, but typically does not reduce the number of solves required; moreover, computing the reduced model itself introduces a substantial cost, though typically much less than that of the overall optimization. Randomization uses stochastic techniques to estimate function values, gradients, and possibly (approximate) Hessians with far fewer linear solves, although those solves remain at the original size. Combinations of these methods promise to be very efficient.
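As an illustration of the randomization idea, the sketch below estimates a sum-of-squares misfit of the form above using a few random combinations of the right-hand sides (Rademacher probes), a standard stochastic-estimation device. It is only a minimal sketch: the function names, dense solves, and problem sizes are assumptions for illustration and do not reproduce the speaker's algorithms.

import numpy as np

def misfit_full(A, B, Q, D):
    """Full misfit: one large solve per source column of B (m solves)."""
    U = np.linalg.solve(A, B)          # A(p) U = B, all m right-hand sides
    R = Q @ U - D                      # residuals against measured data
    return 0.5 * np.linalg.norm(R, 'fro')**2

def misfit_randomized(A, B, Q, D, n_samples=5, rng=None):
    """Randomized estimate: combine the m sources into n_samples << m random
    right-hand sides (Rademacher weights), so only n_samples solves of the
    original size are needed.  The estimate is unbiased for the full misfit."""
    rng = np.random.default_rng(rng)
    m = B.shape[1]
    W = rng.choice([-1.0, 1.0], size=(m, n_samples))   # Rademacher probes
    U = np.linalg.solve(A, B @ W)      # n_samples solves instead of m
    R = Q @ U - D @ W                  # randomly combined residuals
    return 0.5 * np.linalg.norm(R, 'fro')**2 / n_samples

# Small synthetic check (all sizes and operators are illustrative only)
rng = np.random.default_rng(0)
n, m, k = 200, 100, 10                 # unknowns, sources, receivers
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stand-in for A(p)
B = rng.standard_normal((n, m))
Q = rng.standard_normal((k, n))
D = Q @ np.linalg.solve(A, B) + 0.1 * rng.standard_normal((k, m))
print(misfit_full(A, B, Q, D), misfit_randomized(A, B, Q, D, n_samples=5))

The estimator is unbiased because the expected value of ||R w||^2 over Rademacher vectors w equals ||R||_F^2, so a handful of solves of the original size can stand in for all m; the same idea extends to gradient estimates and, as noted above, can be combined with reduced-order models.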