Reception preceding the talk at 2:15 in the IAM Lounge (LSK 306)
Optimization is a fundamental pillar of data science. Traditionally, the art and challenge in optimization lay primarily in problem formulation, ensuring desirable properties such as convexity. In contemporary data science, however, optimization is practiced differently: scalable local search methods applied to nonconvex objectives are the dominant paradigm in high-dimensional problems. This shift has given rise to a number of foundational mathematical challenges at the interface between optimization and data science, pertaining to the dichotomy between convexity and nonconvexity.
In this talk, I will discuss some of my work addressing these challenges in regularization, a technique for encouraging structure in solutions to statistical estimation and inverse problems. Even setting aside computational considerations, we currently lack a systematic understanding, from a modeling perspective, of what geometries should be preferred in a regularizer for a given data source. In particular, given a data distribution, what is the optimal regularizer for that data, and what properties govern whether the distribution is amenable to convex regularization? Using ideas from star geometry, Brunn-Minkowski theory, and variational analysis, I show that we can characterize the optimal regularizer for a given distribution and establish conditions under which this optimal regularizer is convex. Moreover, I describe results establishing the robustness of our approach, including convergence of optimal regularizers as the sample size increases and statistical learning guarantees, with applications to several classes of regularizers of interest.
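For context, regularized estimation of the kind discussed above is typically posed as a variational problem of the following standard form (the notation here is illustrative and not taken from the talk):

\[
  \hat{x} \in \operatorname*{arg\,min}_{x} \; \tfrac{1}{2}\,\| A x - y \|_2^2 \;+\; \lambda\, R(x),
\]

where $y$ denotes noisy measurements of a signal through a linear map $A$, the regularizer $R$ encodes the desired structure (for example, $R = \|\cdot\|_1$ promotes sparsity), and $\lambda > 0$ balances data fidelity against structure. In this notation, the question posed in the abstract asks which functional $R$ is best suited to a given distribution of signals, and when that optimal $R$ can be taken to be convex.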