URL for Speaker: https://sites.google.com/site/yifansunwebsite/
In this work, we examine notions of alignment with respect to a generalized form of Hölder's inequality. We show that tightness of this generalized inequality often serves as an optimality condition for many important convex optimization problems, such as atomic regularization in machine learning, gauge duality, and linear programming. In particular, when the primal and dual variables are aligned, the support of the primal solution can be recovered from the dual solution, a property often exploited in two-stage methods for sparse optimization. We show that many popular convex optimization methods (e.g., proximal gradient and conditional gradient) can be interpreted as "aligning methods", allowing a more geometric view of both the methods and the problem properties.
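As a minimal illustrative sketch (not drawn from the talk itself), consider the classical Hölder pair (the 1-norm and the infinity-norm): the inequality ⟨x, z⟩ ≤ ‖x‖₁‖z‖∞ is tight exactly when z attains its maximum magnitude, with matching sign, on the support of x. Under this alignment, the support of the sparse primal vector can be read off from the dual vector, as the abstract describes. The vectors below are illustrative choices:

```python
import numpy as np

# Sparse primal vector x and a dual vector z chosen so that z attains
# its max magnitude (1.0), with matching sign, on supp(x) = {0, 2}.
x = np.array([2.0, 0.0, -3.0, 0.0])
z = np.array([1.0, 0.2, -1.0, -0.5])

# Hölder's inequality:  <x, z> <= ||x||_1 * ||z||_inf
lhs = x @ z
rhs = np.linalg.norm(x, 1) * np.linalg.norm(z, np.inf)
print(lhs, rhs)  # equal here, so x and z are "aligned"

# Support recovery: the indices where |z| attains ||z||_inf
# contain the support of the primal solution x.
support_from_dual = np.flatnonzero(
    np.isclose(np.abs(z), np.linalg.norm(z, np.inf))
)
print(support_from_dual)  # → [0 2], matching supp(x)
```

Here alignment makes the inequality an equality (both sides equal 5), and the maximizing indices of |z| recover the support of x, mirroring the two-stage support-recovery idea mentioned above.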