Convex-composite optimization concerns the optimization of functions that can be written as the composition of a convex function with a smooth function. Such functions are typically nonsmooth and nonconvex. Nonetheless, most problems arising in applications can be formulated within this class; examples include nonlinear programming, feasibility problems, Kalman smoothing, compressed sensing, and sparsity optimization, as well as many problems in data analysis. We begin by stating the problem structure and reviewing its history and several examples. The variational properties of these functions are then developed, from which optimality conditions are derived. In particular, we state the underlying constraint qualification required for a variational calculus as well as a theory of Lagrange multipliers. After this introduction we turn to the main part of the talk, which concerns algorithmic solution methods and their convergence properties. If time permits, we examine the question of when a convex-composite function is convex.
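For reference, the problem class described above has the following canonical form (a sketch in standard notation; the symbols $h$, $c$, and the dimensions are assumptions, since the abstract itself does not fix notation):

\[
  \min_{x \in \mathbb{R}^n} \; f(x) := h\bigl(c(x)\bigr),
  \qquad h \colon \mathbb{R}^m \to \mathbb{R} \ \text{convex},
  \quad c \colon \mathbb{R}^n \to \mathbb{R}^m \ \text{smooth}.
\]

For instance, an exact-penalty formulation of a nonlinear program fits this form by taking $h$ to be a polyhedral convex function (such as a weighted $\ell_1$ penalty on constraint violations) and letting $c$ collect the objective and constraint functions.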