In many problems arising in data science and engineering, one must reconstruct a signal from a small number of corrupted measurements, a class of problems known more broadly as inverse problems. Because such problems are ill-posed, one requires a prior: a mathematical model of what makes some signals natural. Recently, there have been great empirical strides in deep generative models, a class of neural-network-based models that learn to sample signals from a distribution of interest. Compared to classical priors such as sparsity, however, generative priors are far less well understood: it is unclear when and why they work well, especially with respect to notions such as statistical complexity and computational efficiency. Here, I will discuss the theory and applications of using such priors in inverse problems. I will first present a rigorous recovery guarantee for generative priors in the nonlinear inverse problem of phase retrieval, showing that generative models enable an efficient algorithm that recovers the signal from generic measurements with optimal sample complexity. In the second part of the talk, I will address how ideas from this approach can be exploited to tackle problems where ground-truth data may be unavailable in practice, including problems arising in black hole imaging.
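For readers unfamiliar with the setting, below is a minimal sketch of how phase retrieval under a generative prior is commonly formulated in the literature; the exact assumptions and algorithm in the talk may differ.

```latex
% Phase retrieval with a generative prior (a standard formulation; the
% talk's precise setup is not specified here and may differ). A generator
% G : \mathbb{R}^k \to \mathbb{R}^n models natural signals, and one observes
% m magnitude-only measurements of a signal x^* = G(z^*):
%     y_i = |\langle a_i, x^* \rangle|,  i = 1, \dots, m.
% Recovery is then posed as empirical risk minimization over the latent space:
\min_{z \in \mathbb{R}^k} \; \frac{1}{m} \sum_{i=1}^{m}
    \bigl( \lvert \langle a_i, G(z) \rangle \rvert - y_i \bigr)^2
```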