Deep learning’s success has revealed a number of phenomena that appear to conflict with classical intuitions in optimization and statistics. First, the objective functions formulated in deep learning are highly nonconvex, yet they are typically amenable to minimization with first-order methods like gradient descent. Second, neural networks trained by gradient descent are capable of ‘benign overfitting’: they can achieve zero training error on noisy training data and simultaneously generalize well to unseen data. In this talk we survey our recent work toward understanding these phenomena. We show how the framework of proxy convexity allows for tractable optimization analysis despite nonconvexity, and how the implicit regularization of gradient descent plays a key role in benign overfitting. In closing, we discuss some of the questions that motivate our current work on understanding deep learning, and how these insights may help make deep learning more trustworthy, efficient, and powerful.
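To make the benign overfitting phenomenon concrete, here is a minimal, hypothetical sketch (not drawn from the talk or the underlying papers): an overparameterized two-layer ReLU network trained by plain gradient descent on a toy Gaussian-mixture dataset with a fraction of training labels flipped, reporting training error on the noisy labels alongside test error. The data model, architecture, and hyperparameters are all illustrative assumptions and may need tuning before the training error actually reaches zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters in d dimensions, labels in {-1, +1},
# with a fraction of the *training* labels flipped (label noise).
d, n_train, n_test, noise_rate = 50, 100, 1000, 0.15

def make_data(n):
    y = rng.choice([-1.0, 1.0], size=n)
    X = y[:, None] * np.ones(d) / np.sqrt(d) + 0.5 * rng.standard_normal((n, d))
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)
flip = rng.random(n_train) < noise_rate
y_tr_noisy = np.where(flip, -y_tr, y_tr)

# Wide two-layer ReLU network f(x) = a^T relu(W x), trained by full-batch
# gradient descent on the squared loss.  Width m >> n_train (overparameterized);
# the output layer a is fixed and only W is trained, a common simplification
# in two-layer-network theory.  Hyperparameters here are illustrative.
m, lr, steps = 2000, 0.5, 2000
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def forward(X, W):
    H = np.maximum(X @ W.T, 0.0)       # (n, m) hidden activations
    return H @ a, H

for t in range(steps):
    pred, H = forward(X_tr, W)
    resid = pred - y_tr_noisy          # squared-loss residual
    # Gradient w.r.t. W of (1/2n) * ||f(X) - y||^2
    grad_W = ((resid[:, None] * (H > 0)) * a).T @ X_tr / n_train
    W -= lr * grad_W

train_err = np.mean(np.sign(forward(X_tr, W)[0]) != y_tr_noisy)
test_err = np.mean(np.sign(forward(X_te, W)[0]) != y_te)
print(f"train error on noisy labels: {train_err:.3f}, test error: {test_err:.3f}")
```

In a benign-overfitting regime one would hope to see the first number reach zero (the network interpolates the flipped labels) while the second stays close to the noiseless Bayes error; whether this toy run exhibits that behaviour depends on the width, signal strength, and noise level chosen above.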