Zheng Qu

Acceleration of linearly convergent first-order methods without the knowledge of error bound condition number

Zheng Qu, University of Hong Kong IAM Seminar
June 17, 2019, 3:00 pm, LSK 306

Many popular algorithms in convex optimization exhibit linear convergence when applied to problems arising in machine learning and signal processing. Examples of such algorithms include proximal gradient descent methods, the ADMM and its linearised/proximal variants, randomized coordinate descent methods, and stochastic variance reduced methods (SVRG/SAGA). The linear convergence phenomenon is connected to a certain error bound condition around the optimal set, which follows from the special structure of the objective function to be minimised. If the error bound constant is known, then one can accelerate the linear convergence rate by a careful extrapolation of past iterates. However, estimating this constant often requires solving another constrained problem that is usually as hard as the original one. In this talk we present some recent results on obtaining an improved linear convergence rate without knowledge of the error bound constant.
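As a rough illustration of why knowledge of such a constant matters (and not of the methods presented in the talk), the sketch below runs accelerated proximal gradient descent on a lasso-type problem, with an extrapolation weight built from an assumed strong-convexity-like constant mu playing the role of the error bound constant; the function name, the problem instance, and the value of mu are all hypothetical choices for this example.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def accelerated_prox_grad(A, b, lam, mu, n_iters=500):
    """Sketch: accelerated proximal gradient for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1, assuming the error-bound-like
    constant mu is known.  The extrapolation weight depends on mu, which
    is exactly the knowledge the talk aims to avoid requiring."""
    L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of the gradient
    kappa = L / mu                                       # condition number (assumed known)
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # extrapolation weight
    x = np.zeros(A.shape[1])
    x_prev = x.copy()
    for _ in range(n_iters):
        y = x + beta * (x - x_prev)                      # extrapolate past iterates
        grad = A.T @ (A @ y - b)                         # gradient of the smooth part
        x_prev, x = x, soft_threshold(y - grad / L, lam / L)
    return x

# Tiny usage example on synthetic data; mu=0.5 is an illustrative guess.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x_hat = accelerated_prox_grad(A, b, lam=0.1, mu=0.5)
```

With a mis-specified mu the extrapolation weight is off, and the accelerated rate is lost; this is the practical difficulty the abstract refers to, since estimating the true constant is usually as hard as solving the original problem.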

Zheng Qu is an Assistant Professor in the Department of Mathematics at the University of Hong Kong.