A Systems-Theoretic Viewpoint on Real-Time Optimization and Feedback Games

Dominic Liao-McPherson, UBC Mechanical Engineering
October 16, 2023, 3:00 pm, LSK 306

Many iterative algorithms in optimization, games, and learning can be viewed as dynamical systems with inputs (measurements, historical data, user feedback), internal states (decision variables, state estimates, Lagrange multipliers), outputs (residuals, actuator commands), and uncertainties (noise, unknown parameters). The last few years have witnessed growing interest in studying how learning, optimization, and game-theoretic algorithms behave when placed in closed loop with noisy, uncertain, and dynamic physical systems and, conversely, how systems theory can be leveraged both to analyze existing algorithms and to synthesize new ones. Notable recent examples include applying robust control tools (such as integral quadratic constraints) to analyze and synthesize gradient methods, repurposing optimization algorithms as feedback controllers for physical systems, and studying reinforcement learning algorithms using hybrid systems theory.
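To make the viewpoint concrete, here is a minimal sketch (my own illustration, not an example from the talk) of gradient descent written as a discrete-time dynamical system: the decision variable is the internal state, a noisy gradient measurement is the input, and the residual is the output.

```python
import numpy as np

# Gradient descent as a dynamical system x_{k+1} = f(x_k, u_k), y_k = g(x_k):
# state x_k  = decision variable,
# input u_k  = (noisy) gradient measurement,
# output y_k = residual (gradient norm).

def gradient_step(x, grad_measurement, step_size=0.1):
    """One algorithm iteration = one step of the dynamical system."""
    return x - step_size * grad_measurement

rng = np.random.default_rng(0)
x = np.array([5.0, -3.0])                        # initial state
for k in range(200):
    u = 2.0 * x + 0.01 * rng.standard_normal(2)  # noisy gradient of ||x||^2
    x = gradient_step(x, u)                      # state update
residual = np.linalg.norm(2.0 * x)               # output: gradient norm
print(residual)
```

With measurement noise as an exogenous input, the iterates settle into a neighborhood of the minimizer rather than converging exactly, which is precisely the kind of closed-loop robustness question a systems-theoretic analysis addresses.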

This dynamical systems perspective on algorithms is crucial for tackling some of the challenges arising when analyzing and synthesizing the algorithms that underlie modern autonomous, AI-driven, and socio-technical systems. These include algorithms that: (i) operate online (e.g., running algorithms with streaming data), (ii) include humans in the loop (e.g., autonomous driving, recommender systems), (iii) are interconnected with physical systems (e.g., optimization-based feedback controllers, real-time MPC), (iv) involve self-interested decision makers with shared resources (e.g., traffic/logistic networks), or (v) need to be robust to severe uncertainty (e.g., exogenous disturbances, intrinsic randomness, or non-stationarity). These new and timely applications are driving the development of new theoretical tools that synergistically build on control, optimization, game, and learning theory as well as novel algorithms and computational techniques that are specialized for noisy, time-varying, and dynamic environments.

I will discuss two scenarios where optimization algorithms are used directly as controllers for physical systems. First, I present Time-Distributed Optimization (TDO), a unifying framework for studying the system-theoretic consequences of computational limits in the context of Model Predictive Control (MPC). I show that it is possible to recover the stability and robustness properties of optimal MPC despite limited computational resources and illustrate how a system-theoretic view of algorithms can be exploited to certify the closed-loop system. Further, I illustrate the applicability of these methods in the real world through diesel engine and autonomous driving examples. Second, I discuss Feedback Equilibrium Seeking (FES), a design framework for dynamic feedback controllers that track solution trajectories of time-varying generalized equations, such as local minimizers of nonlinear programs or competitive equilibria (e.g., Nash) of non-cooperative games. I present tracking error, stability, and robustness results for the sampled-data case and provide illustrative examples in DC power grids and supply chains.
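The time-distributed idea can be sketched on a toy problem (my own construction for illustration; not the algorithms from the talk): at each sampling instant, instead of solving the MPC problem to optimality, run only a fixed number of gradient iterations warm-started from the shifted previous iterate, then apply the first input. The optimizer iterate thereby becomes an internal state of the closed-loop system.

```python
import numpy as np

# Toy TDO sketch: suboptimal MPC for an unstable scalar plant under a fixed
# per-sample computational budget. All parameter values are illustrative.

a, b = 1.2, 1.0            # unstable scalar plant: x+ = a*x + b*u
T, q, r = 5, 1.0, 1.0      # horizon length and quadratic cost weights
ell = 50                   # optimizer iterations allowed per sampling instant

# Condensed MPC data: x_{1:T} = A_vec*x0 + G @ u, cost = q*||x||^2 + r*||u||^2.
G = np.array([[a**(i - j) * b if j <= i else 0.0 for j in range(T)]
              for i in range(T)])
A_vec = np.array([a**(k + 1) for k in range(T)])
H = 2.0 * (q * G.T @ G + r * np.eye(T))      # Hessian of the MPC cost
alpha = 1.0 / np.linalg.eigvalsh(H).max()    # safe gradient step size

def grad(u, x0):
    """Gradient of the condensed MPC cost at input sequence u, state x0."""
    return 2.0 * q * G.T @ (A_vec * x0 + G @ u) + 2.0 * r * u

x, u = 2.0, np.zeros(T)
for step in range(30):
    for _ in range(ell):                     # fixed computational budget
        u = u - alpha * grad(u, x)
    x = a * x + b * u[0]                     # apply first input to the plant
    u = np.append(u[1:], 0.0)                # warm start: shift the sequence

print(abs(x))
```

Despite never solving any subproblem to optimality, the warm-started loop drives the unstable plant to the origin, illustrating (in miniature) the recovery of stability under limited computation that TDO studies.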

Refreshments will be served preceding the talk, starting at 2:45 pm.

We gratefully acknowledge funding support from the Pacific Institute for the Mathematical Sciences (PIMS).