
Robustness in Machine Learning and Optimization: A Minmax Approach

Asu Ozdaglar

Abstract

Minmax problems arise in a wide range of optimization settings, including worst-case design, duality theory, and zero-sum games, and have also become popular in machine learning in the context of adversarial robustness and Generative Adversarial Networks (GANs). This talk will review our recent work on solving minmax problems using discrete-time gradient-based optimization algorithms. We focus on the Optimistic Gradient Descent Ascent (OGDA) and Extragradient (EG) methods, which have attracted much attention in the recent literature because of their superior empirical performance in GAN training. We show that OGDA and EG can be seen as approximations of the classical proximal point method and use this interpretation to establish convergence rate guarantees for these algorithms. These guarantees are provided for the ergodic (averaged) iterates of the algorithms. We also consider the last iterate of EG and present convergence rate guarantees for it on smooth convex-concave saddle point problems. We finally turn to the analysis of generalization properties of gradient-based minmax algorithms using the algorithmic stability framework of Bousquet and Elisseeff. Our generalization analysis suggests that gradient descent ascent (GDA) is superior to the GDmax algorithm (which solves the maximization problem exactly at each iteration) in the nonconvex-concave case, provided that similar learning rates are used in the descent and ascent steps.
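For readers unfamiliar with the algorithms named in the abstract, the following is a minimal sketch (not taken from the talk) of the standard GDA, OGDA, and EG update rules from the literature, applied to the toy bilinear saddle point f(x, y) = x * y. The step size, iteration count, and starting point are illustrative assumptions, not values from the work discussed.

```python
# Compare GDA, OGDA, and EG on min_x max_y f(x, y) = x * y,
# whose unique saddle point is (0, 0).
import numpy as np

def grads(x, y):
    # For f(x, y) = x * y: gradient in x is y, gradient in y is x.
    return y, x

eta, T = 0.1, 200  # illustrative step size and iteration count
for name in ("GDA", "OGDA", "EG"):
    x, y = 1.0, 1.0
    gx_prev, gy_prev = grads(x, y)  # previous gradients, used only by OGDA
    for _ in range(T):
        gx, gy = grads(x, y)
        if name == "GDA":
            # Simultaneous gradient descent on x and ascent on y.
            x, y = x - eta * gx, y + eta * gy
        elif name == "OGDA":
            # Optimistic step: correct the current gradient using the previous one.
            x, y = x - 2 * eta * gx + eta * gx_prev, y + 2 * eta * gy - eta * gy_prev
        else:  # EG
            # Extragradient: take an extrapolation (mid-point) step, then update
            # using the gradient evaluated at the mid-point.
            xm, ym = x - eta * gx, y + eta * gy
            gxm, gym = grads(xm, ym)
            x, y = x - eta * gxm, y + eta * gym
        gx_prev, gy_prev = gx, gy
    print(f"{name}: distance to saddle point = {np.hypot(x, y):.2e}")
```

On this bilinear example, plain GDA spirals away from the saddle point while OGDA and EG converge, which is one reason the latter two methods have drawn attention for GAN training.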

Bio

Asu Ozdaglar received the B.S. degree in electrical engineering from the Middle East Technical University, Ankara, Turkey, in 1996, and the S.M. and the Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, in 1998 and 2003, respectively.

She is the MathWorks Professor of Electrical Engineering and Computer Science in the Electrical Engineering and Computer Science (EECS) Department at the Massachusetts Institute of Technology. She is the department head of EECS and the Deputy Dean of Academics in the Schwarzman College of Computing. Her research expertise includes optimization theory, with emphasis on nonlinear programming and convex analysis; game theory, with applications in communication, social, and economic networks; distributed optimization and control; and network analysis, with special emphasis on contagious processes, systemic risk, and dynamic control.

Professor Ozdaglar is the recipient of a Microsoft fellowship, the MIT Graduate Student Council Teaching Award, the NSF CAREER Award, the 2008 Donald P. Eckman Award of the American Automatic Control Council, the Class of 1943 Career Development Chair, the inaugural Steven and Renee Innovation Fellowship, and the 2014 Spira Teaching Award. She served on the Board of Governors of the IEEE Control Systems Society in 2010 and was an associate editor for the IEEE Transactions on Automatic Control. She was the inaugural area co-editor for the area entitled “Games, Information and Networks” in the journal Operations Research. She is a co-author of the book “Convex Analysis and Optimization” (Athena Scientific, 2003).

Asu Ozdaglar
MIT
Virtual Zoom Lecture
1 Jun 2021, 10:30am until 11:30am