
An implicit gradient-descent procedure for minimax problems

Published 1 Jun 2019 in math.OC | (1906.00233v1)

Abstract: A game-theory-inspired methodology is proposed for finding a function's saddle points. While explicit descent methods are known to have severe convergence issues, implicit methods are natural in an adversarial setting, as they take the other player's optimal strategy into account. The proposed implicit scheme has an adaptive learning rate that makes it transition to Newton's method in the neighborhood of saddle points. Convergence is shown through local analysis and, in non-convex-concave settings, through numerical examples in optimal transport and linear programming. An ad hoc quasi-Newton method is developed for high-dimensional problems, for which inverting the Hessian of the objective function may entail a high computational cost.
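The core idea of the abstract can be sketched numerically. Below is a minimal, hypothetical illustration (not the authors' code) of an implicit update for a toy quadratic saddle problem: for f(x, y) minimized over x and maximized over y, one forms the "twisted" gradient g = (∇ₓf, -∇ᵧf) and takes the backward-Euler-style step z ← z - Δt (I + Δt J)⁻¹ g, where J is the Jacobian of g. As Δt grows, the step approaches the Newton step -J⁻¹g, matching the abstract's claim that the adaptive learning rate makes the scheme transition to Newton's method near saddle points. The objective, the step-size growth factor, and all variable names here are illustrative assumptions.

```python
import numpy as np

# Toy saddle objective: f(x, y) = x^2 - y^2 + 2*x*y, with its saddle at (0, 0).
# We minimize over x and maximize over y.

def twisted_grad(z):
    """Twisted gradient g = (df/dx, -df/dy): descend in x, ascend in y."""
    x, y = z
    return np.array([2*x + 2*y, 2*y - 2*x])

def twisted_jac(z):
    """Jacobian of the twisted gradient (constant for this quadratic f)."""
    return np.array([[ 2.0, 2.0],
                     [-2.0, 2.0]])

def implicit_step(z, dt):
    # Implicit update: z_{k+1} = z_k - dt * (I + dt*J)^{-1} g(z_k).
    # For dt -> infinity this tends to the Newton step -J^{-1} g.
    g, J = twisted_grad(z), twisted_jac(z)
    return z - dt * np.linalg.solve(np.eye(2) + dt * J, g)

z = np.array([1.0, -1.0])
dt = 0.5
for _ in range(50):
    z = implicit_step(z, dt)
    dt *= 1.5  # crude stand-in for the paper's adaptive learning rate

print(z)  # approaches the saddle point (0, 0)
```

Note that an explicit gradient step with the same twisted gradient would spiral around the saddle for this objective (the Jacobian's eigenvalues are complex); the implicit step damps that rotation, which is the convergence advantage the abstract points to.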
