DFWLayer: Differentiable Frank-Wolfe Optimization Layer
Abstract: Differentiable optimization has received significant attention for its foundational role in neural-network-based machine learning. This paper proposes a differentiable layer, named the Differentiable Frank-Wolfe Layer (DFWLayer), obtained by unrolling the Frank-Wolfe method, a well-known optimization algorithm that solves constrained optimization problems without projections or Hessian computations. This yields an efficient way to handle large-scale convex optimization problems with norm constraints. Experimental results demonstrate that the DFWLayer not only attains competitive accuracy in solutions and gradients but also consistently adheres to constraints.
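The projection-free property the abstract refers to can be illustrated with a minimal Frank-Wolfe sketch for a norm-constrained quadratic. This is not the DFWLayer implementation from the paper; the function name, problem instance, and step-size schedule are illustrative assumptions, showing only the classical algorithm that the layer unrolls.

```python
import numpy as np

def frank_wolfe_l1(A, b, radius, n_iters=200):
    """Minimize 0.5 * ||Ax - b||^2 over the L1 ball ||x||_1 <= radius.

    Illustrative sketch of the projection-free Frank-Wolfe method;
    not the paper's DFWLayer itself.
    """
    x = np.zeros(A.shape[1])
    for k in range(n_iters):
        grad = A.T @ (A @ x - b)
        # Linear minimization oracle: for the L1 ball, the minimizer of
        # <grad, s> is a signed vertex along the coordinate with the
        # largest-magnitude gradient entry.
        i = np.argmax(np.abs(grad))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(grad[i])
        gamma = 2.0 / (k + 2.0)           # standard open-loop step size
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x
```

Because each iterate is a convex combination of feasible points, the constraint holds at every step with no projection and no Hessian, which is the property that makes unrolling the iterations into a differentiable layer attractive.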