Distributed Learning for Stochastic Generalized Nash Equilibrium Problems
Abstract: This work examines a stochastic formulation of the generalized Nash equilibrium problem (GNEP), in which agents are subject to environmental randomness of unknown statistical distribution. We focus on fully-distributed online learning by the agents and employ penalized individual cost functions to handle the coupled constraints. Three stochastic gradient strategies with constant step-sizes are developed. We allow the agents to use heterogeneous step-sizes and show that, for a sufficiently small maximum step-size $\mu_\text{max}$ and sufficiently large penalty parameters, the penalized solution approaches the Nash equilibrium in a stable manner to within $O(\mu_\text{max})$. The operation of the algorithms is illustrated on the network Cournot competition problem.
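The ingredients mentioned in the abstract (penalized costs for coupled constraints, constant heterogeneous step-sizes, stochastic gradients under unknown noise, and the Cournot illustration) can be sketched in a minimal simulation. The specific model below is an assumption chosen for illustration, not the paper's exact formulation: a linear inverse-demand Cournot market with a stochastic demand intercept, a shared capacity constraint handled by a quadratic penalty, and per-agent constant step-sizes.

```python
import numpy as np

def cournot_penalized_sgd(n_agents=3, a=10.0, b=1.0, costs=None,
                          cap=100.0, rho=5.0, steps=5000,
                          step_sizes=None, noise_std=0.5, seed=0):
    """Penalized stochastic gradient play for a network Cournot game.

    Each firm i chooses a quantity x[i]; the realized market price is
    p = (a + noise) - b * sum(x), where the noise has an unknown
    distribution (Gaussian here purely for illustration). The coupled
    capacity constraint sum(x) <= cap is replaced by a quadratic
    penalty (rho/2) * max(0, sum(x) - cap)**2 added to each firm's cost,
    so every agent can update using only its own stochastic gradient.
    """
    rng = np.random.default_rng(seed)
    if costs is None:
        costs = np.full(n_agents, 1.0)            # marginal production costs
    if step_sizes is None:
        # heterogeneous constant step-sizes, as allowed by the analysis
        step_sizes = np.linspace(0.008, 0.012, n_agents)
    x = np.zeros(n_agents)
    for _ in range(steps):
        a_t = a + noise_std * rng.standard_normal()   # stochastic demand
        total = x.sum()
        # gradient of firm i's penalized cost
        #   J_i = (c_i - p) x_i + (rho/2) max(0, sum(x) - cap)^2
        # w.r.t. x_i:  c_i - (a_t - b*total) + b*x_i + rho*max(0, total - cap)
        grad = costs - (a_t - b * total) + b * x + rho * max(0.0, total - cap)
        x = np.maximum(0.0, x - step_sizes * grad)    # nonnegative quantities
    return x
```

With the capacity constraint inactive and symmetric costs, the expected game has the classical symmetric equilibrium $x^\star = (a - c)/(b(N+1))$; with constant step-sizes the iterates do not converge exactly but fluctuate in an $O(\mu_\text{max})$ neighborhood of it, matching the kind of guarantee the abstract states.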