Information Geometry of Exponentiated Gradient: Convergence beyond L-Smoothness
Abstract: We study the minimization of smooth, possibly nonconvex functions over the positive orthant, a key setting in Poisson inverse problems, using the exponentiated gradient (EG) method. Interpreting EG as Riemannian gradient descent (RGD) with the $e$-Exp map from information geometry as a retraction, we prove global convergence under weak assumptions -- without requiring $L$-smoothness -- and finite termination of the Riemannian Armijo line search. Numerical experiments, including an accelerated variant, highlight EG's practical advantages, such as faster convergence than RGD based on the interior-point geometry.
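The EG update is a multiplicative step: each iterate is rescaled coordinatewise by $\exp(-t\,\nabla f(x))$, which keeps it strictly inside the positive orthant without any projection. The sketch below illustrates this update combined with a backtracking Armijo search on a toy Poisson-type objective; the function names, the test problem, and the exact sufficient-decrease condition (Euclidean rather than Riemannian norm) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def eg_armijo(f, grad_f, x0, t0=1.0, sigma=1e-4, beta=0.5,
              max_iter=500, tol=1e-10):
    """Minimal sketch: exponentiated gradient with backtracking
    Armijo line search on the positive orthant."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        decrease = g @ g                # sufficient-decrease reference term
        if decrease < tol:              # approximate stationarity
            break
        fx, t = f(x), t0
        # Multiplicative (e-Exp retraction) step: iterate stays positive.
        while f(x * np.exp(-t * g)) > fx - sigma * t * decrease:
            t *= beta                   # backtrack until Armijo holds
        x = x * np.exp(-t * g)
    return x

# Toy Poisson-type objective: f(x) = 1^T A x - y^T log(A x).
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
y = np.array([2.0, 1.5])
f = lambda x: np.sum(A @ x) - y @ np.log(A @ x)
grad_f = lambda x: A.T @ (1.0 - y / (A @ x))

print(eg_armijo(f, grad_f, np.array([1.0, 1.0])))
```

Because the step acts multiplicatively, no feasibility safeguard is needed even when the backtracking starts from a large trial step, which is the practical appeal of the e-geometry over interior-point-style retractions.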