On Inexact Solution of Auxiliary Problems in Tensor Methods for Convex Optimization
Abstract: In this paper we study the auxiliary problems that appear in $p$-order tensor methods for unconstrained minimization of convex functions with $\nu$-H\"{o}lder continuous $p$th derivatives. This type of auxiliary problem corresponds to the minimization of a $(p+\nu)$-order regularization of the $p$th-order Taylor approximation of the objective. For the case $p=3$, we consider the use of Gradient Methods with Bregman distance. When the regularization parameter is sufficiently large, we prove that the referred methods take at most $\mathcal{O}(\log(\epsilon^{-1}))$ iterations to find either a suitable approximate stationary point of the tensor model or an $\epsilon$-approximate stationary point of the original objective function.
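For concreteness, the auxiliary problem described above can be sketched as follows. This is an illustrative formulation assuming standard notation from the tensor-methods literature ($f$ for the objective, $\Phi_{x,p}$ for the Taylor model, $H$ for the regularization parameter); the exact constants and norms are those of the paper, not reproduced here.

```latex
% p-th order Taylor approximation of f around the current point x:
%   \Phi_{x,p}(y) = f(x) + \sum_{i=1}^{p} \frac{1}{i!} D^i f(x)[y-x]^i
% Auxiliary problem: minimize the (p+\nu)-order regularized model
\min_{y \in \mathbb{R}^n} \;
  \Phi_{x,p}(y) + \frac{H}{p+\nu}\,\|y - x\|^{p+\nu}
```

For $p=3$ this subproblem is itself a nontrivial minimization, which motivates solving it inexactly with Bregman-distance gradient methods as studied in the paper.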