Why Does Adaptive Zeroth-Order Optimization Work?
Abstract: Zeroth-order (ZO) optimization is popular in real-world applications where accessing gradient information is expensive or unavailable. Recently, adaptive ZO methods that normalize gradient estimators by the empirical standard deviation of function values have achieved strong practical performance, particularly in fine-tuning large language models (LLMs). However, the theoretical understanding of such a strategy remains limited. In this work, we show that the empirical standard deviation is, with high probability, closely proportional to the norm of the (stochastic) gradient. Based on this insight, we analyze adaptive ZO methods under the generalized $(L_0,L_1)$-smoothness condition with respect to the matrix norm. We establish explicit convergence rates and query complexity bounds for both deterministic and stochastic settings, demonstrating that adaptive ZO methods achieve faster convergence and improved query efficiency compared to vanilla fixed-step ZO methods.
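To make the idea concrete, below is a minimal sketch of one adaptive ZO update in the spirit the abstract describes: the gradient is estimated from function queries alone via two-point finite differences along random directions, and the step is normalized by the empirical standard deviation of the sampled finite-difference values. The function name `adaptive_zo_step` and parameters `mu`, `num_dirs`, and `eps` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def adaptive_zo_step(f, x, lr=1e-3, num_dirs=8, mu=1e-3, eps=1e-8, rng=None):
    """One adaptive zeroth-order update using only function evaluations.

    Estimates the gradient with two-point finite differences along random
    Gaussian directions, then normalizes the step by the empirical standard
    deviation of the finite-difference values -- the quantity the abstract
    argues is closely proportional to the gradient norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad_est = np.zeros_like(x)
    diffs = []
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        # Two function queries per direction (2 * num_dirs queries per step).
        d = (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu)
        diffs.append(d)
        grad_est += d * u
    grad_est /= num_dirs
    sigma = np.std(diffs) + eps  # empirical std; eps guards against division by zero
    return x - lr * grad_est / sigma  # normalized (adaptive) ZO update

# Hypothetical usage: minimize a simple quadratic with queries only.
if __name__ == "__main__":
    f = lambda x: float(np.sum(x ** 2))
    x = np.ones(10)
    for _ in range(500):
        x = adaptive_zo_step(f, x, lr=0.05)
    print(f(x))  # should be close to 0
```

If the abstract's claim holds, dividing by `sigma` makes the effective step size scale inversely with the gradient norm, which is exactly the kind of adaptivity that the $(L_0,L_1)$-smoothness analysis rewards over a fixed step size.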