Why Does Adaptive Zeroth-Order Optimization Work?

Published 2 Feb 2026 in math.OC | (2602.01627v1)

Abstract: Zeroth-order (ZO) optimization is popular in real-world applications where accessing gradient information is expensive or unavailable. Recently, adaptive ZO methods that normalize gradient estimators by the empirical standard deviation of function values have achieved strong practical performance, particularly in fine-tuning large language models (LLMs). However, the theoretical understanding of this strategy remains limited. In this work, we show that the empirical standard deviation is, with high probability, closely proportional to the norm of the (stochastic) gradient. Based on this insight, we analyze adaptive ZO methods under the generalized $(L_0,L_1)$-smoothness condition with respect to the matrix norm. We establish explicit convergence rates and query complexity bounds for both deterministic and stochastic settings, demonstrating that adaptive ZO methods achieve faster convergence and improved query efficiency compared to vanilla ZO methods with fixed step sizes.
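To make the normalization idea concrete, the following is a minimal sketch of one adaptive ZO step. It is not the paper's exact algorithm: the two-point Gaussian-smoothing estimator, the number of directions, the smoothing radius `mu`, and the step size `lr` are all illustrative assumptions; the key ingredient it demonstrates is dividing the update by the empirical standard deviation of the finite-difference values, which the paper argues tracks the gradient norm with high probability.

```python
import numpy as np

def adaptive_zo_step(f, x, rng, num_dirs=8, mu=1e-3, lr=0.5):
    """One adaptive zeroth-order step (illustrative sketch).

    Builds a gradient estimate from two-point finite differences along
    random Gaussian directions, then normalizes the update by the
    empirical standard deviation of those finite-difference values.
    """
    d = x.size
    diffs = np.empty(num_dirs)
    g_hat = np.zeros(d)
    for i in range(num_dirs):
        u = rng.standard_normal(d)
        # Two-point directional derivative estimate: 2 function queries.
        delta = (f(x + mu * u) - f(x - mu * u)) / (2 * mu)
        diffs[i] = delta
        g_hat += delta * u
    g_hat /= num_dirs
    # Empirical std of function-value differences; the paper's insight is
    # that this is roughly proportional to the gradient norm, so dividing
    # by it yields a normalized (adaptive) step.
    sigma = diffs.std() + 1e-12
    return x - lr * g_hat / sigma

# Usage: minimize a simple quadratic with only function evaluations.
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.dot(x, x)
x = np.ones(5)
for _ in range(200):
    x = adaptive_zo_step(f, x, rng)
```

Because the step is normalized, the iterates shrink the objective without any knowledge of the smoothness constant, at the cost of 2 × `num_dirs` function queries per step.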

Authors (2)
