A Unified Zeroth-Order Optimization Framework via Oblivious Randomized Sketching
Abstract: We propose a new framework for analyzing zeroth-order optimization (ZOO) from the perspective of \emph{oblivious randomized sketching}. In this framework, commonly used gradient estimators in ZOO, such as finite difference (FD) and random finite difference (RFD), are unified through a general sketch-based formulation. By introducing the concept of oblivious randomized sketching, we show that properly chosen sketch matrices can significantly reduce the high variance of RFD estimates and enable \emph{high-probability} convergence guarantees for ZOO, which are rarely available in existing RFD analyses.

We instantiate the framework on convex quadratic objectives and derive a query complexity of $\tilde{\mathcal{O}}(\mathrm{tr}(A)/L \cdot L/\mu \log\frac{1}{\epsilon})$ to achieve an $\epsilon$-suboptimal solution, where $A$ is the Hessian, $L$ is the largest eigenvalue of $A$, and $\mu$ denotes the strong convexity parameter. This complexity can be substantially smaller than the standard query complexity of $\mathcal{O}(d \cdot L/\mu \log\frac{1}{\epsilon})$, which depends linearly on the problem dimension $d$, especially when $A$ has rapidly decaying eigenvalues. These advantages naturally extend to more general settings, including strongly convex and Hessian-aware optimization.

Overall, this work offers a novel sketch-based perspective on ZOO that explains why and when RFD-type methods can achieve \emph{weakly dimension-independent} convergence in general smooth problems, providing both theoretical foundations and practical implications for ZOO.
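To make the two estimators mentioned in the abstract concrete, the following is a minimal sketch of the standard FD and RFD zeroth-order gradient estimates. This is an illustrative implementation under common textbook definitions, not the paper's method: the function names and parameters (`fd_gradient`, `rfd_gradient`, `num_dirs`, `h`) are our own, and the RFD version shown here uses plain Gaussian directions rather than any particular oblivious sketch the paper may construct.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Coordinate-wise forward finite difference.

    In the sketch-based view, this corresponds to sketching with the
    identity matrix: one function query per coordinate, d queries total.
    """
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.shape[0]):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def rfd_gradient(f, x, num_dirs=100, h=1e-6, rng=None):
    """Random finite difference with Gaussian directions.

    Averages directional finite differences along random directions u:
        g_hat = (1/m) * sum_j [(f(x + h*u_j) - f(x)) / h] * u_j,
    an unbiased (up to O(h)) but high-variance estimate of the gradient;
    here the rows of the sketch matrix are the random directions u_j.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    fx = f(x)
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        g += (f(x + h * u) - fx) / h * u
    return g / num_dirs
```

On a quadratic $f(x) = \frac{1}{2} x^\top A x$ the true gradient is $Ax$, so both estimators can be checked directly; the RFD estimate converges to it at a rate governed by the number of directions, which is exactly the variance the oblivious-sketching viewpoint aims to control.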