Advantages of randomized gradient schemes over finite-difference methods

Determine whether randomized gradient-approximation schemes, particularly those employing ℓ_p-spherical distributions or uniform sampling over ℓ_p-balls, provide significant theoretical and numerical advantages over finite-difference methods for computing gradients of smooth functions in high-dimensional settings with a limited number of function evaluations.

Background

The paper observes that many known convergence rates for gradient estimators depend on the dimension, which appears at odds with consistent estimation using sample sizes much smaller than the dimension (N ≪ d). This tension motivates scrutiny of whether randomized schemes truly outperform traditional finite-difference methods (FDMs), a question highlighted in prior work.

The authors note longstanding expectations of advantages for randomized schemes since early work by Spall and position their contribution as addressing this open problem via constructions based on ℓ_p-spherical distributions and uniform sampling over ℓ_p-balls, aiming at dimension-free bias bounds and improved MSE behavior.
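To make the comparison concrete, the following is a minimal sketch (not the paper's construction) contrasting a classical central finite-difference gradient, which needs 2d evaluations, with a randomized estimator that averages directional differences along directions drawn uniformly from the ℓ_2 unit sphere, the p = 2 special case of the ℓ_p-spherical distributions mentioned above. The function names, the quadratic test function, and all parameter values are illustrative assumptions.

```python
import numpy as np

def fd_gradient(f, x, h=1e-5):
    # Central finite differences: 2*d function evaluations.
    d = x.size
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def randomized_gradient(f, x, N=1000, h=1e-5, rng=None):
    # Randomized (sphere-smoothing) estimator: 2*N evaluations, with
    # directions uniform on the l2 unit sphere (p = 2 special case).
    rng = np.random.default_rng(rng)
    d = x.size
    g = np.zeros(d)
    for _ in range(N):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # normalizing a Gaussian gives a uniform sphere direction
        g += (f(x + h * u) - f(x - h * u)) / (2 * h) * u
    # The factor d corrects for E[u u^T] = I/d, making the estimator
    # (approximately) unbiased for smooth f as h -> 0.
    return d * g / N

# Illustration on a quadratic with known gradient 2x.
f = lambda x: np.dot(x, x)
x = np.ones(50)
g_fd = fd_gradient(f, x)
g_rand = randomized_gradient(f, x, N=4000, rng=0)
```

The trade-off this sketch exposes is exactly the one the paper interrogates: the finite-difference estimator is deterministic and accurate but costs 2d evaluations, while the randomized estimator can use any budget N, at the price of a Monte Carlo variance that naively scales with d, hence the interest in dimension-free bias and MSE bounds.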

References

Likewise, queries about the theoretical and numerical advantages of randomized schemes over traditional FDMs are discussed in . Significant advantages of randomized schemes over FDMs have been expected since the seminal works in , and this paper addresses this open problem using ℓ_p-spherical distributions or random vectors uniformly distributed over the ℓ_p-ball with p ≥ 1.

Dimension-free estimators of gradients of functions with(out) non-independent variables  (2512.24527 - Lamboni, 31 Dec 2025) in Section 1 (Introduction)