
Zeroth-order gradient estimators for stochastic problems with decision-dependent distributions

Published 28 Oct 2025 in math.OC | (2510.24929v1)

Abstract: Stochastic optimization problems with unknown decision-dependent distributions have attracted increasing attention in recent years due to their importance in applications. Since the gradient of the objective function is inaccessible as a result of the unknown distribution, various zeroth-order methods have been developed to solve the problem. However, it remains unclear which search directions are more appropriate for constructing a gradient estimator, and how the algorithmic parameters should be set. In this paper, we conduct a unified sample complexity analysis of zeroth-order methods across gradient estimators with different search directions. We show that gradient estimators that average over multiple directions, drawn either uniformly from the unit sphere or from a Gaussian distribution, achieve the lowest sample complexity. The attained sample complexities improve on those of existing zeroth-order methods in a problem setting that allows nonconvexity and unboundedness of the objective function. Moreover, through simulation experiments on multi-product pricing and strategic classification applications, we demonstrate the practical performance of zeroth-order methods with various gradient estimators.
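The abstract highlights gradient estimators that average function-value differences over multiple random search directions. The following is a minimal sketch of one common variant of this idea, a two-point estimator averaged over directions drawn uniformly from the unit sphere; it is not the paper's specific algorithm, and the function and parameter names (`zo_gradient`, `mu`, `m`) are illustrative.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, m=100, rng=None):
    """Two-point zeroth-order gradient estimate at x, averaged over m
    random unit-sphere directions (illustrative sketch, not the paper's
    exact estimator). mu is the finite-difference smoothing radius."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    g = np.zeros(d)
    for _ in range(m):
        # A normalized Gaussian vector is uniform on the unit sphere.
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        # Central finite difference of f along direction u.
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    # The factor d makes this an unbiased estimate of the gradient of
    # a smoothed version of f.
    return d * g / m
```

For a smooth test function such as f(x) = ||x||^2 (with true gradient 2x), increasing the number of averaged directions `m` reduces the variance of the estimate, which is the trade-off the abstract's sample complexity analysis concerns.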


Authors (2)