Bandit Phase Retrieval

Published 3 Jun 2021 in stat.ML, cs.LG, math.ST, stat.ME, and stat.TH | arXiv:2106.01660v2

Abstract: We study a bandit version of phase retrieval where the learner chooses actions $(A_t)_{t=1}^n$ in the $d$-dimensional unit ball and the expected reward is $\langle A_t, \theta_\star\rangle^2$ where $\theta_\star \in \mathbb R^d$ is an unknown parameter vector. We prove that the minimax cumulative regret in this problem is $\smash{\tilde \Theta(d \sqrt{n})}$, which improves on the best known bounds by a factor of $\smash{\sqrt{d}}$. We also show that the minimax simple regret is $\smash{\tilde \Theta(d / \sqrt{n})}$ and that this is only achievable by an adaptive algorithm. Our analysis shows that an apparently convincing heuristic for guessing lower bounds can be misleading and that uniform bounds on the information ratio for information-directed sampling are not sufficient for optimal regret.
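The abstract's reward model can be simulated directly. The sketch below sets up the environment (expected reward $\langle a, \theta_\star\rangle^2$ for actions in the unit ball) and measures the cumulative pseudo-regret of a naive uniformly random policy; the Gaussian noise model and the specific policy are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 1000
theta_star = rng.normal(size=d)  # unknown parameter vector

def pull(a):
    """Noisy reward for action a: <a, theta_star>^2 plus noise.
    Gaussian observation noise is an illustrative assumption."""
    return float(a @ theta_star) ** 2 + rng.normal()

# By Cauchy-Schwarz, the best action in the unit ball is
# theta_star / ||theta_star||, with expected reward ||theta_star||^2.
best = float(theta_star @ theta_star)

# Cumulative pseudo-regret of a non-adaptive policy that plays
# uniformly random unit vectors and ignores the observed rewards.
regret = 0.0
for _ in range(n):
    a = rng.normal(size=d)
    a /= np.linalg.norm(a)  # project onto the unit sphere
    pull(a)                 # observation unused by this naive policy
    regret += best - float(a @ theta_star) ** 2

print(f"pseudo-regret of random play over n={n} rounds: {regret:.1f}")
```

A random unit vector has expected reward only $\|\theta_\star\|^2/d$, so this baseline incurs regret linear in $n$; the paper's point is that a well-designed adaptive algorithm brings the minimax cumulative regret down to $\tilde\Theta(d\sqrt{n})$.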

Citations (13)


Authors (2)
