Pure Exploration Bandit Problem with General Reward Functions Depending on Full Distributions

Published 8 May 2021 in cs.LG (arXiv:2105.03598v1)

Abstract: In this paper, we study the pure exploration bandit model with general reward functions, in which the value of each arm depends on its whole reward distribution rather than only its mean. We adapt the racing and LUCB frameworks to this setting and design algorithms for estimating the values of the reward functions under different types of distributions. We then show that our estimation methods enjoy correctness guarantees under properly chosen parameters, and derive sample complexity upper bounds for the resulting algorithms. Finally, we discuss several important applications and their solutions under our learning framework.
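The LUCB-style adaptation the abstract describes can be sketched roughly as follows. This is a hedged illustration, not the paper's actual algorithm: it assumes rewards lie in [0, 1] and that the reward functional is 1-Lipschitz in the sup-norm of the CDF, so a Dvoretzky-Kiefer-Wolfowitz (DKW) radius on the empirical CDF yields a confidence interval on the functional's value. The names `lucb_identify` and `dkw_radius` are hypothetical.

```python
import math
import random

def dkw_radius(n, delta):
    # DKW inequality: with probability at least 1 - delta, the empirical
    # CDF of n i.i.d. samples is within this sup-norm distance of the truth.
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def lucb_identify(samplers, functional, delta=0.05, max_pulls=50000):
    """LUCB-style best-arm identification for a functional of the full
    reward distribution (sketch). `functional` maps a list of samples to
    a scalar; it is assumed 1-Lipschitz in sup-norm of the empirical CDF,
    so dkw_radius bounds its estimation error."""
    k = len(samplers)
    samples = [[s()] for s in samplers]   # one initial pull per arm
    pulls = k
    while pulls < max_pulls:
        # Crude union bound over arms and rounds (for illustration only).
        per_arm_delta = delta / (k * pulls * (pulls + 1))
        vals = [functional(x) for x in samples]
        rads = [dkw_radius(len(x), per_arm_delta) for x in samples]
        # h: empirically best arm; l: strongest challenger by upper bound.
        h = max(range(k), key=lambda i: vals[i])
        l = max((i for i in range(k) if i != h),
                key=lambda i: vals[i] + rads[i])
        # Stop when the leader's lower bound clears the challenger's upper bound.
        if vals[h] - rads[h] >= vals[l] + rads[l]:
            return h, pulls
        for i in (h, l):                  # pull both critical arms
            samples[i].append(samplers[i]())
        pulls += 2
    return max(range(k), key=lambda i: functional(samples[i])), pulls

# Example usage with the mean as a (degenerate) distribution functional;
# arm 1 has the larger mean and should be identified.
random.seed(1)
arms = [lambda: random.random() * 0.6, lambda: random.random()]
best, pulls_used = lucb_identify(arms, lambda x: sum(x) / len(x))
```

Swapping the functional for, say, a quantile or a variance estimate keeps the same sampling loop; only the Lipschitz constant (and hence the radius scaling) would change.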


Authors (2)
