Fixed-Budget Best-Arm Identification
- Fixed-budget best-arm identification is defined as selecting the arm with the highest expected reward under a strict sampling constraint, balancing adaptation and statistical efficiency.
- It employs adaptive allocation strategies, such as sequential elimination and nonlinear allocation, to achieve exponential decay of error probability despite resource limits.
- The framework extends to Bayesian, structured, and side-observation models, offering practical insights for experimental design and treatment choice in constrained environments.
Fixed-budget best-arm identification (FB-BAI) refers to the problem of identifying, with minimal error probability, the single arm with the largest expected reward from a finite set of stochastic arms, given a fixed, finite sampling budget. Unlike fixed-confidence BAI (which seeks to achieve a target error probability with minimal samples), FB-BAI fundamentally concerns the tradeoff between statistical efficiency and adaptation under a strict sample constraint. This regime is central in experimental design, treatment choice, and other applications requiring pure exploration under resource limits.
1. Formal Framework and Problem Statement
Let $K$ denote the number of arms; each arm $i \in \{1,\dots,K\}$ yields i.i.d. rewards from an unknown law $\nu_i$ with mean $\mu_i$. Without loss of generality, assume $\mu_1 > \mu_2 \ge \cdots \ge \mu_K$. The player is allowed $T$ total samples, adaptively allocated: at each $t = 1,\dots,T$, select an arm $I_t$ and observe a reward $X_t \sim \nu_{I_t}$ ($N_i(t)$ counts pulls of arm $i$ up to time $t$). After all $T$ draws, a recommendation rule outputs an estimate $\hat{I}_T$ of the best arm.
The primary performance metric is the misidentification probability $e_T = \mathbb{P}(\hat{I}_T \neq 1)$, which one aims to minimize for all bandit instances. Alternative metrics, such as expected simple regret, also arise, but the canonical objective is exponential decay of $e_T$ with $T$.
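As a concrete illustration of the protocol, the sketch below simulates the fixed-budget loop with unit-variance Gaussian arms and a uniform (round-robin) baseline allocation. The `allocate` callback interface is a hypothetical device for exposition, not an API from the cited works.

```python
import numpy as np

def fixed_budget_bai(means, T, allocate, rng=None):
    """Simulate one fixed-budget BAI run with unit-variance Gaussian arms.

    `allocate(t, counts, sums)` (hypothetical interface) returns the arm
    index to pull at step t; after T pulls, the arm with the highest
    empirical mean is recommended.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    K = len(means)
    counts, sums = np.zeros(K, dtype=int), np.zeros(K)
    for t in range(T):
        i = allocate(t, counts, sums)
        sums[i] += rng.normal(means[i], 1.0)
        counts[i] += 1
    emp = np.where(counts > 0, sums / np.maximum(counts, 1), -np.inf)
    return int(np.argmax(emp))  # recommendation after the budget is spent

# Uniform (round-robin) allocation: the simplest static baseline.
uniform = lambda t, counts, sums: t % len(counts)

# Monte Carlo estimate of the misidentification probability.
means = [1.0, 0.5, 0.5, 0.3]
errors = sum(
    fixed_budget_bai(means, T=200, allocate=uniform,
                     rng=np.random.default_rng(s)) != 0
    for s in range(200)
)
print(errors / 200)  # empirical error rate; decays exponentially in T
```

Any of the adaptive strategies below fits this loop by replacing the `allocate` rule.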
2. Canonical Algorithms: Sequential Elimination and Allocation Schemes
Sequential elimination algorithms operate in stages, maintaining a set of surviving arms and sequentially eliminating arms based on empirical means. In round $r$, each arm in the surviving set $\mathcal{A}_r$ is sampled up to $n_r$ total times; the empirically worst arms are discarded. The budget constraint enforces
$$\sum_{r} |\mathcal{A}_r|\,(n_r - n_{r-1}) \le T,$$
where $n_0 = 0$ and $|\mathcal{A}_1| = K$.
Nonlinear allocation rules (e.g., "Nonlinear Sequential Elimination" (Shahrampour et al., 2016)) set the per-round sample targets $n_r$ so that the budget dedicated to round $r$ scales nonlinearly, as a power $p$, in the number of remaining arms, and eliminate one arm per round (so $|\mathcal{A}_r| = K - r + 1$). The allocation parameter $p$ is tuned to the number of competitive arms: larger $p$ suits many-competitor regimes, smaller $p$ few-competitor settings. This nonlinearity can remove logarithmic factors in $K$ present in linear or uniform allocation schemes.
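A minimal sketch of an elimination scheme in this spirit, with a tunable nonlinearity parameter `p`; the exact constants and budget accounting of Shahrampour et al. (2016) are not reproduced, so treat the normalizer below as an assumed simplification:

```python
import numpy as np

def nonlinear_sequential_elimination(means, T, p=1.0, rng=None):
    """Eliminate one arm per round; round r's per-arm sample target grows
    like (T - K) / (C_p * (K + 1 - r)^p). Constants and exact budget
    accounting are simplified relative to the cited paper; p = 1 gives a
    Successive Rejects-style linear schedule.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    K = len(means)
    C_p = sum((K + 1 - r) ** (1 - p) for r in range(1, K))  # normalizer
    survivors = list(range(K))
    counts, sums = np.zeros(K, dtype=int), np.zeros(K)
    for r in range(1, K):  # K - 1 elimination rounds
        n_r = int(np.ceil((T - K) / (C_p * (K + 1 - r) ** p)))
        for i in survivors:  # top each survivor up to n_r total pulls
            while counts[i] < n_r:
                sums[i] += rng.normal(means[i], 1.0)
                counts[i] += 1
        worst = min(survivors, key=lambda i: sums[i] / counts[i])
        survivors.remove(worst)  # discard the empirically worst arm
    return survivors[0]
```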
Side-observation models further extend the framework: pulling one arm can reveal outcomes of several arms, yielding improved rates by pooling information across arm groups (Shahrampour et al., 2016).
3. Information-Theoretic Complexity and Lower Bounds
The critical problem-dependent complexity is
$$H = \sum_{i \neq 1} \Delta_i^{-2}, \qquad \Delta_i = \mu_1 - \mu_i,$$
which aggregates the inverse-squared suboptimality gaps.
"Tight (lower) bounds" (Carpentier et al., 2016) demonstrate that, for general $K$-armed stochastic bandits, any algorithm suffers on some instance
$$\mathbb{P}\big(\hat{I}_T \neq 1\big) \;\ge\; \exp\!\left(-\frac{c\,T}{H \log K}\right)$$
for some absolute constant $c$, with a matching upper bound (up to constants) achieved by Successive Rejects and its descendants.
The $\log K$ penalty—absent in the fixed-confidence setting—captures the cost of adaptation when $H$ is unknown, and is essential except for certain narrow complexity regimes. If $H$ is known, algorithms can achieve the fixed-confidence-style rate $\exp(-c\,T/H)$; otherwise, fixed-budget procedures are minimax optimal only up to this log factor.
This adaptation price generalizes to structured bandits (e.g., linear models), with analogous instance-dependent complexity measures, an effective dimension in place of the raw number of arms, and corresponding exponential fixed-budget rates (Yang et al., 2021, Azizi et al., 2021).
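A small numeric illustration of the complexity measure and the log-factor adaptation penalty; the absolute constant is set to 1 purely for illustration:

```python
import math

def complexity_H(means):
    """H = sum over suboptimal arms of Delta_i^{-2}, Delta_i = mu_1 - mu_i."""
    best = max(means)
    return sum((best - m) ** -2 for m in means if m != best)

means = [1.0, 0.8, 0.5, 0.5]
K, T = len(means), 5000
H = complexity_H(means)  # = 25 + 4 + 4 = 33

# Oracle-style rate exp(-T/H) versus the log K adaptation penalty
# exp(-T/(H log K)); constant c set to 1 for illustration only.
oracle = math.exp(-T / H)
adaptive = math.exp(-T / (H * math.log(K)))
print(H, oracle, adaptive)  # the penalized bound is strictly larger
```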
4. Advances in Adaptive Allocation: Optimality and Minimax Rates
Recent years have seen several advances toward minimax-optimal FB-BAI:
- Adaptive Generalized Neyman Allocation (AGNA)/GNA: In the small-gap regime (arms nearly indistinguishable), the minimax exponent is explicitly characterized: for Gaussian rewards, the optimal static allocation $w$ solves
$$\max_{w \in \Delta_{K-1}} \; \min_{i \neq 1} \; \frac{\Delta_i^2}{2\left(\sigma_1^2 / w_1 + \sigma_i^2 / w_i\right)},$$
with challenger sampling fractions proportional to variances (cf. classic Neyman allocation, proportional to standard deviations, for $K = 2$). If variances are unknown, estimating them on the fly combined with augmented inverse probability weighting (AIPW) estimators achieves sharp optimality (Kato, 2024, Kato, 2023).
- Neural and Batched Tracking: Universal tracking algorithms (e.g., Rgo-tracking, DOT) use neural function approximation or batching with delayed allocation to closely follow the minimax lower bound (Komiyama et al., 2022). The associated policies provably achieve the best possible exponent in the fixed-budget regime.
- Best-Feasible-Arm and Structured Settings: For linear bandits with constraints or structure, fixed-budget algorithms leverage G-optimal design, game-theoretic allocation, or two-phase approaches based on support recovery (e.g., Lasso-OD for sparse settings) to achieve minimax exponents depending on the effective dimension or sparsity only (Bian et al., 3 Jun 2025, Yavas et al., 2023, Yang et al., 2021).
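The generalized Neyman allocation from the first bullet above can be sketched as follows; the functional form (presumed-best arm weighted by its standard deviation, challengers splitting the rest in proportion to their variances) is assumed from Kato's papers and should be treated as illustrative:

```python
import math

def generalized_neyman_allocation(sigmas):
    """Candidate small-gap allocation (functional form assumed, not
    verbatim from the cited papers): the presumed-best arm's share
    scales with its standard deviation, challengers split the remainder
    in proportion to their variances. Reduces to classic Neyman
    allocation when K = 2. `sigmas[0]` is the presumed best arm.
    """
    s1, rest = sigmas[0], sigmas[1:]
    v_rest = sum(s * s for s in rest)
    w1 = s1 / (s1 + math.sqrt(v_rest))
    return [w1] + [(1 - w1) * s * s / v_rest for s in rest]

w = generalized_neyman_allocation([2.0, 1.0, 1.0])
print(w)  # sampling fractions sum to 1
```

For two arms with standard deviations 2 and 1, the rule recovers the classic Neyman split 2/3 : 1/3.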
5. Bayesian, Frequentist, and Regret Perspectives
Bayesian FB-BAI considers arms' means drawn from a known prior. Bayesian elimination, adapting successive elimination to the posterior, achieves error bounds dependent on prior sharpness; the Bayes risk decays at an $\exp(-c\sqrt{T})$ rate and matches the corresponding lower bound in two-arm settings (Atsidakou et al., 2022). Recent UCB-type algorithms enhance performance by learning the prior, guaranteeing optimal Bayes risk (Zhu et al., 2024).
A notable negative result is that Bayes-optimal algorithms (minimizing Bayes simple regret via dynamic-program recursion) can be strictly suboptimal for worst-case frequentist regret: in pathological instances, their simple regret decays only polynomially, not exponentially (Komiyama, 2022). In contrast, frequentist algorithms (Successive Rejects, Sequential Halving) guarantee uniform exponential decay for any instance with a unique best arm.
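A sketch of Sequential Halving, one of the frequentist schemes just mentioned with uniform exponential-decay guarantees (Karnin et al.-style; per-phase budget rounding simplified):

```python
import numpy as np

def sequential_halving(means, T, rng=None):
    """Sequential Halving sketch: split the budget into ceil(log2 K)
    phases, sample each surviving arm equally within a phase, and keep
    the empirically better half."""
    if rng is None:
        rng = np.random.default_rng(0)
    survivors = list(range(len(means)))
    phases = max(1, int(np.ceil(np.log2(len(means)))))
    for _ in range(phases):
        if len(survivors) == 1:
            break
        n = max(1, T // (len(survivors) * phases))  # per-arm phase budget
        emp = {i: rng.normal(means[i], 1.0, size=n).mean() for i in survivors}
        survivors.sort(key=lambda i: emp[i], reverse=True)
        survivors = survivors[: (len(survivors) + 1) // 2]  # keep top half
    return survivors[0]

print(sequential_halving([1.0, 0.3, 0.2, 0.1], T=400))
```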
6. Large-Deviation Analysis, Algorithmic Refinements, and Extensions
Recent work establishes large deviation principles under both static and adaptive allocation. For a static allocation $w$ over exponential-family rewards, the optimal error exponent is
$$\Gamma(w) = \min_{i \neq 1} \; \inf_{x} \big[\, w_1\, d(x, \mu_1) + w_i\, d(x, \mu_i) \,\big],$$
where $d(\cdot,\cdot)$ is the Kullback–Leibler divergence between the reward distributions with the given means. Adaptive algorithms (e.g., "Continuous Rejects") leverage empirical-gap-triggered elimination, yielding strictly better exponent guarantees than classical phase-based policies such as Successive Rejects (Wang et al., 2023).
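For Gaussian arms with common variance, the inner infimum in the static exponent has the closed form $\Delta_i^2 / (2\sigma^2(1/w_1 + 1/w_i))$, which makes the exponent easy to evaluate and (crudely) optimize over allocations:

```python
import itertools
import numpy as np

def gaussian_static_exponent(means, w, sigma=1.0):
    """Error exponent of a static allocation w for Gaussian arms with a
    common variance: min over challengers i of
    Delta_i^2 / (2 sigma^2 (1/w_best + 1/w_i))."""
    best = int(np.argmax(means))
    return min(
        (means[best] - m) ** 2 / (2 * sigma ** 2 * (1 / w[best] + 1 / w[i]))
        for i, m in enumerate(means) if i != best
    )

# Crude grid search for the allocation maximizing the exponent.
means = [1.0, 0.5, 0.4]
grid = np.linspace(0.05, 0.9, 18)  # multiples of 0.05
best_w, best_val = None, -1.0
for w1, w2 in itertools.product(grid, grid):
    w3 = 1.0 - w1 - w2
    if w3 <= 0.0:
        continue
    val = gaussian_static_exponent(means, [w1, w2, w3])
    if val > best_val:
        best_w, best_val = (w1, w2, w3), val
print(best_w, best_val)
```

The grid maximizer shifts budget toward the best arm and its closest challenger, beating the uniform allocation's exponent.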
In combinatorial and quantile objectives, tailored FB-BAI algorithms exploit group coding, batch feedback, or quantile-based elimination to extend minimax rates to more complex pure exploration scenarios (2502.01429, Zhang et al., 2020).
7. Summary Table: Core Algorithmic Paradigms
| Algorithm Class | Error Exponent | Key Features |
|---|---|---|
| Uniform / static allocation | $\exp(-c\,T/H)$ with oracle tuning; suboptimal adaptively | Simplicity; log-factor suboptimal in general |
| Successive Rejects (SR) | $\exp\!\big(-T/(\overline{\log}(K)\,H)\big)$ | No knowledge of gaps needed; matches lower bound |
| Nonlinear Elimination (Shahrampour et al., 2016) | Log-free in many regimes with tuned $p$ | One elimination per round; nonlinear budget split |
| GNA, NA-AIPW | Optimal exponent in small-gap Gaussian models | Minimax optimal; variance-adaptive allocations |
| Continuous Rejects, adaptive LD algorithms | Improved exponent over SR/SH | Fully adaptive; local-to-global gap sensitivity |
| Bayesian Elimination (Atsidakou et al., 2022) | $\exp(-c\sqrt{T})$ Bayes error | Prior-aware; better in informed/low-uncertainty regimes |
| Rgo-TNN, DOT (Komiyama et al., 2022) | Achieves oracle exponent | Near-optimal via NN tracking or batching |
| Linear/GLM bandit BAI | Exponent governed by effective dimension/sparsity | G-optimal design; matches minimax rates for structure |
References
- (Shahrampour et al., 2016): General sequential elimination and nonlinear allocation, performance bounds, and side-observation extension.
- (Carpentier et al., 2016): Tight lower bounds for fixed-budget best-arm with adaptation penalty.
- (Atsidakou et al., 2022): Bayesian elimination, finite-budget Bayes error, prior-dependence.
- (Komiyama et al., 2022): Minimax optimal rates, neural/batched tracking, and oracle characterization.
- (Kato, 2024, Kato, 2023): Generalized Neyman allocation, exact local minimax optimality.
- (Wang et al., 2023): Large deviation principles for FB-BAI, adaptive allocation improvements.
- (Yang et al., 2021, Azizi et al., 2021, Yavas et al., 2023): Minimax exponents for (sparse) linear/structured bandits.
- (Komiyama, 2022): Bayes-optimal frequentist suboptimality, dynamic-program impossibility.
- (Zhu et al., 2024): Prior-learning UCB-type Bayesian algorithms, optimal Bayes risk rates.
- (2502.01429): Combinatorial exploration with group-averaged feedback.
- (Zhang et al., 2020): Quantile-based fixed-budget BAI.
These results collectively characterize the statistical and algorithmic landscape of fixed-budget best-arm identification, with rigorous understanding of rate-optimal strategies, adaptation penalties, structured and Bayesian extensions, and the unresolved open problems for general non-Gaussian or non-small-gap regimes.