Bayesian Forest Thompson Sampling
- Bayesian Forest Thompson Sampling is a family of algorithms that combines Bayesian modeling and tree ensembles to enable posterior sampling-based exploration in contextual bandits.
- It leverages methods like BART and random forests to capture complex, non-linear reward structures while providing calibrated uncertainty estimates for effective action selection.
- The approach offers both surrogate Gaussian approximations and full Bayesian inference, delivering strong empirical performance and rigorous theoretical regret bounds.
Bayesian Forest Thompson Sampling (BFTS) refers to a family of algorithms in contextual bandit learning that integrate principled Bayesian modeling with tree ensemble predictors to facilitate posterior sampling-based exploration. BFTS leverages the capacity of tree-based models, such as Bayesian Additive Regression Trees (BART) and random forests, to capture complex, non-linear reward structures, while providing calibrated uncertainty estimates essential for Thompson Sampling (TS). The approach has evolved from surrogate uncertainty heuristics to fully Bayesian posterior inference, establishing both practical efficiency and rigorous theoretical guarantees (Deng et al., 8 Feb 2026, Nilsson et al., 2024, Osband et al., 2015).
1. Formal Problem Setting and Algorithmic Foundations
BFTS operates in the contextual multi-armed bandit (MAB) framework. At each round $t = 1, \dots, T$, the agent observes a context $x_t \in \mathcal{X}$ and selects an arm $a_t \in \{1, \dots, K\}$, receiving a stochastic reward $r_t = f_{a_t}(x_t) + \varepsilon_t$ with zero-mean noise $\varepsilon_t$.
The objective is to minimize cumulative regret
$$R(T) = \sum_{t=1}^{T} \left[ f_{a_t^*}(x_t) - f_{a_t}(x_t) \right],$$
where $a_t^* = \arg\max_a f_a(x_t)$. BFTS algorithms maintain a growing dataset of observed tuples $(x_t, a_t, r_t)$ and fit a Bayesian tree ensemble model for each arm to the available data. Thompson sampling is implemented by drawing a posterior sample (or surrogate) of the reward function and maximizing over arms with these sampled estimates to select actions (Deng et al., 8 Feb 2026, Nilsson et al., 2024).
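The problem setting above can be sketched as a minimal simulation. This is an illustrative toy environment, not taken from either paper: the per-arm reward functions, dimensions, and the placeholder uniform policy are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy contextual bandit: K arms, each with a nonlinear mean reward
# f_a(x) plus Gaussian noise (all quantities here are illustrative).
K, d, T = 3, 5, 200
thetas = rng.normal(size=(K, d))

def f(a, x):
    # Nonlinear per-arm mean reward; any tree-friendly function works.
    return float(np.tanh(thetas[a] @ x))

regret = 0.0
for t in range(T):
    x = rng.normal(size=d)
    means = np.array([f(a, x) for a in range(K)])
    a = int(rng.integers(K))               # placeholder policy (uniform)
    r = means[a] + 0.1 * rng.normal()      # stochastic reward r_t
    regret += means.max() - means[a]       # instantaneous regret vs. oracle

print(f"cumulative regret after {T} rounds: {regret:.2f}")
```

Replacing the uniform placeholder with a posterior-sampling rule turns this loop into a Thompson Sampling agent.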
2. Bayesian Modeling via Tree Ensembles
BFTS encompasses two principal modeling paradigms:
- Surrogate Posteriors over Forests (Heuristic-Bayesian): Tree-ensemble-based BFTS, as in Tree Ensemble Thompson Sampling (TETS), treats each tree output as a noisy sample. For a tree $m$ whose leaf $\ell = \ell_m(x)$ contains the query point $x$, the surrogate posterior is modeled as Gaussian,
$$\tilde{f}_m(x) \sim \mathcal{N}\!\left(\mu_\ell, \, \sigma_\ell^2 / n_\ell\right),$$
producing an ensemble posterior by averaging the per-tree draws,
$$\tilde{f}(x) = \frac{1}{M} \sum_{m=1}^{M} \tilde{f}_m(x),$$
where $\mu_\ell$ and $\sigma_\ell^2$ are the empirical mean and variance in the leaf, $n_\ell$ is the leaf count, and $M$ is the number of trees (Nilsson et al., 2024).
- Fully Bayesian Additive Trees (BART-based): Modern BFTS specifies a full generative model using BART, a sum-of-trees regression prior,
$$f(x) = \sum_{j=1}^{m} g(x; T_j, M_j), \qquad r = f(x) + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \sigma^2),$$
with a depth-geometric prior over tree structure, Dirichlet priors for split-variable selection, quantile-based splitting thresholds, and Gaussian priors for leaf values. For each arm $a$, a BART posterior is constructed and sampled for TS (Deng et al., 8 Feb 2026).
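The surrogate-Gaussian view can be sketched concretely with a scikit-learn random forest. This is a minimal sketch, not the exact TETS recipe: estimating leaf statistics from the full training set (rather than each tree's bootstrap replicate) and the wide-variance fallback for tiny leaves are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

forest = RandomForestRegressor(n_estimators=25, random_state=0).fit(X, y)
leaves_train = forest.apply(X)            # (n_samples, n_trees) leaf ids

def surrogate_sample(x):
    """One Thompson draw: per tree, sample N(mu_leaf, sigma_leaf^2 / n_leaf)."""
    leaves_x = forest.apply(x.reshape(1, -1))[0]
    draws = []
    for m, leaf in enumerate(leaves_x):
        vals = y[leaves_train[:, m] == leaf]   # training targets in this leaf
        mu, n = vals.mean(), len(vals)
        var = vals.var() / n if n > 1 else 1.0  # wide fallback for tiny leaves
        draws.append(rng.normal(mu, np.sqrt(var)))
    return float(np.mean(draws))          # ensemble posterior = mean of draws

sample = surrogate_sample(rng.normal(size=4))
```

Each call to `surrogate_sample` yields one draw from the ensemble surrogate, which is exactly what a TS action-selection step consumes.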
These approaches vary in the strength of their Bayesian calibration and computational complexity. The BART-based approach enables full posterior inference and uncertainty quantification, while the surrogate-Gaussian view provides computational simplicity.
3. Posterior Sampling and Thompson Sampling Algorithms
The core BFTS action-selection mechanism is as follows: for each round $t$,
- For each arm $a$, fit the Bayesian tree model (TETS: XGBoost/random forest; BART: full MCMC posterior).
- For each arm, extract the posterior (or surrogate) mean $\hat{\mu}_a(x_t)$ and uncertainty $\hat{\sigma}_a(x_t)$.
- Draw a sample
$$\tilde{r}_a \sim \mathcal{N}\!\left(\hat{\mu}_a(x_t), \, \alpha^2 \hat{\sigma}_a^2(x_t)\right),$$
where $\alpha$ is an exploration-scale hyperparameter.
- Choose $a_t = \arg\max_a \tilde{r}_a$.
- Update the history with $(x_t, a_t, r_t)$.
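The selection step above can be sketched as a small function. `posterior_stats` is a hypothetical stand-in for whatever a fitted per-arm tree model would supply; only the Gaussian sampling and argmax follow the scheme described here.

```python
import numpy as np

rng = np.random.default_rng(2)

def posterior_stats(arm, x):
    # Placeholder for the arm's fitted tree posterior: returns (mean, std).
    return float(np.tanh(arm - x.sum())), 0.5

def select_arm(x, K, alpha=1.0):
    """One TS round: sample r~_a ~ N(mu_a, (alpha * sigma_a)^2), pick argmax."""
    samples = []
    for a in range(K):
        mu, sigma = posterior_stats(a, x)
        samples.append(rng.normal(mu, alpha * sigma))
    return int(np.argmax(samples))

a_t = select_arm(rng.normal(size=3), K=4)
```

Increasing `alpha` widens the sampling distribution and hence the amount of exploration, matching the role of the exploration-scale hyperparameter.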
The BART-based BFTS performs Markov chain Monte Carlo (MCMC) sampling using a backfitting Gibbs procedure, updating trees and model parameters in batches according to a logarithmic refresh schedule, which ensures sublinear computation in $T$ (Deng et al., 8 Feb 2026). Surrogate-Gaussian-based BFTS (TETS) re-fits forests at each round, exploiting XGBoost's staged predictions for online leaf statistics (Nilsson et al., 2024).
4. Theoretical Guarantees and Statistical Properties
BFTS built on BART admits sharp theoretical guarantees:
- Bayesian Regret Bound: Under the Bayesian design, with the reward functions sampled from the model prior, BFTS achieves a sublinear Bayesian regret bound whose leading constants depend on $K$, the number of arms, and $m$, the number of trees per arm. The information-theoretic analysis follows Russo & Van Roy (2016), bounding the mutual information between the optimal action and the history by the structure and parameter entropy of the BART model (Deng et al., 8 Feb 2026).
- Frequentist Minimax Optimality: The "feel-good" BFTS variant augments the loss with an optimistic bonus and, under a Hölder-smooth reward and a Dirichlet-sparse prior, achieves the minimax-optimal regret rate (up to logarithmic factors), matching known lower bounds for nonparametric contextual bandits (Deng et al., 8 Feb 2026).
- Empirical Uncertainty Calibration: BFTS with BART achieves near-nominal (e.g., 94.4%) credible-interval coverage and low expected calibration error (Deng et al., 8 Feb 2026).
In contrast, surrogate-Gaussian BFTS/TETS borrows its regret intuition from Gaussian TS but lacks a proven finite-time regret bound (Nilsson et al., 2024).
5. Computational Complexity and Practical Implementation
- BART-Based BFTS: The posterior is refit only $O(\log T)$ times over $T$ rounds, with per-refresh MCMC cost scaling in the per-arm dataset size. Wall-clock time for a full run is 30–45 minutes on 4 CPU cores for high-dimensional tabular datasets, and the online per-round decision cost is small because posterior samples are reused between refreshes (Deng et al., 8 Feb 2026).
- TETS (XGBoost/Random Forest): Refits the forest at each round, with total cost growing in the number of trees and their depth; default XGBoost settings use depth 10. Warm-starting and subsampling can lower overhead. In benchmarks, runtime per experiment is roughly 7 hours on CPU, still faster than neural baseline methods (Nilsson et al., 2024).
- Bootstrapped BFTS: Ensemble-based TS via the Bayesian bootstrap re-samples or weights the empirical and artificial/prior data, and may be parallelized trivially. Incremental updating approximates full bootstrap efficiently (Osband et al., 2015).
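The logarithmic refresh idea behind the BART-based complexity figures can be sketched in a few lines. The doubling factor is an assumption for illustration; any constant growth factor yields $O(\log T)$ refits.

```python
# Sketch of a logarithmic refresh schedule: refit the posterior only when
# the round count has grown by a constant factor (assumed factor of 2),
# giving O(log T) refits over T rounds instead of T refits.
def refresh_rounds(T, factor=2):
    rounds, nxt = [], 1
    while nxt <= T:
        rounds.append(nxt)
        nxt *= factor
    return rounds

schedule = refresh_rounds(10_000)
print(len(schedule), "refits for T=10000:", schedule)
```

Between refits, action selection reuses the cached posterior, which is why the per-round decision cost stays small.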
6. Empirical Performance and Benchmark Results
BFTS exhibits state-of-the-art empirical performance on both synthetic and real-world tasks:
- Synthetic Benchmarks: On Friedman-type, linear, and “SynBART” reward functions, BFTS achieves substantially lower cumulative regret than LinTS, LinUCB, NeuralTS, random-forest TS, and XGBoostTS; the gap is largest on the Friedman sparse/disjoint setting, where BFTS's final cumulative regret is a fraction of LinTS's (Deng et al., 8 Feb 2026).
- OpenML/UCI Tabular Data: On Adult, Magic Telescope, Mushroom, Shuttle, and other datasets, BFTS outperforms all baselines on 8/9 tasks. On Mushroom, BFTS attains the lowest final regret, ahead of XGBoostTS, RFTS, and LinTS (Deng et al., 8 Feb 2026, Nilsson et al., 2024).
- Combinatorial Real-World Bandits: On the Luxembourg shortest-path problem, BFTS/TETS attains significantly lower and faster-converging regret than all baselines, while neural methods incur higher variance and much slower runtime unless GPU-accelerated (Nilsson et al., 2024).
- mHealth Applications: Offline policy evaluation on the Drink Less trial with 349 participants indicates a relative increase in engagement rate versus the deployed policy, with BFTS outperforming all other baselines (Deng et al., 8 Feb 2026).
7. Methodological Variants and Extensions
- Bootstrapped Thompson Sampling: Separate from BART-based BFTS, bootstrapped TS implements posterior-like randomization for ensembles via a combination of empirical and artificially generated data, weighted through (classical or Bayesian) bootstrap mechanisms. For forests, each tree is trained on a bootstrap replicate of the full history augmented with prior samples, yielding a distribution over policies for action selection. The artificial data generator and prior strength provide a means to approximate Dirichlet or Beta posterior draws for discrete rewards and can be tuned to modulate exploration behavior. This methodology supports deep exploration without explicit Bayesian posteriors, extends naturally to RL, and is computationally suited for parallelization (Osband et al., 2015).
- Uncertainty Heuristics vs. Full Posteriors: Surrogate-based BFTS approximates uncertainty based on the sampling distribution of leaf statistics in ensemble trees, treating the sum of tree outputs as Gaussian with analytically derived mean and variance. Fully Bayesian BFTS (with BART) instead carries out MCMC posterior inference over all tree parameters. A plausible implication is that the latter delivers more calibrated uncertainty estimates and better supports statistical guarantees for TS (Nilsson et al., 2024, Deng et al., 8 Feb 2026).
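The bootstrapped-TS randomization described above can be sketched as follows. This is a minimal illustration under stated assumptions: exponential weights (Dirichlet up to normalization) implement the Bayesian bootstrap, and the zero-valued prior pseudo-data is a hypothetical choice of artificial data generator.

```python
import numpy as np

rng = np.random.default_rng(3)

# History plus artificial "prior" samples; the prior targets (zeros here)
# pull estimates toward a prior mean and modulate exploration strength.
history_X = rng.normal(size=(100, 4))
history_y = rng.normal(size=100)
prior_X = rng.normal(size=(10, 4))
prior_y = np.zeros(10)

X = np.vstack([history_X, prior_X])
y = np.concatenate([history_y, prior_y])

# Bayesian bootstrap: exponential weights are Dirichlet after normalization.
w = rng.exponential(size=len(y))

# Any weighted learner can consume (X, y, sample_weight=w); each weight
# draw yields one posterior-like policy. A weighted mean stands in here.
w_mean = float(np.average(y, weights=w))
```

Drawing fresh weights per ensemble member (or per round) and fitting one tree on each weighted dataset yields the distribution over policies used for action selection.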
References
- BFTS with BART and theoretical guarantees: "BFTS: Thompson Sampling with Bayesian Additive Regression Trees" (Deng et al., 8 Feb 2026)
- Surrogate Gaussian-ensemble BFTS (TETS): "Tree Ensembles for Contextual Bandits" (Nilsson et al., 2024)
- Bootstrapped TS for deep exploration and Bayesian forest implementation: "Bootstrapped Thompson Sampling and Deep Exploration" (Osband et al., 2015)