Adapting Random-X optimism to tree-based boosting ensembles
Develop an optimism framework for tree-based boosting models under the Random-X setting, in which the covariates are treated as random draws rather than fixed. For boosting ensembles such as gradient boosting and Bayesian additive regression trees, derive tractable expressions or estimators for the expected training–testing discrepancy (optimism) that enable prediction-oriented model selection beyond bagging-style ensembles.
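A baseline that any analytic result would be checked against is a Monte Carlo estimate of Random-X optimism: repeatedly draw a training set (covariates and responses), fit a boosting model, and compare training error to test error on a freshly drawn sample with new covariates. The sketch below does this with a minimal from-scratch gradient boosting of regression stumps under squared loss; the data-generating process (`draw_data`), sample sizes, and boosting hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_stump(X, r):
    """Best single-split regression tree (stump) on residuals r, squared loss."""
    n, p = X.shape
    best_sse, best = np.inf, None
    for j in range(p):
        order = np.argsort(X[:, j])
        xs, rs = X[order, j], r[order]
        csum = np.cumsum(rs)
        sq = np.sum(rs ** 2)
        for i in range(1, n):
            if xs[i] == xs[i - 1]:
                continue
            lmean = csum[i - 1] / i
            rmean = (csum[-1] - csum[i - 1]) / (n - i)
            # SSE after splitting = total SS minus explained by the two means
            sse = sq - i * lmean ** 2 - (n - i) * rmean ** 2
            if sse < best_sse:
                best_sse = sse
                best = (j, 0.5 * (xs[i - 1] + xs[i]), lmean, rmean)
    return best

def stump_predict(stump, X):
    j, thr, lval, rval = stump
    return np.where(X[:, j] <= thr, lval, rval)

def boost_fit(X, y, n_trees=50, lr=0.1):
    """Plain gradient boosting for squared loss: fit stumps to residuals."""
    f0, stumps = y.mean(), []
    pred = np.full(len(y), f0)
    for _ in range(n_trees):
        stump = fit_stump(X, y - pred)
        if stump is None:  # no valid split left
            break
        pred += lr * stump_predict(stump, X)
        stumps.append(stump)
    return f0, stumps, lr

def boost_predict(model, X):
    f0, stumps, lr = model
    pred = np.full(len(X), f0)
    for s in stumps:
        pred += lr * stump_predict(s, X)
    return pred

def draw_data(rng, n, p=3, sigma=0.5):
    # Random-X: the covariates themselves are resampled each replication
    X = rng.normal(size=(n, p))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + sigma * rng.normal(size=n)
    return X, y

def random_x_optimism(n=100, reps=30, seed=0):
    """Monte Carlo estimate of E[test MSE - train MSE] under Random-X."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(reps):
        Xtr, ytr = draw_data(rng, n)
        model = boost_fit(Xtr, ytr)
        train_mse = np.mean((ytr - boost_predict(model, Xtr)) ** 2)
        Xte, yte = draw_data(rng, n)  # fresh X and y: the Random-X setting
        test_mse = np.mean((yte - boost_predict(model, Xte)) ** 2)
        gaps.append(test_mse - train_mse)
    return float(np.mean(gaps))

if __name__ == "__main__":
    print(f"Monte Carlo Random-X optimism estimate: {random_x_optimism():.3f}")
```

The quantity returned by `random_x_optimism` is what a tractable Random-X optimism expression for boosting would need to approximate without access to fresh test draws; a fixed-X variant (reusing `Xtr` with new noise at test time) gives the contrasting classical target.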
References
Furthermore, adapting the optimism framework to tree-based boosting models \citep{rashmi2015dart, linero2018bayesian, friedberg2020local}, particularly for boosting ensembles \citep{buhlmann2007boosting, lv2014model}, in contrast to the bagging-style ensembles discussed here, remains a significant open problem \citep{hill2020bayesian}.
— Asymptotic Optimism for Tensor Regression Models with Applications to Neural Network Compression
(2603.26048 - Shi et al., 27 Mar 2026) in Discussion (final section)