Understanding the boosted decision tree methods with the weak-learner approximation
Abstract: Two popular boosted decision tree (BDT) methods, Adaptive BDT (AdaBDT) and Gradient BDT (GradBDT), are studied in the classification problem of separating signal from background, assuming all trees are weak learners. The following results are obtained. a) The distribution of the BDT score is approximately Gaussian for both methods. b) With more trees in training, the distance between the expectations of the score distributions for signal and background grows, but the variance of both distributions also increases. c) The boosting mechanism in AdaBDT can be extended to any loss function. d) AdaBDT is shown to be equivalent to GradBDT with 2 terminal nodes per decision tree. In the field of high energy physics, many applications pursue the best statistical significance. We also show that maximizing the statistical significance is closely related to minimizing the loss function, which is the target of the BDT algorithms.