Understanding the boosted decision tree methods with the weak-learner approximation

Published 12 Nov 2018 in physics.data-an and hep-ex | (1811.04822v1)

Abstract: Two popular boosted decision tree (BDT) methods, Adaptive BDT (AdaBDT) and Gradient BDT (GradBDT), are studied in the classification problem of separating signal from background under the assumption that all trees are weak learners. The following results are obtained. a) The distribution of the BDT score is approximately Gaussian for both methods. b) With more trees in training, the distance between the expectations of the signal and background score distributions grows, but the variance of both distributions increases at the same time. c) The boosting mechanism in AdaBDT can be extended to an arbitrary loss function. d) AdaBDT is shown to be equivalent to GradBDT with 2 terminal nodes per decision tree. In the field of high-energy physics, many applications pursue the best statistical significance. We also show that maximizing the statistical significance is closely related to minimizing the loss function, which is the target of the BDT algorithms.
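Results a) and b) of the abstract can be illustrated with a toy simulation that is not taken from the paper: if each weak learner is modeled as a vote that is only slightly better than random (a hypothetical `edge` parameter), the BDT score, being a sum over many trees, is approximately Gaussian by the central limit theorem, and the signal-background mean separation grows linearly with the number of trees while the spread grows only like its square root. A minimal sketch under these assumptions:

```python
import random
import statistics

def bdt_scores(n_trees, n_events, edge=0.05, seed=0):
    """Toy model (not the paper's derivation): each weak learner votes
    +1 or -1 and is correct with probability 0.5 + edge on signal
    (0.5 - edge on background). The "BDT score" is the sum of votes,
    which is approximately Gaussian for many trees."""
    rng = random.Random(seed)
    sig = [sum(1 if rng.random() < 0.5 + edge else -1 for _ in range(n_trees))
           for _ in range(n_events)]
    bkg = [sum(1 if rng.random() < 0.5 - edge else -1 for _ in range(n_trees))
           for _ in range(n_events)]
    return sig, bkg

for n_trees in (50, 400):
    sig, bkg = bdt_scores(n_trees, n_events=2000)
    separation = statistics.mean(sig) - statistics.mean(bkg)
    spread = statistics.stdev(sig)
    print(f"trees={n_trees:4d}  mean separation={separation:6.1f}  "
          f"signal stdev={spread:5.1f}")
```

With more trees the mean separation grows (roughly as 4 × edge × n_trees in this toy model) but so does the standard deviation (roughly as the square root of n_trees), mirroring result b).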
