Boosting Decision Tree

What

Boosting Decision Trees is a type of Model Ensemble in which Decision Tree classifiers are trained iteratively, each new tree correcting the errors of the previous ones. All training samples start with equal weights. During training, misclassified samples are given larger weights, so the next Decision Tree focuses specifically on them. Each Decision Tree also receives a score reflecting its classification accuracy; these scores weight each tree's vote in the final aggregated model.
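The reweighting scheme above is the core of AdaBoost. A minimal sketch of it, using scikit-learn decision stumps as the weak learners (function names here are illustrative, not from any particular library):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=10):
    """Minimal AdaBoost sketch. Labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # start with equal weights
    trees, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)       # train on the weighted data
        pred = stump.predict(X)
        err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # this tree's score
        w *= np.exp(-alpha * y * pred)         # upweight the mistakes
        w /= w.sum()                           # renormalize
        trees.append(stump)
        alphas.append(alpha)
    return trees, alphas

def adaboost_predict(trees, alphas, X):
    # Final model: sign of the score-weighted vote of all trees
    score = sum(a * t.predict(X) for t, a in zip(trees, alphas))
    return np.sign(score)
```

Note how `alpha` grows as the weighted error shrinks: accurate trees get more say in the final vote, exactly as described above.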

The main goal of Boosting Decision Trees is to reduce bias, whereas the goal of Bagging is to reduce variance.

Algorithm

Stanford lecture notes on Boosting


Advantages

Allows the use of different loss functions (in the gradient boosting formulation, any differentiable loss can be plugged in).

Caveats

Prone to overfitting; stopping training early (limiting the number of boosting rounds) acts as a regularizer.
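One common way to stop early, sketched here with scikit-learn's built-in early stopping (the dataset and parameter values are arbitrary, for illustration only): hold out part of the training data and stop adding trees once the validation score stops improving.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)

model = GradientBoostingClassifier(
    n_estimators=500,        # upper bound; early stopping usually ends sooner
    validation_fraction=0.2, # hold out 20% of the data for validation
    n_iter_no_change=5,      # stop after 5 rounds without improvement
    random_state=0,
)
model.fit(X, y)
# model.n_estimators_ holds the number of rounds actually trained
```

The fitted `n_estimators_` attribute shows where training stopped, which is typically far below the upper bound.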
