Boosting Decision Trees is a type of Model Ensemble where we iteratively learn better Decision Tree classifiers based on the errors of the previous Tree. All training samples start with equal weights. During training, misclassified samples gain larger weights, so the next Decision Tree focuses specifically on them. Each Decision Tree also receives a score indicating its classification accuracy; these scores are then used to weight each Tree's vote in the final aggregated model.
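The weighting scheme above can be sketched as a minimal AdaBoost-style loop over one-dimensional decision stumps (a one-split tree). The helper names and the exhaustive stump search are illustrative, not from any particular library:

```python
import math

def stump_predict(x, thresh, sign):
    # a decision stump: predict `sign` when x > thresh, else -sign
    return sign if x > thresh else -sign

def best_stump(X, y, w):
    # exhaustive search for the stump with the lowest *weighted* error
    cand = sorted(set(X))
    threshs = [cand[0] - 1] + [(a + b) / 2 for a, b in zip(cand, cand[1:])]
    best = None
    for t in threshs:
        for s in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(xi, t, s) != yi)
            if best is None or err < best[0]:
                best = (err, t, s)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n                  # all samples start with equal weight
    ensemble = []
    for _ in range(rounds):
        err, t, s = best_stump(X, y, w)
        err = max(err, 1e-10)          # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)  # this stump's vote weight
        ensemble.append((alpha, t, s))
        # misclassified samples gain weight, correct ones lose weight
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, t, s))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]       # renormalize to a distribution
    return ensemble

def predict(ensemble, x):
    # final model: weighted vote of all stumps
    score = sum(a * stump_predict(x, t, s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1
```

For example, the labels `[1, -1, -1, 1]` on points `[1, 2, 3, 4]` cannot be separated by any single stump, but three boosting rounds classify all four points correctly.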
The main goal of Boosting Decision Trees is to reduce bias, whereas the main goal of Bagging is to reduce variance.
Allows different Loss Functions to be used.
Prone to overfitting: because each new Tree keeps fitting the remaining errors, later Trees may fit noise. Stopping early (limiting the number of Trees) acts as regularization.
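One common way to stop early is to track error on a held-out validation set after each boosting round and stop once it has not improved for a few rounds. A small sketch, where `early_stop_round` is a hypothetical helper and the error values are made up for illustration:

```python
def early_stop_round(val_errors, patience=3):
    # val_errors[i] = validation error after round i+1 of boosting.
    # Return the (1-based) number of rounds to keep: stop once the
    # error has not improved for `patience` consecutive rounds.
    best_err = float("inf")
    best_round = 0
    for i, err in enumerate(val_errors, start=1):
        if err < best_err:
            best_err, best_round = err, i
        elif i - best_round >= patience:
            break
    return best_round

# validation error falls, then rises as later trees start to overfit
errs = [0.40, 0.31, 0.25, 0.22, 0.24, 0.27, 0.30, 0.33]
print(early_stop_round(errs, patience=3))  # keeps 4 rounds
```

Libraries typically expose the same idea as a parameter (e.g. a patience/early-stopping setting on the booster) rather than a separate helper.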