Correct its predecessor by paying a bit more attention to the training instances that the predecessor underfitted. This results in new predictors focusing more and more on the hard cases.
Pseudocode
Assign observation i the initial weight d_{1,i} = 1/n (equal weights)
For t=1:T
Train the weak learning algorithm using the data weighted by d_{t,i}. This produces weak classifier h_t
Choose coefficient α_t (tells us how good the classifier is at that round)
Error_t = ∑_{i: h_t(x_i) ≠ y_i} d_{t,i} (sum of weights of misclassified points)
α_t = (1/2)·ln((1 − Error_t) / Error_t)
Update weights
d_{t+1,i} = d_{t,i}·exp(−α_t·y_i·h_t(x_i)) / Z_t
Z_t = ∑_{i=1}^n d_{t,i}·exp(−α_t·y_i·h_t(x_i)): normalization factor, so the updated weights sum to 1
If prediction i is correct → y_i·h_t(x_i) = 1 → the weight of observation i is multiplied by exp(−α_t), i.e. decreased (since α_t > 0 whenever Error_t < 1/2)
If prediction i is incorrect → y_i·h_t(x_i) = −1 → the weight of observation i is multiplied by exp(α_t), i.e. increased
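To make the loop above concrete, here is a minimal sketch of the same steps in Python with NumPy. It fills in details the notes leave open, so treat them as assumptions: decision stumps (scikit-learn's DecisionTreeClassifier with max_depth=1) play the role of the weak learner h_t, labels are encoded as -1/+1, T = 20 rounds, and the final prediction is the standard AdaBoost rule sign(∑_t α_t·h_t(x)), which the pseudocode above does not spell out.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T=20):
    """Run T rounds of the AdaBoost loop above; return the stumps and their alphas."""
    n = len(y)
    d = np.full(n, 1.0 / n)            # d_{1,i} = 1/n: equal initial weights
    stumps, alphas = [], []
    for t in range(T):
        # Train the weak learner on data weighted by d_{t,i}
        # (a depth-1 tree, i.e. a decision stump -- an assumed choice, not fixed by the notes).
        h = DecisionTreeClassifier(max_depth=1)
        h.fit(X, y, sample_weight=d)
        pred = h.predict(X)
        # Error_t: sum of the weights of the misclassified points.
        err = d[pred != y].sum()
        if err == 0 or err >= 0.5:     # perfect, or no better than chance: stop early
            break
        # alpha_t = (1/2) * ln((1 - Error_t) / Error_t)
        alpha = 0.5 * np.log((1 - err) / err)
        # d_{t+1,i} = d_{t,i} * exp(-alpha_t * y_i * h_t(x_i)) / Z_t
        d = d * np.exp(-alpha * y * pred)
        d /= d.sum()                   # divide by Z_t so the weights sum to 1
        stumps.append(h)
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    """Final classifier: sign of the alpha-weighted vote of the weak classifiers."""
    votes = sum(a * h.predict(X) for h, a in zip(stumps, alphas))
    return np.sign(votes)

# Usage on a hypothetical toy dataset with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
stumps, alphas = adaboost(X, y, T=20)
print("training accuracy:", (predict(stumps, alphas, X) == y).mean())

The early stop when Error_t ≥ 0.5 reflects that α_t would be ≤ 0 at that point, i.e. the weak classifier is no better than random guessing on the weighted data.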