Boosted Lasso and Reverse Boosting
Report Number: 678
Abstract
This paper introduces the concept of a ``backward'' step, in contrast to forward-moving algorithms such as Boosting and Forward Stagewise Fitting. Like classical elimination methods, this ``backward'' step works by shrinking the model complexity of an ensemble learner. Through an analysis of these steps, we show that the additional backward step is necessary for minimizing the $L_1$ penalized loss (Lasso loss). We then propose the BLasso algorithm, which combines backward and forward steps and is able to produce the complete regularization path for Lasso problems. Moreover, BLasso generalizes to problems with a general convex loss and a general convex penalty.
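To make the mechanics of the abstract concrete: forward steps grow the fit one small coordinate move at a time, as in stagewise boosting, while a backward step shrinks an active coordinate whenever doing so lowers the $L_1$ penalized loss. The following Python sketch illustrates such a forward/backward iteration for squared-error loss. It is an illustration only, not the paper's exact specification; the function name blasso and the parameters eps (step size) and xi (backward-step tolerance) are assumed for this sketch.

import numpy as np


def blasso(X, y, eps=0.01, xi=1e-10, max_steps=1000):
    """Sketch of a BLasso-style forward/backward iteration for
    squared-error loss L(beta) = 0.5 * ||y - X beta||^2.

    A backward step shrinks an active coordinate by eps when that
    lowers the Lasso loss L(beta) + lam * ||beta||_1 by more than xi;
    otherwise a forward step of size eps is taken and lam is relaxed.
    Returns the coefficient path and the accompanying lambda path.
    """
    n, p = X.shape
    L = lambda b: 0.5 * np.sum((y - X @ b) ** 2)

    def best_forward(b):
        # Coordinate/sign pair whose eps-step most reduces the plain loss.
        best_j, best_s, best_val = None, None, np.inf
        for j in range(p):
            for s in (-eps, eps):
                b2 = b.copy()
                b2[j] += s
                v = L(b2)
                if v < best_val:
                    best_j, best_s, best_val = j, s, v
        return best_j, best_s, best_val

    beta = np.zeros(p)
    # Initial forward step fixes the starting value of lambda.
    j, s, val = best_forward(beta)
    lam = (L(beta) - val) / eps
    beta[j] += s
    path, lams = [beta.copy()], [lam]

    for _ in range(max_steps):
        # Candidate backward step: move one active coordinate toward zero.
        active = np.flatnonzero(beta)
        back_j, back_val = None, np.inf
        for j in active:
            b2 = beta.copy()
            b2[j] -= eps * np.sign(b2[j])
            v = L(b2)
            if v < back_val:
                back_j, back_val = j, v

        take_backward = False
        if back_j is not None:
            b2 = beta.copy()
            b2[back_j] -= eps * np.sign(b2[back_j])
            # Backward step only if it strictly improves the Lasso loss.
            take_backward = (back_val + lam * np.abs(b2).sum()
                             < L(beta) + lam * np.abs(beta).sum() - xi)

        if take_backward:
            beta = b2  # backward step at the current lambda
        else:
            # Forward step; decrease lambda if the new step requires it.
            j, s, val = best_forward(beta)
            lam = min(lam, (L(beta) - val) / eps)
            beta[j] += s

        path.append(beta.copy())
        lams.append(lam)
        if lam <= 0:
            break
    return np.array(path), np.array(lams)

A small usage example on synthetic data, tracing the full path:

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=50)
coef_path, lam_path = blasso(X, y)

Because every coefficient is a multiple of eps, a backward step lands exactly on zero rather than overshooting, which is what lets the iteration trace the regularization path from large lambda down toward zero.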