Abstract:
|
Boosting is one of the most powerful and popular tools in machine learning and statistics, and boosting methods work extremely well in a variety of applications. However, little is known about many of the statistical and computational properties of these algorithms, and in particular about their interplay. We analyze boosting algorithms in linear regression from the perspective of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (FS_ε) and least squares boosting (LS-Boost(ε)), can be viewed as subgradient descent to minimize the maximum absolute correlation between the features and the residuals. We also propose a modification of FS_ε that yields an algorithm for the LASSO, and that computes the LASSO path. We derive novel comprehensive computational guarantees for all of these boosting algorithms, which provide, for the first time, a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm with a pre-specified learning rate for a fixed but arbitrary number of iterations, for any dataset.
|
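As a concrete illustration of the incremental forward stagewise algorithm mentioned in the abstract, the sketch below shows one common formulation in Python: at each step, pick the feature most correlated (in absolute value) with the current residual and move its coefficient by a small step ε. The function name, defaults, and the assumption of centered/standardized features are illustrative and not taken from the paper.

import numpy as np

def forward_stagewise(X, y, eps=0.01, n_iters=1000):
    # Minimal sketch of incremental forward stagewise (FS_eps).
    # X: (n, p) feature matrix, assumed centered/standardized; y: (n,) response.
    # These names and defaults are assumptions for illustration only.
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()            # current residual
    for _ in range(n_iters):
        corr = X.T @ r                     # inner products of features with residual
        j = np.argmax(np.abs(corr))        # feature most correlated with residual
        step = eps * np.sign(corr[j])      # small step in the sign of the correlation
        beta[j] += step                    # update the selected coefficient
        r -= step * X[:, j]                # update the residual accordingly
    return beta

A variant in the same spirit (least squares boosting) would instead update the selected coefficient by ε times the univariate least-squares coefficient of the residual on that feature; the maximum absolute entry of corr above is the quantity the abstract refers to as the maximum absolute correlation between features and residuals.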