Activity Number: 551
Type: Contributed
Date/Time: Thursday, August 10, 2006, 10:30 AM to 12:20 PM
Sponsor: Section on Statistical Computing
Abstract - #305648
Title: Adaptive Learning Rate in Stochastic Boosting
Author(s): Mark Culp*+, George Michailidis, and Kjell Johnson
Companies: University of Michigan; University of Michigan; Pfizer Inc.
Address: 1843 Pointe Crossing Street, 201, Ann Arbor, MI 48105
Keywords: stochastic gradient boosting; learning rate; shrinkage; exponential loss

Abstract:
We present the Dynamic Ensemble Machine, an extension of stochastic gradient boosting that relies on an adaptive learning rate and local cross-validated estimates of regularization. Specifically, the adaptive learning rate is formed as the local ratio of the current penalized boosting stage weight to the unpenalized version of that stage weight. We show that the original learning rate can be expressed equivalently as a local ridge penalty under squared error loss (classification). Using the simple form of this parameter, one can obtain k-fold cross-validated estimates of the shrinkage efficiently while the ensemble is being constructed. We further present two specific versions of the adaptive learning rate designed for exponential loss. These penalties provide flexible versions of the adaptive learning rate that depend on the currently selected learner.
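
To make the penalized/unpenalized ratio concrete, here is a minimal sketch of what it could look like under squared error loss. The notation (stage weight beta_m, base learner h_m, ridge penalty lambda_m, rate nu_m) is ours for illustration; the abstract does not fix symbols.

```latex
% Sketch: adaptive learning rate as a penalized-over-unpenalized stage-weight
% ratio under squared error loss. Notation is ours, not the authors'.
\[
  \hat{\beta}_m
    = \arg\min_{\beta} \sum_{i=1}^{n} \bigl(r_{im} - \beta\, h_m(x_i)\bigr)^2
    = \frac{\sum_{i} r_{im}\, h_m(x_i)}{\sum_{i} h_m(x_i)^2},
  \qquad
  \hat{\beta}_m^{(\lambda)}
    = \frac{\sum_{i} r_{im}\, h_m(x_i)}{\sum_{i} h_m(x_i)^2 + \lambda_m},
\]
\[
  \nu_m
    = \frac{\hat{\beta}_m^{(\lambda)}}{\hat{\beta}_m}
    = \frac{\sum_{i} h_m(x_i)^2}{\sum_{i} h_m(x_i)^2 + \lambda_m}.
\]
% Choosing lambda_m proportional to sum_i h_m(x_i)^2 makes nu_m constant,
% which is one way to read the stated equivalence between the original
% fixed learning rate and a local ridge penalty.
```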
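The claim that k-fold cross-validated shrinkage estimates can be obtained while the ensemble is constructed could be realized roughly as below. This is a hypothetical sketch under our own assumptions (squared-error loss, stump base learners, a fixed grid of ridge penalties), not the authors' implementation; all names are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import KFold

def adaptive_rate_boosting(X, y, n_stages=100, subsample=0.5,
                           lambdas=(0.0, 1.0, 10.0, 100.0),
                           n_folds=5, seed=0):
    """Stochastic gradient boosting (squared-error loss) where each stage's
    learning rate is the ratio of a ridge-penalized stage weight to the
    unpenalized one, with the penalty chosen by k-fold CV at that stage.
    Hypothetical sketch; names and defaults are ours, not the paper's."""
    rng = np.random.default_rng(seed)
    n = len(y)
    f0 = float(y.mean())
    F = np.full(n, f0)                      # current ensemble fit
    stages = []
    for _ in range(n_stages):
        r = y - F                           # residuals = negative gradient
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        h = DecisionTreeRegressor(max_depth=1).fit(X[idx], r[idx])
        hx = h.predict(X)
        hh = hx @ hx + 1e-12                # guard against a zero learner
        beta = (hx @ r) / hh                # unpenalized stage weight
        # Pick the ridge penalty by k-fold CV, reusing the fitted learner.
        best_lam, best_err = lambdas[0], np.inf
        folds = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
        for lam in lambdas:
            err = 0.0
            for tr, te in folds.split(X):
                b = (hx[tr] @ r[tr]) / (hx[tr] @ hx[tr] + lam + 1e-12)
                err += np.mean((r[te] - b * hx[te]) ** 2)
            if err < best_err:
                best_lam, best_err = lam, err
        nu = hh / (hh + best_lam)           # adaptive rate: penalized/unpenalized
        F += nu * beta * hx                 # equivalently, add beta_pen * h(x)
        stages.append((h, nu * beta))

    def predict(X_new):
        out = np.full(len(X_new), f0)
        for h, w in stages:
            out += w * h.predict(X_new)
        return out
    return predict
```

Note that `nu` is computed in closed form here only because squared error makes the penalized stage weight available analytically; the exponential-loss versions mentioned in the abstract would require their own stage-weight computations.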