JSM 2004 - Toronto

This is the preliminary program for the 2004 Joint Statistical Meetings in Toronto, Canada. It includes the technical program (the schedule of invited, topic-contributed, regular contributed and poster sessions), Continuing Education courses (August 7-10, 2004), and Committee and Business Meetings.


The views expressed here are those of the individual authors and not necessarily those of the ASA or its board, officers, or staff.


Activity Number: 364
Type: Invited
Date/Time: Wednesday, August 11, 2004 : 2:00 PM to 3:50 PM
Sponsor: Section on Statistical Computing
Abstract - #300050
Title: 1-norm Regularization: Efficient and Effective
Author(s): Saharon Rosset*+ and Ji Zhu
Companies: IBM T.J. Watson Research Center and Stanford University
Address: 1101 Kitchawan Rd., Yorktown Heights, NY 10598
Keywords: l_1 regularization; flexible fitting; robust modeling; boosting; LARS; lasso
Abstract:

We consider the general regularized optimization problem of minimizing loss+penalty, where the loss depends on the data and the model, and the penalty depends on the model only. We illustrate that the choice of the l_1 (lasso) penalty, in combination with an appropriate loss, leads to several desirable properties: (1) Approximate or exact l_1 regularization has given rise to highly successful modeling tools: the lasso, boosting, wavelets and 1-norm support vector machines. (2) l_1 regularization creates sparse models, a property that is especially desirable in high-dimensional predictor spaces; we formulate and prove sparsity results. (3) l_1 regularization facilitates efficient methods for solving the regularized optimization problem; the LARS algorithm takes advantage of this property. We present a general formulation of regularized optimization problems for which such efficient methods can be designed. We show how to create modeling tools that are robust (because of the loss function selected), efficient (because we solve the regularized problem efficiently) and adaptive (because we select the regularization parameter adaptively). (Joint work with Trevor Hastie and Rob Tibshirani.)
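A minimal sketch of the loss+penalty formulation and the sparsity property, assuming the scikit-learn library and synthetic data (the data, parameter values, and library choice are illustrative assumptions, not the authors' implementation): it contrasts an l_1-penalized fit with an l_2-penalized one and traces the lasso coefficient path with LARS.

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge, lars_path

    rng = np.random.default_rng(0)

    # Synthetic high-dimensional problem: 50 observations, 200 predictors,
    # only 5 of which enter the true model (illustrative values).
    n, p, k = 50, 200, 5
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:k] = rng.uniform(1.0, 3.0, size=k)
    y = X @ beta_true + 0.5 * rng.standard_normal(n)

    # Squared-error loss plus an l_1 penalty (the lasso):
    #   minimize (1 / (2n)) * ||y - X b||_2^2 + alpha * ||b||_1
    lasso = Lasso(alpha=0.1).fit(X, y)

    # The same loss with an l_2 penalty (ridge), for contrast.
    ridge = Ridge(alpha=0.1).fit(X, y)

    print("nonzero coefficients, lasso:", int(np.sum(lasso.coef_ != 0)))
    print("nonzero coefficients, ridge:", int(np.sum(ridge.coef_ != 0)))

    # The lasso coefficient path is piecewise linear in the regularization
    # parameter, which is what lets LARS compute the entire path at roughly
    # the cost of a single least-squares fit.
    alphas, _, coef_path = lars_path(X, y, method="lasso")
    print("LARS path breakpoints:", len(alphas))

On data like this the ridge fit leaves essentially every coefficient nonzero while the lasso zeroes most of them out, and the regularization parameter can then be selected adaptively, for example by cross-validation along the computed path.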


  • The address information is for authors who have a + after their name.
  • Authors who are presenting talks have a * after their name.


JSM 2004: For information, contact jsm@amstat.org or phone (888) 231-3473. If you have questions about the Continuing Education program, please contact the Education Department.
Revised March 2004