Abstract #301991

This is the preliminary program for the 2003 Joint Statistical Meetings in San Francisco, California. It currently includes the technical program (the schedule of invited, topic-contributed, regular contributed, and poster sessions), Continuing Education courses (August 2-5, 2003), and committee and business meetings. This online program will be updated frequently to reflect the most current revisions.


The views expressed here are those of the individual authors
and not necessarily those of the ASA or its board, officers, or staff.


JSM 2003 Abstract #301991
Activity Number: 89
Type: Contributed
Date/Time: Monday, August 4, 2003, 8:30 AM to 10:20 AM
Sponsor: Section on Statistical Computing
Abstract - #301991
Title: Sparse Bayesian Classifiers for Text Categorization
Author(s): Susana Eyheramendy*+ and David Madigan
Companies: Rutgers University and
Address: 14 Redcliffe Ave., Highland Park, NJ 08904-1641
Keywords: regularization; EM algorithm; sparse classifiers
Abstract:

Text categorization algorithms assign texts to predefined categories. The study of such algorithms has a rich history dating back at least 40 years. In the last decade or so, the statistical approach has dominated the literature. The essential idea is to infer a text categorization algorithm from a set of labeled documents, i.e., documents with known category assignments, where each document is represented by a feature vector. Standard statistical classification tools such as Naïve Bayes, logistic regression, and decision trees are immediately relevant and have been used with some success. So-called sparse classifiers, where irrelevant parameters are set to zero, and regularized linear classifiers, where parameters are shrunk toward zero, have achieved state-of-the-art performance in recently published studies. Support vector machines (SVMs) are especially popular but have a number of drawbacks. In particular, SVM predictions are not probabilistic.
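The general idea sketched in the abstract can be illustrated with a minimal example. The code below is not the authors' method; it is a sketch of one standard route to a sparse linear classifier: L1-regularized logistic regression fit by proximal gradient descent (soft-thresholding), on an invented toy bag-of-words corpus. The vocabulary, documents, and step-size settings are all assumptions made for illustration; the point is that the L1 penalty drives the weight of an uninformative word exactly to zero.

```python
# A minimal sketch (not the abstract's specific method): L1-regularized
# logistic regression trained by proximal gradient descent (ISTA),
# showing how regularization toward zero yields a sparse text classifier.
# The toy corpus and vocabulary below are invented for illustration.
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of the L1 penalty: shrinks weights toward zero
    # and sets small ones exactly to zero (the source of sparsity).
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def fit_l1_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    # Minimize mean logistic loss + lam * ||w||_1 by ISTA:
    # gradient step on the loss, then soft-threshold by lr * lam.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted P(y = 1)
        grad = X.T @ (p - y) / n                 # gradient of the log-loss
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

# Tiny invented corpus: bag-of-words counts over a 5-word vocabulary.
# "the" occurs equally in both classes, so it carries no class signal.
vocab = ["goal", "match", "stock", "market", "the"]
X = np.array([
    [2, 1, 0, 0, 1],   # sports  (y = 0)
    [1, 2, 0, 0, 1],   # sports  (y = 0)
    [0, 0, 2, 1, 1],   # finance (y = 1)
    [0, 0, 1, 2, 1],   # finance (y = 1)
], dtype=float)
y = np.array([0, 0, 1, 1], dtype=float)

w = fit_l1_logistic(X, y)
print("weights:", dict(zip(vocab, np.round(w, 3))))
print("zeroed features:", [v for v, wi in zip(vocab, w) if wi == 0.0])
```

Unlike an SVM, the fitted linear score feeds through a sigmoid, so the classifier's outputs are probabilities; and unlike plain (unregularized) logistic regression, the uninformative feature is dropped exactly, not merely shrunk.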


  • Address information is given for authors with a + after their name.
  • Authors who are presenting talks have a * after their name.


JSM 2003 For information, contact meetings@amstat.org or phone (703) 684-1221. If you have questions about the Continuing Education program, please contact the Education Department.
Revised March 2003