Abstract Details

Activity Number: 291
Type: Topic Contributed
Date/Time: Tuesday, August 6, 2013, 8:30 AM to 10:20 AM
Sponsor: Section on Statistical Learning and Data Mining
Abstract: #309289
Title: When Is the Majority-Vote Classifier Beneficial?
Author(s): Mu Zhu*+
Companies: University of Waterloo
Keywords: bagging; boosting; ensemble learning; phase transition; random forest; weak learner

Abstract:
In his seminal paper, Schapire (1990) proved that weak learning algorithms--ones that perform only slightly better than random guessing--can be turned into ones capable of achieving arbitrarily high accuracy. His proof, however, does not imply that one can always do so with a simple majority-vote mechanism. This is a common misconception, fueled partly by an incomplete understanding of Breiman's influential algorithms (e.g., bagging and random forest), which do indeed use the majority-vote mechanism, and partly by the popular lessons drawn from the Netflix contest (2006-2009), which testify to the wisdom of crowds. An elementary analysis shows that, for binary classification with equal prior probabilities, the weak classifiers must have a true positive rate of at least 50% and a false positive rate of at most 50% for the majority-vote mechanism to be beneficial, even under fairly ideal circumstances.
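The threshold condition in the abstract can be illustrated numerically. The sketch below is not the paper's own analysis; it assumes the idealized setting of conditionally independent voters with a common true positive rate and false positive rate, and computes the exact accuracy of a majority vote via the binomial distribution. A classifier with TPR 90% and FPR 60% is better than chance on its own (accuracy 65% under equal priors), yet because its FPR exceeds 50%, majority voting drives the ensemble's accuracy back toward 50%.

```python
from math import comb

def majority_prob(p, n):
    """P(more than half of n independent Bernoulli(p) votes are 1), n odd."""
    k0 = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))

def ensemble_accuracy(tpr, fpr, n):
    """Accuracy of a majority vote of n conditionally independent classifiers,
    assuming equal class priors, so accuracy = (TPR + TNR) / 2."""
    return 0.5 * (majority_prob(tpr, n) + (1 - majority_prob(fpr, n)))

# Case 1: TPR > 50% and FPR < 50% -- voting helps.
# A single classifier has accuracy 0.60; 101 voters reach roughly 0.98.
acc_good = ensemble_accuracy(0.60, 0.40, 101)

# Case 2: above-chance accuracy, but FPR > 50% -- voting backfires.
# A single classifier has accuracy 0.65; 101 voters fall back near 0.50,
# because a majority votes "positive" on negatives with probability -> 1.
acc_bad = ensemble_accuracy(0.90, 0.60, 101)
```

The contrast between the two cases mirrors the abstract's claim: beating chance overall is not enough; both conditional rates must individually clear the 50% threshold for the vote to amplify accuracy.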
Authors who are presenting talks have a * after their name.
2013 JSM Online Program Home