Activity Number: 374

Type: Contributed

Date/Time: Tuesday, August 4, 2009: 2:00 PM to 3:50 PM

Sponsor: Section on Bayesian Statistical Science

Abstract - #303657

Title: Bayesian Adaptive Ensemble Learning

Author(s): Sounak Chakraborty*+

Companies: University of Missouri-Columbia

Address: 209F Middlebush Hall, Columbia, MO 65211

Keywords: ensemble learning; Bayesian model; data mining; regression; classification; neural network

Abstract:
Bayesian ensemble learning, also known as the Bayesian additive regression tree (Chipman et al., 2005), is pioneering work in adopting the philosophy of "slow learners" under a Bayesian setup. In this paper we describe two Bayesian ensemble methods for classification and regression. The first is our Bayesian adaptive ensemble tree for multi-class classification. The number of trees is not fixed; instead, we leave it free by placing a prior distribution on it, so the number of trees required to fit the model is selected adaptively, which avoids overfitting. The second model is based on a neural network architecture: from an ensemble of small neural networks we develop our Bayesian ensemble network for regression, where the number of networks is likewise selected adaptively through a prior. The success of both methods is demonstrated on simulated and real data sets.
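The common idea in both models is to treat the ensemble size itself as unknown, place a prior on it, and let the data determine how many weak learners are needed. The sketch below is a minimal illustration of that idea only, not the authors' method: it uses regression stumps as weak learners, a Poisson(LAMBDA) prior on the ensemble size, and a simple stochastic birth/death search over ensembles (not a full reversible-jump sampler). The toy data, SIGMA2, LAMBDA, and fit_stump are all illustrative assumptions.

```python
# Illustrative sketch: adaptive ensemble size via a prior on the number of
# weak learners. Not the authors' implementation; all settings are assumed.

import numpy as np

rng = np.random.default_rng(0)

# --- toy regression data (assumed for illustration) ---
n = 200
X = rng.uniform(-3, 3, size=n)
y = np.sin(X) + rng.normal(scale=0.3, size=n)

SIGMA2 = 0.3 ** 2   # assumed known noise variance
LAMBDA = 5.0        # Poisson prior rate on the number of weak learners K

def fit_stump(x, resid):
    """Fit a depth-1 regression tree (stump) to the residuals by grid search."""
    best = None
    for s in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        left, right = resid[x <= s], resid[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    _, s, ml, mr = best
    return lambda xv: np.where(xv <= s, ml, mr)

def log_posterior(preds, k):
    """Gaussian log-likelihood of the fit plus Poisson(LAMBDA) log-prior on K
    (constants that cancel in ratios are dropped)."""
    loglik = -0.5 * ((y - preds) ** 2).sum() / SIGMA2
    logprior = k * np.log(LAMBDA) - np.sum(np.log(np.arange(1, k + 1)))
    return loglik + logprior

# --- stochastic birth/death search over the ensemble size ---
ensemble, preds = [], np.zeros(n)
for it in range(400):
    if ensemble and rng.random() < 0.5:
        # death move: propose dropping a randomly chosen weak learner
        j = rng.integers(len(ensemble))
        prop = preds - ensemble[j](X)
        if np.log(rng.random()) < (log_posterior(prop, len(ensemble) - 1)
                                   - log_posterior(preds, len(ensemble))):
            preds = prop
            ensemble.pop(j)
    else:
        # birth move: propose adding a stump fit to the current residuals
        stump = fit_stump(X, y - preds)
        prop = preds + stump(X)
        if np.log(rng.random()) < (log_posterior(prop, len(ensemble) + 1)
                                   - log_posterior(preds, len(ensemble))):
            preds = prop
            ensemble.append(stump)

print("adaptively selected ensemble size:", len(ensemble))
print("in-sample RMSE:", np.sqrt(np.mean((y - preds) ** 2)))
```

The Poisson prior penalizes each additional weak learner, so the accepted ensemble size balances fit against complexity; that trade-off is what makes the size selection adaptive rather than fixed in advance.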