JSM 2004 - Toronto

Abstract #301833

This is the preliminary program for the 2004 Joint Statistical Meetings in Toronto, Canada. It currently includes the technical program (the schedule of invited, topic-contributed, regular contributed, and poster sessions), the Continuing Education courses (August 7-10, 2004), and committee and business meetings. This online program will be updated frequently to reflect the most current revisions.


The views expressed here are those of the individual authors and not necessarily those of the ASA or its board, officers, or staff.





Activity Number: 378
Type: Contributed
Date/Time: Wednesday, August 11, 2004 : 2:00 PM to 3:50 PM
Sponsor: General Methodology
Abstract - #301833
Title: Consistency and Generalization Error Bound of Neural Network Models
Author(s): Indrias G. Berhane*+
Companies: University of Kentucky
Address: 4070 Victoria Way Apt. 74, Lexington, KY 40515
Keywords: generalization error; empirical mean square error; uniform stability; smoothing regularization
Abstract:

Estimation using flexible models based on finite, noisy datasets is a challenging task: the main concern in using flexible models is the possibility of overfitting the training data. Feed-forward neural networks are a class of approximating functions with universal approximation capability, and for a class of functions with known approximation capabilities the main concern is performance on future datasets. In real-world applications, training samples are often limited and noisy. Since it is possible to find a neural network model that minimizes the empirical error yet fails to minimize the generalization error, we restrict our focus to stable models, using smoothing regularization as a stabilizer. We provide an upper bound on the generalization error of a neural network estimated with smoothing regularization. We also prove that, with high probability, the generalization error of the neural network estimate converges to the minimum achievable by the class as the training sample size increases. Our numerical experiments show that neural network models estimated with smoothing regularization give similar or smaller prediction errors.
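For context on the stability argument the abstract invokes: a standard uniform-stability generalization bound (in the style of Bousquet and Elisseeff, and not necessarily the exact bound proved in this paper) states that if an algorithm trained on n samples is beta-uniformly stable and the loss is bounded by M, then with probability at least 1 - delta,

    R(f) <= R_emp(f) + 2*beta + (4*n*beta + M) * sqrt(ln(1/delta) / (2*n)),

where R is the generalization (expected) error and R_emp the empirical error. The sketch below is a hypothetical illustration of the setup, not the authors' method: it fits a one-hidden-layer feed-forward network by gradient descent on the empirical mean square error plus an L2 weight penalty (standing in for the paper's smoothing regularizer) and compares empirical to held-out MSE. The function names, the toy sine target, and the penalty choice are all assumptions for illustration.

    # Sketch: one-hidden-layer network with an L2 weight penalty as a
    # stand-in for the abstract's smoothing regularizer; compares
    # empirical (training) MSE against held-out MSE. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    def net(params, x):
        """f(x) = w2 . tanh(x W1 + b1) + b2 (one hidden layer of tanh units)."""
        W1, b1, w2, b2 = params
        return np.tanh(x @ W1 + b1) @ w2 + b2

    def mse(params, x, y):
        r = net(params, x) - y
        return np.mean(r ** 2)

    def fit(x, y, hidden=10, lam=1e-3, lr=0.05, steps=5000):
        """Gradient descent on empirical MSE + lam * (||W1||^2 + ||w2||^2)."""
        W1 = rng.normal(scale=0.5, size=(1, hidden))
        b1 = np.zeros(hidden)
        w2 = rng.normal(scale=0.5, size=hidden)
        b2 = 0.0
        n = len(x)
        for _ in range(steps):
            h = np.tanh(x @ W1 + b1)             # hidden activations, (n, hidden)
            pred = h @ w2 + b2
            g = 2.0 * (pred - y) / n             # dMSE/dpred, shape (n,)
            gw2 = h.T @ g + 2 * lam * w2         # output weights + penalty grad
            gb2 = g.sum()
            gh = np.outer(g, w2) * (1 - h ** 2)  # backprop through tanh
            gW1 = x.T @ gh + 2 * lam * W1        # input weights + penalty grad
            gb1 = gh.sum(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1
            w2 -= lr * gw2; b2 -= lr * gb2
        return (W1, b1, w2, b2)

    def sample(n):
        """Noisy observations of a toy target (assumed here for illustration)."""
        x = rng.uniform(-3, 3, size=(n, 1))
        y = np.sin(x[:, 0]) + rng.normal(scale=0.3, size=n)
        return x, y

    x_tr, y_tr = sample(50)      # small, noisy training sample
    x_te, y_te = sample(5000)    # independent sample to estimate generalization error

    for lam in [0.0, 1e-3, 1e-1]:
        p = fit(x_tr, y_tr, lam=lam)
        print(f"lam={lam:g}  empirical MSE={mse(p, x_tr, y_tr):.3f}  "
              f"held-out MSE={mse(p, x_te, y_te):.3f}")

With settings like these, a nonzero penalty typically narrows the gap between empirical and held-out MSE on the small training sample, which is the qualitative behavior the abstract reports for smoothing-regularized estimates.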


  • Address information is given for authors with a + after their name.
  • Authors who are presenting talks have a * after their name.


JSM 2004: For information, contact jsm@amstat.org or phone (888) 231-3473. If you have questions about the Continuing Education program, please contact the Education Department.
Revised March 2004