Abstract Details

Activity Number: 642 - Advanced Statistical Methods for Large Data Sets
Type: Topic Contributed
Date/Time: Thursday, August 1, 2019, 10:30 AM to 12:20 PM
Sponsor: Social Statistics Section
Abstract #302988
Title: Distributed Learning with Minimum Error Entropy Principle
Author(s): Xin Guo* and Ting Hu and Qiang Wu
Companies: The Hong Kong Polytechnic University and Wuhan University and Middle Tennessee State University
Keywords: Information theoretic learning; minimum error entropy; distributed method; semi-supervised data; reproducing kernel Hilbert space
Abstract:

The Minimum Error Entropy (MEE) principle is an important approach in Information Theoretic Learning (ITL). It is widely studied and applied in various fields for its robustness to noise. In this paper, we study a reproducing kernel-based distributed MEE algorithm, DMEE, which is designed to work with both fully supervised and semi-supervised data. With fully supervised data, the learning rates we prove match the minimax optimal learning rates of classical pointwise kernel-based regression. In the semi-supervised setting, we show that DMEE exploits unlabeled data effectively in two senses. First, under weaker regularity assumptions, additional unlabeled data significantly improves the learning rates of DMEE. Second, with sufficient unlabeled data, the labeled data can be distributed across many more computing nodes, so that each node handles only O(1) labels, without degrading the learning rates in terms of the number of labels.
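For concreteness, below is a minimal sketch of the divide-and-conquer structure the abstract describes. It is an illustration under stated assumptions, not the authors' DMEE algorithm or analysis: minimizing the error entropy is implemented here as gradient ascent on the empirical information potential V = (1/n^2) sum_{i,j} G_sigma(e_i - e_j), where e_i = y_i - f(x_i), G_sigma is a Gaussian Parzen window, and f lives in a Gaussian reproducing kernel Hilbert space; each node fits its own labeled partition and the global estimator averages the local ones. All names and hyperparameters (gaussian_kernel, local_mee_fit, sigma, lam, lr) are illustrative assumptions, and the semi-supervised use of unlabeled data is omitted.

import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    # K(x, z) = exp(-gamma * ||x - z||^2); the RKHS kernel.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def local_mee_fit(X, y, gamma=1.0, sigma=0.5, lam=1e-3, lr=0.1, steps=500):
    # One node: fit f(x) = sum_i alpha_i K(x_i, x) by gradient ascent on the
    # empirical information potential of the errors e = y - K @ alpha, with a
    # small RKHS-norm penalty lam * alpha' K alpha for numerical stability.
    n = len(y)
    K = gaussian_kernel(X, X, gamma)
    alpha = np.zeros(n)
    for _ in range(steps):
        e = y - K @ alpha
        diff = e[:, None] - e[None, :]                     # e_i - e_j
        G = np.exp(-diff ** 2 / (2 * sigma ** 2))          # Parzen window
        dV_de = -(2.0 / (n ** 2 * sigma ** 2)) * (G * diff).sum(axis=1)
        grad = -K @ dV_de - 2.0 * lam * (K @ alpha)        # chain rule: de/dalpha = -K
        alpha += lr * grad
    # MEE determines f only up to an additive constant; recenter the errors.
    b = float(np.mean(y - K @ alpha))
    return alpha, b

def dmee_predict(X_test, parts, gamma=1.0, **fit_kw):
    # Distributed estimator: average the local RKHS estimators over all nodes.
    preds = []
    for X_j, y_j in parts:
        alpha, b = local_mee_fit(X_j, y_j, gamma=gamma, **fit_kw)
        preds.append(gaussian_kernel(X_test, X_j, gamma) @ alpha + b)
    return np.mean(preds, axis=0)

# Toy usage: 4 "nodes", each holding a quarter of a noisy sine sample.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
parts = [(X[i::4], y[i::4]) for i in range(4)]
X_test = np.linspace(-1.0, 1.0, 5)[:, None]
print(dmee_predict(X_test, parts, gamma=5.0))

Averaging local estimators is the standard divide-and-conquer device in distributed kernel regression; the abstract's second claim corresponds to the regime where the number of partitions grows until each node holds only O(1) labeled points.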


Authors who are presenting talks have a * after their name.
