Abstract Details

Activity Number: 228 - Interpreting Machine Learning Models: Opportunities, Challenges, and Applications
Type: Topic Contributed
Date/Time: Monday, July 29, 2019 : 2:00 PM to 3:50 PM
Sponsor: Section on Statistical Learning and Data Science
Abstract #: 304321
Title: Increasing Trust and Interpretability in Machine Learning with Model Debugging
Author(s): Patrick Hall*
Companies: H2O.ai
Keywords: Machine Learning; Interpretability; Debugging; Residual Analysis; FATML; XAI
Abstract:

Prediction by machine learning models is fundamentally the execution of a computer program, one whose rules are learned by the computer itself from training data instead of being programmed by a human. Like all good programs, machine learning models should be debugged to discover and remediate errors. When debugging increases accuracy on holdout data, increases transparency into model mechanisms, identifies or decreases hackable attack surfaces, or decreases disparate impact, it also enhances trust in, and the interpretability of, model mechanisms and predictions. This text discusses several standard techniques in the context of model debugging: disparate impact, residual, and sensitivity analysis. It also introduces novel applications, such as global and local explanation of model residuals.
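As a minimal sketch of two of the techniques named above, the code below explains a black-box model's holdout residuals globally with an interpretable surrogate decision tree, then runs a simple sensitivity analysis by perturbing one input and measuring prediction drift. It assumes a generic scikit-learn workflow; the synthetic data, feature names, and model choice are hypothetical stand-ins, not taken from the talk itself.

# Sketch: (1) global explanation of model residuals via a surrogate
# decision tree, and (2) a one-feature sensitivity analysis.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=2000, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black-box" model to be debugged.
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Residuals on holdout data: the quantity to be explained.
residuals = y_test - model.predict(X_test)

# Global residual explanation: a shallow surrogate tree fit to the
# residuals; its splits read as rules describing where errors are largest.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_test, residuals)
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))

# Sensitivity analysis: shift one feature by a standard deviation and
# observe how much the model's predictions move in response.
X_perturbed = X_test.copy()
X_perturbed[:, 0] += X_test[:, 0].std()
drift = model.predict(X_perturbed) - model.predict(X_test)
print("Mean prediction shift from +1 sd in x0:", drift.mean())

A structured signal in the surrogate tree suggests the model is systematically mispredicting in a describable region of the input space, which is the kind of error this debugging process is meant to surface.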


Authors who are presenting talks have a * after their name.
