
Abstract Details

Activity Number: 577 - Statistical Methods for Interpreting Machine Learning Algorithms - with Implications for Targeting
Type: Topic Contributed
Date/Time: Wednesday, August 1, 2018 : 2:00 PM to 3:50 PM
Sponsor: Section on Statistical Learning and Data Science
Abstract #329539
Title: Black-Box Model Explanations: a Study of the Good, the Bad, and the Ugly
Author(s): Patrick Hall*
Companies: H2O.ai
Keywords: Machine Learning ; Interpretability ; Explanation ; GBM ; Transparency
Abstract:

While understanding and trusting models and their results is a hallmark of good (data) science, model interpretability is a legal mandate in the regulated verticals of many major industries. Moreover, scientists, physicians, researchers, analysts, and people in general need to understand and trust the models and modeling results that affect their work and their lives. Today many organizations and individuals are embracing machine learning, but what happens when people need to explain these impactful, complex technologies to one another, or when these technologies inevitably make mistakes? This paper analyzes several debugging and explanatory approaches beyond the error measures and assessment plots typically used to interpret machine learning models. These approaches, which include individual conditional expectation (ICE) plots, LIME, leave-one-covariate-out (LOCO), surrogate models, and Shapley explanations, vary in scope (i.e., global vs. local), in the exactness of the generated explanations, and in their suitable application domains. Along with descriptions of the techniques, findings regarding explanation trustworthiness and practical guidance for usage are also presented.
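To illustrate one of the local techniques named above, here is a minimal sketch of the ICE computation: for a single observation, one feature is swept across a grid of values while all other features are held fixed, and the model's prediction is recorded at each grid point. The `model` below is a toy stand-in for a fitted model's predict function, not anything from the paper.

```python
import numpy as np

def ice_curve(model, X, row, feature, grid):
    """Individual conditional expectation for one observation:
    vary `feature` over `grid`, holding all other features fixed,
    and return the model's prediction at each grid value."""
    x = np.array(X[row], dtype=float)
    preds = []
    for v in grid:
        x_mod = x.copy()
        x_mod[feature] = v          # replace only the feature of interest
        preds.append(model(x_mod))  # query the black-box model
    return np.array(preds)

# Toy stand-in model (in practice, a fitted GBM's predict function)
model = lambda x: x[0] ** 2 + 0.5 * x[1]

X = np.array([[1.0, 2.0], [3.0, 4.0]])
grid = np.linspace(0.0, 2.0, 5)
curve = ice_curve(model, X, row=0, feature=0, grid=grid)
```

Plotting one such curve per observation (rather than averaging them, as a partial dependence plot does) is what makes ICE a local explanation: heterogeneous curves reveal interactions that the averaged plot would hide.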


Authors who are presenting talks have a * after their name.

Back to the full JSM 2018 program