
All Times EDT

Abstract Details

Activity Number: 309 - Interface Between Machine Learning and Uncertainty Quantification
Type: Topic Contributed
Date/Time: Wednesday, August 5, 2020 : 10:00 AM to 11:50 AM
Sponsor: Uncertainty Quantification in Complex Systems Interest Group
Abstract #314081
Title: Quantifying Model Transfer Uncertainties Using Post Hoc Explainability in Deep Learning Models
Author(s): Evangelina Brayfindley* and Thomas Grimes
Companies: Pacific Northwest National Laboratory and Pacific Northwest National Laboratory
Keywords: model transfer; machine learning; uncertainty quantification; deep learning; explainability; interpretability

In this work, we explore the connection between model explainability and model transferability using electrical signal spectrogram data. The transferability of machine learning (ML) models in real-world scientific applications depends on understanding how physical features map to the model features important in classification, and whether that feature usage changes with new datasets. Transferability is frequently characterized with train/test set accuracies, cross-validation, and similar measures, but these approaches do not necessarily provide understandable, data-driven reasons as to where and why a model is (or is not) transferable. Here, we take a two-pronged approach that keeps explainability at the forefront. By first training a novel convolutional neural network architecture under data-driven constraints and subsequently applying a post hoc explainability technique, Local Interpretable Model-agnostic Explanations (LIME), we can track model feature usage and compare this usage across datasets by building metrics that quantify domain shift. Initial experimental results using this technique will be presented.
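To make the approach concrete, the following is a minimal sketch of the two ingredients the abstract describes: a LIME-style local surrogate (perturb interpretable segments of an input, query the black-box model, and fit a linear model to recover per-segment importance) and a simple metric comparing explanation weights across datasets as a proxy for domain shift. All function names, the segment-masking scheme, and the cosine-distance metric are illustrative assumptions, not the authors' actual implementation; a toy classifier stands in for the convolutional network.

```python
import numpy as np

def lime_style_explanation(x, predict_fn, n_segments=8, n_samples=500, seed=0):
    """LIME-style local surrogate for a 1-D signal.

    Perturbs the input by zeroing out random subsets of contiguous
    segments, queries the black-box predict_fn on each perturbation,
    and fits a linear model whose coefficients serve as per-segment
    importance weights.
    """
    rng = np.random.default_rng(seed)
    seg_len = len(x) // n_segments
    # Binary masks: 1 keeps a segment, 0 zeroes it out ("removes" it).
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    preds = np.empty(n_samples)
    for i, mask in enumerate(masks):
        xp = x.copy()
        for s in range(n_segments):
            if mask[s] == 0:
                xp[s * seg_len:(s + 1) * seg_len] = 0.0
        preds[i] = predict_fn(xp)
    # Least-squares linear surrogate: masks (plus intercept) -> predictions.
    design = np.hstack([masks, np.ones((n_samples, 1))])
    coefs, *_ = np.linalg.lstsq(design, preds, rcond=None)
    return coefs[:n_segments]  # per-segment importance weights

def domain_shift(weights_a, weights_b):
    """Cosine distance between explanation weights from two datasets:
    0 means identical feature usage, larger values mean greater shift."""
    cos = np.dot(weights_a, weights_b) / (
        np.linalg.norm(weights_a) * np.linalg.norm(weights_b))
    return 1.0 - cos
```

In this toy setting, a classifier that only reads the first segment of the signal yields explanation weights concentrated on segment 0, and comparing weight vectors from two datasets with `domain_shift` gives a single number tracking whether feature usage has moved.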

Authors who are presenting talks have a * after their name.
