Abstract Details

Activity Number: 63 - Inference and Interpretability in a Model-Free Setting
Type: Invited
Date/Time: Monday, August 9, 2021, 10:00 AM to 11:50 AM EDT
Sponsor: IMS
Abstract #316964
Title: Interpreting deep neural networks in a transformed domain
Author(s): Wooseok Ha*
Companies: UC Berkeley
Keywords: Interpretable machine learning; Fourier domain; Wavelet transform; Distillation; Cosmological parameter inference; Feature importance
Abstract:

Machine learning lies at the heart of new possibilities for scientific discovery, knowledge generation, and artificial intelligence. Realizing its potential benefits in these fields requires going beyond predictive accuracy and focusing on interpretability. In particular, many scientific problems call for interpretations in a domain-specific, interpretable feature space (e.g., the frequency or wavelet domain), whereas attributions to the raw features (e.g., the pixel space) may be unintelligible or even misleading. To address this challenge, we propose TRIM (Transformation Importance), a novel approach that attributes importance to features in a transformed space and can be applied post hoc to a fully trained model. We focus on a problem in cosmology, where it is crucial to interpret how a model trained on simulations predicts fundamental cosmological parameters. Building on TRIM, we then introduce adaptive wavelet distillation (AWD), a method that distills information from a trained neural network into a wavelet transform. We show that AWD identifies predictive features that are scientifically meaningful in the context of cosmological parameter inference.
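
The abstract itself contains no code; the sketch below is only a hypothetical illustration of the general idea of attributing importance in a transformed domain (here the Fourier domain), not the authors' TRIM implementation. It assumes a generic trained model exposed as a predict function (the toy model_predict below is made up for demonstration): each Fourier coefficient is occluded in turn, the signal is mapped back to the raw domain, and the resulting change in the model's prediction is recorded as that coefficient's importance.

    import numpy as np

    def model_predict(x):
        # Toy stand-in for a trained model: correlates the input with a
        # 3-cycle sinusoid, so frequency bin 3 should dominate the importances.
        t = np.linspace(0, 1, len(x), endpoint=False)
        return float(x @ np.sin(2 * np.pi * 3 * t))

    def fourier_importance(x, predict):
        """Occlusion-style importance of each Fourier coefficient of a 1-D signal x."""
        coeffs = np.fft.rfft(x)                      # raw signal -> frequency domain
        baseline = predict(x)                        # prediction on the unmodified input
        scores = np.zeros(len(coeffs))
        for k in range(len(coeffs)):
            masked = coeffs.copy()
            masked[k] = 0.0                          # occlude one frequency component
            x_masked = np.fft.irfft(masked, n=len(x))   # back to the raw (signal/pixel) domain
            scores[k] = abs(baseline - predict(x_masked))  # prediction change = importance
        return scores

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 128, endpoint=False)
        signal = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(128)
        importances = fourier_importance(signal, model_predict)
        print("most important frequency bin:", int(np.argmax(importances)))

Because the attribution is computed purely by perturbing inputs and re-querying the model, this style of transformed-domain importance can be applied post hoc to any fully trained predictor, which is the property the abstract emphasizes.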


Authors who are presenting talks have a * after their name.
