Activity Number: 577 - Statistical Methods for Interpreting Machine Learning Algorithms - with Implications for Targeting
Type: Topic Contributed
Date/Time: Wednesday, August 1, 2018: 2:00 PM to 3:50 PM
Sponsor: Section on Statistical Learning and Data Science
Abstract #328345
Title: Can We Compute an Optimal Sparse Decision Tree?
Author(s): Cynthia Rudin* and Elaine Angelino and Nicholas Larus-Stone and Margo Seltzer and Daniel Alabi
Companies: Duke University and Berkeley and Cambridge and Harvard and Harvard
Keywords: decision trees; sparsity; optimization; data structures; interpretability; machine learning
Abstract:
I will present an algorithm for fully optimizing sparse one-sided decision trees (also called rule lists). The algorithm uses customized bounds and data structures and produces a certificate of optimality. It is an alternative to CART that optimizes accuracy regularized by sparsity, and it is the best current algorithm for creating optimal decision trees of any kind.
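The rule-list model and the sparsity-regularized objective described above can be sketched as follows. This is a minimal illustration only: the condition, toy data, and regularization constant `lam` are invented for the example, and the branch-and-bound search, customized bounds, and certificate of optimality of the actual algorithm are not shown.

```python
def predict(rule_list, default, x):
    """A rule list (one-sided decision tree) is an ordered sequence of
    if-then rules, evaluated top to bottom; the first rule whose
    condition matches x decides the label, else a default applies."""
    for condition, label in rule_list:
        if condition(x):
            return label
    return default

def objective(rule_list, default, data, lam):
    """Accuracy regularized by sparsity: misclassification rate plus a
    penalty lam for each rule in the list (a hypothetical constant)."""
    errors = sum(predict(rule_list, default, x) != y for x, y in data)
    return errors / len(data) + lam * len(rule_list)

# Toy example: points in the unit square with invented labels.
data = [((0.2, 0.9), 1), ((0.8, 0.1), 0), ((0.3, 0.7), 1), ((0.9, 0.4), 0)]
rules = [(lambda x: x[1] > 0.5, 1)]  # "if x2 > 0.5 then predict 1"
score = objective(rules, 0, data, lam=0.01)  # 0 errors + one-rule penalty
```

An optimal algorithm would search over candidate rule lists and return one minimizing this objective, together with a proof that no other list scores lower.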
Authors who are presenting talks have a * after their name.