Abstract:
|
An "interpretable machine learning" model is constrained in form so that it is either useful to someone or obeys structural knowledge of the domain, such as monotonicity, causality, structural (generative) constraints, additivity, or physical constraints derived from domain knowledge. Interpretable machine learning dates back at least to the 1970s, but has gained momentum as a subfield only recently. Some of the most popular forms of interpretable machine learning models are sparse decision trees, additive models, scoring systems (sparse linear models with integer coefficients), and case-based reasoning methods. I will overview (a very small fraction of the large body of) recent research on these four types of models.
|