Abstract:
|
The “black box” nature of machine learning (ML) models has limited their widespread adoption in banking and finance. The input-output relationships in ML models are difficult to understand and interpret because they involve thousands of “under-the-hood” calculations. In banking, one must be able to explain the basis of a credit decision to a customer, or the relationship between macroeconomic variables and a loss forecast to a regulator. Further, one must ensure that these relationships are consistent with historical and business understanding.
In this presentation, we will provide a framework and a suite of algorithms, with associated visualization tools, that help resolve the opacity of ML algorithms. These are based on our research at Wells Fargo as well as recent results in the literature. The techniques include global diagnostics, local models for interpretability, and structured neural networks for explainability.
|