Abstract:
|
The complexity of statistical and machine learning models has increased significantly with ever-growing data volumes and computing resources. Learning algorithms now find impressive applications across domains of life and research, such as biology, physics, and the social sciences. In many scenarios, however, we require complex predictive models to be interpretable, whether to analyze them for knowledge discovery or to support accurate, high-stakes decision-making. In recent years, research in explainable and fair machine learning has introduced novel algorithms and methods to visualize, explain, and debug black-box predictive models. Making these methods accessible and convenient for the diverse stakeholders who want to apply machine learning responsibly has become a challenge. To facilitate the responsible development of machine learning models, we introduce {dalex}, a Python package that implements a model-agnostic interface for interactive explainability and fairness. It aims to address the opaqueness debt phenomenon in predictive modeling, which can be viewed as analogous to the well-known technical debt phenomenon in software development. More information is available at https://hbaniecki.com/jsm2022.
|
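For context, the workflow summarized in the abstract might look as follows. This is a minimal sketch, assuming the German credit dataset shipped with {dalex} and a scikit-learn classifier; the one-hot feature encoding and the choice of protected attribute are illustrative assumptions, while the Explainer, model_parts, and model_fairness calls follow the package's documented interface.

```python
# A minimal sketch of the dalex workflow, assuming the German credit data
# shipped with the package; the feature encoding and protected-attribute
# choice here are illustrative assumptions.
import dalex as dx
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

data = dx.datasets.load_german()
X = pd.get_dummies(data.drop(columns="risk"))  # one-hot encode categoricals
y = data.risk

model = RandomForestClassifier(random_state=0).fit(X, y)

# wrap any predictive model in a unified, model-agnostic interface
explainer = dx.Explainer(model, X, y, label="random_forest")

# explainability: permutation-based variable importance
explainer.model_parts().plot()

# fairness: group fairness metrics with respect to a protected attribute
explainer.model_fairness(protected=data.sex, privileged="male").plot()
```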