Abstract:
|
A large body of literature has recently appeared on the legal and ethical bases of explainable machine learning/AI. Most of this work abstracts issues arising from large-scale uses of machine learning to a philosophical level in order to discuss policies and ethics that govern (or should or could govern) the interpretability or explainability of machine learning in practice. In reality, however, use cases cannot all be covered under the same umbrella, and decisions about governance rely (or at least should rely) on the technological capabilities within each domain and the needs of the application. This leads to a disconnect between the governance of AI and its uses in practice. For instance, if interpretable machine learning models can be made to perform as well as black box models for a domain, this information becomes relevant for governance. In this manuscript, we discuss important use cases in medicine, criminal justice and policing, and finance, and provide an understanding of the technological capabilities of interpretable and explainable ML for various problems in each of these domains.
|