Abstract:
|
Machine learning algorithms provide predictions to expert users, such as an underwriter (‘approve loan’), and recommendations to everyday consumers (‘You may like X’) alike. These AI systems rarely provide a rationale or explanation for the predictions or recommendations they make, which can lead to a significant loss of trust and use intentions among end-users. The critical need for explanations and justifications from AI systems has led to calls for algorithmic transparency, including in the EU General Data Protection Regulation (GDPR), which requires many companies to provide a meaningful explanation to involved parties (e.g., users, customers, or employees). However, these calls presuppose that we know what constitutes a meaningful or good explanation, when in fact there has been surprisingly little research on this question in the context of AI systems. Thus, in this study, we 1) develop a framework grounded in philosophy, psychology, and interpretable machine learning to investigate and define the characteristics of a good explanation (e.g., mode of explanation) in various settings (e.g., explainee motivation, magnitude and valence of harm/reward) and 2) measure the impact on perception of u
|