Abstract:
|
The potential for AI and machine learning systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we identified areas of alignment and disconnect between the challenges faced by teams in practice and the solutions proposed in the fair ML research literature. Next, we conducted an iterative co-design process with 48 practitioners to design an AI fairness checklist and to identify desiderata and concerns for AI fairness checklists in general. We found that AI fairness checklists could provide organizational infrastructure for formalizing ad hoc processes and empowering individual advocates. We discuss aspects of organizational culture that may impact the efficacy of such checklists, and highlight future research directions. This talk is based on joint work with Hal Daumé III, Miro Dudík, Ken Holstein, Michael Madaio, Luke Stark, and Hanna Wallach.
|