Abstract:
|
Algorithmic risk assessments are increasingly used to help humans make decisions in high-stakes settings such as medicine, criminal justice, and education. In each of these cases, the purpose of the risk assessment tool is to inform actions, such as medical treatments or release conditions, often with the aim of reducing the likelihood of an adverse event such as hospital readmission or recidivism. Problematically, most tools are trained and evaluated on historical data in which the observed outcomes depend on the historical decision-making policy. These tools thus reflect risk under the historical policy, rather than under the different decision options they are intended to inform. Even when tools are constructed to predict risk under a specific decision, they are often improperly evaluated against observed outcomes rather than the counterfactual outcomes they target. In this talk, I will discuss proposed counterfactual analogues of common predictive performance and algorithmic fairness metrics that we argue are better suited to the decision-making context. I will describe doubly robust strategies for estimating the target counterfactual quantities, and illustrate both empirically and theoretically how standard observational fairness metrics can induce and exacerbate algorithmic bias. This talk is based on joint work with Amanda Coston, Alan Mishler, and Edward Kennedy at CMU.
|
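As a rough illustration of the estimation idea (a sketch, not the specific estimators from the talk), the snippet below shows a doubly robust, AIPW-style pseudo-outcome for the counterfactual outcome under a baseline decision, plugged into a simple counterfactual performance metric. All names (dr_pseudo_outcome, mu0_hat, pi0_hat, etc.) are illustrative assumptions, and the construction presumes the usual identification conditions (no unmeasured confounding, overlap).

```python
import numpy as np

def dr_pseudo_outcome(y, a, mu0_hat, pi0_hat, eps=1e-3):
    """Doubly robust (AIPW-style) pseudo-outcome for Y^{a=0}, the outcome
    under the baseline decision a=0. It is unbiased for E[Y^0 | X] if either
    the outcome model mu0_hat = E[Y | X, A=0] or the propensity model
    pi0_hat = P(A=0 | X) is correctly specified (assuming no unmeasured
    confounding and overlap)."""
    pi0 = np.clip(pi0_hat, eps, 1.0)          # guard against extreme propensities
    baseline = (a == 0).astype(float)         # indicator that the baseline decision was taken
    return mu0_hat + baseline / pi0 * (y - mu0_hat)

def counterfactual_mse(risk_score, y, a, mu0_hat, pi0_hat):
    """Estimate E[(R(X) - Y^0)^2] for a binary outcome Y by substituting the
    pseudo-outcome for the unobserved Y^0. Because Y is binary, (Y^0)^2 = Y^0,
    so the metric is linear in Y^0 and the substitution preserves unbiasedness."""
    phi = dr_pseudo_outcome(y, a, mu0_hat, pi0_hat)
    return np.mean(risk_score ** 2 - 2.0 * risk_score * phi + phi)
```

Evaluated this way, a risk score is scored against an estimate of the outcome that would occur under the baseline decision, rather than against outcomes filtered through the historical decision-making policy.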