Abstract:
|
While the current literature on algorithmic fairness focuses on measuring bias in a risk prediction model using goodness-of-fit and accuracy metrics (e.g., AUC), it is also important to understand the downstream implications for outcomes when a biased risk prediction model is used for decision-making. We used data from a large integrated health care system to fit a recently developed recurrence risk prediction model for adults with colorectal cancer who underwent resection. We found that, although the recurrence prediction model had acceptable overall performance (AUC=0.7), performance varied across racial subgroups, indicating that the model has racial/ethnic bias and may be inappropriate for guiding decisions in practice for certain subgroups. We developed a patient-level microsimulation model to assess the implications of using this biased risk prediction model to guide decisions about surveillance testing. Specifically, if the biased risk prediction model were used to identify high-risk patients to target for more frequent surveillance testing, how would its differential error rates affect subgroup-specific outcomes?
|
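
A minimal sketch of the subgroup-stratified AUC evaluation described above, assuming a fitted model's predicted recurrence risks are available. The arrays `y_true`, `y_score`, and `race` and the synthetic data are hypothetical placeholders, not the study's data or code.

```python
# Sketch: overall vs. subgroup-specific discrimination (AUC) for a risk model.
# All variable names and data below are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
race = rng.choice(["Group A", "Group B", "Group C"], size=n)   # hypothetical subgroups
y_true = rng.integers(0, 2, size=n)                            # observed recurrence (0/1)
y_score = rng.uniform(size=n)                                  # model-predicted recurrence risk

# Overall discrimination
print(f"Overall AUC: {roc_auc_score(y_true, y_score):.2f}")

# Subgroup-specific discrimination: markedly different AUCs across groups
# would indicate differential model performance (one facet of algorithmic bias).
for group in np.unique(race):
    mask = race == group
    if len(np.unique(y_true[mask])) == 2:   # AUC requires both classes present
        print(f"AUC ({group}): {roc_auc_score(y_true[mask], y_score[mask]):.2f}")
```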