Abstract:
|
The current approach to characterizing uncertainty in forensic decision-making has largely centered on conducting error rate studies, in which examiners evaluate a set of items consisting of known-source evidence, and calculating aggregated error rates. This approach is not ideal for comparing examiner performance, because decisions are not always unanimous and error frequency likely varies with evidence quality. Item Response Theory (IRT), a class of statistical methods used prominently in educational testing, accounts for differences in proficiency among participants as well as for varying difficulty among items. Using data from the FBI “Black Box” and “White Box” studies, which estimated error rates for fingerprint comparisons, I will review some of our recent advances using simple IRT models, more elaborate decision tree models, and extensions that incorporate reported difficulty as a second response variable. We find that even when examiners largely agree on a final source decision, there is considerable variability in print quality assessments, in the tendency to make inconclusive decisions, and in the perceived difficulty of comparisons.
|