Abstract:
|
Recent research into the interpretation of forensic evidence has focused on black-box studies of both human examiners and algorithmic systems. For human examiners, the black-box study of fingerprint examination has been widely commended, and similar studies of handwriting and bloodstain pattern analysis are not far behind. For algorithmic systems, two common probabilistic genotyping programs have undergone extensive error-rate testing to determine whether their results should be admissible in courtroom testimony. The main goal of these studies is to estimate an average error rate for a subset of tasks in the overall interpretation system. These reported error rates have then been used to justify the conclusions an examiner reached in a particular case, although that is not their intended purpose. In this presentation, we exploit statistical relationships between two different methods of forensic evidence interpretation to develop a method that bridges the gap between average error rates and the case-specific score-based likelihood ratio. We illustrate the method using a handwriting dataset and a machine-learning comparison system.
|