Abstract:
|
Prediction by machine learning models is fundamentally the execution of a computer program. In this case, the rules of the computer program are learned by the computer itself from training data instead of being programmed by a human. Like all good programs, machine learning models should be debugged to discover and remediate errors. When the debugging process increases accuracy on holdout data, increases transparency into model mechanisms, identifies or reduces hackable attack surfaces, or decreases disparate impact, it also enhances trust in model mechanisms and predictions and makes them more interpretable. This text discusses several standard techniques in the context of model debugging (disparate impact, residual, and sensitivity analysis) and introduces novel applications such as the global and local explanation of model residuals.
|