Algorithmic decision-making and machine learning (ML) are increasingly used to guide high-stakes decisions in a variety of contexts. While computer science research on Fair ML has developed a wide range of fairness notions and auditing techniques for evaluating such processes and their outcomes, survey science has a long history of studying and correcting for biases that arise during data collection. This includes methods for improving inference from non-probability samples, techniques for utilizing information from "found" digital trace data, and data integration approaches that aim to combine heterogeneous data sources.
This panel brings together researchers from both disciplines to discuss advances in and challenges of research on data biases and algorithmic decision-making. How can we detect, quantify, and correct for sample bias and selective participation across different data collection processes? How do biased and incomplete data translate into biased models and decisions? How can the two disciplines learn from each other? These are only some of the questions the panel will address in order to stimulate and promote multidisciplinary research on fairness in automated decision-making.