Abstract:
|
Many applications rely on human evaluations, such as peer review, crowdsourcing, and hiring. It is well known that ratings given by human evaluators frequently suffer from miscalibration: evaluators may be strict, lenient, extreme, or moderate, among other tendencies. Such miscalibration leads to biases and unfairness in these applications. In this talk, we will discuss popular approaches to addressing miscalibration, present some new algorithms, and offer surprising new insights into the long-standing debate between ratings and rankings. Finally, we will discuss some open problems that are exciting, challenging, and capable of major real-world impact if solved.
|