Abstract Details

Activity Number: 165 - SLDS CSpeed 2
Type: Contributed
Date/Time: Tuesday, August 10, 2021, 10:00 AM to 11:50 AM (EDT)
Sponsor: Section on Statistical Learning and Data Science
Abstract #318656
Title: Validity Check of Peer Assessment Schemes for Massive Open Online Courses
Author(s): Fangda Song* and Wai Yin Isabella Poon and Xinyuan Song and Yingying Wei
Companies: The Chinese University of Hong Kong
Keywords: Online education; Model identifiability; Experimental design; Dynamic programming
Abstract:

Evaluating students' performance is a challenging problem for Massive Open Online Courses (MOOCs). To assess the large number of students, many courses adopt peer assessment. The tuned model is a widely used probabilistic model that adjusts peer assessment scores for each student's grading bias and precision; however, its identifiability has been poorly studied. Only when a peer assessment scheme produces an identifiable model can the tuned model recover the true score of each homework submission. In this study, we provide necessary and sufficient identifiability conditions for the tuned model by constructing a shared grading graph, and we propose a breadth-first search algorithm to quickly determine the validity of a given peer assessment scheme. Moreover, because MOOCs have low completion rates, even when the peer assessment scheme assigned by the instructor ensures model identifiability, the scheme actually realized by the students may not be valid. We therefore provide a dynamic programming algorithm to calculate the probability of realizing a valid peer assessment scheme for a given completion rate and class size. Our proposed algorithm can also suggest how to remedy an invalid scheme.
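
As a rough illustration of the kind of check a breadth-first search can perform on a grading graph, the sketch below builds one plausible version of a shared grading graph, linking two submissions whenever a common grader evaluates both, and tests whether the graph is connected. The graph construction, the connectivity criterion, and the names shared_grading_graph and is_connected are assumptions made here for illustration; they are not taken from the paper, whose exact identifiability conditions may differ.

# Hypothetical sketch: link submissions that share a grader, then use
# breadth-first search to check whether every submission lies in one
# connected component. This is an illustrative stand-in, not the paper's
# exact shared grading graph or identifiability condition.
from collections import defaultdict, deque

def shared_grading_graph(assignments):
    """assignments: dict mapping each grader to the set of submissions graded."""
    adj = defaultdict(set)
    for submissions in assignments.values():
        subs = list(submissions)
        for i, u in enumerate(subs):
            adj[u]  # ensure submissions with no shared grader still appear as nodes
            for v in subs[i + 1:]:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def is_connected(adj):
    """Breadth-first search connectivity check on an undirected adjacency map."""
    if not adj:
        return True
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node] - seen:
            seen.add(nbr)
            queue.append(nbr)
    return len(seen) == len(adj)

# Example scheme: grader "A" grades submissions 1 and 2, grader "B" grades 2 and 3.
scheme = {"A": {1, 2}, "B": {2, 3}}
print(is_connected(shared_grading_graph(scheme)))  # True: submissions 1, 2, 3 form one component

Under the same hedged reading, the effect of incomplete participation could be explored by randomly dropping graders at a given completion rate and re-running the check, whereas the paper's dynamic programming algorithm computes the probability of a valid realized scheme exactly.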


Authors who are presenting talks have a * after their name.
