Friday, June 21, 12:15, room D102
Jatin Jindal
Title: Truthful Peer Grading with Limited Effort from Teaching Staff
Abstract:
Massive open online courses (MOOCs) pose a significant challenge for grading answerscripts with high accuracy. Peer grading is often viewed as a scalable solution to this challenge, but one that largely depends on the altruism of the peer graders. Some approaches in the literature treat peer grading as a 'best-effort service' of the graders and statistically correct their inaccuracies before awarding the final scores, but ignore graders' strategic behavior. A few other approaches incentivize non-manipulative behavior from the peer graders but do not exploit additional information that is potentially available in a peer grading setting, e.g., that the true grade can eventually be observed at an additional cost. This cost can be thought of as the additional effort required from the teaching staff if they must ultimately review the corrected papers after peer grading. In this paper, we use such additional information and introduce a mechanism, TRUPEQA, that (a) uses a constant number of instructor-graded answerscripts to quantitatively measure the accuracies of the peer graders and corrects the scores accordingly, (b) ensures truthful revelation of their observed grades, (c) penalizes manipulation, but not inaccuracy, and (d) reduces the total cost of arriving at the true grades, i.e., the additional person-hours required of the teaching staff. We show that this mechanism outperforms several standard peer grading techniques used in practice, even when the graders are non-manipulative.
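To give a flavour of point (a) above, here is a minimal illustrative sketch, not the TRUPEQA mechanism itself: it assumes a simplified model in which each peer grader's reported score deviates from the true score by an additive bias, estimated from a small set of instructor-graded "probe" answerscripts and then subtracted from that grader's other reports. The function names and example data are hypothetical.

```python
# Illustrative sketch only: NOT the TRUPEQA mechanism from the talk, but a
# simplified peer-grading correction in the same spirit. Assumes each grader's
# reported score differs from the true score by a grader-specific additive
# bias, estimated from a few instructor-graded "probe" answerscripts.
from statistics import mean

def estimate_bias(grader_reports, instructor_grades):
    """Estimate a grader's additive bias from papers graded by both
    the grader and the instructor.

    grader_reports: dict paper_id -> score reported by the grader
    instructor_grades: dict paper_id -> true score (probe papers only)
    """
    common = grader_reports.keys() & instructor_grades.keys()
    if not common:
        return 0.0  # no probe overlap: assume the grader is unbiased
    return mean(grader_reports[p] - instructor_grades[p] for p in common)

def corrected_score(paper_id, reports_by_grader, biases):
    """Average the bias-corrected reports of all graders who saw the paper."""
    corrected = [reports_by_grader[g][paper_id] - biases[g]
                 for g in reports_by_grader if paper_id in reports_by_grader[g]]
    return mean(corrected)

# Hypothetical example: two graders, one probe paper "p0" with known grade 7.0.
reports = {
    "g1": {"p0": 8.0, "p1": 7.0},
    "g2": {"p0": 6.0, "p1": 5.5},
}
probes = {"p0": 7.0}
biases = {g: estimate_bias(r, probes) for g, r in reports.items()}
print(corrected_score("p1", reports, biases))  # 6.25
```

The actual mechanism additionally handles the incentive side, i.e., points (b)-(d): ensuring truthful reporting, penalizing manipulation, and reducing the teaching staff's person-hours, which this sketch does not attempt to model.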