Abstract
Peer assessment has emerged as a crucial approach for scaling evaluation in educational settings, fostering learner engagement, critical thinking, and collaborative learning. Nevertheless, traditional aggregation-based and probabilistic methods often fail to capture the intricate relational dependencies among students and submissions, limiting their ability to deliver reliable and equitable outcomes. Recent advances in graph neural networks (GNNs) offer promising avenues for representing peer-assessment data as graphs. However, most existing approaches treat all relations uniformly, overlooking variations in the reliability of evaluative interactions. To bridge this gap, we propose ReSAN (Relation-Sensitive Assessment Network), a novel framework that integrates relation-sensitive attention into the message-passing process. ReSAN dynamically evaluates and weights relationships, enabling the model to distinguish informative signals from noisy or biased assessments. Comprehensive experiments on synthetic and real-world datasets demonstrate that ReSAN consistently surpasses strong baselines in prediction accuracy and robustness. These findings underscore the importance of explicitly modeling evaluator reliability to capture the dynamics of peer-assessment networks. Overall, this work advances reliable graph-based evaluation methods and provides new insights into leveraging representation learning for educational analytics.