ReSAN: Relation-Sensitive Graph Representation Learning for Peer Assessment in Educational Scenarios
Abstract
1. Introduction
- We introduce a relation-sensitive framework that explicitly captures the diversity and distinct characteristics of interactions in peer assessment networks.
- We design ReSAN, a graph-based model capable of dynamically weighting relationships to enhance representation learning and score prediction.
- We conduct extensive experiments on both synthetic and real-world datasets, demonstrating that ReSAN outperforms state-of-the-art baselines and improves the reliability of peer grading.
2. Related Work
2.1. Peer Assessment in Educational Scenarios
2.2. Graph-Based Modeling of Peer Assessment
2.3. Relation-Aware GNNs in Broader Graph Learning
3. Preliminaries and Notation
3.1. Graph Neural Networks
3.2. Social–Ownership–Assessment Network
- Social relations: Social relations capture the interpersonal connections among users in $\mathcal{U}$, such as friendships, collaborations, or potential conflicts of interest. These relations can influence evaluation behavior by introducing biases toward friends or competitors, and they provide important context for interpreting assessment scores.
- Ownership relations: Ownership relations encode the authorship or contribution of users to items in $\mathcal{I}$. Multiple users may contribute to a single item, forming a many-to-one relationship. This type of relation allows the model to distinguish between self-assessment (a user evaluating their own work) and peer assessment, while also supporting flexible modeling of group contributions.
- Assessment relations: Assessment relations represent the actual evaluations performed by users, as captured in the assessment matrix A. Each directed edge $(u, i)$ with weight $A_{ui}$ reflects the score assigned by user u to item i, including both self-assessment and peer assessment. Explicitly representing these relations enables the framework to analyze the reliability and patterns of evaluation across the network.
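The three relation types above can be collected into a single graph over users and items. As a minimal illustrative sketch (not the paper's actual implementation), the following assembles a block matrix in the spirit of the SOAN structure, with social ties in the user–user block and user–item edges in the off-diagonal blocks; the helper name `build_soan` and the choice to sum ownership and assessment weights into one combined block are our own assumptions — a relation-sensitive model may well keep the relation types as separate channels instead.

```python
import numpy as np

def build_soan(S, O, A):
    """Assemble a combined SOAN adjacency as a block matrix.

    S : (n, n) social adjacency among users
    O : (n, m) ownership incidence between users and items
    A : (n, m) assessment matrix; A[u, i] is the score user u gave item i

    Layout (users first, then items):
        [ S        O + A ]
        [ (O+A)^T    0   ]
    """
    n, m = O.shape
    R = O + A                                   # combined user-item relations
    top = np.hstack([S, R])
    bottom = np.hstack([R.T, np.zeros((m, m))])
    return np.vstack([top, bottom])

# toy example: 3 users, 2 items
S = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
O = np.array([[1, 0], [0, 1], [0, 1]], dtype=float)        # who owns what
A = np.array([[0, 0.8], [0.6, 0], [0.7, 0]], dtype=float)  # who graded what
M = build_soan(S, O, A)
print(M.shape)  # (5, 5)
```

Because the social block is symmetric and the two user–item blocks are transposes of each other, the resulting matrix is symmetric, which lets standard undirected message passing operate on it directly.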
4. Proposed Model: ReSAN
4.1. Framework Overview
4.2. Relation-Sensitive Representation Learning
4.3. Algorithm Implementation
Algorithm 1 (ReSAN: Relation-Sensitive Assessment Network):
1. Message encoding: for each edge $(i, j)$, compute the raw attention score $e_{ij}$.
2. Attention normalization: apply a softmax over the neighborhood of node $i$ to obtain the coefficients $\alpha_{ij}$.
3. Node update: update each node embedding by aggregating the attention-weighted messages from its neighbors and applying a nonlinearity.
4. Prediction: obtain item-level outputs from the final embeddings via a regression head with learnable parameters.
5. Optimization: train the model by minimizing the root mean squared error (RMSE) between predicted and known ground-truth scores.
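The steps of Algorithm 1 can be sketched with a single attention layer. This is a hedged NumPy illustration of the generic GAT-style mechanism (raw scores via a LeakyReLU-activated dot product, softmax normalization over neighborhoods, weighted aggregation), not ReSAN's exact relation-sensitive formulation; `attention_layer` and `rmse` are our own helper names.

```python
import numpy as np

def attention_layer(H, M, W, a, leaky_slope=0.2):
    """One attention layer sketch.

    H : (N, d) node embeddings; M : (N, N) SOAN adjacency (M[i, j] != 0
    means j is a neighbor of i); W : (d, d') projection; a : (2*d',) vector.
    """
    Z = H @ W                                    # step 1: message encoding
    N = H.shape[0]
    # raw attention scores e_ij = LeakyReLU(a^T [z_i || z_j])
    e = np.full((N, N), -np.inf)
    for i in range(N):
        for j in range(N):
            if M[i, j] != 0 or i == j:           # include a self-loop
                s = a @ np.concatenate([Z[i], Z[j]])
                e[i, j] = s if s > 0 else leaky_slope * s
    # step 2: softmax normalization over each node's neighborhood
    e -= e.max(axis=1, keepdims=True)            # numerical stability
    alpha = np.exp(e)                            # exp(-inf) = 0 for non-edges
    alpha /= alpha.sum(axis=1, keepdims=True)
    # step 3: attention-weighted aggregation + ELU nonlinearity
    out = alpha @ Z
    return np.where(out > 0, out, np.expm1(out)), alpha

def rmse(pred, target):
    """Step 5: root mean squared error used as the training loss."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))
```

In a full model, several such layers would be stacked and the final item embeddings passed through the regression head (step 4) before computing the RMSE over items with known ground-truth scores.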
5. Experiments
5.1. Experimental Setup
5.1.1. Datasets
- Ground-truth valuation. Each item $i$ is assigned a latent quality value $v_i$, drawn from a mixture of two Gaussian distributions, $v_i \sim \sum_{k=1}^{2} w_k \, \mathcal{N}(\mu_k, \sigma_k^2)$, where the parameters of the mixture are specified by the weights $w_k$, means $\mu_k$, and standard deviations $\sigma_k$. The tuple $(w_k, \mu_k, \sigma_k)$ fully characterizes the k-th component.
- Social network. We simulate interpersonal connections among users using the Erdős–Rényi random graph $G(n, p)$. In this model, each of the $\binom{n}{2}$ possible user pairs forms an edge independently with probability $p$.
- Ownership network. To capture individual responsibility for submissions, every user is randomly matched to exactly one item, creating a bijection between users and their owned submissions. This setup is consistent with prior work on peer assessment that does not allow group ownership.
- Assessment network. For every item $i$, a set of $k$ graders is chosen uniformly at random, denoted as $S_i$ with $|S_i| = k$. Each grader $u \in S_i$ then assigns a grade to item i, generated under one of the following mechanisms:
- Strategic model. This reflects behavior in which friends collude by awarding each other the maximum score, while remaining reasonably fair when evaluating unrelated peers.
- Bias–reliability model. Each grade is centered on the true valuation of the graded item, shifted by a bias term and perturbed by noise whose scale depends on the true valuation of the item owned by the grading user u. The bias parameter $b$ adjusts for generosity ($b > 0$) or strictness ($b < 0$), while the reliability parameter determines how strongly grading reliability depends on the grader's own item quality. Here, $\sigma_{\max}$ denotes the maximum possible standard deviation, corresponding to the least reliable case.
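Putting the four generators above together, a simulation of one synthetic dataset might look as follows. All numeric parameters (mixture weights, edge probability, bias, maximum standard deviation) are placeholders, and the exact noise law of the bias–reliability model is only approximated: the assumption that the noise standard deviation shrinks linearly with the grader's own item quality is one plausible reading of the text, not the paper's exact formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                      # users; ownership is bijective, so items = n
k = 4                      # graders per item
w, mu, sd = [0.5, 0.5], [0.4, 0.8], [0.10, 0.05]  # mixture params (hypothetical)
bias, sigma_max = 0.05, 0.2                       # grading params (hypothetical)

# Ground-truth valuation: two-component Gaussian mixture, clipped to [0, 1]
comp = rng.choice(2, size=n, p=w)
v = np.clip(rng.normal(np.take(mu, comp), np.take(sd, comp)), 0.0, 1.0)

# Social network: Erdős–Rényi G(n, p) — each pair linked independently
p = 0.05
upper = np.triu(rng.random((n, n)) < p, k=1)
S = (upper | upper.T).astype(float)

# Ownership network: a random bijection between users and items
owner = rng.permutation(n)          # owner[i] = user who owns item i
item_of = np.empty(n, dtype=int)    # item_of[u] = item owned by user u
item_of[owner] = np.arange(n)

# Assessment network: k graders per item, bias–reliability grades
A = np.zeros((n, n))
for i in range(n):
    graders = rng.choice(n, size=k, replace=False)
    for u in graders:
        # assumed form: reliability tied to the grader's own item quality
        sigma = max(sigma_max * (1.0 - v[item_of[u]]), 1e-6)
        A[u, i] = np.clip(rng.normal(v[i] + bias, sigma), 0.0, 1.0)
```

The strategic model would replace the grade draw inside the loop: if the grader and the item's owner are connected in `S`, the grade is set to the maximum score instead.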
5.1.2. Baselines
- PeerRank [11]: PeerRank is inspired by the PageRank algorithm, modeling peer assessments as a graph structure. It iteratively computes a “reputation” score within the peer network to adjust and weight students’ final grades.
- PG1 [10]: PG1 applies statistical techniques to correct grading bias among students. It standardizes or adjusts individual scores during aggregation to improve fairness and consistency of the final outcomes.
- RankwithTA [13]: RankwithTA incorporates Teaching Assistant (TA) grades as anchor points into the peer assessment process. By leveraging TA evaluations, it reduces noise and instability, thus enhancing the reliability of rankings.
- Vancouver [12]: The Vancouver method employs Bayesian inference to model students’ grading behavior. It simultaneously estimates the true quality of submissions and the grading ability of students, mitigating subjectivity-related bias.
- GCN-SOAN [14]: GCN-SOAN proposes a GNN-based general peer assessment framework that captures complex behaviors by modeling the evaluation system as a multi-relational network.
- Average & Median: Since models such as PeerRank, PG1, and RankwithTA treat users and items interchangeably, they are not directly applicable to our dataset, which contains individual evaluations of group submissions. To enable comparison, we aggregate individual scores by averaging the grades assigned by all members within each group.
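The group-level aggregation described above can be sketched as a small helper. The `(grader_id, score)` pair representation and the `member_group` mapping are hypothetical bookkeeping for illustration, not the paper's data format.

```python
import numpy as np

def group_grades(individual, member_group, method="average"):
    """Collapse individual grades on one item into one grade per grading group.

    individual   : list of (grader_id, score) pairs for a single item
    member_group : dict mapping grader_id -> group_id (hypothetical)
    method       : "average" or "median" aggregation over group members
    """
    scores = {}
    for gid, s in individual:
        scores.setdefault(member_group[gid], []).append(s)
    agg = np.mean if method == "average" else np.median
    return {g: float(agg(s)) for g, s in scores.items()}

grades = [(1, 0.8), (2, 0.6), (3, 1.0)]
groups = {1: "g1", 2: "g1", 3: "g2"}
print(group_grades(grades, groups))  # one aggregated grade per grading group
```

With this collapsing step, user-centric baselines such as PeerRank, PG1, and RankwithTA can be fed one score per group as if each group were a single grader.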
5.1.3. Experimental Settings
5.2. Results on Real-World Datasets
5.3. Results on Synthetic Datasets with Bias-Reliability
5.4. Results on Synthetic Datasets with Strategic Assessment
5.5. Parameter Sensitivity Analysis
5.6. Ablation Study
6. Further Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Symbol | Definition |
| GNN | Graph Neural Network |
| ReSAN | Relation-Sensitive Assessment Network |
| SOAN | Social–Ownership–Assessment Network |
| RMSE | Root Mean Square Error |
| $G = (V, E)$ | Graph with vertex set $V$ and edge set $E$ |
| $\lvert V \rvert$ | Number of vertices |
| $\lvert E \rvert$ | Number of edges |
| $A$ | Adjacency matrix of the graph |
| $A_{ij}$ | Entry of $A$; $A_{ij} = 1$ if $(i, j) \in E$, and 0 otherwise |
| $D$ | Degree matrix with $D_{ii} = \sum_{j} A_{ij}$ |
| $\tilde{A} = A + I$ | Adjacency matrix with self-loops |
| $\tilde{D}$ | Corresponding degree matrix of $\tilde{A}$ |
| $X$ | Node feature matrix |
| $H^{(\ell)}$ | Node representation at layer $\ell$ |
| $W^{(\ell)}$ | Trainable weight matrix at layer $\ell$ |
| $\sigma(\cdot)$ | Nonlinear activation function (e.g., ReLU or ELU) |
| $\mathcal{U}$ | Set of users |
| $\mathcal{I}$ | Set of items |
| $n$ | Number of users |
| $m$ | Number of items |
| $v_i$ | True (ground-truth) score of item i |
| $\hat{v}_i$ | Predicted score of item i |
| $A$ | Assessment matrix; $A_{ui}$ is the score given by user u to item i |
| $S$ | Social adjacency matrix among users |
| $O$ | Ownership matrix between users and items |
| $R$ | Combined user–item relation matrix |
| M | Block matrix encoding SOAN structure |
| $\mathcal{N}(i)$ | Neighborhood of node i defined by M |
| $h_i^{(\ell)}$ | Embedding of node i at layer $\ell$ |
| $W$ | Linear projection matrix for attention computation |
| $a$ | Attention vector |
| $e_{ij}$ | Raw (unnormalized) attention score between nodes i and j |
| $\alpha_{ij}$ | Normalized attention coefficient via softmax |
| L | Total number of layers in the model |
| $\theta$ | Parameters of the regression head for score prediction |
| $\mathcal{L}$ | Loss function (Root Mean Squared Error, RMSE) |
| $\mathcal{I}_{\mathrm{gt}}$ | Set of items with known ground-truth scores |
References
- Topping, K.J. Peer assessment. Theory Pract. 2009, 48, 20–27. [Google Scholar] [CrossRef]
- Formanek, M.; Wenger, M.C.; Buxner, S.R.; Impey, C.D.; Sonam, T. Insights about large-scale online peer assessment from an analysis of an astronomy MOOC. Comput. Educ. 2017, 113, 243–262. [Google Scholar] [CrossRef]
- Alcarria, R.; Bordel, B.; De Andra, D.M. Enhanced peer assessment in MOOC evaluation through assignment and review analysis. Int. J. Emerg. Technol. Learn. (iJET) 2018, 13, 206–219. [Google Scholar] [CrossRef]
- Berkmans, F.; Bigerelle, M.; Lemesle, J.; Nys, L.; Wieczorowski, M.; Brown, C. Peer Assessment in Interdisciplinary Learning: Measuring Reliability and Engaging Critical Thinking. Think. Ski. Creat. 2025, 58, 101950. [Google Scholar] [CrossRef]
- Seifert, T.; Feliks, O. Online self-assessment and peer-assessment as a tool to enhance student-teachers’ assessment skills. Assess. Eval. High. Educ. 2019, 44, 169–185. [Google Scholar] [CrossRef]
- Garcia-Loro, F.; Martin, S.; Ruipérez-Valiente, J.A.; Sancristobal, E.; Castro, M. Reviewing and analyzing peer review Inter-Rater Reliability in a MOOC platform. Comput. Educ. 2020, 154, 103894. [Google Scholar] [CrossRef]
- Perdue, M.; Sandland, J.; Joshi, A.; Liu, J. Exploring the Integration of Social Practice into MOOC Peer Assessment. In Proceedings of the 2024 IEEE Digital Education and MOOCS Conference (DEMOcon), Atlanta, GA, USA, 16–18 October 2024; pp. 1–6. [Google Scholar]
- Topping, K.J.; Gehringer, E.; Khosravi, H.; Gudipati, S.; Jadhav, K.; Susarla, S. Enhancing peer assessment with artificial intelligence. Int. J. Educ. Technol. High. Educ. 2025, 22, 3. [Google Scholar] [CrossRef]
- Sajjadi, M.S.M.; Alamgir, M.; von Luxburg, U. Peer grading in a course on algorithms and data structures: Machine learning algorithms do not improve over simple baselines. In Proceedings of the Third ACM Conference on Learning @ Scale, Edinburgh, UK, 25–26 April 2016; pp. 369–378. [Google Scholar]
- Piech, C.; Huang, J.; Chen, Z.; Do, C.B.; Ng, A.; Koller, D. Tuned Models of Peer Assessment in MOOCs. In Proceedings of the 6th International Conference on Educational Data Mining, Memphis, TN, USA, 6–9 July 2013. [Google Scholar]
- Walsh, T. The PeerRank method for peer assessment. In Proceedings of the Twenty-First European Conference on Artificial Intelligence, Prague, Czech Republic, 18–22 August 2014; pp. 909–914. [Google Scholar]
- de Alfaro, L.; Shavlovsky, M. CrowdGrader: A tool for crowdsourcing the evaluation of homework assignments. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education, Atlanta, GA, USA, 5–8 March 2014; pp. 415–420. [Google Scholar]
- Fang, H.; Wang, Y.; Jin, Q.; Ma, J. RankwithTA: A robust and accurate peer grading mechanism for MOOCs. In Proceedings of the 2017 IEEE 6th International Conference on Teaching, Assessment, and Learning for Engineering (TALE), Hong Kong, China, 12–14 December 2017; pp. 497–502. [Google Scholar]
- Wang, T.; Jing, X.; Li, Q.; Gao, J.; Tang, J. Improving Peer Assessment Accuracy by Incorporating Relative Peer Grades. In Proceedings of the International Educational Data Mining Society, Montreal, QC, Canada, 2–5 July 2019. [Google Scholar]
- Mubarak, A.A.; Cao, H.; Hezam, I.M.; Hao, F. Modeling students’ performance using graph convolutional networks. Complex Intell. Syst. 2022, 8, 2183–2201. [Google Scholar] [CrossRef]
- Dawson, S. A study of the relationship between student social networks and sense of community. J. Educ. Technol. Soc. 2008, 11, 224–238. [Google Scholar]
- Williams, R.T. An overview of MOOCs and blended learning: Integrating MOOC technologies into traditional classes. IETE J. Educ. 2024, 65, 84–91. [Google Scholar] [CrossRef]
- Papadakis, S. MOOCs 2012-2022: An overview. Adv. Mob. Learn. Educ. Res. 2023, 3, 682–693. [Google Scholar] [CrossRef]
- Tzeng, J.-W.; Lee, C.-A.; Huang, N.-F.; Huang, H.-H.; Lai, C.-F. MOOC evaluation system based on deep learning. Int. Rev. Res. Open Distrib. Learn. 2022, 23, 21–40. [Google Scholar] [CrossRef]
- Gamage, D.; Staubitz, T.; Whiting, M. Peer assessment in MOOCs: Systematic literature review. Distance Educ. 2021, 42, 268–289. [Google Scholar] [CrossRef]
- Ortega-Ruipérez, B.; Correa-Gorospe, J.M. Peer assessment to promote self-regulated learning with technology in higher education: Systematic review for improving course design. Front. Educ. 2024, 9, 1376505. [Google Scholar] [CrossRef]
- Paul, U.; Mantravadi, A.; Shah, J.; Shah, S.; Mylavarapu, S.V.; Rashid, M.P.; Gehringer, E. Scaling Success: A Systematic Review of Peer Grading Strategies for Accuracy, Efficiency, and Learning in Contemporary Education. arXiv 2025, arXiv:2508.11677. [Google Scholar]
- Xiong, Y.; Schunn, C.D.; Wu, Y. What predicts variation in reliability and validity of online peer assessment? A large-scale cross-context study. J. Comput. Assist. Learn. 2023, 39, 2004–2024. [Google Scholar] [CrossRef]
- Morris, W.; Crossley, S.; Holmes, L.; Trumbore, A. Using transformer language models to validate peer-assigned essay scores in massive open online courses (MOOCs). In Proceedings of the 13th International Learning Analytics and Knowledge Conference, Arlington, TX, USA, 13–17 March 2023; pp. 315–323. [Google Scholar]
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017. [Google Scholar]
- Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
- Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Hansen, J.; Gebhart, T. Sheaf Neural Networks. In Proceedings of the TDA & Beyond Workshop, Vancouver, BC, Canada, 11 December 2020. [Google Scholar]
- Zheng, X.; Zhou, B.; Gao, J.; Wang, Y.G.; Liò, P.; Li, M.; Montúfar, G. How framelets enhance graph neural networks. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 12761–12771. [Google Scholar]
- Sharma, K.; Lee, Y.-C.; Nambi, S.; Salian, A.; Shah, S.; Kim, S.-W.; Kumar, S. A survey of graph neural networks for social recommender systems. ACM Comput. Surv. 2024, 56, 1–34. [Google Scholar] [CrossRef]
- Guo, Z.; Wang, H. A deep graph neural network-based mechanism for social recommendations. IEEE Trans. Ind. Inform. 2020, 17, 2776–2783. [Google Scholar] [CrossRef]
- Liu, T.; Wang, Y.; Ying, R.; Zhao, H. MuSe-GNN: Learning unified gene representation from multimodal biological graph data. Adv. Neural Inf. Process. Syst. 2023, 36, 24661–24677. [Google Scholar]
- Li, S.; Hua, H.; Chen, S. Graph neural networks for single-cell omics data: A review of approaches and applications. Briefings Bioinform. 2025, 26, bbaf109. [Google Scholar] [CrossRef]
- Namanloo, A.A.; Thorpe, J.; Salehi-Abari, A. Improving Peer Assessment with Graph Neural Networks. In Proceedings of the International Educational Data Mining Society, Durham, UK, 24–27 July 2022. [Google Scholar]
- Li, M.; Cheng, Y.; Bai, L.; Cao, F.; Lv, K.; Liang, J.; Lio, P. EduLLM: Leveraging Large Language Models and Framelet-Based Signed Hypergraph Neural Networks for Student Performance Prediction. In Proceedings of the 42nd International Conference on Machine Learning, Vancouver, BC, Canada, 13–19 July 2025. [Google Scholar]
- Li, M.; Zhou, S.; Chen, Y.; Huang, C.; Jiang, Y. EduCross: Dual adversarial bipartite hypergraph learning for cross-modal retrieval in multimodal educational slides. Inf. Fusion 2024, 109, 102428. [Google Scholar] [CrossRef]
- Schlichtkrull, M.; Kipf, T.N.; Bloem, P.; Van Den Berg, R.; Titov, I.; Welling, M. Modeling relational data with graph convolutional networks. In Proceedings of the 15th European Semantic Web Conference, Heraklion, Greece, 3–7 June 2018; pp. 593–607. [Google Scholar]
- Vashishth, S.; Sanyal, S.; Nitin, V.; Talukdar, P. Composition-based Multi-Relational Graph Convolutional Networks. In Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
- Xu, K.; Hu, W.; Leskovec, J.; Jegelka, S. How Powerful are Graph Neural Networks? In Proceedings of the 7th International Conference on Learning Representations, ICLR, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Brody, S.; Alon, U.; Yahav, E. How Attentive are Graph Attention Networks? In Proceedings of the International Conference on Learning Representations, Virtual, 25–29 April 2022. [Google Scholar]
- Wang, X.; Ji, H.; Shi, C.; Wang, B.; Ye, Y.; Cui, P.; Yu, P.S. Heterogeneous graph attention network. In Proceedings of the 2019 World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 2022–2032. [Google Scholar]
- Wang, Q.; Mao, Z.; Wang, B.; Guo, L. Knowledge graph embedding: A survey of approaches and applications. IEEE Trans. Knowl. Data Eng. 2017, 29, 2724–2743. [Google Scholar] [CrossRef]
- Gao, C.; Zheng, Y.; Li, N.; Li, Y.; Qin, Y.; Piao, J.; Quan, Y.; Chang, J.; Jin, D.; He, X. A survey of graph neural networks for recommender systems: Challenges, methods, and directions. ACM Trans. Recomm. Syst. 2023, 1, 1–51. [Google Scholar] [CrossRef]
- Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W.L.; Leskovec, J. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 974–983. [Google Scholar]
- Gilmer, J.; Schoenholz, S.S.; Riley, P.F.; Vinyals, O.; Dahl, G.E. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1263–1272. [Google Scholar]
- Kulkarni, C.; Wei, K.P.; Le, H.; Chia, D.; Papadopoulos, K.; Cheng, J.; Koller, D.; Klemmer, S.R. Peer and self assessment in massive online classes. ACM Trans. Comput.-Hum. Interact. 2013, 20, 1–31. [Google Scholar] [CrossRef]
- Suen, H.K. Peer assessment for massive open online courses (MOOCs). Int. Rev. Res. Open Distrib. Learn. 2014, 15, 312–327. [Google Scholar] [CrossRef]
- Alves, T.; Sousa, F.; Gama, S.; Jorge, J.; Gonçalves, D. How personality traits affect peer assessment in distance learning. Technol. Knowl. Learn. 2024, 29, 371–396. [Google Scholar] [CrossRef]
- Yang, G.; Li, M.; Feng, H.; Zhuang, X. Deeper insights into deep graph convolutional networks: Stability and generalization. IEEE Trans. Pattern Anal. Mach. Intell. 2025. Early Access. [Google Scholar] [CrossRef]
- Xu, Q.S.; Liang, Y.Z. Monte Carlo cross validation. Chemom. Intell. Lab. Syst. 2001, 56, 1–11. [Google Scholar] [CrossRef]
| Statistic | Asst. 1 | Asst. 2 | Asst. 3 | Asst. 4 |
|---|---|---|---|---|
| Average Grades | | | | |
| Ground-truth | 0.62 ± 0.27 | 0.71 ± 0.24 | 0.69 ± 0.33 | 0.59 ± 0.27 |
| Peer | 0.70 ± 0.26 | 0.76 ± 0.23 | 0.75 ± 0.31 | 0.68 ± 0.29 |
| Self | 0.74 ± 0.22 | 0.80 ± 0.22 | 0.82 ± 0.26 | 0.76 ± 0.24 |
| Number of | | | | |
| Exercises | 3 | 4 | 5 | 3 |
| Groups | 75 | 77 | 76 | 79 |
| Students | 183 | 206 | 193 | 191 |
| Items | 225 | 308 | 380 | 237 |
| Peer grades | 965 | 1620 | 1889 | 1133 |
| Self grades | 469 | 755 | 890 | 531 |
| Component | Formulation |
|---|---|
| Ground-truth valuation | Two-component Gaussian mixture over item qualities |
| Social network | Erdős–Rényi random graph $G(n, p)$ |
| Ownership network | One-to-one mapping between users and items |
| Assessment network | For each item, $k$ graders chosen uniformly at random |
| Strategic model | Friends receive the maximum score; unrelated peers are graded fairly |
| Bias–reliability model | Grades centered on true quality, shifted by a bias term, with reliability-dependent noise |
| Hyperparameters | Searching Space |
|---|---|
| Learning rate | , , , , , , |
| Weight decay | 0, |
| Hidden Size | 32, 64, 128, 256, 512 |
| Dropout ratio | 0.1, 0.2, 0.3, 0.4, 0.5 |
| Heads | 1, 2, 3, 4, 5, 6 |
| Model | Peer Evaluation | | | |
|---|---|---|---|---|
| | Asst. 1 | Asst. 2 | Asst. 3 | Asst. 4 |
| Average | 0.1917 | 0.1712 | 0.1902 | 0.1989 |
| Median | 0.1991 | 0.1843 | 0.2047 | 0.2250 |
| PeerRank | 0.1913 | 0.1762 | 0.2235 | 0.2087 |
| PG1 | 0.1919 | 0.1669 | 0.2110 | 0.2161 |
| RankwithTA | 0.1922 | 0.1903 | 0.2183 | 0.1740 |
| Vancouver | 0.1851 | 0.1688 | 0.1951 | 0.2071 |
| GCN-SOAN | 0.1795 | 0.1673 | 0.1869 | 0.1822 |
| ReSAN (Ours) | 0.1690 | 0.1617 | 0.1835 | 0.1749 |
| Model | Peer and Self Evaluation | | | |
|---|---|---|---|---|
| | Asst. 1 | Asst. 2 | Asst. 3 | Asst. 4 |
| Average | 0.1944 | 0.1681 | 0.2023 | 0.2117 |
| Median | 0.2111 | 0.1750 | 0.2333 | 0.2538 |
| PeerRank | 0.1888 | 0.1721 | 0.2203 | 0.2168 |
| PG1 | 0.2009 | 0.1680 | 0.2111 | 0.2304 |
| RankwithTA | 0.1884 | 0.1845 | 0.2137 | 0.1792 |
| Vancouver | 0.1815 | 0.1672 | 0.1945 | 0.2101 |
| GCN-SOAN | 0.1778 | 0.1621 | 0.1840 | 0.1821 |
| ReSAN (Ours) | 0.1668 | 0.1556 | 0.1829 | 0.1747 |
| Model | Peer Evaluation | | | |
|---|---|---|---|---|
| | Asst. 1 | Asst. 2 | Asst. 3 | Asst. 4 |
| Full model | 0.1690 | 0.1617 | 0.1835 | 0.1749 |
| w/o edge_weight | 0.2670 | 0.2444 | 0.3246 | 0.2746 |
| Model | Peer and Self Evaluation | | | |
|---|---|---|---|---|
| | Asst. 1 | Asst. 2 | Asst. 3 | Asst. 4 |
| Full model | 0.1668 | 0.1556 | 0.1829 | 0.1747 |
| w/o edge_weight | 0.2670 | 0.2444 | 0.3246 | 0.2746 |
| Model | Number of Graders per Item $k$ | | | |
|---|---|---|---|---|
| | 3 | 4 | 5 | 6 |
| Full model | 0.1147 | 0.1021 | 0.0944 | 0.0894 |
| w/o edge_weight | 0.1916 | 0.1924 | 0.1912 | 0.1888 |
| Model | Number of Graders per Item | | | |
|---|---|---|---|---|
| | 7 | 8 | 9 | 10 |
| Full model | 0.0829 | 0.0801 | 0.0732 | 0.0718 |
| w/o edge_weight | 0.1927 | 0.1892 | 0.1907 | 0.1873 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ma, X.; Fang, Y.; Gu, Y.; Zhou, S.; Yang, S. ReSAN: Relation-Sensitive Graph Representation Learning for Peer Assessment in Educational Scenarios. Mathematics 2025, 13, 3664. https://doi.org/10.3390/math13223664