Article

Evaluating Fairness Strategies in Educational Data Mining: A Comparative Study of Bias Mitigation Techniques

by George Raftopoulos, Gregory Davrazos and Sotiris Kotsiantis *

Department of Mathematics, University of Patras, 26504 Patras, Greece

* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Electronics 2025, 14(9), 1856; https://doi.org/10.3390/electronics14091856
Submission received: 31 March 2025 / Revised: 24 April 2025 / Accepted: 29 April 2025 / Published: 1 May 2025
(This article belongs to the Special Issue Advances in Information, Intelligence, Systems and Applications)

Abstract

Ensuring fairness in machine learning models applied to educational data is crucial for mitigating biases that can reinforce systemic inequities. This paper compares various fairness-enhancing algorithms across preprocessing, in-processing, and post-processing stages. Preprocessing methods such as Reweighting, Learning Fair Representations, and Disparate Impact Remover aim to adjust training data to reduce bias before model learning. In-processing techniques, including Adversarial Debiasing and Prejudice Remover, intervene during model training to directly minimize discrimination. Post-processing approaches, such as Equalized Odds Post-Processing, Calibrated Equalized Odds Post-Processing, and Reject Option Classification, adjust model predictions to improve fairness without altering the underlying model. We evaluate these methods on educational datasets, examining their effectiveness in reducing disparate impact while maintaining predictive performance. Our findings highlight tradeoffs between fairness and accuracy, as well as the suitability of different techniques for various educational applications.
Keywords: fairness; learning analytics; Open University Learning Analytics dataset; AIF 360
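As a minimal illustration of the pre-processing stage described in the abstract, the sketch below (not the authors' code) measures disparate impact with the AIF 360 toolkit and mitigates it with Reweighing, AIF 360's implementation of the reweighting step. The toy DataFrame and its column names ('gender', 'score', 'passed') are hypothetical stand-ins for OULAD-style features and outcomes.

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'gender' is the protected attribute (1 = privileged group),
# 'passed' is the favorable outcome.
df = pd.DataFrame({
    "gender": [1, 1, 1, 1, 0, 0, 0, 0],
    "score":  [72, 65, 80, 55, 60, 58, 75, 50],
    "passed": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["passed"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

# Disparate impact before mitigation: ratio of favorable-outcome rates
# between the unprivileged and privileged groups (1.0 means parity).
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before reweighting:", before.disparate_impact())

# Reweighing assigns instance weights that balance favorable outcomes
# across groups without changing the features or labels themselves.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_rw = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(
    dataset_rw, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after reweighting:", after.disparate_impact())

The same weighted dataset can then be passed to any downstream classifier that accepts sample weights, which is how a pre-processing method reduces bias without altering the learning algorithm itself.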
