Quantile Multi-Attribute Disparity (QMAD): An Adaptable Fairness Metric Framework for Dynamic Environments
Abstract
1. Introduction
- A novel fairness metric framework is proposed that considers multiple attributes, including machine learning outputs and feature variations, for bias detection and quantification.
- The framework's two key components, comparison and aggregation functions, make the fairness metric highly adaptable to varying contexts and scenarios.
- Three innovative comparison–aggregation function pairs are proposed to demonstrate the effectiveness and robustness of the novel fairness metric framework.
2. Related Work
Fairness Monitoring
3. Preliminary
4. Adaptable Fairness Metric Framework
Algorithm 1 QMAD fairness score calculation.
Input: FC, FA, , , B, A
Output: M
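To make the data flow of Algorithm 1 concrete, the following is a minimal Python sketch under assumed interpretations of the inputs: each monitored attribute (model prediction and input features, assumed numerically encoded) is compared between a reference window and a current window with a pluggable comparison function, and the per-attribute scores M_a are then combined by a pluggable aggregation function into the overall score M. The names qmad_score, reference_window, and current_window are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of the score calculation in Algorithm 1,
# assuming the inputs are two data windows, a comparison function C_{a,b}, and an
# aggregation function that combines the per-attribute scores M_a into M.
from typing import Callable, Dict, Sequence, Tuple

import numpy as np


def qmad_score(
    reference_window: Dict[str, Sequence[float]],  # attribute -> values (assumed input form)
    current_window: Dict[str, Sequence[float]],    # attribute -> values (assumed input form)
    compare_fn: Callable[[np.ndarray, np.ndarray], float],  # comparison function C_{a,b}
    aggregate_fn: Callable[[Sequence[float]], float],       # aggregation into M
) -> Tuple[float, Dict[str, float]]:
    """Return the aggregate score M and the per-attribute scores M_a."""
    per_attribute: Dict[str, float] = {}
    for attr, ref_values in reference_window.items():
        cur_values = np.asarray(current_window[attr], dtype=float)
        per_attribute[attr] = compare_fn(np.asarray(ref_values, dtype=float), cur_values)
    return aggregate_fn(list(per_attribute.values())), per_attribute
```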
5. Experiment
5.1. Comparison–Aggregation Functions Evaluated
5.1.1. ROM–Arithmetic Pair
5.1.2. KSTest–Harmonic Pair
5.1.3. ADTest–Harmonic Pair
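As a rough illustration of how the three pairs named in Sections 5.1.1–5.1.3 could be realized with standard SciPy primitives, the sketch below pairs a ratio-of-means comparison with an arithmetic mean and the two distribution tests with a harmonic mean of p-values. The exact ROM formula (here |1 − mean(a)/mean(b)|, so that values near 0 mean no disparity) is an assumption; the KS and Anderson–Darling comparisons return p-values, and SciPy clips the AD significance level to [0.001, 0.25], which matches the flooring visible in the ADTest result tables. Note that in the result tables the harmonic-mean pairs report only the prediction's M_a as the aggregate score.

```python
# Hedged sketches of the three comparison-aggregation pairs; the ROM formula
# is assumed, the two distribution tests come from scipy.stats.
import numpy as np
from scipy import stats


def rom_compare(a: np.ndarray, b: np.ndarray) -> float:
    # Ratio of means mapped so that ~0 means "no disparity" (assumed form).
    return abs(1.0 - np.mean(a) / np.mean(b))


def kstest_compare(a: np.ndarray, b: np.ndarray) -> float:
    # Two-sample Kolmogorov-Smirnov test; a small p-value signals a distribution shift.
    return stats.ks_2samp(a, b).pvalue


def adtest_compare(a: np.ndarray, b: np.ndarray) -> float:
    # k-sample Anderson-Darling test; SciPy clips the returned level to [0.001, 0.25].
    return stats.anderson_ksamp([a, b]).significance_level


def arithmetic_aggregate(scores) -> float:
    return float(np.mean(scores))


def harmonic_aggregate(scores) -> float:
    # The harmonic mean is dominated by the smallest p-value, so a single
    # strongly biased attribute pulls the aggregate toward the bias threshold.
    return float(stats.hmean(scores))
```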
5.2. Synthetic Data Test on Regression
5.2.1. Synthetic Dataset
5.2.2. Bias Injection
5.2.3. Regression Models Evaluated
5.3. Real-World Data Test on Classification
5.3.1. UCI Adult Dataset
5.3.2. Bias Injection
5.3.3. Classification Model Evaluated
6. Experiment Results
6.1. Synthetic Data Evaluation Result
Bias Detection
6.2. Real-World Data Evaluation Result
Bias Detection
7. Benchmark Comparison
7.1. Benchmark Comparison on the Synthetic Dataset
7.2. Benchmark Comparison on the Real-World Dataset
7.3. Choice of Comparison–Aggregation Function Pair
8. Discussion and Limitations
- Our approach underscores the importance of dynamic fairness, where ML systems are designed to adapt to changing societal norms and population demographics. By integrating this adaptability, our work encourages the development of ML systems that remain fair and unbiased over time, reducing the risk of perpetuating or exacerbating existing biases.
- By emphasizing the need to consider a wide range of feature variations, including both obvious and subtle factors, our work advocates for more inclusive and comprehensive fairness assessments and ensures that AI systems are fair and equitable for diverse groups, accounting for multi-dimensionality that might otherwise be overlooked.
- By focusing on the robustness of ML systems to skewed and imbalanced datasets, our work addresses a significant challenge in real-world data. This aspect is particularly impactful as it guides researchers and practitioners in developing ML models that are not only fair in ideal conditions but also maintain their fairness in less-than-ideal, real-world scenarios.
9. Conclusions and Future Work
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. 2021, 54, 1–35.
- Angerschmid, A.; Zhou, J.; Theuermann, K.; Chen, F.; Holzinger, A. Fairness and explanation in AI-informed decision making. Mach. Learn. Knowl. Extr. 2022, 4, 556–579.
- Zhou, J.; Li, Z.; Xiao, C.; Chen, F. Does a Compromise on Fairness Exist in Using AI Models? In Proceedings of the Australasian Joint Conference on Artificial Intelligence, Perth, Australia, 5–8 December 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 191–204.
- Jui, T.D.; Rivas, P. Fairness issues, current approaches, and challenges in machine learning models. Int. J. Mach. Learn. Cybern. 2024, 15, 3095–3125.
- Chaudhari, B.; Agarwal, A.; Bhowmik, T. Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data. arXiv 2022, arXiv:2210.13182.
- Schelter, S.; Biessmann, F.; Januschowski, T.; Salinas, D.; Seufert, S.; Szarvas, G. On Challenges in Machine Learning Model Management. IEEE Data Eng. Bull. 2015, 38, 50–60.
- Villar, D.; Casillas, J. Facing Many Objectives for Fairness in Machine Learning. In Proceedings of the International Conference on the Quality of Information and Communications Technology, Online, 8–10 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 373–386.
- Le Quy, T.; Roy, A.; Iosifidis, V.; Zhang, W.; Ntoutsi, E. A survey on datasets for fairness-aware machine learning. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2022, 12, e1452.
- Caton, S.; Haas, C. Fairness in machine learning: A survey. ACM Comput. Surv. 2024, 56, 1–38.
- Teo, C.T.; Cheung, N.M. Measuring fairness in generative models. arXiv 2021, arXiv:2107.07754.
- Wan, M.; Zha, D.; Liu, N.; Zou, N. Modeling techniques for machine learning fairness: A survey. arXiv 2021, arXiv:2111.03015.
- Hort, M.; Chen, Z.; Zhang, J.M.; Harman, M.; Sarro, F. Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey. ACM J. Responsib. Comput. 2024, 1, 11.
- Hardt, M.; Price, E.; Srebro, N. Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems (NeurIPS); Curran Associates, Inc.: Red Hook, NY, USA, 2016; Volume 29, pp. 3315–3323.
- Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; Zemel, R. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, Cambridge, MA, USA, 8–10 January 2012; pp. 214–226.
- Di Stefano, P.G.; Hickey, J.M.; Vasileiou, V. Counterfactual fairness: Removing direct effects through regularization. arXiv 2020, arXiv:2002.10774.
- Franklin, J.S.; Bhanot, K.; Ghalwash, M.; Bennett, K.P.; McCusker, J.; McGuinness, D.L. An Ontology for Fairness Metrics. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Oxford, UK; Association for Computing Machinery: New York, NY, USA, 2022; pp. 265–275.
- Franklin, J.S.; Powers, H.; Erickson, J.S.; McCusker, J.; McGuinness, D.L.; Bennett, K.P. An Ontology for Reasoning About Fairness in Regression and Machine Learning. In Proceedings of the Knowledge Graphs and Semantic Web; Ortiz-Rodriguez, F., Villazón-Terrazas, B., Tiwari, S., Bobed, C., Eds.; Springer: Cham, Switzerland, 2023; pp. 243–261.
- Ghosh, A.; Shanbhag, A.; Wilson, C. FairCanary: Rapid continuous explainable fairness. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Oxford, UK; Association for Computing Machinery: New York, NY, USA, 2022; pp. 307–316.
- Wang, Z.; Zhou, Y.; Qiu, M.; Haque, I.; Brown, L.; He, Y.; Wang, J.; Lo, D.; Zhang, W. Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking. arXiv 2023, arXiv:2302.08018.
- Liu, L.T.; Dean, S.; Rolf, E.; Simchowitz, M.; Hardt, M. Delayed impact of fair machine learning. In Proceedings of the International Conference on Machine Learning, PMLR, 2018; pp. 3150–3158.
- Nanda, V.; Dooley, S.; Singla, S.; Feizi, S.; Dickerson, J.P. Fairness through robustness: Investigating robustness disparity in deep learning. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, 3–10 March 2021; pp. 466–477.
- Rampisela, T.V.; Maistro, M.; Ruotsalo, T.; Lioma, C. Evaluation measures of individual item fairness for recommender systems: A critical study. ACM Trans. Recomm. Syst. 2024, 3, 1–52.
- Feldman, M.; Friedler, S.A.; Moeller, J.; Scheidegger, C.; Venkatasubramanian, S. Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015; pp. 259–268.
- Bellamy, R.K.E.; Dey, K.; Hind, M.; Hoffman, S.C.; Houde, S.; Kannan, K.; Lohia, P.; Martino, J.; Mehta, S.; Mojsilović, A.; et al. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM J. Res. Dev. 2019, 63, 4:1–4:15.
- Bellamy, R.K.; Dey, K.; Hind, M.; Hoffman, S.C.; Houde, S.; Kannan, K.; Lohia, P.; Martino, J.; Mehta, S.; Mojsilovic, A.; et al. AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv 2018, arXiv:1810.01943.
- Selbst, A.D.; Boyd, D.; Friedler, S.A.; Venkatasubramanian, S.; Vertesi, J. Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the FAT* '19: Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29–31 January 2019.
- Barocas, S.; Hardt, M.; Narayanan, A. Fairness and Machine Learning: Limitations and Opportunities; MIT Press: Cambridge, MA, USA, 2019.
- Zemel, R.; Wu, Y.; Swersky, K.; Pitassi, T.; Dwork, C. Learning fair representations. In Proceedings of the International Conference on Machine Learning, PMLR, Atlanta, GA, USA, 16–21 June 2013; pp. 325–333.
- Becker, B.; Kohavi, R. Adult Dataset. UCI Machine Learning Repository; University of California: Irvine, CA, USA, 1996. Available online: https://archive.ics.uci.edu/dataset/2/adult (accessed on 1 January 2025).
- Kolmogorov, A. Sulla determinazione empirica di una legge di distribuzione. Giorn. Ist. Ital. Degli Att. 1933, 4, 89–91.
- Smirnov, N. Table for estimating the goodness of fit of empirical distributions. Ann. Math. Stat. 1948, 19, 279–281.
- Stephens, M.A. EDF statistics for goodness of fit and some comparisons. J. Am. Stat. Assoc. 1974, 69, 730–737.
Metric | Dynamic Fairness | Feature Drift | Multi-Attr. | Time-Aware | Customizable | Scalable |
---|---|---|---|---|---|---|
Statistical Parity Diff. [23] | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
Equalized Odds [13] | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
QDD [18] | Limited | ✗ | ✗ | ✗ | ✗ | ✓ |
QMAD (Ours) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Metric | Continuous Output | Feature Consideration |
---|---|---|
Statistical Parity Difference [23,24] | No | No |
Disparate Impact [13] | No | No |
Empirical Difference Fairness [25] | No | No |
Consistency [25] | No | No |
FairCanary (QDD Metric) [18] | Yes | No |
Pair Name | C_{a,b} | M_a | M |
---|---|---|---|
ROM–Arithmetic | | | |
KSTest–Harmonic | KSTest( , ) | | |
ADTest–Harmonic | ADTest( , ) | | |
Feature | Values | Distribution |
---|---|---|
Location | {‘Springfield’, ‘Centerville’} | 70:30 |
Education | {‘GRAD’, ‘POST-GRAD’} | 80:20 |
Engineer Type | {‘Software’, ‘Hardware’} | 85:15 |
Experience (Years) | (0, 50) | Normal Distribution |
Relevant Experience (Years) | (0, 50) | Normal Distribution |
Gender | {‘MAN’, ‘WOMAN’} | 50:50 |
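A minimal sketch of how a synthetic dataset with the categorical splits and experience ranges listed above could be generated is given below; the sample size and the specific normal-distribution parameters (mean 15, s.d. 8, clipped to 0–50), as well as the constraint tying relevant experience to total experience, are illustrative assumptions rather than the authors' exact generator.

```python
# Hedged sketch of a synthetic dataset matching the feature splits above.
# Sample size and normal-distribution parameters are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000

# Experience drawn from a normal distribution, clipped to the (0, 50) range.
experience = np.clip(rng.normal(loc=15, scale=8, size=n), 0, 50)

df = pd.DataFrame({
    "Location": rng.choice(["Springfield", "Centerville"], size=n, p=[0.70, 0.30]),
    "Education": rng.choice(["GRAD", "POST-GRAD"], size=n, p=[0.80, 0.20]),
    "Engineer Type": rng.choice(["Software", "Hardware"], size=n, p=[0.85, 0.15]),
    "Experience": experience,
    # Relevant experience kept at or below total experience (assumed constraint).
    "Relevant Experience": np.clip(experience * rng.uniform(0.3, 1.0, size=n), 0, 50),
    "Gender": rng.choice(["MAN", "WOMAN"], size=n, p=[0.50, 0.50]),
})
```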
ROM–Arithmetic Pair Result | |||||||||
---|---|---|---|---|---|---|---|---|---|
Comparison function: ratio of means (ROM); aggregation function: arithmetic mean |||||||||
The aggregate score M combines the M_a of all features and the prediction; a score ≈ 0 means no bias |||||||||
Day | Lin. Reg. Prediction | DT Regressor Prediction | Relevant Exp. | Job Location | Education | Engg. Type | Experience | Lin. Reg. Agg. Score | DT Regressor Agg. Score |
1 | 0.0088 | 0.0088 | 0.0273 | 0.0169 | 0.0332 | 0.0126 | 0.0247 | 0.0206 | 0.0206 |
2 | 0.0084 | 0.0079 | 0.0820 | 0.1105 | 0.5103 | 0.0087 | 0.0237 | 0.1239 | 0.1238 |
3 | 0.0113 | 0.0112 | 0.0359 | 0.0151 | 0.0274 | 0.0151 | 0.0347 | 0.0232 | 0.0232 |
KSTest–Harmonic Pair Result |||||||||
Comparison function: Kolmogorov–Smirnov test (KS test); aggregation function: harmonic mean |||||||||
The aggregate score uses only the prediction's M_a. The score is a p-value; values below 0.05 indicate bias. |||||||||
Day | Lin. Reg. Prediction | DT Regressor Prediction | Relevant Exp. | Job Location | Education | Engg. Type | Experience | Lin. Reg. Agg. Score | DT Regressor Agg. Score |
1 | 0.4145 | 0.3795 | 0.5223 | 0.8665 | 0.9104 | 0.9139 | 0.5393 | 0.4145 | 0.3795 |
2 | 0.0061 | 0.0024 | 0.0911 | 0 | 0 | 0.9968 | 0.3991 | 0.0061 | 0.0024 |
3 | 0.3086 | 0.3206 | 0.0940 | 0.8495 | 0.9397 | 0.7662 | 0.0646 | 0.3086 | 0.3206 |
ADTest–Harmonic Pair Result | |||||||||
Comparison function: Anderson–Darling test (AD test); aggregation function: harmonic mean | |||||||||
The aggregate score uses only the prediction's M_a. The score is a p-value; values below 0.05 indicate bias. |||||||||
Day | Lin. Reg. Prediction | DT Regressor Prediction | Relevant Exp. | Job Location | Education | Engg. Type | Experience | Lin. Reg. Agg. Score | DT Regressor Agg. Score |
1 | 0.2303 | 0.2308 | 0.2433 | 0.0879 | 0.1624 | 0.0189 | 0.2317 | 0.2303 | 0.2308 |
2 | 0.0318 | 0.0182 | 0.2662 | 0.0010 | 0.0010 | 0.2235 | 0.2034 | 0.0318 | 0.0182 |
3 | 0.1352 | 0.1362 | 0.0524 | 0.0587 | 0.1856 | 0.0104 | 0.0851 | 0.1352 | 0.1362 |
ROM–Arithmetic Pair Result | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Comparison function: ratio of means (ROM); aggregation function: arithmetic mean |||||||||||
The aggregate score M combines the M_a of all features and the prediction; a score ≈ 0 indicates no bias |||||||||||
Day | Income | Age | Workclass | Fnlwgt | Education | Education Num | Marital Status | Race | Sex | Hours/Week | Aggregate Score |
1 | 0.0332 | 0.0077 | 0.0217 | 0.0141 | 0.0327 | 0.0067 | 0.0338 | 0.0082 | 0.0185 | 0.0085 | 0.0185 |
2 | 0.0381 | 0.0099 | 0.0174 | 0.0143 | 0.4244 | 0.0067 | 0.0357 | 0.0104 | 0.0191 | 0.0078 | 0.0584 |
3 | 0.1601 | 0.0103 | 0.0173 | 0.0181 | 0.4295 | 0.0073 | 0.3123 | 0.0121 | 0.0172 | 0.0093 | 0.0993 |
4 | 0.0326 | 0.0124 | 0.0186 | 0.0135 | 0.0352 | 0.0084 | 0.0269 | 0.0119 | 0.0168 | 0.0086 | 0.0185 |
KSTest–Harmonic Pair Result |||||||||||
Comparison function: Kolmogorov–Smirnov test (KS test); aggregation function: harmonic mean |||||||||||
The aggregate score uses only the prediction's M_a; a p-value < 0.05 indicates statistically significant bias |||||||||||
Day | Income | Age | Workclass | Fnlwgt | Education | Education Num | Marital Status | Race | Sex | Hours/Week | Aggregate Score |
1 | 0.2981 | 0.4024 | 0.7039 | 0.3823 | 0.1557 | 0.2291 | 0.7963 | 0.9730 | 0.8697 | 0.5824 | 0.2981 |
2 | 0.0720 | 0.0839 | 0.9316 | 0.1258 | 0 | 0.4536 | 0.1710 | 0.9389 | 0.9432 | 0.5539 | 0.0720 |
3 | 0 | 0.3613 | 0.8821 | 0.0527 | 0 | 0.6828 | 0 | 0.9240 | 0.8045 | 0.5493 | 0 |
4 | 0.1788 | 0.0362 | 0.8691 | 0.3873 | 0.6746 | 0.4896 | 0.7839 | 0.9651 | 0.7105 | 0.5013 | 0.1788 |
ADTest–Harmonic Pair Result | |||||||||||
Comparison function: Anderson–Darling Test (AD test); aggregation function: harmonic mean | |||||||||||
The aggregate score uses only the prediction's M_a; a p-value < 0.05 indicates statistically significant bias |||||||||||
Day | Income | Age | Workclass | Fnlwgt | Education | Education Num | Marital Status | Race | Sex | Hours/Week | Aggregate Score |
1 | 0.1147 | 0.1692 | 0.0249 | 0.2196 | 0.0214 | 0.0227 | 0.1101 | 0.0758 | 0.1227 | 0.0845 | 0.1147 |
2 | 0.0206 | 0.0542 | 0.1646 | 0.2089 | 0.0010 | 0.1746 | 0.0139 | 0.0417 | 0.1925 | 0.2169 | 0.0206 |
3 | 0.0010 | 0.1958 | 0.0797 | 0.0726 | 0.0010 | 0.0841 | 0.0010 | 0.0282 | 0.0341 | 0.1590 | 0.0010 |
4 | 0.1284 | 0.0201 | 0.0995 | 0.1612 | 0.0452 | 0.0882 | 0.0736 | 0.0680 | 0.0205 | 0.1024 | 0.1284 |
Attribute | Known Bias Injected? | Flagged by QMAD? | Likely False Positive? |
---|---|---|---|
Education | ✓ | ✓ | ✗ |
Marital Status (Day 2) | ✗ | ✓ | ✓ |
Race (Day 3) | ✗ | ✓ | ✓ |
Day | Statistical Parity Difference | Disparate Impact | Empirical Difference Fairness | Consistency | QMAD_ROM–Arithmetic | QMAD_KSTest–Harmonic | QMAD_ADTest–Harmonic |
---|---|---|---|---|---|---|---|
1 | −0.0069 | 0.9636 | 0.0370 | 0.9929 | 0.0206 | 0.4145 | 0.2303 |
2 | 0.0104 | 1.0559 | 0.0544 | 0.9925 | 0.1239 | 0.0061 | 0.0318 |
3 | −0.0009 | 0.9948 | 0.0051 | 0.9931 | 0.0232 | 0.3086 | 0.1352 |
Day | Statistical Parity Difference | Disparate Impact | Empirical Difference Fairness | Consistency | QMAD_ROM–Arithmetic | QMAD_KSTest–Harmonic | QMAD_ADTest–Harmonic |
---|---|---|---|---|---|---|---|
1 | −0.0011 | 0.9942 | 0.0058 | 0.7286 | 0.0185 | 0.2981 | 0.1147 |
2 | 0.0054 | 1.0292 | 0.0287 | 0.7293 | 0.0584 | 0.0720 | 0.0206 |
3 | 0.0596 | 1.4478 | 0.3699 | 0.7433 | 0.0993 | 0 | 0.001 |
4 | −0.0004 | 0.9978 | 0.0022 | 0.7288 | 0.0185 | 0.1788 | 0.1284 |
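For reference, the two simplest benchmark metrics in the tables above can be computed from binary predictions and a binary protected attribute roughly as follows; the sign and group-ordering conventions are assumptions, and Empirical Difference Fairness and Consistency are omitted because they require additional definitions not reproduced here.

```python
# Hedged sketch of the group-rate benchmark metrics reported above.
import numpy as np


def statistical_parity_difference(y_pred: np.ndarray, protected: np.ndarray) -> float:
    # P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged); sign convention assumed.
    rate_unpriv = y_pred[protected == 0].mean()
    rate_priv = y_pred[protected == 1].mean()
    return float(rate_unpriv - rate_priv)


def disparate_impact(y_pred: np.ndarray, protected: np.ndarray) -> float:
    # Ratio of positive-prediction rates between groups; values near 1 indicate parity.
    rate_unpriv = y_pred[protected == 0].mean()
    rate_priv = y_pred[protected == 1].mean()
    return float(rate_unpriv / rate_priv)
```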
Comparison function: t-test p-value | |||||||
---|---|---|---|---|---|---|---|
Aggregation function: harmonic mean (bias flagged if p < 0.05) | |||||||
Group | Linear Regression Salary | Relevant_Exp | Job_Location | Education | Engg_Type | Experience | Linear Regression Aggregate Score |
1 | 0.29682 | 0.38118 | 0.26586 | 0.34371 | 0.14947 | 0.39126 | 0.27385 |
2 | 0.37494 | 0.00868 | 0 | 0 | 0.45766 | 0.42682 | 0.03268 |
3 | 0.17055 | 0.18397 | 0.25875 | 0.41215 | 0.03963 | 0.18378 | 0.12431 |
Comparison function: Mann–Whitney U rank test p-value | |||||||
Aggregation function: harmonic mean (bias flagged if p < 0.05) | |||||||
1 | 0.39280 | 0.45055 | 0.26585 | 0.34369 | 0.14947 | 0.43892 | 0.29404 |
2 | 0.18410 | 0.06851 | 0 | 0 | 0.45764 | 0.44589 | 0.16356 |
3 | 0.25129 | 0.15121 | 0.25875 | 0.41214 | 0.03965 | 0.17786 | 0.12572 |
Comparison function: Brunner–Munzel test p-value | |||||||
Aggregation function: harmonic mean (bias flagged if p < 0.05) | |||||||
1 | 0.39289 | 0.45081 | 0.27140 | 0.34175 | 0.11443 | 0.43827 | 0.26792 |
2 | 0.19226 | 0.07499 | 0 | 0 | 0.45339 | 0.44599 | 0.17403 |
3 | 0.25814 | 0.14900 | 0.26680 | 0.40871 | 0.06563 | 0.17602 | 0.15916 |
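The three alternative comparison functions in these tables map directly onto SciPy's two-sample tests. The sketch below shows drop-in comparison functions for the framework, again returning p-values to be combined with a harmonic mean; the function names are illustrative.

```python
# Hedged sketch: alternative p-value comparison functions for the framework.
import numpy as np
from scipy import stats


def ttest_compare(a: np.ndarray, b: np.ndarray) -> float:
    # Two-sample t-test on means (equal_var left at SciPy's default).
    return stats.ttest_ind(a, b).pvalue


def mannwhitney_compare(a: np.ndarray, b: np.ndarray) -> float:
    # Mann-Whitney U rank test; nonparametric alternative to the t-test.
    return stats.mannwhitneyu(a, b).pvalue


def brunnermunzel_compare(a: np.ndarray, b: np.ndarray) -> float:
    # Brunner-Munzel test; robust to unequal variances between the groups.
    return stats.brunnermunzel(a, b).pvalue
```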
Use Case | Recommended Comparison Function | Recommended Aggregation Function | Notes |
---|---|---|---|
Binary Classification (balanced groups) | Ratio of Mean (ROM) | Arithmetic Mean | Standard pair for stable class distributions |
Binary Classification (imbalanced groups) | Kolmogorov–Smirnov Test (KSTest) | Harmonic Mean | KSTest detects subtle shifts in imbalanced classes |
Regression (normal distributions) | Ratio of Mean (ROM) | Arithmetic Mean | Fast and interpretable metric for regression |