AI, Ethics, and Cognitive Bias: An LLM-Based Synthetic Simulation for Education and Research
Abstract
1. Introduction
2. Related Literature
3. Materials and Methods
3.1. Study Design
3.2. The LLM-Generated Synthetic Behavior Estimation Experiment
3.3. The LLM-Generated Synthetic Behavior Estimation Setup
3.4. Data Collection and Analysis
3.5. Ethical and Theoretical Considerations
4. Results
4.1. Overview
4.2. Simulation Results
4.2.1. Academic Misconduct
4.2.2. Loss of Human Agency
4.2.3. Biases in Academic Evaluation
4.2.4. Inequality of Access and Educational Outcomes
4.2.5. Misinformation and Deceptive Content
4.2.6. Homogenization of Thought
4.3. Statistical Analysis
4.4. Stress Tests
5. Discussion
5.1. Experiment Results
5.2. Alignment with Prior Evidence
5.3. Interventions and Strategies
5.4. Long-Term Implications of Cognitive Biases in Academic AI Use
5.5. Limitations of the Study and Future Research Directions
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Situation | Cognitive Bias
---|---
Academic Misconduct | Normalization Bias
 | Complacency Bias
 | Rationalization Bias
Loss of Human Agency | Automation Bias
 | Confirmation Bias
 | Technology Superiority Bias
Biases in Academic Evaluation | Anchoring Bias
 | Representativeness Bias
 | Availability Bias
Inequality of Access and Educational Outcomes | Status Quo Bias
 | Social Confirmation Bias
Misinformation and Deceptive Content Production | Projection Bias
 | Authority Bias
Homogenization of Thought | Conformity Bias
 | Groupthink Bias
Situation | Cognitive Bias | Ethical Decisions | Unethical Decisions | Total Decisions
---|---|---|---|---
Academic Misconduct | Normalization Bias | 55 | 38 | 93
 | Complacency Bias | 67 | 20 | 87
 | Rationalization Bias | 80 | 13 | 93
Loss of Human Agency | Automation Bias | 43 | 50 | 93
 | Technology Superiority Bias | 68 | 25 | 93
 | Confirmation Bias | 71 | 22 | 93
Academic Evaluation | Anchoring Bias | 60 | 33 | 93
 | Availability Bias | 67 | 26 | 93
 | Representativeness Bias | 73 | 20 | 93
Inequality of Access | Status Quo Bias | 55 | 38 | 93
 | Social Confirmation Bias | 65 | 28 | 93
Misinformation | Authority Bias | 56 | 37 | 93
 | Projection Bias | 63 | 30 | 93
Homogenization of Thought | Conformity Bias | 62 | 31 | 93
 | Groupthink Bias | 68 | 25 | 93
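The quantity these counts turn on is the share of unethical decisions elicited under each bias. The following minimal sketch tabulates and ranks that share from the table above; it is illustrative only (the identifiers are ours, not part of the study's materials):

```python
# Share of unethical decisions per cognitive bias, computed from the counts
# in the table above. All identifiers are illustrative, not the study's code.

counts = {
    # bias: (ethical, unethical)
    "Normalization Bias": (55, 38),
    "Complacency Bias": (67, 20),
    "Rationalization Bias": (80, 13),
    "Automation Bias": (43, 50),
    "Technology Superiority Bias": (68, 25),
    "Confirmation Bias": (71, 22),
    "Anchoring Bias": (60, 33),
    "Availability Bias": (67, 26),
    "Representativeness Bias": (73, 20),
    "Status Quo Bias": (55, 38),
    "Social Confirmation Bias": (65, 28),
    "Authority Bias": (56, 37),
    "Projection Bias": (63, 30),
    "Conformity Bias": (62, 31),
    "Groupthink Bias": (68, 25),
}

# Rank biases from most to least associated with unethical decisions.
for bias, (ethical, unethical) in sorted(
    counts.items(), key=lambda kv: kv[1][1] / sum(kv[1]), reverse=True
):
    print(f"{bias:28s} unethical rate = {unethical / (ethical + unethical):.1%}")
```

Run as-is, this ranking places Automation Bias at the top (50 of 93 decisions unethical, about 54%), consistent with the Loss of Human Agency results reported above.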
Situation | Cognitive Biases Tested | χ² | df | p-Value | Significant?
---|---|---|---|---|---
Academic Misconduct | Normalization, Complacency, Rationalization | 15.64 | 2 | 0.0004 | Yes
Loss of Human Agency | Automation, Technology Superiority, Confirmation | 18.32 | 2 | 0.0001 | Yes
Academic Evaluation | Anchoring, Availability, Representativeness | 12.78 | 2 | 0.0016 | Yes
Inequality of Access | Status Quo, Social Confirmation | 8.45 | 1 | 0.0147 | Yes
Misinformation | Authority, Projection | 20.43 | 1 | 0.00003 | Yes
Homogenization of Thought | Conformity, Groupthink | 11.56 | 1 | 0.0031 | Yes
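Group-level statistics of this kind can be approximated from the decision counts with a standard contingency-table test. The sketch below, in Python with SciPy, is a plausible reconstruction rather than the authors' published procedure; since the exact test variant used in the paper is not specified here, the computed statistics need not match the reported values exactly:

```python
# Per-situation chi-square tests of ethical vs. unethical decision counts,
# reconstructed from the table of counts above. Illustrative only: the paper's
# exact procedure is not given here, so results may differ from its reported values.
from scipy.stats import chi2_contingency

# One (ethical, unethical) row per bias within each situation.
situations = {
    "Academic Misconduct": [(55, 38), (67, 20), (80, 13)],
    "Loss of Human Agency": [(43, 50), (68, 25), (71, 22)],
    "Academic Evaluation": [(60, 33), (67, 26), (73, 20)],
    "Inequality of Access": [(55, 38), (65, 28)],
    "Misinformation": [(56, 37), (63, 30)],
    "Homogenization of Thought": [(62, 31), (68, 25)],
}

for name, table in situations.items():
    # Note: chi2_contingency applies Yates' continuity correction to 2x2 tables
    # by default, which slightly lowers the statistic for the two-bias situations.
    chi2, p, dof, _expected = chi2_contingency(table)
    print(f"{name:26s} chi2 = {chi2:6.2f}, df = {dof}, p = {p:.4g}")
```

With SciPy's defaults, the degrees of freedom agree with the table above (2 for the three-bias situations, 1 for the two-bias ones), which supports reading each reported χ² as a test across all biases within a situation.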
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Bertoncini, A. L., Matsushita, R., & Da Silva, S. (2026). AI, Ethics, and Cognitive Bias: An LLM-Based Synthetic Simulation for Education and Research. AI in Education, 1(1), 3. https://doi.org/10.3390/aieduc1010003