Integrating Large Language Models into Accessible and Inclusive Education: Access Democratization and Individualized Learning Enhancement Supported by Generative Artificial Intelligence
Abstract
1. Introduction
2. Accessibility and Inclusion Within Education and the Role of Generative AI
3. Enhancing Accessibility Through LLMs
4. Promoting Inclusion via Individualized Learning
5. Discussion and Limitations of LLMs in Education
6. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
| Challenge | Description | Traditional Solutions | Impact and Role of LLMs |
|---|---|---|---|
| Language Barriers | Non-native speakers face difficulties understanding and communicating in the language of instruction. | ESL programs, bilingual tutors, translation tools. | LLMs provide real-time translation, multilingual tutoring, and simplified content explanations, supporting personalized language development. High capacity for assistance, but may struggle with nuanced academic terminology or dialects. |
| Previous Educational Quality | Learners with inconsistent or poor prior education may lack foundational skills. | Diagnostic tests, tutoring, remedial programs. | LLMs offer adaptive explanations, identify knowledge gaps through dialogue, and provide targeted practice. Strong support potential, though educator oversight is needed to ensure alignment with the curriculum. |
| Learning Disabilities and Special Needs | Students require specialized strategies that account for cognitive or sensory challenges. | IEPs, assistive technology, differentiated instruction. | LLMs can deliver content in multiple formats (text-to-speech, summaries, simplified text), adapt tone and pacing, and support executive function. Moderate-to-high capacity, but must be used with accessibility-compliant interfaces. |
| Cultural Differences | Diverse cultural backgrounds can create disconnects with standardized curricula. | Culturally responsive pedagogy, inclusive materials. | LLMs can localize examples, generate content with cultural sensitivity, and incorporate diverse narratives. Good potential if trained on inclusive datasets; requires guidance to avoid cultural bias. |
| Socioeconomic Disparities | Unequal access to devices, internet, and materials restricts participation. | Technology grants, free materials, community centers. | LLMs offer low-cost, on-demand educational support accessible via mobile apps or offline versions. High capacity where infrastructure is present; limited impact without basic digital access. |
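The accessibility roles in the table above are typically realized through prompt templates rather than changes to the model itself. As a minimal, hypothetical sketch (the `build_accessibility_prompt` helper, its template wording, and the reading-level label are illustrative assumptions, not part of this article), simplification and translation requests could be composed as chat-style messages that any chat-completion API would accept:

```python
# Hypothetical sketch: composing accessibility-oriented prompts for an LLM.
# The template wording and reading-level labels are illustrative assumptions.

def build_accessibility_prompt(text: str, task: str, target_language: str = "en",
                               reading_level: str = "grade-6") -> list[dict]:
    """Return a chat-style message list for a simplification or translation task."""
    instructions = {
        "simplify": f"Rewrite the passage at a {reading_level} reading level, "
                    "keeping all key facts.",
        "translate": f"Translate the passage into {target_language}, preserving "
                     "academic terminology.",
    }
    return [
        {"role": "system", "content": "You are an accessible-education assistant."},
        {"role": "user", "content": f"{instructions[task]}\n\nPassage:\n{text}"},
    ]

messages = build_accessibility_prompt(
    "Photosynthesis converts light energy into chemical energy.",
    task="simplify",
)
```

Keeping the instruction separate from the student-facing passage makes it easier for educators to audit and revise the pedagogical framing without touching source content.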
| Inclusive Education Element | LLM Contribution | Educator’s Role | Impact on Learners |
|---|---|---|---|
| Personalized Instruction | LLMs adapt content in real time based on student interactions and learning pace. | Curate and supervise AI-generated materials to ensure pedagogical soundness and emotional resonance. | Enhances learner autonomy, engagement, and mastery through tailored content. |
| Cultural and Linguistic Responsiveness | Content is customized to students’ cultural backgrounds, interests, and languages. | Validate cultural relevance, prevent bias, and ensure inclusivity in AI outputs. | Promotes a sense of belonging and increases participation for diverse learners. |
| Multimodal Learning Support | Delivers explanations visually, verbally, or interactively based on student preference. | Match modality to student learning profiles and integrate with curriculum goals. | Supports comprehension across various learning styles and abilities. |
| Inclusive Collaboration | Mediates peer interaction by clarifying ideas, encouraging quieter voices, and guiding discussions. | Facilitate group work, monitor tone and inclusivity, and intervene when necessary. | Encourages respectful discourse, empathy, and equitable participation. |
| Ethical and Equitable Use | Operates at scale with potential for wide-reaching impact. | Address algorithmic bias, monitor content accuracy, and bridge digital divides. | Ensures fair access, maintains trust, and upholds ethical standards. |
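The "adapt content in real time based on student interactions and learning pace" contribution in the table above can be sketched as a simple state-update loop: track a per-student mastery estimate and choose the difficulty of the next LLM-generated exercise from it. The exponential-moving-average update rule and difficulty thresholds below are illustrative assumptions, not a method described in this article:

```python
# Hypothetical sketch of the adapt-in-real-time loop: a per-student mastery
# estimate is nudged by each answer, then mapped to the next exercise tier.
# The EMA update rule and the thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class StudentState:
    mastery: float = 0.5  # estimated mastery in [0, 1]

def update_mastery(state: StudentState, answer_correct: bool,
                   rate: float = 0.3) -> StudentState:
    """Move the estimate toward 1.0 on a correct answer, toward 0.0 otherwise."""
    target = 1.0 if answer_correct else 0.0
    state.mastery = (1 - rate) * state.mastery + rate * target
    return state

def next_difficulty(state: StudentState) -> str:
    """Map the current mastery estimate to a difficulty tier for the next item."""
    if state.mastery < 0.4:
        return "remedial"
    if state.mastery < 0.75:
        return "standard"
    return "challenge"

student = StudentState()
for correct in [True, True, False, True]:
    student = update_mastery(student, correct)
```

In practice the difficulty tier would feed into the prompt sent to the LLM, while the educator retains the curating role the table assigns them: reviewing generated items before they reach the student.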
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lopez-Gazpio, I. Integrating Large Language Models into Accessible and Inclusive Education: Access Democratization and Individualized Learning Enhancement Supported by Generative Artificial Intelligence. Information 2025, 16, 473. https://doi.org/10.3390/info16060473