Generative AI in Higher Education: Applications, Implications, and Future Directions

A special issue of Informatics (ISSN 2227-9709).

Deadline for manuscript submissions: closed (28 February 2026) | Viewed by 27245

Special Issue Editors


Guest Editor
College of Health, Sport and Engineering, Victoria University, Melbourne, Australia
Interests: generative AI in industry and higher education; information systems adoption; educational technologies and innovations; health informatics and analytics; decision making

Guest Editor
Faculty of Science and Engineering, Southern Cross University, Gold Coast, Australia
Interests: technology/innovation diffusion and planning; software engineering; evolutionary computation; human–computer interaction; information system adoption; user experience

Guest Editor
College of Arts, Business, Law, Education and IT, Victoria University, Melbourne, Australia
Interests: business analytics; artificial intelligence adoption; educational technologies; health analytics and informatics

Special Issue Information

Dear Colleagues,

The rapid advancement and integration of Generative Artificial Intelligence (Gen AI) in educational settings marks a pivotal transformation in the higher education landscape. As universities worldwide adapt to these emerging technologies, fundamental questions arise about teaching methodologies, learning assessment, and academic integrity. This Special Issue of Informatics aims to explore the applications, implications, and future directions of Gen AI in higher education, with a particular focus on how these technologies are reshaping traditional educational paradigms.

We welcome submissions addressing, but not limited to, the following topics:

  • Applications of Gen AI in curriculum design and delivery;
  • Impact of Gen AI on assessment strategies and academic integrity;
  • Integration of Gen AI tools in teaching and learning practices;
  • Institutional policies and frameworks for Gen AI adoption;
  • Student engagement and learning outcomes with Gen AI;
  • Faculty development and adaptation to Gen AI technologies;
  • Equity and accessibility considerations in Gen AI implementation;
  • Future skills and graduate employability in a Gen AI-enhanced workplace;
  • Ethical considerations and responsible use of Gen AI in education;
  • Pedagogical innovations and transformations enabled by Gen AI.

Dr. Amir Ghapanchi
Dr. Reza Ghanbarzadeh
Dr. Afrooz Purarjomandlangrudi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Informatics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • generative AI
  • higher education
  • educational technology
  • academic integrity
  • learning analytics
  • artificial intelligence in education
  • digital pedagogy
  • educational innovation
  • student assessment
  • educational policy

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

15 pages, 381 KB  
Article
Assessment Validity in the Age of Generative AI: A Natural Experiment
by Håvar Brattli, Alexander Utne and Matthew Lynch
Informatics 2026, 13(4), 56; https://doi.org/10.3390/informatics13040056 - 3 Apr 2026
Viewed by 714
Abstract
Universities play a dual role as sites of learning and as institutions that certify student competence through assessment. The rapid diffusion of generative artificial intelligence (GenAI) challenges this certification function by altering the conditions under which assessment evidence is produced. When powerful AI tools are widely available, grades may increasingly reflect a combination of individual understanding and external cognitive support rather than solely independent competence. This study examines how changes in assessment format interact with GenAI availability to reshape observable performance outcomes in higher education. Using exam grade data from a compulsory undergraduate course delivered over five years (2021–2025; N = 1066), the study exploits a naturally occurring change in assessment conditions as a natural experiment. From 2021 to 2024, the course was assessed using an AI-permissive take-home examination, while in 2025 the assessment shifted to an AI-restricted, supervised in-person examination. Course content, intended learning outcomes, grading criteria, examiner continuity, and the structural design of the examination tasks remained stable across cohorts. The results reveal a pronounced shift in grade distributions coinciding with the format change. Failure rates increased sharply in 2025, mid-range grades declined, and the proportion of top grades remained largely unchanged. Statistical analysis indicates a significant association between examination period and grade outcomes (χ2(5, N = 1066) = 60.62, p < 0.001), with a small-to-moderate effect size (Cramér’s V = 0.24), driven primarily by the increase in failing grades. These findings suggest that AI-permissive and AI-restricted assessment formats may not be measurement-equivalent under conditions of widespread GenAI use. 
The results raise concerns about construct validity and the credibility of grades as signals of independent competence, while also highlighting tensions between certification credibility and assessment authenticity.
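For readers who want to verify the reported effect size, Cramér's V follows directly from the chi-square statistic: V = sqrt(χ² / (N · min(r−1, c−1))). A minimal sketch in Python, assuming a 2 (exam period) × 6 (grade level) contingency table implied by df = 5 (the full table is not reproduced in the abstract):

```python
import math

def cramers_v(chi2: float, n: int, r: int, c: int) -> float:
    """Cramer's V effect size for an r x c contingency table."""
    return math.sqrt(chi2 / (n * min(r - 1, c - 1)))

# Values reported in the abstract: chi2(5, N = 1066) = 60.62.
# The 2 (exam period) x 6 (grade level) shape is an assumption from df = 5.
v = cramers_v(60.62, 1066, r=2, c=6)
print(round(v, 2))  # 0.24, matching the reported small-to-moderate effect
```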

19 pages, 610 KB  
Article
Quality Assessment of Generative AI in Cybersecurity Certification
by Vanessa G. Félix, Rodolfo Ostos, Luis J. Mena, Homero Toral-Cruz, Alberto Ochoa-Brust, Pablo Velarde-Alvarado, Apolinar González-Potes, Ramón A. Félix-Cuadras, José A. León-Borges and Rafael Martínez-Peláez
Informatics 2026, 13(4), 53; https://doi.org/10.3390/informatics13040053 - 30 Mar 2026
Viewed by 867
Abstract
Generative Artificial Intelligence (GenAI), particularly Large Language Models (LLMs), is rapidly changing how higher education approaches teaching, learning, and assessment. In cybersecurity education, professional certification exams are key for measuring competence and helping professionals advance their careers, but there is little research on how GenAI systems perform in these exam settings. This study examines how three popular LLMs, ChatGPT-5, Gemini-2.5 Pro, and Copilot-2.5 Pro, handle 183 practice questions from the CompTIA Security+ certification. The study used a two-phase evaluation: a domain-based assessment and a full-length practice exam that mirrors real certification tests. Model performance was measured with accuracy scores, chi-square tests for statistical differences, and an error taxonomy to identify patterns of mistakes relevant to education. All three GenAI systems scored above the passing mark, and there were no significant differences between them. Still, the error analysis revealed persistent conceptual and classification mistakes that were not reflected in the overall accuracy scores. Our results show that GenAI systems can pass structured certification tests, but accuracy by itself does not fully measure professional skill. The study highlights important issues for the reliability and validity of AI-based assessments in higher education and stresses the need for more realistic, concept-focused ways to evaluate GenAI in cybersecurity education.

21 pages, 1961 KB  
Article
Design and Evaluation of a Generative AI-Enhanced Serious Game for Digital Literacy: An AI-Driven NPC Approach
by Suepphong Chernbumroong, Kannikar Intawong, Udomchoke Asawimalkit, Kitti Puritat and Phichete Julrode
Informatics 2026, 13(1), 16; https://doi.org/10.3390/informatics13010016 - 21 Jan 2026
Viewed by 2288
Abstract
The rapid proliferation of misinformation on social media underscores the urgent need for scalable digital-literacy instruction. This study presents the design and evaluation of a Generative AI-enhanced serious game system that integrates Large Language Models (LLMs) to drive adaptive non-player characters (NPCs). Unlike traditional scripted interactions, the system employs role-based prompt engineering to align real-time AI dialogue with the Currency, Relevance, Authority, Accuracy, and Purpose (CRAAP) framework, enabling dynamic scaffolding and authentic misinformation scenarios. A mixed-method experiment with 60 undergraduate students compared this AI-driven approach to traditional instruction using a 40-item digital-literacy pre/post test, the Intrinsic Motivation Inventory (IMI), and open-ended reflections. Results indicated that while both groups improved significantly, the game-based group achieved larger gains in credibility-evaluation performance and reported higher perceived competence, interest, and effort. Qualitative analysis highlighted the HCI trade-off between the high pedagogical value of adaptive AI guidance and technical constraints such as system latency. The findings demonstrate that Generative AI can be effectively operationalized as a dynamic interface layer in serious games to strengthen critical reasoning. This study provides practical guidelines for architecting AI-NPC interactions and advances the theoretical understanding of AI-supported educational informatics.
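Role-based prompt engineering, as described in this abstract, typically means composing a system prompt that fixes an NPC's persona and constrains its dialogue to the target framework. A minimal illustrative sketch; the persona, criteria wording, and template below are hypothetical, not the authors' actual prompts:

```python
# Illustrative role-based prompt template binding an NPC to CRAAP scaffolding.
CRAAP = {
    "Currency":  "probe when the information was published or last updated",
    "Relevance": "probe whether the source fits the player's question",
    "Authority": "probe who authored the source and their credentials",
    "Accuracy":  "probe whether the claims are supported by evidence",
    "Purpose":   "probe why the source exists (inform, sell, persuade)",
}

def build_npc_prompt(role: str, scenario: str) -> str:
    """Compose a system prompt that keeps an NPC in character while it
    scaffolds the player's source evaluation one criterion at a time."""
    criteria = "\n".join(f"- {name}: {hint}" for name, hint in CRAAP.items())
    return (
        f"You are {role} inside a digital-literacy game.\n"
        f"Scenario: {scenario}\n"
        "Stay in character. Never evaluate sources for the player; instead,\n"
        "ask one guiding question at a time based on these criteria:\n"
        f"{criteria}"
    )

prompt = build_npc_prompt(
    role="a skeptical newspaper editor",
    scenario="a viral social-media post claims a miracle cure",
)
print(prompt)
```

The returned string would be passed as the system message to whatever LLM drives the NPC; keeping the criteria in a dict makes the scaffolding auditable and easy to revise.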

27 pages, 610 KB  
Article
Reducing AI-Generated Misinformation in Australian Higher Education: A Qualitative Analysis of Institutional Responses to AI-Generated Misinformation and Implications for Cybercrime Prevention
by Leo S. F. Lin, Geberew Tulu Mekonnen, Mladen Zecevic, Immaculate Motsi-Omoijiade, Duane Aslett and Douglas M. C. Allan
Informatics 2025, 12(4), 132; https://doi.org/10.3390/informatics12040132 - 28 Nov 2025
Viewed by 2461
Abstract
Generative Artificial Intelligence (GenAI) has transformed Australian higher education, amplifying online harms such as misinformation, fraud, and image-based abuse, with significant implications for cybercrime prevention. Combining a PRISMA-guided systematic review with MAXQDA-driven analysis of Australian university policies, this research evaluates institutional strategies against national frameworks, such as the Cybersecurity Strategy 2023–2030. Analyzing data from the academic literature, we identify three key themes: educational strategies, alignment with national frameworks, and policy gaps and development. As the first qualitative analysis of 40 Australian university policies, this study uncovers systemic fragmentation in governance frameworks, with only 12 institutions addressing data privacy risks and none directly targeting AI-driven disinformation threats such as deepfake harassment, a critical gap in the global AI governance literature. The study offers actionable recommendations: a National GenAI Governance Framework co-developed by TEQSA/UA and DoE; enhanced cyberbullying policies; behavior-focused training to strengthen digital safety and prevent cybercrime; and a mandatory annual CyberAI Literacy Module for all students and staff to ensure awareness of cybersecurity risks, responsible use of artificial intelligence, and digital safety practices within the university community.

32 pages, 362 KB  
Article
Human-AI Symbiotic Theory (HAIST): Development, Multi-Framework Assessment, and AI-Assisted Validation in Academic Research
by Laura Thomsen Morello and John C. Chick
Informatics 2025, 12(3), 85; https://doi.org/10.3390/informatics12030085 - 25 Aug 2025
Cited by 2 | Viewed by 9841
Abstract
This study introduces the Human-AI Symbiotic Theory (HAIST), designed to guide authentic collaboration between human researchers and artificial intelligence in academic contexts, while pioneering a novel AI-assisted approach to theory validation that transforms educational research methodology. Addressing critical gaps in educational theory and advancing validation practices, this research employed a sequential three-phase mixed-methods approach: (1) systematic theoretical synthesis integrating five paradigmatic perspectives across learning theory, cognition, information processing, ethics, and AI domains; (2) development of an innovative validation framework combining three established theory-building approaches with AI-assisted content assessment protocols; and (3) comprehensive theory validation through both traditional multi-framework evaluation and novel AI-based content analysis demonstrating strong convergent validity. This research contributes both a theoretically grounded framework for human-AI research collaboration and a methodological innovation demonstrating how AI tools can systematically augment traditional expert-driven theory validation. HAIST provides a comprehensive theoretical foundation designed explicitly for human-AI partnerships in scholarly research with applicability across disciplines, while the AI-assisted validation methodology offers a scalable, reliable model for theory development. Future research directions include empirical testing of HAIST principles in live research settings and broader application of the AI-assisted validation methodology to accelerate theory development across educational research and related disciplines.
17 pages, 880 KB  
Article
Mitigating Learning Burnout Caused by Generative Artificial Intelligence Misuse in Higher Education: A Case Study in Programming Language Teaching
by Xiaorui Dong, Zhen Wang and Shijing Han
Informatics 2025, 12(2), 51; https://doi.org/10.3390/informatics12020051 - 20 May 2025
Cited by 6 | Viewed by 6332
Abstract
The advent of generative artificial intelligence (GenAI) has significantly transformed the educational landscape. While GenAI offers benefits such as convenient access to learning resources, it also introduces potential risks. This study explores the phenomenon of learning burnout among university students resulting from the misuse of GenAI. A questionnaire was designed to assess five key dimensions: information overload and cognitive load, overdependence on technology, limitations of personalized learning, shifts in the role of educators, and declining motivation. Data were collected from 143 students across various majors at Shandong Institute of Petroleum and Chemical Technology in China. In response to the issues identified in the survey, the study proposes several teaching strategies, including cheating detection, peer learning and evaluation, and anonymous feedback mechanisms, which were tested through experimental teaching interventions. The results showed positive outcomes, with students who participated in these strategies demonstrating improved academic performance. Additionally, two rounds of surveys indicated that students' acceptance of additional learning tasks increased over time. This research enhances our understanding of the complex relationship between GenAI and learning burnout, offering valuable insights for educators, policymakers, and researchers on how to effectively integrate GenAI into education while mitigating its negative impacts and fostering healthier learning environments. The dataset, including detailed survey questions and results, is available for download on GitHub.

Other

20 pages, 1381 KB  
Systematic Review
AI-Enhanced Skill Assessment in Higher Vocational Education: A Systematic Review and Meta-Analysis
by Xia Sun and Haoheng Tian
Informatics 2026, 13(2), 20; https://doi.org/10.3390/informatics13020020 - 28 Jan 2026
Viewed by 1666
Abstract
This study synthesizes empirical evidence on AI-supported skill assessment systems in higher vocational education through a systematic review and meta-analysis. Despite growing interest in generative AI within higher education, empirical research on AI-enabled assessment remains fragmented and methodologically uneven, particularly in vocational contexts. Following PRISMA 2020 guidelines, 27 peer-reviewed empirical studies published between 2010 and 2024 were identified from major international and Chinese databases and included in the analysis. Using a random-effects model, the meta-analysis indicates a moderate positive association between AI-supported assessment systems and skill-related learning outcomes (Hedges' g = 0.72), alongside substantial heterogeneity across study designs, outcome measures, and implementation contexts. Subgroup analyses suggest variation across regional and institutional settings, which should be interpreted cautiously given small sample sizes and diverse methodological approaches. Based on the synthesized evidence, the study proposes a conceptual AI-supported skill assessment framework that distinguishes empirically grounded components from forward-looking extensions related to generative AI. Rather than offering prescriptive solutions, the framework provides an evidence-informed baseline to support future research, system design, and responsible integration of generative AI in higher education assessment. Overall, the findings highlight both the potential and the current empirical limitations of AI-enabled assessment, underscoring the need for more robust, theory-informed, and transparent studies as generative AI applications continue to evolve.
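A pooled Hedges' g under a random-effects model is conventionally obtained with the DerSimonian-Laird estimator, which adds an estimated between-study variance (tau²) to each study's sampling variance before weighting. A minimal sketch using hypothetical study-level effects and variances (the review's actual 27 study estimates are not listed in the abstract):

```python
import math

def dersimonian_laird(effects, variances):
    """Pooled effect, between-study variance (tau2), and standard error
    under a DerSimonian-Laird random-effects model."""
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sw
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                    # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, tau2, se

# Hypothetical per-study Hedges' g values and variances, for illustration only.
g_vals = [0.45, 0.80, 1.10, 0.60, 0.70]
vars_ = [0.02, 0.03, 0.04, 0.02, 0.03]
pooled, tau2, se = dersimonian_laird(g_vals, vars_)
print(round(pooled, 2), round(tau2, 3))
```

With these illustrative inputs the estimator returns a nonzero tau², reflecting the kind of between-study heterogeneity the abstract describes.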

12 pages, 216 KB  
Brief Report
Enhancing Interactive Teaching for the Next Generation of Nurses: Generative-AI-Assisted Design of a Full-Day Professional Development Workshop
by Su-I Hou
Informatics 2026, 13(1), 11; https://doi.org/10.3390/informatics13010011 - 15 Jan 2026
Viewed by 827
Abstract
Introduction: Nursing educators and clinical leaders face persistent challenges in engaging the next generation of nurses, often characterized by short attention spans, frequent phone use, and underdeveloped communication skills. This article describes the design and delivery of a full-day interactive teaching workshop for nursing faculty, senior clinical nurses, and nurse leaders, developed using a design-thinking approach supported by generative AI. Methods: The workshop comprised four thematic sessions: (1) Learning styles across generations, (2) Interactive teaching methods, (3) Application of interactive teaching strategies, and (4) Lesson planning and transfer. Generative AI was used during planning to create icebreakers, discussion prompts, clinical teaching scenarios, and application templates. Design decisions emphasized low-tech, low-prep strategies suitable for spontaneous clinical teaching, thereby reducing barriers to adoption. Activities included emoji-card introductions, quick generational polls, colored-paper reflections, portable whiteboard brainstorming, role plays, fishbowl discussions, gallery walks, and movement-based group exercises. Participants (N = 37) were predominantly female (95%) and represented Generations X, Y, and Z. Mid- and end-of-workshop reflection prompts were embedded within Sessions 2 and 4, with participants recording their responses on colored papers, which were then compiled into a single Word document for thematic analysis. Results: Thematic analysis of 59 mid- and end-workshop reflections revealed six interconnected themes, grouped into three categories: (1) engagement and experiential learning, (2) practical applicability and generational awareness, and (3) facilitation, environment, and motivation. Participants emphasized the workshop's lively pace and hands-on design. Experiencing strategies firsthand built confidence for application, while generational awareness encouraged reflection on adapting methods for younger learners. The facilitator's passion, personable approach, and structured use of peer learning created a psychologically safe and motivating climate, leaving participants recharged and inspired to integrate interactive methods. Discussion: The workshop illustrates how AI-assisted, design-thinking-driven professional development can model effective strategies for next-generation learners. When paired with skilled facilitation, AI-supported planning enhances engagement, fosters reflective practice, and promotes immediate transfer of interactive strategies into diverse teaching settings.