Next Article in Journal
From Satisfaction to AI Integration: Stakeholder Perceptions of Student Classification and Progress Monitoring in Qatar’s Schools
Next Article in Special Issue
Evaluating the Relationship Between Pre-Service Teachers’ Artificial Intelligence Readiness and Professional Self-Efficacy
Previous Article in Journal
Saints, Superheroes, and Zombies: Early Childhood Professionals’ Well-Being and Relational Health in the Waning Days of the COVID-19 Pandemic
Previous Article in Special Issue
Non-Semantic Multimodal Fusion for Predicting Segment Access Frequency in Lecture Archives
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Systematic Review

Integrating Artificial Intelligence into the Cybersecurity Curriculum in Higher Education: A Systematic Literature Review

NUS-ISS, National University of Singapore, Singapore 119615, Singapore
Educ. Sci. 2025, 15(11), 1540; https://doi.org/10.3390/educsci15111540
Submission received: 12 September 2025 / Revised: 9 November 2025 / Accepted: 11 November 2025 / Published: 15 November 2025

Abstract

Background: To understand the state of the art of how artificial intelligence (AI) and cybersecurity are taught together, this paper conducts a systematic literature review on integrating AI into the cybersecurity curriculum in higher education. Methods: The peer-reviewed works were screened from major databases published between 2020 and 2025. Integrating AI and cybersecurity typically requires new learning designs. To address this gap in higher education, this review is organized by three categories of research questions: (1) who we teach (audiences and delivery modes), (2) what we teach (related AI topics and cybersecurity topics and how they are integrated), and (3) how we teach (instructional activities and tools used in teaching). Results: The course delivery is mostly face-to-face. The course curricula focus mostly on perception AI. Teaching methods are active and practical, with hands-on labs, interactive tasks, and game-based activities, supported by hardware, programming notebooks, and interactive visualizations. Conclusion: This paper provides the state of the art of integrating AI into the cybersecurity curriculum in higher education, actionable recommendations, and implications for further research. Therefore, it is relevant and transferable for instructors in the field of artificial intelligence education and cybersecurity education.

1. Introduction

Cybersecurity education must evolve alongside the rapid evolution of artificial intelligence (AI) in a practice-oriented curriculum that develops both AI expertise and security expertise (Beuran et al., 2022; Cusak, 2023; Jimenez & O’Neill, 2023). In industrial deployments, professionals need competencies in machine learning and secure AI deployment, not only to defend AI-enabled systems but also to leverage AI for threat detection (Bhuiyan & Park, 2025; Zivanovic et al., 2024). This has increased the gap between university outcomes and workplace expectations, particularly in hands-on skills and cross-disciplinary knowledge (Bendler & Felderer, 2023; Tian, 2025). To close this gap, an integration is needed to embed AI into the cybersecurity curriculum.
There are three major types of AI techniques, including perception AI, generative AI, and agentic AI, each of which has distinct capabilities and risk profiles that require different mitigation. Perception AI analyzes sensor data in critical systems such as autonomous driving. It recognizes road context (e.g., traffic signs, road conditions, and obstacles) in real time and triggers actions such as proceeding or urgent braking. These models are particularly vulnerable to adversarial examples deliberately crafted to induce misclassification (Afolabi & Adewale Akinola, 2024). Generative AI produces new content in response to user prompts (e.g., an e-commerce chatbot that handles customer inquiries). It faces unique threats such as jailbreaks, which elicit harmful outputs, and prompt injection, which triggers intended behavior and instructions (Das et al., 2025). Agentic AI orchestrates end-to-end workflows through collaborating agents that sense, reason, and act. In enterprise settings, such systems may manage orders, make purchases, and coordinate supply chains. Their attack surface and failure modes differ fundamentally from those of perceptive and generative systems, introducing new cybersecurity challenges around tool use, autonomy, and authorization (Deng et al., 2025).
The differences across various AI paradigms motivate a tighter integration of AI and cybersecurity in higher education. There are two complementary strategies: security for AI and AI for security. Lessons learned from securing AI systems inform how we responsibly embed models into security operations. In turn, operational use in defense surfaces new attacks and governance needs, tightening the feedback loop between security for AI and AI for security. Security for AI emphasizes safeguarding AI systems through governance, policy, and technical controls that mitigate risks and manage threats across data, model development, deployment, and operations (Jaffal et al., 2025). AI for security applies machine learning and deep learning methods to strengthen protective technologies (e.g., network defense, endpoint protection, and email filtering), accelerating detection and response and augmenting analyst capacity (Okdem & Okdem, 2024).
Effective course delivery relies on instructional methods and digital tooling (Ali et al., 2024; Lozano & Blanco Fontao, 2023; Michel-Villarreal et al., 2023). Integrating AI and cybersecurity typically requires new learning designs, especially hands-on activities, and appropriate tools such as programming environments, curated datasets, sandboxes or simulation platforms, and visualization utilities that make model behavior and security mechanisms transparent.
To address the above-identified gaps in higher education, this paper conducts a systematic literature review on integrating AI into the curriculum of cybersecurity. The review is organized around three groups of research questions focusing on course context, course curriculum design, and the course’s instructional activities and tools. This paper makes two key contributions to the literature on AI education and cybersecurity education.
  • First, it systematically synthesizes studies from multiple major databases (Scopus, IEEE Xplore, and Web of Science), offering a broader and more representative view than prior reviews that were limited to specific sources or course formats. Furthermore, it provides the most up-to-date perspective on the field by covering the period from 2020 to 2025.
  • Second, it adopts an integrated lens that examines three categories of six research questions, covering course context, course curriculum, and course instructional activities and tools.
The rest of this paper is organized as follows. Section 2 introduces the relevant research works and highlights the difference between them and this paper. Then, Section 3 presents the three categories of six research questions covered in this study, including course context, course curriculum, and course instruction. It also presents the systematic literature search process using a PRISMA framework (Page et al., 2021). The research findings are presented in Section 4, followed by discussions on the key observations, recommendations, and limitations of this study in Section 5. Finally, Section 6 provides the conclusion of this paper.

2. Related Works

This section briefly describes relevant review studies on AI and cybersecurity and then highlights the difference between them and this paper. Laato et al. (2020) investigate how cybersecurity has been taught in online courses by conducting a systematic review of prior works on massive open online courses (MOOCs). They find only a limited number of peer-reviewed evaluations of individual cybersecurity MOOCs and highlight the absence of focused treatment of AI applications in cybersecurity education. The article by Dewi et al. (2024) provides a bibliometric analysis of 637 articles; it maps the research landscape on AI, cybersecurity, and education. It identifies thematic clusters and concludes that the use of AI in cybersecurity education and awareness programs remains underdeveloped. The study by Svabensky et al. (2020) synthesizes a decade of cybersecurity education research presented at major computing education conferences. It shows that while many technical and human-centric topics are addressed, few studies provide reusable materials or datasets. The study by Aris et al. (2022) addresses the challenge of updating curricula by proposing a structured method for integrating AI into cybersecurity education. By analyzing around 300 papers from major cybersecurity-related conferences, it demonstrates the growing efforts of AI in security research and argues for its inclusion in teaching. The article by Lasisi et al. (2022) reviews undergraduate cybersecurity programs to assess the presence of AI-related content. It reports that despite the increasing role of AI in enabling advanced cyberattacks, AI courses are lacking, revealing a gap in preparing working professionals for this emerging skill gap. The study by Weitl-Harms et al. (2023) provides a review of 74 papers applying a gamification strategy in cybersecurity operations education in undergraduate coursework.
Unlike these existing works, this paper conducts a literature review with a focus on the integration of AI and cybersecurity in education. Its fundamental difference with existing works is summarized in Table 1. Firstly, while earlier studies typically limited themselves to a single type of data source, such as selected computer science conferences, this paper systematically studies three databases (Scopus, IEEE Xplore, and Web of Science). Secondly, previous reviews often covered past decades or a narrower duration. By spanning 2020 to 2025, this paper captures the most up-to-date research works, including the integration of emerging AI technologies into cybersecurity education. Lastly, earlier works tended to emphasize specific focus, such as MOOC evaluations, bibliometric mapping, or curriculum design. In contrast, this paper provides a comprehensive study on how AI and cybersecurity are taught together, including the course curriculum designs, instructional activities, and the use of digital tools in teaching.

3. Methodology

3.1. Literature Search Process

We conducted a systematic literature search following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework (Page et al., 2021). The search targeted relevant research in the field of integrated AI and cybersecurity teaching and was performed across three major academic databases: Scopus, IEEE Xplore, and Web of Science. These databases were selected for their comprehensive coverage and relevance to this study.
Due to differing search syntax across databases, customized queries were crafted for each database. To ensure relevance and quality, we applied the following inclusion criteria: articles had to be (i) published between 2020 and 2025, (ii) written in English, and (iii) published in peer-reviewed journals or conference proceedings. We selected the 2020–2025 period to capture studies published during the period of fastest methodological and curricular change in AI-enabled cybersecurity. Table 2 provides a detailed breakdown of the search strings used for each database.
The initial search in August 2025 returned a total of 263 records after removing duplicates. We employed a multi-phase screening process to determine the final selection of studies, as illustrated in Figure 1. This involved (i) scope review (e.g., relevance to teaching and education) and (ii) manual abstract screening and full-text screening (e.g., focus on the integration of AI and cybersecurity). In the manual abstract screening process, we excluded 184 papers that were not teaching studies, 36 papers that were teaching papers but not related to cybersecurity, and 26 papers that were traditional cybersecurity teaching papers without the integration of AI. Then, in the manual full-text screening process, we excluded 5 review papers and 4 studies focusing on K-12 education. On the other hand, with the additional web search, we managed to find 6 papers. We further complemented database searching with citation chasing (snowballing) to find 5 papers. Then, we excluded 3 review papers from them. In summary, 16 papers that met all inclusion criteria were selected for in-depth analysis in this study. The annual distribution of these 16 papers is presented in Table 3, and their brief description is provided in Table 4.

3.2. Research Questions

This paper examines six research questions from three dimensions. The statements, motivations, and pedagogical gaps and aims of these research questions are provided as follows.
  • Course context-related research questions.
    RQ1. Who are the target audiences of courses?
    Motivation: Identifying the intended learners clarifies the background knowledge, skill gaps, and professional needs the curriculum is designed to address.
    Pedagogical gap and aim: Calibrate learning objectives, scaffolding, and assessment to learner readiness and context.
    RQ2. What delivery modes are adopted in teaching?
    Motivation: Understanding whether courses are offered face-to-face, online, or in hybrid formats provides insight into the accessibility and scalability of instruction.
    Pedagogical gap and aim: Match the delivery modality to learning outcomes (e.g., labs needing hands-on time vs. asynchronous theory), while considering the resource constraints.
  • Course curriculum-related research questions.
    RQ3. What AI topics are included in the curriculum?
    Motivation: Mapping the range of AI content helps reveal the breadth of technical coverage in current educational practice, particularly on the emerging AI technologies.
    Pedagogical gap and aim: Ensure up-to-date topic sequences that build from fundamentals to advanced methods aligned with current practice.
    RQ4. How are AI and cybersecurity concepts integrated in teaching?
    Motivation: Exploring integration strategies shows whether courses treat AI and cybersecurity separately or promote interdisciplinary learning.
    Pedagogical gap and aim: Promote interdisciplinarity via aligned learning outcomes and iterative tasks that connect AI methods to concrete security problems.
  • Course instruction-related research questions.
    RQ5. What instructional activities and pedagogical approaches are used?
    Motivation: Examining teaching activities (e.g., lectures, labs, and projects) highlights how learning objectives are implemented in practice.
    Pedagogical gap and aim: Adopt evidence-informed designs (scaffolded labs and project-based learning) that cultivate problem-solving and professional practices.
    RQ6. What digital tools support the course delivery?
    Motivation: Investigating the tools used (e.g., simulation environments and security platforms) reveals how digital tools facilitate effective learning.
    Pedagogical gap and aim: Select and integrate tools that are accessible and aligned with tasks and simulate real-world workflows to enhance the learning outcome.

4. Results

This section presents the results synthesized from the sixteen selected papers in this literature review. Each subsection corresponds to the three categories of research questions outlined in Section 3.2.

4.1. Course Context-Related Research Questions

The first research question is as follows: RQ1. Who are the target audiences of courses? To address this question, we examined the learner groups reported in the sixteen papers. This question is important because identifying the intended audience helps to clarify the expected prior knowledge and professional needs that the curriculum is designed to meet. As summarized in Table 5, the most frequently mentioned target learner group is university students, including both undergraduate and postgraduate learners, discussed in eleven studies. One study emphasizes non-computing major students, while another study emphasizes the cybersecurity major. The remaining five papers did not provide explicit information on the target audience. These patterns suggest that AI-based curricula should include various pathways, such as foundational AI literacy for non-computing learners and deeper, practice-oriented tracks for cybersecurity majors, so that prerequisites align with learners’ backgrounds.
The second research question is as follows: RQ2. What delivery modes are adopted in teaching? This question is equally significant, as the chosen delivery mode (face-to-face, online, or hybrid) affects accessibility and scalability of instruction. Among the sixteen papers reviewed and summarized in Table 5, eleven papers reported face-to-face teaching, one paper described an online course, and the remaining four did not specify the delivery modes. Given the findings that most courses are face-to-face offerings, AI-based curricula should adopt modality-agnostic designs, such as cloud notebooks and virtual labs, to preserve hands-on practice.

4.2. Course Curriculum-Related Research Questions

The third research question is as follows: RQ3. What AI topics are included in the curriculum? We consider three major classes of AI, including perception, generative, and agentic, each with distinct capabilities and risk profiles that call for different mitigation. Mapping the range of AI technologies alongside their security implications reveals the breadth of technical coverage in current practice and equips learners with risk-appropriate defenses by design. Across the sixteen papers, the majority of them, fifteen papers, addressed perception AI, while only one paper explicitly taught generative AI, as summarized in Table 6. This imbalance suggests that curricula should be rebalanced beyond perception systems by adding core modules on generative (e.g., prompt injection, jailbreaks, and data leakage) and agentic systems (e.g., tool-use safety and human-in-the-loop).
The fourth research question is as follows: RQ4. How are AI and cybersecurity concepts integrated into teaching? We examine two complementary integration strategies: cybersecurity for AI and AI for cybersecurity. Among the sixteen papers, the coverage was fairly balanced: five papers addressed cybersecurity for AI only, eight focused on AI for cybersecurity only, and the remaining three covered both, as summarized in Table 6. Programs can run the two modules in parallel with integrative capstones. For example, students harden a model and then implement it in a real exercise to exercise both assurance competencies and applied defensive effectiveness.

4.3. Course Instruction-Related Research Questions

The fifth research question is as follows: RQ5 What instructional activities and pedagogical approaches are used? It is important to examine teaching activities (e.g., lectures, labs, and projects) to understand how learning objectives are implemented in practice. Table 7 summarizes instructional activities used across the reviewed papers, which reveals a clear emphasis on active, practice-oriented designs (e.g., hands-on, interactive, game-based, and experiential) with selective use of case studies, scaffolding, and project experiences. Hands-on and project-based learning is prominent, provided as either standalone lab or project work in three studies (Calhoun et al., 2022; Mathews et al., 2025; Payne & Glantz, 2020) and as a combined laboratory plus an independent project in one course (Debello et al., 2023). These activities prioritize skill acquisition in integrative demonstrations. Interactive activities are reported in two papers (Pusey et al., 2024; You et al., 2025), typically to sustain engagement, surface misconceptions early, and support rapid feedback. Game-based learning features both full-course game structures and targeted gamification elements (Arai et al., 2024; Debello et al., 2023; Wei-Kocsis et al., 2024); these approaches aim to increase motivation and provide safe environments for experimenting with offensive/defensive tactics or AI model behaviors. Experiential learning is explicitly implemented in three papers (Alexander et al., 2024; Okpala et al., 2025; Salman, 2024) to support learners’ progression from guided exercises to open tasks. Case studies are used to integrate technical detail with contextual judgment (Apruzzese et al., 2023; Shahriar et al., 2020). One course emphasizes authentic learning (Lo et al., 2022), situating activities in realistic professional contexts to bridge classroom and practice. Traditional elements are also effective; homework serves as structured reinforcement (Farahmand, 2021), while a course that combines theoretical and practical components (Brito et al., 2025) illustrates a blended model that pairs conceptual grounding with implementation. Finally, an independent capstone project (Debello et al., 2023) provides a culminating experience for synthesis and evaluation of learning outcomes. These findings suggest that the curriculum should emphasize authentic practice and interactive game/simulation tasks to expose misconceptions, while providing standardized artifacts (reproducible notebooks and datasets) as assessable learning materials.
The sixth research question is as follows: RQ6. What digital tools support the course delivery? To effectively deliver the technical content, it is critical to select appropriate tools (e.g., simulation environments and security platforms) to facilitate effective learning. Table 7 summarizes the tools and platforms reported across the papers, spanning hardware setups, programming environments with datasets, visualization, and game-oriented delivery. Several papers offer anchor learning in physical systems to surface real-world constraints and attack surfaces, such as circuit hardware in a very large-scale integration context (Calhoun et al., 2022). A few papers rely on accessible software stacks that enable reproducible work. One study combines a Python (version 3) programming tool with open-source datasets to scaffold implementation and evaluation (Alexander et al., 2024; Debello et al., 2023; Okpala et al., 2025). Open-source tools and online programming platforms are used in Lo et al. (2022); Payne and Glantz (2020); Shahriar et al. (2020), facilitating convenient development and simplified classroom logistics. AI tools, such as ChatGPT, are also studied in Mathews et al. (2025). To unlock the complicated topics, multiple courses adopt block-based programming paired with online visualization webpages (You et al., 2025), which provide interpretable outputs or interactive dashboards that support formative feedback. Game-based or simulation-centric platforms appear as an online web game (Arai et al., 2024) and as an immersive learning setup (Wei-Kocsis et al., 2024). These findings indicate that the curricula should standardize on portable, reproducible stacks, such as cloud notebooks and labs, so learners can practice end-to-end workflows with minimal setup friction.

5. Discussion

This section summarizes the key observations and provides recommendations for both research and practice.

5.1. Summary of the Key Observations and Actionable Recommendations

Three observations emerge from the three categories of research questions addressed in this study.

5.1.1. Course Context-Related Findings

  • Finding: Integrating AI and cybersecurity across learner populations is important, with offerings targeting university students; this pattern reveals the current educational landscape and underscores the need for audience-appropriate scaffolding.
  • Educational framework: This finding supports the constructivist learning framework, where learners build their understanding by connecting new information with their existing knowledge and experiences. Intended learning outcomes and activities should be aligned to distinct learner profiles. For example, a practical lab platform is created to offer experiential learning for non-computing students (Okpala et al., 2025), while cybersecurity students are equipped with a foundational understanding of generative AI to further explore their applications (Mathews et al., 2025).
  • Actionable recommendation: Instructors should consider extending constructive alignment to the AI–cybersecurity intersection, provide scaffolded prerequisites, and adopt blended delivery so that varied learners can reach aligned outcomes.

5.1.2. Course Curriculum-Related Findings

  • Finding: Current studies exhibit a balanced emphasis on the two integration strategies, security for AI and AI for security, highlighting cross-disciplinary integration rather than independent treatment.
  • Educational framework: Constructivist learning treats the two lenses, security for AI and AI for security, as paired problems that support knowledge construction through cognitive conflict and resolution (e.g., risk vs. mitigation and attack vs. defense). This finding complements (Arai et al., 2024), where learners experience damage caused by attacks and the advantages of their countermeasures. In addition, an immersive learning environment is designed in Wei-Kocsis et al. (2024) to motivate the students to explore AI development in the context of real-world cybersecurity scenarios, where AI techniques can be manipulated and evaded, resulting in new security implications.
  • Actionable recommendation: Instructors could design lab structures that bind security for AI to AI for security. For each topic, we can design mirrored labs (e.g., prompt injection vs. guardrail; data poisoning vs. governance) so that learners can experience the impact of the AI technique and the inherent risk of the AI technique itself.

5.1.3. Course Instruction-Related Findings

  • Finding: Active pedagogy is prevalent (e.g., hands-on labs/projects, experiential and case-based activities, and visualization to unpack complex concepts), which indicates a need for learning by doing with structured supports that build transferable competencies for AI and cybersecurity practice.
  • Educational framework: Our finding about active pedagogy aligns with the connectivist learning framework, where learners build understanding through manipulation of tools, datasets, and reflection on experience. This is consistent with immersive and visualization-centric designs in Salman (2024); You et al. (2025), hands-on programming design (Alexander et al., 2024), and even the hardware implementation (Apruzzese et al., 2023; Debello et al., 2023).
  • Actionable recommendation: Instructors should ground theory in practice. For example, they can start each lecture with a brief real-world artifact (e.g., a prompt injection transcript), state the intended learning outcomes, and then introduce the concepts that explain the artifact. Furthermore, they can provide one-click, sandboxed environments (e.g., Docker/Colab) so learners can run paired attack–defend labs and safely explore AI techniques. Lastly, they can conclude each hands-on activity with a guided reflection, prompting students to articulate what worked, what failed, and how they would improve their approach.

5.2. Limitations

There are a few limitations in this study. This review may be affected by search bias arising from database coverage, indexing delays, English-language restrictions, and the evolving terminology of AI and cybersecurity that could cause relevant studies to be missed by our keywords. Moreover, this review is focused on peer-reviewed journal and conference papers; consequently, it excludes course websites and practitioner reports that may capture cutting-edge practice. This coverage limits generalizability to other settings (e.g., professional training). Given the rapid pace of AI (especially generative AI and emerging agentic AI systems), this paper might only provide insights for very recent innovations and practices.

5.3. Future Research

Through this literature review, we propose three actionable recommendations for instructors in AI education and cybersecurity education. First, we need to align curriculum activities with two lenses, including security for AI (e.g., threat modeling and red-teaming) and AI for security (e.g., anomaly detection and phishing classifiers), and pair them in hands-on exercises to ensure balanced coverage. Second, we need to standardize on accessible and reproducible tooling, such as online programming notebooks, curated open datasets, visualization dashboards, and one-click environments (e.g., Docker or Colab) with starter kits so students focus on learning rather than setup. Last but not least, we need to provide case studies that reflect real threat scenarios to connect technical work to real-world decision-making in this fast-moving domain.

6. Conclusions

This paper presents a systematic literature review on a focused topic of integrating artificial intelligence into cybersecurity education. The findings show that the current practices reach multiple learner groups (from university undergraduate to postgraduate). However, online delivery and hybrid (online and face-to-face) delivery remain underused. The course curricula currently emphasize perception AI only, while emerging areas, like generative and agentic AI systems, are rarely addressed. To effectively integrate AI technology and the cybersecurity content, hands-on activities (e.g., online programming notebooks) and visual explanations are needed to make concepts interactive and explainable. This paper offers a practical reference for instructors seeking to enhance their courses by embedding AI content into the cybersecurity curriculum.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Dataset available on request from the author.

Acknowledgments

The author would like to express their sincere gratitude to the editor and the four reviewers for their insightful suggestions on revising this manuscript.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Afolabi, A. S., & Adewale Akinola, O. (2024, September 18–20). Vulnerable AI: A survey. IEEE International Symposium on Technology and Society, Puebla, Mexico. [Google Scholar] [CrossRef]
  2. Alexander, R., Ma, L., Dou, Z.-L., Cai, Z., & Huang, Y. (2024). Integrity, confidentiality, and equity: Using inquiry-based labs to help students understand AI and cybersecurity. Journal of Cybersecurity Education Research and Practice, 2024(1), 10. [Google Scholar] [CrossRef]
  3. Ali, D., Fatemi, Y., Boskabadi, E., Nikfar, M., Ugwuoke, J., & Ali, H. (2024). ChatGPT in teaching and learning: A systematic review. Education Sciences, 14(6), 643. [Google Scholar] [CrossRef]
  4. Apruzzese, G., Anderson, H. S., Dambra, S., Freeman, D., Pierazzi, F., & Roundy, K. (2023, February 8–10). Real attackers don’t compute gradients: Bridging the gap between adversarial ML research and practice. 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 339–364), Raleigh, NC, USA. [Google Scholar] [CrossRef]
  5. Arai, M., Tejima, K., Yamada, Y., Miura, T., Yamashita, K., Kado, C., Shimizu, R., Tatsumi, M., Yanai, N., & Hanaoka, G. (2024). REN-AI: A video game for AI security education leveraging episodic memory. IEEE Access, 12, 47359–47372. [Google Scholar] [CrossRef]
  6. Aris, A., Rondon, L. P., Ortiz, D., Ross, M., & Finlayson, M. (2022, June 26–29). Integrating artificial intelligence into cybersecurity curriculum: New perspectives. ASEE Annual Conference and Exposition (pp. 1–15), Minneapolis, MN, USA. [Google Scholar] [CrossRef]
  7. Bendler, D., & Felderer, M. (2023). Competency models for information security and cybersecurity professionals: Analysis of existing work and a new model. ACM Transactions on Computing Education, 23(2), 1–33. [Google Scholar] [CrossRef]
  8. Beuran, R., Hu, Z., Zeng, Y., & Tan, Y. (2022). Artificial intelligence for cybersecurity education and training. Springer. [Google Scholar] [CrossRef]
  9. Bhuiyan, S., & Park, J. S. (2025). Cybersecurity threats and mitigation strategies in AI applications. Journal of The Colloquium for Information Systems Security Education, 12(1), 1–7. [Google Scholar] [CrossRef]
  10. Brito, F., Mekdad, Y., Ross, M., Finlayson, M. A., & Uluagac, S. (2025, February 26–March 1). Enhancing cybersecurity education with artificial intelligence content. ACM Technical Symposium on Computer Science Education (pp. 158–164), Pittsburgh, PA, USA. [Google Scholar] [CrossRef]
  11. Calhoun, A., Ortega, E., Yaman, F., Dubey, A., & Aysu, A. (2022, June 6–8). Hands-on teaching of hardware security for machine learning. Great Lakes Symposium on VLSI (pp. 455–461), Irvine, CA, USA. [Google Scholar] [CrossRef]
  12. Cusak, A. (2023). Case study: The impact of emerging technologies on cybersecurity education and workforces. Journal of Cybersecurity Education Research and Practice, 1, 3. [Google Scholar] [CrossRef]
  13. Das, B. C., Amini, M. H., & Wu, Y. (2025). Security and privacy challenges of large language models: A survey. ACM Computing Surveys, 57(6), 1–39. [Google Scholar] [CrossRef]
  14. Debello, J. E., Troja, E., & Truong, L. M. (2023, May 1–4). A framework for infusing cybersecurity programs with real-world artificial intelligence education. IEEE Global Engineering Education Conference (pp. 1–5), Kuwait, Kuwait. [Google Scholar] [CrossRef]
  15. Deng, Z., Guo, Y., Han, C., Ma, W., Xiong, J., Wen, S., & Xiang, Y. (2025). AI agents under threat: A survey of key security challenges and future pathways. ACM Computing Surveys, 57(7), 1–36. [Google Scholar] [CrossRef]
  16. Dewi, H. A., Candiwan, C., & Sari, P. K. (2024, December 17–19). Artificial intelligence in security education, training and awareness: A bibliometric analysis. 2024 International Conference on Intelligent Cybernetics Technology & Applications (pp. 914–919), Bali, Indonesia. [Google Scholar] [CrossRef]
  17. Farahmand, F. (2021). Integrating cybersecurity and artificial intelligence research in engineering and computer science education. IEEE Security and Privacy, 19(6), 104–110. [Google Scholar] [CrossRef]
  18. Jaffal, N. O., Alkhanafseh, M., & Mohaisen, D. (2025). Large language models in cybersecurity: A survey of applications, vulnerabilities, and defense techniques. AI, 6(9), 216. [Google Scholar] [CrossRef]
  19. Jimenez, R., & O’Neill, V. E. (2023). Handbook of research on current trends in cybersecurity and educational technology. IGI Global Scientific Publishing. [Google Scholar] [CrossRef]
  20. Laato, S., Farooq, A., Tenhunen, H., Pitkamaki, T., Hakkala, A., & Airola, A. (2020, July 6–9). AI in cybersecurity education—A systematic literature review of studies on cybersecurity MOOCs. IEEE 20th International Conference on Advanced Learning Technologies (pp. 6–10), Tartu, Estonia. [Google Scholar] [CrossRef]
  21. Lasisi, R. O., Menia, M., Farr, Z., & Jones, C. (2022, May 15–18). Exploration of AI-enabled contents for undergraduate cyber security programs. International Florida Artificial Intelligence Research Society Conference (pp. 1–4), Hutchinson Island, FL, USA. [Google Scholar] [CrossRef]
  22. Lo, D. C.-T., Shahriar, H., Qian, K., Whitman, M., Wu, F., & Thomas, C. (2022, March 2–5). Authentic learning of machine learning in cybersecurity with portable hands-on labware. ACM Technical Symposium on Computer Science Education (p. 1153), Providence, RI, USA. [Google Scholar] [CrossRef]
  23. Lozano, A., & Blanco Fontao, C. (2023). Is the education system prepared for the irruption of artificial intelligence? A study on the perceptions of students of primary education degree from a dual perspective: Current pupils and future teachers. Education Sciences, 13(7), 733. [Google Scholar] [CrossRef]
  24. Mathews, N., Schwartz, C., & Wright, M. (2025). Teaching generative AI for cybersecurity: A project-based learning approach. Journal of The Colloquium for Information Systems Security Education, 12(1), 1–10. [Google Scholar] [CrossRef]
  25. Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., & Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13(9), 856. [Google Scholar] [CrossRef]
  26. Okdem, S., & Okdem, S. (2024). Artificial intelligence in cybersecurity: A review and a case study. Applied Sciences, 14(22), 487. [Google Scholar] [CrossRef]
  27. Okpala, E., Vishwamitra, N., Guo, K., Liao, S., Cheng, L., Hu, H., Yuan, X., Wade, J., & Khorsandroo, S. (2025). AI-cybersecurity education through designing AI-based cyberharassment detection lab. Journal of The Colloquium for Information Systems Security Education, 12(1), 1–8. [Google Scholar] [CrossRef]
  28. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Systematic Reviews, 10, 89. [Google Scholar] [CrossRef]
  29. Payne, C., & Glantz, E. J. (2020, October 7–9). Teaching adversarial machine learning: Educating the next generation of technical and security professionals. Annual Conference of the Special Interest Group in Information Technology Education (pp. 7–12), Virtual Event. [Google Scholar] [CrossRef]
  30. Pusey, P., Gupta, M., Mittal, S., & Abdelsalam, M. (2024). An analysis of prerequisites for artificial intelligence machine learning-assisted malware analysis learning modules. Journal of The Colloquium for Information Systems Security Education, 11, 1–5. [Google Scholar] [CrossRef]
  31. Salman, A. (2024, December 17–19). Integrating artificial intelligence in cybersecurity education: A pedagogical framework and case studies. International Conference on Computer and Applications (pp. 1–5), Cairo, Egypt. [Google Scholar] [CrossRef]
  32. Shahriar, H., Whitman, M., Lo, D., Wu, F., & Thomas, C. (2020, March 11–14). Case study-based portable hands-on labware for machine learning in cybersecurity. ACM Technical Symposium on Computer Science Education (p. 1273), Portland, OR, USA. [Google Scholar] [CrossRef]
  33. Svabensky, V., Vykopal, J., & Celeda, P. (2020, March 11–14). What are cybersecurity education papers about? A systematic literature review of SIGCSE and ITiCSE conferences. ACM Technical Symposium on Computer Science Education (pp. 2–8), Portland, OR, USA. [Google Scholar] [CrossRef]
  34. Tian, J. (2025). A practice-oriented computational thinking framework for teaching neural networks to working professionals. AI, 6(7), 140. [Google Scholar] [CrossRef]
  35. Wei-Kocsis, J., Sabounchi, M., Mendis, G. J., Fernando, P., Yang, B. J., & Zhang, T. L. (2024). Cybersecurity education in the age of artificial intelligence: A novel proactive and collaborative learning paradigm. IEEE Transactions on Education, 67(3), 395–404. [Google Scholar] [CrossRef]
  36. Weitl-Harms, S., Spanier, A., Hastings, J., & Rokusek, M. (2023). A systematic mapping study on gamification applications for undergraduate cybersecurity education. Journal of Cybersecurity Education Research and Practice, 2023(1), 9. [Google Scholar] [CrossRef]
  37. You, Y., Tse, J., & Zhao, J. (2025). Panda or not panda? Understanding adversarial attacks with interactive visualization. ACM Transactions on Interactive Intelligent Systems, 15(2), 11. [Google Scholar] [CrossRef]
  38. Zivanovic, M., Lendák, I., & Popovic, R. (2024, July 30–August 2). Tackling the cybersecurity workforce gap with tailored cybersecurity study programs in central and eastern europe. ACM International Conference on Availability, Reliability and Security, Vienna, Austria. [Google Scholar] [CrossRef]
Figure 1. The PRISMA flow diagram used in this paper.
Figure 1. The PRISMA flow diagram used in this paper.
Education 15 01540 g001
Table 1. A comparison between relevant works and this review paper.
Table 1. A comparison between relevant works and this review paper.
ReferenceYearSourceNumber of
Studies
Coverage
(Year)
Remark
Laato et al. (2020)2020ACM, IEEE Xplore,152003–2019MOOC course only
Springer, DBLP
Svabensky et al. (2020)2020Selected computer712010–2019General cybersecurity
science conferences education
SIGCSE and ITiCSE
Aris et al. (2022)2022Selected cybersecurity3002016–2021Curriculum only
conferences only
Lasisi et al. (2022)2022Undergraduate242020Curriculum only
courses in USA
Weitl-Harms et al. (2023)2023ACM, Taylor & Francis,742007–2022Gamification only
Scopus, IEEE Xplore
Dewi et al. (2024)2024Scopus6372019–2024A bibliometric analysis
Ours Scopus, IEEE Xplore,162020–2025A study on course
Web of Science context, curriculum,
instruction, and tools
Table 2. A list of search syntax used in various databases.
Table 2. A list of search syntax used in various databases.
DatabaseSearch Syntax
ScopusABS ((Cybersecurity) AND (AI OR “artificial intelligence” OR “machine learning”) AND (teaching OR education))
IEEE Xplore(“Abstract”:Cybersecurity) AND (“Abstract”:AI OR “Abstract”:“artificial intelligence” OR “Abstract”:“machine learning”) AND (“Abstract”:teaching OR “Abstract”:education)
Web of scienceAB = (cybersecurity AND (AI OR “artificial intelligence” OR “machine learning”) AND (teaching OR education))
Table 3. Annual distribution of papers (2020–2025) covered in this paper.
Table 3. Annual distribution of papers (2020–2025) covered in this paper.
Year202020212022202320242025Total
Journal papers0100438
Conference papers2022118
Table 4. A brief overview of papers covered in this literature review.
Table 4. A brief overview of papers covered in this literature review.
ReferenceA Short Description
Alexander et al. (2024)It teaches vulnerabilities in AI systems via hands-on, inquiry-based labs.
Apruzzese et al. (2023)It provides teaching and practitioner guidance that is focused on threat models and evaluation.
Arai et al. (2024)It teaches AI security concepts by leveraging game design and mechanics that embed adversarial machine learning ideas into engaging play.
Brito et al. (2025)It provides a set of recommended topics, sequencing, and course structures that integrate machine learning methods into security education.
Calhoun et al. (2022)Focuses on teaching hardware-level threats and defenses affecting machine learning systems. It provides a suite of hands-on lab modules that expose students to issues such as accelerator vulnerabilities.
Debello et al. (2023)A practical way to integrate artificial intelligence into cybersecurity programs so students encounter real, security-relevant AI problems.
Farahmand (2021)It integrates AI cybersecurity research into engineering and computer science education by structuring interdisciplinary courses and projects that connect students with active research problems and security-relevant AI work.
Lo et al. (2022)It provides a low-cost portable lab kit that can support authentic machine learning workflows in cybersecurity contexts, where students collect data, train models, and evaluate results on security-relevant problems.
Mathews et al. (2025)It designs a course to equip students with a foundational understanding of Generative AI and explore their applications in cybersecurity, by a combination of lectures, hands-on projects, and industry guest lectures.
Okpala et al. (2025)A practical lab platform is created to offer experiential learning for students outside computing disciplines. Participants explore foundational AI ideas and how these can be used to identify online harassment.
Payne and Glantz (2020)It shares how to teach adversarial machine learning to security professionals through a course design with learning objectives and hands-on exercises.
Pusey et al. (2024)It studies which prior knowledge indicators can assess student preparedness for modules involving AI-supported malware investigation, aiding educators in shaping effective teaching strategies.
Salman (2024)A pedagogical framework for blending AI topics into cybersecurity education. It provides a set of design principles and case studies that show how to implement the framework across contexts, including learning outcomes and assessment choices.
Shahriar et al. (2020)A portable machine learning for cybersecurity lab platform. It provides a set of realistic cases and exercises that guide students from problem framing through model building and reflection, enabling consistent, hands-on practice.
Wei-Kocsis et al. (2024)A proactive, collaborative pedagogy that connects AI concepts with cybersecurity practice via a learning model combining teamwork, community engagement, and early risk awareness.
You et al. (2025)An interactive visualization that reveals how tiny, structured perturbations can mislead image classifiers. It provides a learning tool that helps students grasp adversarial examples, decision boundaries, and feature sensitivity through direct manipulation and immediate visual feedback.
Table 5. The summarized findings for RQ1 and RQ2. The symbol − indicates that no explicit information was provided in the paper.
Table 5. The summarized findings for RQ1 and RQ2. The symbol − indicates that no explicit information was provided in the paper.
ReferenceRQ1. Who Are the Target
Audiences of Courses?
RQ2. What Delivery Modes
Are Adopted in Teaching?
Farahmand (2021)University undergraduateFace-to-face
Calhoun et al. (2022)University undergraduateFace-to-face
Debello et al. (2023)University undergraduateFace-to-face
Lo et al. (2022)University postgraduate
Payne and Glantz (2020)University undergraduate;Face-to-face
university postgraduate
Pusey et al. (2024)University undergraduate;Face-to-face
university postgraduate
Wei-Kocsis et al. (2024)University undergraduate;Face-to-face
university postgraduate
Brito et al. (2025)University undergraduate;Face-to-face
university postgraduate
You et al. (2025)University undergraduate;Face-to-face
university postgraduate
Mathews et al. (2025)University undergraduate;Face-to-face
university postgraduate
(cybersecurity major)
Okpala et al. (2025)University undergraduate;Face-to-face
university postgraduate
(non-computing major)
Arai et al. (2024)Online
Shahriar et al. (2020)
Apruzzese et al. (2023)
Alexander et al. (2024)
Salman (2024)
Table 6. The summarized findings for RQ3 and RQ4. The symbol − indicates that it was not covered in the paper.
Table 6. The summarized findings for RQ3 and RQ4. The symbol − indicates that it was not covered in the paper.
ReferenceRQ3. What AI Topics
Are Included
in the Curriculum?
RQ4. How Are AI and Cybersecurity
Concepts Integrated into Teaching?
Cybersecurity for AIAI for Cybersecurity
Payne and Glantz (2020)Perception AI
Calhoun et al. (2022)Perception AI
Apruzzese et al. (2023)Perception AI
Alexander et al. (2024)Perception AI
You et al. (2025)Perception AI
Shahriar et al. (2020)Perception AI
Lo et al. (2022)Perception AI
Debello et al. (2023)Perception AI
Pusey et al. (2024)Perception AI
Salman (2024)Perception AI
Brito et al. (2025)Perception AI
Okpala et al. (2025)Perception AI
Farahmand (2021)Perception AI
Arai et al. (2024)Perception AI
Wei-Kocsis et al. (2024)Perception AI
Mathews et al. (2025)Generative AI
Table 7. The summarized findings for RQ5 and RQ6. The symbol − indicates that no explicit information was provided in the paper.
Table 7. The summarized findings for RQ5 and RQ6. The symbol − indicates that no explicit information was provided in the paper.
ReferenceRQ5. What Instructional Activities
and Pedagogical Approaches Are Used?
RQ6. What Digital Tools
Support the Course Delivery?
Alexander et al. (2024)Experiential learningProgramming platform
(e.g., Colab, Anaconda)
Apruzzese et al. (2023)Case studyRobot car
Arai et al. (2024)Game-based learningPython programming tool;
open-source dataset
Brito et al. (2025)Theoretical materials;Block-based programming;
practical materialsonline visualization webpage
Calhoun et al. (2022)Hands-on activity
Debello et al. (2023)Hands-on gamified labs;Drones; Raspberry Pi
Capstone project
Farahmand (2021)HomeworksCircuit hardware
Lo et al. (2022)Authentic learning
Mathews et al. (2025)Project-based learningAI tools (e.g., ChatGPT)
Okpala et al. (2025)Experiential learningOnline programming platform
Payne and Glantz (2020)Hands-on activityOpen-source tool
Pusey et al. (2024)Interactive workshop
Salman (2024)Scaffolding;Visualization tool
experiential learning
Shahriar et al. (2020)Case study
Wei-Kocsis et al. (2024)Game-based learningOnline programming platform
You et al. (2025)Interactive activityOnline web game
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Tian, J. Integrating Artificial Intelligence into the Cybersecurity Curriculum in Higher Education: A Systematic Literature Review. Educ. Sci. 2025, 15, 1540. https://doi.org/10.3390/educsci15111540

AMA Style

Tian J. Integrating Artificial Intelligence into the Cybersecurity Curriculum in Higher Education: A Systematic Literature Review. Education Sciences. 2025; 15(11):1540. https://doi.org/10.3390/educsci15111540

Chicago/Turabian Style

Tian, Jing. 2025. "Integrating Artificial Intelligence into the Cybersecurity Curriculum in Higher Education: A Systematic Literature Review" Education Sciences 15, no. 11: 1540. https://doi.org/10.3390/educsci15111540

APA Style

Tian, J. (2025). Integrating Artificial Intelligence into the Cybersecurity Curriculum in Higher Education: A Systematic Literature Review. Education Sciences, 15(11), 1540. https://doi.org/10.3390/educsci15111540

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop