You are currently viewing a new version of our website. To view the old version click .
Education Sciences
  • Systematic Review
  • Open Access

15 November 2025

Integrating Artificial Intelligence into the Cybersecurity Curriculum in Higher Education: A Systematic Literature Review

NUS-ISS, National University of Singapore, Singapore 119615, Singapore
Educ. Sci.2025, 15(11), 1540;https://doi.org/10.3390/educsci15111540 
(registering DOI)
This article belongs to the Special Issue Artificial Intelligence and Blended Learning: Challenges, Opportunities, and Future Directions

Abstract

Background: To understand the state of the art of how artificial intelligence (AI) and cybersecurity are taught together, this paper conducts a systematic literature review on integrating AI into the cybersecurity curriculum in higher education. Methods: The peer-reviewed works were screened from major databases published between 2020 and 2025. Integrating AI and cybersecurity typically requires new learning designs. To address this gap in higher education, this review is organized by three categories of research questions: (1) who we teach (audiences and delivery modes), (2) what we teach (related AI topics and cybersecurity topics and how they are integrated), and (3) how we teach (instructional activities and tools used in teaching). Results: The course delivery is mostly face-to-face. The course curricula focus mostly on perception AI. Teaching methods are active and practical, with hands-on labs, interactive tasks, and game-based activities, supported by hardware, programming notebooks, and interactive visualizations. Conclusion: This paper provides the state of the art of integrating AI into the cybersecurity curriculum in higher education, actionable recommendations, and implications for further research. Therefore, it is relevant and transferable for instructors in the field of artificial intelligence education and cybersecurity education.

1. Introduction

Cybersecurity education must evolve alongside the rapid evolution of artificial intelligence (AI) in a practice-oriented curriculum that develops both AI expertise and security expertise (; ; ). In industrial deployments, professionals need competencies in machine learning and secure AI deployment, not only to defend AI-enabled systems but also to leverage AI for threat detection (; ). This has increased the gap between university outcomes and workplace expectations, particularly in hands-on skills and cross-disciplinary knowledge (; ). To close this gap, an integration is needed to embed AI into the cybersecurity curriculum.
There are three major types of AI techniques, including perception AI, generative AI, and agentic AI, each of which has distinct capabilities and risk profiles that require different mitigation. Perception AI analyzes sensor data in critical systems such as autonomous driving. It recognizes road context (e.g., traffic signs, road conditions, and obstacles) in real time and triggers actions such as proceeding or urgent braking. These models are particularly vulnerable to adversarial examples deliberately crafted to induce misclassification (). Generative AI produces new content in response to user prompts (e.g., an e-commerce chatbot that handles customer inquiries). It faces unique threats such as jailbreaks, which elicit harmful outputs, and prompt injection, which triggers intended behavior and instructions (). Agentic AI orchestrates end-to-end workflows through collaborating agents that sense, reason, and act. In enterprise settings, such systems may manage orders, make purchases, and coordinate supply chains. Their attack surface and failure modes differ fundamentally from those of perceptive and generative systems, introducing new cybersecurity challenges around tool use, autonomy, and authorization ().
The differences across various AI paradigms motivate a tighter integration of AI and cybersecurity in higher education. There are two complementary strategies: security for AI and AI for security. Lessons learned from securing AI systems inform how we responsibly embed models into security operations. In turn, operational use in defense surfaces new attacks and governance needs, tightening the feedback loop between security for AI and AI for security. Security for AI emphasizes safeguarding AI systems through governance, policy, and technical controls that mitigate risks and manage threats across data, model development, deployment, and operations (). AI for security applies machine learning and deep learning methods to strengthen protective technologies (e.g., network defense, endpoint protection, and email filtering), accelerating detection and response and augmenting analyst capacity ().
Effective course delivery relies on instructional methods and digital tooling (; ; ). Integrating AI and cybersecurity typically requires new learning designs, especially hands-on activities, and appropriate tools such as programming environments, curated datasets, sandboxes or simulation platforms, and visualization utilities that make model behavior and security mechanisms transparent.
To address the above-identified gaps in higher education, this paper conducts a systematic literature review on integrating AI into the curriculum of cybersecurity. The review is organized around three groups of research questions focusing on course context, course curriculum design, and the course’s instructional activities and tools. This paper makes two key contributions to the literature on AI education and cybersecurity education.
  • First, it systematically synthesizes studies from multiple major databases (Scopus, IEEE Xplore, and Web of Science), offering a broader and more representative view than prior reviews that were limited to specific sources or course formats. Furthermore, it provides the most up-to-date perspective on the field by covering the period from 2020 to 2025.
  • Second, it adopts an integrated lens that examines three categories of six research questions, covering course context, course curriculum, and course instructional activities and tools.
The rest of this paper is organized as follows. Section 2 introduces the relevant research works and highlights the difference between them and this paper. Then, Section 3 presents the three categories of six research questions covered in this study, including course context, course curriculum, and course instruction. It also presents the systematic literature search process using a PRISMA framework (). The research findings are presented in Section 4, followed by discussions on the key observations, recommendations, and limitations of this study in Section 5. Finally, Section 6 provides the conclusion of this paper.

3. Methodology

3.1. Literature Search Process

We conducted a systematic literature search following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework (). The search targeted relevant research in the field of integrated AI and cybersecurity teaching and was performed across three major academic databases: Scopus, IEEE Xplore, and Web of Science. These databases were selected for their comprehensive coverage and relevance to this study.
Due to differing search syntax across databases, customized queries were crafted for each database. To ensure relevance and quality, we applied the following inclusion criteria: articles had to be (i) published between 2020 and 2025, (ii) written in English, and (iii) published in peer-reviewed journals or conference proceedings. We selected the 2020–2025 period to capture studies published during the period of fastest methodological and curricular change in AI-enabled cybersecurity. Table 2 provides a detailed breakdown of the search strings used for each database.
Table 2. A list of search syntax used in various databases.
The initial search in August 2025 returned a total of 263 records after removing duplicates. We employed a multi-phase screening process to determine the final selection of studies, as illustrated in Figure 1. This involved (i) scope review (e.g., relevance to teaching and education) and (ii) manual abstract screening and full-text screening (e.g., focus on the integration of AI and cybersecurity). In the manual abstract screening process, we excluded 184 papers that were not teaching studies, 36 papers that were teaching papers but not related to cybersecurity, and 26 papers that were traditional cybersecurity teaching papers without the integration of AI. Then, in the manual full-text screening process, we excluded 5 review papers and 4 studies focusing on K-12 education. On the other hand, with the additional web search, we managed to find 6 papers. We further complemented database searching with citation chasing (snowballing) to find 5 papers. Then, we excluded 3 review papers from them. In summary, 16 papers that met all inclusion criteria were selected for in-depth analysis in this study. The annual distribution of these 16 papers is presented in Table 3, and their brief description is provided in Table 4.
Figure 1. The PRISMA flow diagram used in this paper.
Table 3. Annual distribution of papers (2020–2025) covered in this paper.
Table 4. A brief overview of papers covered in this literature review.

3.2. Research Questions

This paper examines six research questions from three dimensions. The statements, motivations, and pedagogical gaps and aims of these research questions are provided as follows.
  • Course context-related research questions.
    RQ1. Who are the target audiences of courses?
    Motivation: Identifying the intended learners clarifies the background knowledge, skill gaps, and professional needs the curriculum is designed to address.
    Pedagogical gap and aim: Calibrate learning objectives, scaffolding, and assessment to learner readiness and context.
    RQ2. What delivery modes are adopted in teaching?
    Motivation: Understanding whether courses are offered face-to-face, online, or in hybrid formats provides insight into the accessibility and scalability of instruction.
    Pedagogical gap and aim: Match the delivery modality to learning outcomes (e.g., labs needing hands-on time vs. asynchronous theory), while considering the resource constraints.
  • Course curriculum-related research questions.
    RQ3. What AI topics are included in the curriculum?
    Motivation: Mapping the range of AI content helps reveal the breadth of technical coverage in current educational practice, particularly on the emerging AI technologies.
    Pedagogical gap and aim: Ensure up-to-date topic sequences that build from fundamentals to advanced methods aligned with current practice.
    RQ4. How are AI and cybersecurity concepts integrated in teaching?
    Motivation: Exploring integration strategies shows whether courses treat AI and cybersecurity separately or promote interdisciplinary learning.
    Pedagogical gap and aim: Promote interdisciplinarity via aligned learning outcomes and iterative tasks that connect AI methods to concrete security problems.
  • Course instruction-related research questions.
    RQ5. What instructional activities and pedagogical approaches are used?
    Motivation: Examining teaching activities (e.g., lectures, labs, and projects) highlights how learning objectives are implemented in practice.
    Pedagogical gap and aim: Adopt evidence-informed designs (scaffolded labs and project-based learning) that cultivate problem-solving and professional practices.
    RQ6. What digital tools support the course delivery?
    Motivation: Investigating the tools used (e.g., simulation environments and security platforms) reveals how digital tools facilitate effective learning.
    Pedagogical gap and aim: Select and integrate tools that are accessible and aligned with tasks and simulate real-world workflows to enhance the learning outcome.

4. Results

This section presents the results synthesized from the sixteen selected papers in this literature review. Each subsection corresponds to the three categories of research questions outlined in Section 3.2.

4.1. Course Context-Related Research Questions

The first research question is as follows: RQ1. Who are the target audiences of courses? To address this question, we examined the learner groups reported in the sixteen papers. This question is important because identifying the intended audience helps to clarify the expected prior knowledge and professional needs that the curriculum is designed to meet. As summarized in Table 5, the most frequently mentioned target learner group is university students, including both undergraduate and postgraduate learners, discussed in eleven studies. One study emphasizes non-computing major students, while another study emphasizes the cybersecurity major. The remaining five papers did not provide explicit information on the target audience. These patterns suggest that AI-based curricula should include various pathways, such as foundational AI literacy for non-computing learners and deeper, practice-oriented tracks for cybersecurity majors, so that prerequisites align with learners’ backgrounds.
Table 5. The summarized findings for RQ1 and RQ2. The symbol − indicates that no explicit information was provided in the paper.
The second research question is as follows: RQ2. What delivery modes are adopted in teaching? This question is equally significant, as the chosen delivery mode (face-to-face, online, or hybrid) affects accessibility and scalability of instruction. Among the sixteen papers reviewed and summarized in Table 5, eleven papers reported face-to-face teaching, one paper described an online course, and the remaining four did not specify the delivery modes. Given the findings that most courses are face-to-face offerings, AI-based curricula should adopt modality-agnostic designs, such as cloud notebooks and virtual labs, to preserve hands-on practice.

4.2. Course Curriculum-Related Research Questions

The third research question is as follows: RQ3. What AI topics are included in the curriculum? We consider three major classes of AI, including perception, generative, and agentic, each with distinct capabilities and risk profiles that call for different mitigation. Mapping the range of AI technologies alongside their security implications reveals the breadth of technical coverage in current practice and equips learners with risk-appropriate defenses by design. Across the sixteen papers, the majority of them, fifteen papers, addressed perception AI, while only one paper explicitly taught generative AI, as summarized in Table 6. This imbalance suggests that curricula should be rebalanced beyond perception systems by adding core modules on generative (e.g., prompt injection, jailbreaks, and data leakage) and agentic systems (e.g., tool-use safety and human-in-the-loop).
Table 6. The summarized findings for RQ3 and RQ4. The symbol − indicates that it was not covered in the paper.
The fourth research question is as follows: RQ4. How are AI and cybersecurity concepts integrated into teaching? We examine two complementary integration strategies: cybersecurity for AI and AI for cybersecurity. Among the sixteen papers, the coverage was fairly balanced: five papers addressed cybersecurity for AI only, eight focused on AI for cybersecurity only, and the remaining three covered both, as summarized in Table 6. Programs can run the two modules in parallel with integrative capstones. For example, students harden a model and then implement it in a real exercise to exercise both assurance competencies and applied defensive effectiveness.

4.3. Course Instruction-Related Research Questions

The fifth research question is as follows: RQ5 What instructional activities and pedagogical approaches are used? It is important to examine teaching activities (e.g., lectures, labs, and projects) to understand how learning objectives are implemented in practice. Table 7 summarizes instructional activities used across the reviewed papers, which reveals a clear emphasis on active, practice-oriented designs (e.g., hands-on, interactive, game-based, and experiential) with selective use of case studies, scaffolding, and project experiences. Hands-on and project-based learning is prominent, provided as either standalone lab or project work in three studies (; ; ) and as a combined laboratory plus an independent project in one course (). These activities prioritize skill acquisition in integrative demonstrations. Interactive activities are reported in two papers (; ), typically to sustain engagement, surface misconceptions early, and support rapid feedback. Game-based learning features both full-course game structures and targeted gamification elements (; ; ); these approaches aim to increase motivation and provide safe environments for experimenting with offensive/defensive tactics or AI model behaviors. Experiential learning is explicitly implemented in three papers (; ; ) to support learners’ progression from guided exercises to open tasks. Case studies are used to integrate technical detail with contextual judgment (; ). One course emphasizes authentic learning (), situating activities in realistic professional contexts to bridge classroom and practice. Traditional elements are also effective; homework serves as structured reinforcement (), while a course that combines theoretical and practical components () illustrates a blended model that pairs conceptual grounding with implementation. Finally, an independent capstone project () provides a culminating experience for synthesis and evaluation of learning outcomes. These findings suggest that the curriculum should emphasize authentic practice and interactive game/simulation tasks to expose misconceptions, while providing standardized artifacts (reproducible notebooks and datasets) as assessable learning materials.
Table 7. The summarized findings for RQ5 and RQ6. The symbol − indicates that no explicit information was provided in the paper.
The sixth research question is as follows: RQ6. What digital tools support the course delivery? To effectively deliver the technical content, it is critical to select appropriate tools (e.g., simulation environments and security platforms) to facilitate effective learning. Table 7 summarizes the tools and platforms reported across the papers, spanning hardware setups, programming environments with datasets, visualization, and game-oriented delivery. Several papers offer anchor learning in physical systems to surface real-world constraints and attack surfaces, such as circuit hardware in a very large-scale integration context (). A few papers rely on accessible software stacks that enable reproducible work. One study combines a Python (version 3) programming tool with open-source datasets to scaffold implementation and evaluation (; ; ). Open-source tools and online programming platforms are used in (); (); (), facilitating convenient development and simplified classroom logistics. AI tools, such as ChatGPT, are also studied in (). To unlock the complicated topics, multiple courses adopt block-based programming paired with online visualization webpages (), which provide interpretable outputs or interactive dashboards that support formative feedback. Game-based or simulation-centric platforms appear as an online web game () and as an immersive learning setup (). These findings indicate that the curricula should standardize on portable, reproducible stacks, such as cloud notebooks and labs, so learners can practice end-to-end workflows with minimal setup friction.

5. Discussion

This section summarizes the key observations and provides recommendations for both research and practice.

5.1. Summary of the Key Observations and Actionable Recommendations

Three observations emerge from the three categories of research questions addressed in this study.

5.1.1. Course Context-Related Findings

  • Finding: Integrating AI and cybersecurity across learner populations is important, with offerings targeting university students; this pattern reveals the current educational landscape and underscores the need for audience-appropriate scaffolding.
  • Educational framework: This finding supports the constructivist learning framework, where learners build their understanding by connecting new information with their existing knowledge and experiences. Intended learning outcomes and activities should be aligned to distinct learner profiles. For example, a practical lab platform is created to offer experiential learning for non-computing students (), while cybersecurity students are equipped with a foundational understanding of generative AI to further explore their applications ().
  • Actionable recommendation: Instructors should consider extending constructive alignment to the AI–cybersecurity intersection, provide scaffolded prerequisites, and adopt blended delivery so that varied learners can reach aligned outcomes.

5.1.2. Course Curriculum-Related Findings

  • Finding: Current studies exhibit a balanced emphasis on the two integration strategies, security for AI and AI for security, highlighting cross-disciplinary integration rather than independent treatment.
  • Educational framework: Constructivist learning treats the two lenses, security for AI and AI for security, as paired problems that support knowledge construction through cognitive conflict and resolution (e.g., risk vs. mitigation and attack vs. defense). This finding complements (), where learners experience damage caused by attacks and the advantages of their countermeasures. In addition, an immersive learning environment is designed in () to motivate the students to explore AI development in the context of real-world cybersecurity scenarios, where AI techniques can be manipulated and evaded, resulting in new security implications.
  • Actionable recommendation: Instructors could design lab structures that bind security for AI to AI for security. For each topic, we can design mirrored labs (e.g., prompt injection vs. guardrail; data poisoning vs. governance) so that learners can experience the impact of the AI technique and the inherent risk of the AI technique itself.

5.1.3. Course Instruction-Related Findings

  • Finding: Active pedagogy is prevalent (e.g., hands-on labs/projects, experiential and case-based activities, and visualization to unpack complex concepts), which indicates a need for learning by doing with structured supports that build transferable competencies for AI and cybersecurity practice.
  • Educational framework: Our finding about active pedagogy aligns with the connectivist learning framework, where learners build understanding through manipulation of tools, datasets, and reflection on experience. This is consistent with immersive and visualization-centric designs in (); (), hands-on programming design (), and even the hardware implementation (; ).
  • Actionable recommendation: Instructors should ground theory in practice. For example, they can start each lecture with a brief real-world artifact (e.g., a prompt injection transcript), state the intended learning outcomes, and then introduce the concepts that explain the artifact. Furthermore, they can provide one-click, sandboxed environments (e.g., Docker/Colab) so learners can run paired attack–defend labs and safely explore AI techniques. Lastly, they can conclude each hands-on activity with a guided reflection, prompting students to articulate what worked, what failed, and how they would improve their approach.

5.2. Limitations

There are a few limitations in this study. This review may be affected by search bias arising from database coverage, indexing delays, English-language restrictions, and the evolving terminology of AI and cybersecurity that could cause relevant studies to be missed by our keywords. Moreover, this review is focused on peer-reviewed journal and conference papers; consequently, it excludes course websites and practitioner reports that may capture cutting-edge practice. This coverage limits generalizability to other settings (e.g., professional training). Given the rapid pace of AI (especially generative AI and emerging agentic AI systems), this paper might only provide insights for very recent innovations and practices.

5.3. Future Research

Through this literature review, we propose three actionable recommendations for instructors in AI education and cybersecurity education. First, we need to align curriculum activities with two lenses, including security for AI (e.g., threat modeling and red-teaming) and AI for security (e.g., anomaly detection and phishing classifiers), and pair them in hands-on exercises to ensure balanced coverage. Second, we need to standardize on accessible and reproducible tooling, such as online programming notebooks, curated open datasets, visualization dashboards, and one-click environments (e.g., Docker or Colab) with starter kits so students focus on learning rather than setup. Last but not least, we need to provide case studies that reflect real threat scenarios to connect technical work to real-world decision-making in this fast-moving domain.

6. Conclusions

This paper presents a systematic literature review on a focused topic of integrating artificial intelligence into cybersecurity education. The findings show that the current practices reach multiple learner groups (from university undergraduate to postgraduate). However, online delivery and hybrid (online and face-to-face) delivery remain underused. The course curricula currently emphasize perception AI only, while emerging areas, like generative and agentic AI systems, are rarely addressed. To effectively integrate AI technology and the cybersecurity content, hands-on activities (e.g., online programming notebooks) and visual explanations are needed to make concepts interactive and explainable. This paper offers a practical reference for instructors seeking to enhance their courses by embedding AI content into the cybersecurity curriculum.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Dataset available on request from the author.

Acknowledgments

The author would like to express their sincere gratitude to the editor and the four reviewers for their insightful suggestions on revising this manuscript.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Afolabi, A. S., & Adewale Akinola, O. (2024, September 18–20). Vulnerable AI: A survey. IEEE International Symposium on Technology and Society, Puebla, Mexico. [Google Scholar] [CrossRef]
  2. Alexander, R., Ma, L., Dou, Z.-L., Cai, Z., & Huang, Y. (2024). Integrity, confidentiality, and equity: Using inquiry-based labs to help students understand AI and cybersecurity. Journal of Cybersecurity Education Research and Practice, 2024(1), 10. [Google Scholar] [CrossRef]
  3. Ali, D., Fatemi, Y., Boskabadi, E., Nikfar, M., Ugwuoke, J., & Ali, H. (2024). ChatGPT in teaching and learning: A systematic review. Education Sciences, 14(6), 643. [Google Scholar] [CrossRef]
  4. Apruzzese, G., Anderson, H. S., Dambra, S., Freeman, D., Pierazzi, F., & Roundy, K. (2023, February 8–10). Real attackers don’t compute gradients: Bridging the gap between adversarial ML research and practice. 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 339–364), Raleigh, NC, USA. [Google Scholar] [CrossRef]
  5. Arai, M., Tejima, K., Yamada, Y., Miura, T., Yamashita, K., Kado, C., Shimizu, R., Tatsumi, M., Yanai, N., & Hanaoka, G. (2024). REN-AI: A video game for AI security education leveraging episodic memory. IEEE Access, 12, 47359–47372. [Google Scholar] [CrossRef]
  6. Aris, A., Rondon, L. P., Ortiz, D., Ross, M., & Finlayson, M. (2022, June 26–29). Integrating artificial intelligence into cybersecurity curriculum: New perspectives. ASEE Annual Conference and Exposition (pp. 1–15), Minneapolis, MN, USA. [Google Scholar] [CrossRef]
  7. Bendler, D., & Felderer, M. (2023). Competency models for information security and cybersecurity professionals: Analysis of existing work and a new model. ACM Transactions on Computing Education, 23(2), 1–33. [Google Scholar] [CrossRef]
  8. Beuran, R., Hu, Z., Zeng, Y., & Tan, Y. (2022). Artificial intelligence for cybersecurity education and training. Springer. [Google Scholar] [CrossRef]
  9. Bhuiyan, S., & Park, J. S. (2025). Cybersecurity threats and mitigation strategies in AI applications. Journal of The Colloquium for Information Systems Security Education, 12(1), 1–7. [Google Scholar] [CrossRef]
  10. Brito, F., Mekdad, Y., Ross, M., Finlayson, M. A., & Uluagac, S. (2025, February 26–March 1). Enhancing cybersecurity education with artificial intelligence content. ACM Technical Symposium on Computer Science Education (pp. 158–164), Pittsburgh, PA, USA. [Google Scholar] [CrossRef]
  11. Calhoun, A., Ortega, E., Yaman, F., Dubey, A., & Aysu, A. (2022, June 6–8). Hands-on teaching of hardware security for machine learning. Great Lakes Symposium on VLSI (pp. 455–461), Irvine, CA, USA. [Google Scholar] [CrossRef]
  12. Cusak, A. (2023). Case study: The impact of emerging technologies on cybersecurity education and workforces. Journal of Cybersecurity Education Research and Practice, 1, 3. [Google Scholar] [CrossRef]
  13. Das, B. C., Amini, M. H., & Wu, Y. (2025). Security and privacy challenges of large language models: A survey. ACM Computing Surveys, 57(6), 1–39. [Google Scholar] [CrossRef]
  14. Debello, J. E., Troja, E., & Truong, L. M. (2023, May 1–4). A framework for infusing cybersecurity programs with real-world artificial intelligence education. IEEE Global Engineering Education Conference (pp. 1–5), Kuwait, Kuwait. [Google Scholar] [CrossRef]
  15. Deng, Z., Guo, Y., Han, C., Ma, W., Xiong, J., Wen, S., & Xiang, Y. (2025). AI agents under threat: A survey of key security challenges and future pathways. ACM Computing Surveys, 57(7), 1–36. [Google Scholar] [CrossRef]
  16. Dewi, H. A., Candiwan, C., & Sari, P. K. (2024, December 17–19). Artificial intelligence in security education, training and awareness: A bibliometric analysis. 2024 International Conference on Intelligent Cybernetics Technology & Applications (pp. 914–919), Bali, Indonesia. [Google Scholar] [CrossRef]
  17. Farahmand, F. (2021). Integrating cybersecurity and artificial intelligence research in engineering and computer science education. IEEE Security and Privacy, 19(6), 104–110. [Google Scholar] [CrossRef]
  18. Jaffal, N. O., Alkhanafseh, M., & Mohaisen, D. (2025). Large language models in cybersecurity: A survey of applications, vulnerabilities, and defense techniques. AI, 6(9), 216. [Google Scholar] [CrossRef]
  19. Jimenez, R., & O’Neill, V. E. (2023). Handbook of research on current trends in cybersecurity and educational technology. IGI Global Scientific Publishing. [Google Scholar] [CrossRef]
  20. Laato, S., Farooq, A., Tenhunen, H., Pitkamaki, T., Hakkala, A., & Airola, A. (2020, July 6–9). AI in cybersecurity education—A systematic literature review of studies on cybersecurity MOOCs. IEEE 20th International Conference on Advanced Learning Technologies (pp. 6–10), Tartu, Estonia. [Google Scholar] [CrossRef]
  21. Lasisi, R. O., Menia, M., Farr, Z., & Jones, C. (2022, May 15–18). Exploration of AI-enabled contents for undergraduate cyber security programs. International Florida Artificial Intelligence Research Society Conference (pp. 1–4), Hutchinson Island, FL, USA. [Google Scholar] [CrossRef]
  22. Lo, D. C.-T., Shahriar, H., Qian, K., Whitman, M., Wu, F., & Thomas, C. (2022, March 2–5). Authentic learning of machine learning in cybersecurity with portable hands-on labware. ACM Technical Symposium on Computer Science Education (p. 1153), Providence, RI, USA. [Google Scholar] [CrossRef]
  23. Lozano, A., & Blanco Fontao, C. (2023). Is the education system prepared for the irruption of artificial intelligence? A study on the perceptions of students of primary education degree from a dual perspective: Current pupils and future teachers. Education Sciences, 13(7), 733. [Google Scholar] [CrossRef]
  24. Mathews, N., Schwartz, C., & Wright, M. (2025). Teaching generative AI for cybersecurity: A project-based learning approach. Journal of The Colloquium for Information Systems Security Education, 12(1), 1–10. [Google Scholar] [CrossRef]
  25. Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., & Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13(9), 856. [Google Scholar] [CrossRef]
  26. Okdem, S., & Okdem, S. (2024). Artificial intelligence in cybersecurity: A review and a case study. Applied Sciences, 14(22), 487. [Google Scholar] [CrossRef]
  27. Okpala, E., Vishwamitra, N., Guo, K., Liao, S., Cheng, L., Hu, H., Yuan, X., Wade, J., & Khorsandroo, S. (2025). AI-cybersecurity education through designing AI-based cyberharassment detection lab. Journal of The Colloquium for Information Systems Security Education, 12(1), 1–8. [Google Scholar] [CrossRef]
  28. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Systematic Reviews, 10, 89. [Google Scholar] [CrossRef]
  29. Payne, C., & Glantz, E. J. (2020, October 7–9). Teaching adversarial machine learning: Educating the next generation of technical and security professionals. Annual Conference of the Special Interest Group in Information Technology Education (pp. 7–12), Virtual Event. [Google Scholar] [CrossRef]
  30. Pusey, P., Gupta, M., Mittal, S., & Abdelsalam, M. (2024). An analysis of prerequisites for artificial intelligence machine learning-assisted malware analysis learning modules. Journal of The Colloquium for Information Systems Security Education, 11, 1–5. [Google Scholar] [CrossRef]
  31. Salman, A. (2024, December 17–19). Integrating artificial intelligence in cybersecurity education: A pedagogical framework and case studies. International Conference on Computer and Applications (pp. 1–5), Cairo, Egypt. [Google Scholar] [CrossRef]
  32. Shahriar, H., Whitman, M., Lo, D., Wu, F., & Thomas, C. (2020, March 11–14). Case study-based portable hands-on labware for machine learning in cybersecurity. ACM Technical Symposium on Computer Science Education (p. 1273), Portland, OR, USA. [Google Scholar] [CrossRef]
  33. Svabensky, V., Vykopal, J., & Celeda, P. (2020, March 11–14). What are cybersecurity education papers about? A systematic literature review of SIGCSE and ITiCSE conferences. ACM Technical Symposium on Computer Science Education (pp. 2–8), Portland, OR, USA. [Google Scholar] [CrossRef]
  34. Tian, J. (2025). A practice-oriented computational thinking framework for teaching neural networks to working professionals. AI, 6(7), 140. [Google Scholar] [CrossRef]
  35. Wei-Kocsis, J., Sabounchi, M., Mendis, G. J., Fernando, P., Yang, B. J., & Zhang, T. L. (2024). Cybersecurity education in the age of artificial intelligence: A novel proactive and collaborative learning paradigm. IEEE Transactions on Education, 67(3), 395–404. [Google Scholar] [CrossRef]
  36. Weitl-Harms, S., Spanier, A., Hastings, J., & Rokusek, M. (2023). A systematic mapping study on gamification applications for undergraduate cybersecurity education. Journal of Cybersecurity Education Research and Practice, 2023(1), 9. [Google Scholar] [CrossRef]
  37. You, Y., Tse, J., & Zhao, J. (2025). Panda or not panda? Understanding adversarial attacks with interactive visualization. ACM Transactions on Interactive Intelligent Systems, 15(2), 11. [Google Scholar] [CrossRef]
  38. Zivanovic, M., Lendák, I., & Popovic, R. (2024, July 30–August 2). Tackling the cybersecurity workforce gap with tailored cybersecurity study programs in central and eastern europe. ACM International Conference on Availability, Reliability and Security, Vienna, Austria. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Article Metrics

Citations

Article Access Statistics

Multiple requests from the same IP address are counted as one view.