Article

Pedagogical Transformation Using Large Language Models in a Cybersecurity Course

by Rodolfo Ostos 1,2, Vanessa G. Félix 1,2, Luis J. Mena 1, Homero Toral-Cruz 3, Alberto Ochoa-Brust 4, Apolinar González-Potes 4, Ramón A. Félix 4, Julio C. Ramírez Pacheco 5, Víctor Flores 6 and Rafael Martínez-Peláez 1,6,*
1 Unidad Académica de Computación, Universidad Politécnica de Sinaloa, Mazatlán 82199, Mexico
2 Unidad Mazatlán, Universidad Autónoma de Occidente, Mazatlán 82100, Mexico
3 Departamento de Ingeniería y Tecnología, Universidad Autónoma del Estado de Quintana Roo, Chetumal 77019, Mexico
4 Facultad de Ingeniería Mecánica y Eléctrica, Universidad de Colima, Colima 28040, Mexico
5 Departamento de Ciencias de la Salud y Tecnología, Universidad Autónoma del Estado de Quintana Roo, Cancún 77519, Mexico
6 Departamento de Ingeniería de Sistemas y Computación, Universidad Católica del Norte, Antofagasta 1270709, Chile
* Author to whom correspondence should be addressed.
Submission received: 26 November 2025 / Revised: 23 December 2025 / Accepted: 8 January 2026 / Published: 13 January 2026
(This article belongs to the Special Issue How Is AI Transforming Education?)

Abstract

Large Language Models (LLMs) are increasingly used in higher education, but their pedagogical role in fields like cybersecurity remains under-investigated. This research explores integrating LLMs into a university cybersecurity course through a pedagogical approach grounded in active learning, problem-based learning (PBL), and computational thinking (CT). Instead of viewing LLMs as definitive sources of knowledge, the framework treats them as cognitive tools that support reasoning, clarify ideas, and assist technical problem-solving while maintaining human judgment and verification. The study adopts a qualitative, practice-based case-study design spanning three semesters. It features four activities focusing on understanding concepts, installing and configuring tools, automating procedures, and clarifying terminology, all incorporating LLM use in individual and group work. Data collection involved classroom observations, team reflections, and iterative improvements guided by action research. Results show that LLMs can provide valuable, customized support when students actively engage in refining prompts, validating outputs, and solving problems through iteration. LLMs are especially helpful for clarifying concepts and explaining procedures during moments of doubt or failure. Still, common issues such as incomplete instructions, mismatched context, and occasional errors highlight the importance of verifying LLM outputs against trusted sources. Interestingly, these limitations often act as teaching opportunities, encouraging the critical thinking crucial in cybersecurity. Ultimately, this study offers empirical evidence of human–AI collaboration in education, demonstrating how LLMs can enrich active learning.

1. Introduction

Learning cybersecurity is often challenging because it requires students to develop conceptual knowledge and practical skills. They must understand threat models, analyze attacks, install and configure specialized tools, and build critical thinking about vulnerabilities and system defenses [1,2,3]. Hands-on experience plays an essential role in this process, as students engage directly with tools commonly used in professional environments [3,4].
However, many students struggle the first time they work with technical tools because they encounter incomplete or outdated documentation, complex dependencies, and conflicting instructions across online sources, all of which disrupt the learning process. These problems make tasks such as setting up virtual machines or configuring vulnerability scanners difficult to complete without extra help. Although forums, tutorials, and blogs are available, students often find them fragmented, inconsistent, or too advanced for their skill level.
Language barriers create additional difficulties, especially in multilingual contexts [5]. Much of the documentation and community support is only available in English. Students who are non-native speakers may make slower progress and face a higher risk of errors, even when using translation tools, because technical meaning is not always preserved. These challenges highlight the need for more accessible and adaptive forms of instructional support.
Recent advances in Large Language Models (LLMs) offer new pedagogical possibilities for addressing these barriers [6,7]. Tools such as ChatGPT, Copilot, and Gemini can provide explanations, troubleshoot errors, and deliver personalized guidance in real time [8,9,10,11]. Their growing presence in higher education suggests that LLMs may help reduce learning difficulties in technical fields, including cybersecurity [12]. However, the mere availability of these tools does not guarantee meaningful learning outcomes [13]. Without deliberate pedagogical integration, LLMs risk functioning as answer-generating systems that bypass essential cognitive processes such as reasoning, verification, and problem decomposition [14,15].
To move beyond tool-centric adoption, there is a growing need to situate LLM use within a coherent pedagogical framework that aligns with active learning principles, problem-based learning (PBL), and computational thinking (CT). In cybersecurity education, where learning inherently involves ambiguity, iterative troubleshooting, and system-level reasoning, LLMs should be positioned as cognitive support mechanisms that strengthen reasoning rather than replace disciplinary thinking [16]. Designing such a framework requires careful attention to task structure, student readiness, and the professor’s orchestration role in mediating human–AI interaction [17].
Within this context, this pedagogical case study investigates how LLMs can serve as didactic support tools in a university cybersecurity course. The central question guiding the study is: How effectively can LLMs provide personalized assistance when students configure and utilize cybersecurity tools, and when they require clear explanations of specialized technical concepts? Addressing this question contributes to understanding how LLMs, when embedded in active learning, can reduce learning barriers, enhance conceptual understanding, and promote greater student autonomy in hands-on cybersecurity tasks.
This study makes three primary contributions to AI-driven education innovation, with a particular focus on cybersecurity pedagogy.
Firstly, this study introduces a pedagogical framework for integrating LLMs into cybersecurity education. It explicitly aligns with active learning, PBL, and CT. The framework considers LLMs as cognitive support tools embedded within learning environments overseen by professors, emphasizing that their efficacy is contingent on task design and instructional management, rather than solely on the models’ intrinsic capabilities.
Secondly, through a multi-semester case study, the research identifies concrete mechanisms by which LLMs mediate learning processes across individual and collaborative cybersecurity activities. The findings show how LLM-supported conceptual clarification, troubleshooting, and procedural reasoning unfold in practice. They also show that LLM limitations, such as incomplete, ambiguous, or incompatible guidance, can productively activate the verification, critical evaluation, and disciplinary problem-solving central to cybersecurity practice.
Thirdly, the study demonstrates how LLMs support cybersecurity education by providing definitions and instructions on installing, configuring, and utilizing cybersecurity tools. These LLMs serve as educational resources aligned with the discipline’s objectives. Consequently, the study emphasizes cybersecurity training as an optimal environment for cultivating effective, practical human–AI collaboration.
The article is structured as follows: Section 2 reviews related work on the use of LLMs in education. Section 3 provides the theoretical background that informs the study. Section 4 outlines the pedagogical approaches guiding the instructional design and the integration of LLMs. Section 5 presents the case study and details the learning activities developed. Section 6 reports the main findings derived from these activities. Section 7 discusses the results, including their implications and limitations. Finally, Section 8 concludes with key insights and directions for future research.

2. Related Works

In [18], the authors investigated how ChatGPT can assist students in solving chemical engineering problems. The study focused on understanding how the tool helps students tackle traditional engineering challenges by creating simple virtual models that do not require advanced programming skills. This approach opens new opportunities for students to engage with core concepts, especially those who may feel limited by a lack of coding experience. Instead of solely reading about processes or manually solving equations, students used ChatGPT to design simulations for real engineering issues, such as steam cycles and reactor designs. Students could describe problems in everyday language, receive code suggestions, test the models, and improve them based on their observations. The activity showed that ChatGPT not only saves time but also encourages critical thinking and fosters students’ ownership of their learning.
In [19], the authors explored how GPT tools can help students improve their programming skills. The study focused on how these tools assist students with everyday challenges, such as making coding mistakes, becoming stuck on logic, or being unsure how to begin a task. Students could ask questions or describe problems in their own words, and the tool would suggest code or ideas to try. This benefit enabled students to test solutions, identify and correct errors, and gain a deeper understanding of how their code functions. The authors suggest that students explore and learn more actively, rather than simply copying code. For students with limited programming experience, the tool acted as a guide, helping them take small steps and build confidence. The study demonstrated that using GPT not only enhanced students’ coding skills but also increased their motivation, independence, and engagement in learning.
In [20], the authors examined how GPT tools, such as ChatGPT, could assist radiographers and nuclear medicine students. They found that the tool was helpful for basic tasks, such as answering simple questions, explaining ideas, and improving writing. Students could ask questions in their own words and receive quick support. This was especially useful for early-year subjects where answers needed minimal detail. However, with more advanced topics, ChatGPT often gave weak explanations or included incorrect or fabricated sources. In writing tasks, this caused issues with plagiarism and low-quality work. Despite this, the authors noted some benefits: GPT could help students get started, organize their work, and learn basic study habits. Instead of banning these tools, the study recommends that teachers create assignments that require deeper thinking, something AI cannot do. This way, students can utilize GPT as a helpful resource, rather than a shortcut.
In [21], the authors explored how ChatGPT could support undergraduate dental students during a radiation protection project. Students were split into two groups: one used traditional research methods, and the other used ChatGPT to complete the task. The ChatGPT group was encouraged to ask questions in simple language, experiment with different prompts, and verify the AI’s answers with textbooks or scientific articles. They also had to report on how they used the tool. The study showed that students who used ChatGPT scored higher on a surprise knowledge test. Many students found the tool easy to use and helpful for understanding key concepts, creating summaries, and organizing presentations. Some also used it to generate quiz questions and make PowerPoint slides with code. However, they noted issues like missing details, fake references, and outdated information. The authors suggest that instead of avoiding ChatGPT, teachers should guide students on using it responsibly and design tasks that require critical thinking and judgment, skills AI cannot replace.
In [22], the authors examined how ChatGPT could support psychology students in developing expert-level thinking through natural language interaction. The study assessed whether the tool could respond to psychology research methods prompts and evaluate multiple-choice options like a subject expert. ChatGPT’s answers closely resembled those of experts, demonstrating its ability to follow advanced psychological reasoning effectively. This capacity enables students to develop their critical thinking skills without requiring specialized software or complex techniques. Instead of merely memorizing theories or following rigid procedures, students can utilize ChatGPT to explore ideas, verify their understanding, and work through problems more conversationally and comfortably. By asking questions in their own words and refining their thoughts based on the responses, students can deepen their understanding and reflect on what they are learning. Rather than merely providing answers, ChatGPT acts as a thinking partner, encouraging students to question assumptions, strengthen their arguments, and take a more active role in their learning.
In [23], the authors explored how ChatGPT helped students in a math lab. Instead of replacing traditional math software, ChatGPT served more like a guide. As a result, students felt more at ease exploring problems and developing their ideas. The findings revealed that ChatGPT encouraged students to take a more active role in their learning rather than merely following instructions.
In [24], the authors examined how GPT could assist students in learning programming. Instead of replacing teachers, GPT served as a helpful tutor. It provided feedback, explained mistakes, and helped students understand how to organize their code using simple, natural language. Students did not need to know complex programming terms because they could describe their goals, and GPT guided them through the steps, making it easier for beginners. As a result, students could experiment, receive immediate feedback, and improve their code. This increased their confidence and reduced reliance on strict rules or examples. Beyond just learning how to code, they began to see programming as a creative way to solve problems. The study demonstrated that GPT made programming more accessible and empowered students to take greater control of their learning.
The analyzed works show consistent evidence that aligns with the pedagogical framework described in this study. In fields such as chemical engineering [18], programming [19,24], the health sciences [20], psychology [22], mathematics [23], and dentistry [21], LLMs are seen not as replacements for teaching but as valuable tools [25]. LLMs reduce entry barriers, aid reasoning, and promote active engagement with complex topics. Prior research [7,8,26,27] suggests that LLMs enable students to express problems in natural language, explore concepts without requiring advanced technical skills, and test ideas repeatedly. This aligns closely with the framework’s emphasis on inquiry-based learning, understanding problems, and iterative reasoning, especially in cybersecurity education.
Furthermore, these works collectively emphasize that one of the main educational advantages of LLMs lies in cultivating conceptual understanding, troubleshooting skills, and reflective thinking, rather than merely generating definitive responses. As delineated in the proposed framework, numerous studies underscore the importance of validation, critical assessment, and the responsible use of AI-generated content, particularly when inaccuracies, outdated data, or fictitious references are present [28]. This corroborates the framework’s strategy of confining LLMs to a mediating function and incorporating their application within PBL [29] and CT [30] methodologies. By engaging students in authentic tasks that involve interaction with LLMs and requiring verification of outputs against authoritative sources, the research extends these educational benefits to the cybersecurity domain, where ambiguity, complex tools, and fragmented documentation are intrinsic to professional practice.

3. Background

3.1. Role of Technology in Supporting Learning

Technology-mediated learning is a highly effective educational strategy when implemented with careful consideration [31]. Research documented in [32] demonstrates that digital tools such as online platforms and feedback mechanisms significantly enhance students’ comprehension and academic performance. This effectiveness largely derives from students’ capacity to receive immediate feedback and access educational content at their convenience [31,32]. Instead of relying exclusively on traditional classroom instruction, students can revisit materials multiple times, engage with practice questions, and reflect on errors [33]. Furthermore, digital tools facilitate professors in monitoring student progress, identifying challenging content areas, and customizing support or lesson plans accordingly, thereby fostering a more flexible and personalized learning environment [32]. However, studies consistently highlight that technology yields optimal results when it complements effective teaching practices rather than serving as a substitute. When professors guide students in utilizing these tools and assign purposeful tasks, student engagement is markedly enhanced, leading to increased responsibility for their own learning [33].

3.2. Cybersecurity Education

In today’s world, where many aspects of life depend on digital systems, understanding cybersecurity has become essential [34,35]. Attacks on vital services, businesses, and individuals are increasing in both frequency and complexity, highlighting the importance of the public and future professionals learning how to stay safe online and safeguard digital infrastructure [36]. Cybersecurity education is not merely about training experts; it also involves cultivating a culture of awareness and responsibility among all technology users [37]. An effective course should combine theoretical knowledge with practical experience [37]. However, many universities lack hands-on resources such as virtual laboratories, licensed tools, or simulation platforms. Additionally, students come from diverse backgrounds (some are complete beginners, while others already possess technical skills), which presents challenges in designing inclusive curricula. Overcoming these obstacles necessitates innovative pedagogical approaches and ongoing investment in resources [38]. Cybersecurity education must be adaptable to emerging threats, inclusive across all student levels, and practical enough to prepare individuals for real-world challenges.

3.3. Challenges in Cybersecurity Education

Cybersecurity expertise is now crucial in a world more reliant on digital systems, as attacks on vital infrastructure, companies, and individuals become more frequent and advanced. Therefore, practical cybersecurity training should equip students not only to grasp theoretical concepts but also to apply them in practical scenarios, promoting awareness, accountability, and technical skills across diverse audiences [39]. Yet many students, particularly in Latin America, face persistent barriers [5,40]. Cybersecurity curricula require students to interpret highly technical content and follow English-language documentation that may be incomplete, outdated, or inconsistent. These conditions slow hands-on progress and create cognitive overload, especially during tool installation and configuration tasks, which can interrupt learning when instructions are unclear. Because students come from varied technical backgrounds, designing learning activities that support beginners without limiting more advanced students remains a substantial pedagogical challenge [15]. Moreover, cybersecurity courses often lack practical learning infrastructures such as virtual labs, simulation platforms, and managed environments, further hindering students’ ability to integrate abstract concepts with procedural skills [5,41]. Under these conditions, the integration of supportive technologies that can scaffold both conceptual and technical learning becomes increasingly important.

3.4. LLMs in Education

Technology-mediated learning can effectively support student understanding when paired with sound instructional design [42]. Digital tools, including online learning platforms, automated feedback systems, and interactive resources, enable students to receive immediate feedback, revisit content at their own pace, and reflect on errors iteratively. These affordances promote deeper engagement and help students manage the complexity of technical domains [43].
Recent advances in Artificial Intelligence (AI), particularly LLMs, enhance this potential by providing contextualized explanations, clarifying terminology, summarizing documentation, and recommending procedural steps in response to student inquiries. The capacity of LLMs to generate support in natural language on demand is particularly pertinent in scenarios where fragmented or ambiguous instructions hinder progress, a common situation in cybersecurity [16,44]. By minimizing disruptions to the learning process and facilitating real-time problem-solving, LLM tools may reduce obstacles to practical engagement and enable students to concentrate on higher-order reasoning [26].
This integration aligns with broader educational strategies that consider technology as an integral component of a comprehensive learning ecosystem, rather than as an isolated solution. For instance, research has proposed integrated AI-centric pedagogical models that incorporate AI tools into educational processes to improve instruction, feedback, and evaluation, while maintaining human expertise and deliberate design at the core [16,45].
Outputs from LLMs can differ in thoroughness and correctness, especially in complex technical fields. Students might accept their suggestions without question, lacking proper academic review. Consequently, responsible use entails students verifying and assessing LLM suggestions, ensuring that these tools aid learning rather than substitute for expert judgment [46].

3.5. Integrating LLM into Pedagogical Practice

A growing body of literature highlights both the promise and the limitations of LLMs in education. Surveys show that LLMs can enhance content creation, personalized learning, and formative feedback. Yet these gains are not automatic: they depend on embedding LLMs within structured learning environments that foreground critical thinking, reflection, and metacognitive development [40,46].
Frameworks for responsible AI in education emphasize transparency, accountability, and alignment with learning objectives, ensuring that AI outputs are interpretable and pedagogically meaningful rather than just text generators [13]. For example, transparency frameworks in AI education propose practices that help students and professors understand where and how AI tools contribute to instruction, making the support process explicit rather than opaque [47].
In addition, integrated models of AI-supported learning encourage professors to design tasks that require students to engage with LLMs critically, fostering problem-solving strategies rather than rote acceptance of model outputs. These models align with evidence on effective technology use, which shows that students benefit most when tools support guided inquiry and deep engagement rather than act as shortcuts to answers [48].
Despite these benefits, professors must remain alert to practical limitations. LLMs may produce plausible but incorrect information or may reflect biases in their training data. In cybersecurity contexts, where precision, procedural correctness, and adherence to safe practices are essential, such inaccuracies can mislead students if not properly mediated [49].

3.6. Relevance to Cybersecurity Education

Integrating LLMs into cybersecurity education presents both promise and risk. On the positive side, LLMs can assist students in interpreting specialized terminology, exploring conceptual foundations, and troubleshooting procedural tasks [50]. By mediating access to explanations and examples, LLMs might reduce friction in learning and sustain engagement during extended problem-solving [37].
However, given the precision and complexity inherent in cybersecurity, LLM outputs must be carefully reviewed and validated within instructional contexts. Incorrect commands or misleading explanations, if internalized without verification, could foster misconceptions or unsafe practices. Therefore, pedagogical integration of LLMs must emphasize active evaluation, cross-checking, and reflective verification as part of learning tasks [49,51,52].
This dual potential suggests that LLMs ought not to substitute for foundational instruction or professor guidance [7,27]. Instead, when used thoughtfully within structured frameworks such as problem-based learning and active inquiry, LLMs can enhance students’ ability to engage with challenging content while developing autonomy, adaptability, and critical judgment [53].

3.7. Problem-Based Learning

PBL is widely recognized for fostering flexible knowledge, practical problem-solving skills, collaboration, intrinsic motivation, and lifelong learning competencies, which are particularly critical in STEM disciplines such as cybersecurity [29]. Rather than focusing on passive content transmission, PBL engages students in real problems that resemble professional practice, requiring them to analyze situations, generate hypotheses, and justify decisions under uncertainty.
A central determinant of PBL effectiveness lies in the quality of the problems posed. Well-designed problems activate prior knowledge, introduce cognitive conflict, stimulate discussion, and encourage deep processing of new information [54]. During pre-discussion phases, students articulate initial interpretations, confront alternative viewpoints, and critically examine underlying assumptions. These interactions create conditions for conceptual change and deeper understanding [54,55]. Subsequent reporting and reflection phases support knowledge integration, correction of misconceptions, and transfer of learning to new contexts.
Despite its widespread adoption, PBL does not follow a single standardized model. Existing implementations vary in focus, including knowledge construction, professional skill development, interdisciplinary reasoning, and critical competence formation [54,56]. This diversity highlights the need for intentional instructional design, particularly in defining problem structure, reasoning demands, and contextual relevance [57]. In cybersecurity education, where students must connect abstract concepts with concrete system behaviors, carefully designed PBL tasks are essential to bridge theory and practice.
Within this study, PBL provides the pedagogical foundation for examining how LLMs can support student engagement with complex cybersecurity problems. By situating LLM use within problem-solving activities rather than content delivery, the research emphasizes learning as an active, inquiry-driven process that mirrors real-world cybersecurity practice.

3.8. Computational Thinking

CT refers to a set of cognitive skills that enable individuals to analyze problems, design solutions, and reason systematically using principles derived from computer science, independent of specific programming languages [58,59]. As articulated by Wing and subsequent researchers [30,59,60], CT involves decomposing complex problems, abstracting essential information, recognizing patterns, applying logical reasoning, and iteratively evaluating solutions.
In cybersecurity, CT is particularly critical because security challenges often involve large-scale, ambiguous, and dynamically changing problem spaces. Tasks such as identifying attack vectors, interpreting logs, diagnosing misconfigurations, or designing defensive strategies require practitioners to engage in structured reasoning processes. Specifically, cybersecurity problem solving depends on the ability to [60,61]
  • Decompose complex systems into manageable components.
  • Recognize patterns in network traffic, system behavior, or vulnerabilities.
  • Abstract relevant signals from incomplete or noisy data.
  • Compare alternative solution strategies while assessing trade-offs and risks.
  • Iteratively test hypotheses during incident response or system debugging.
Cybersecurity practice, therefore, demands more than technical execution; it requires a disciplined mode of thinking that complements mathematical reasoning and engineering judgment. CT supports the modeling, analysis, and improvement of secure systems by enabling students to understand how systems behave, how threats propagate, and how defenses can be systematically evaluated and refined [62].
In the context of this research, CT also provides a critical lens for evaluating the role of LLMs in learning. While LLMs can generate procedural suggestions and explanations, students must apply CT skills to assess the validity, relevance, and safety of these outputs. Consequently, CT serves as a foundational competence, enabling students to critically engage with LLM-generated guidance, avoid uncritical reliance, and develop solutions that are both technically sound and conceptually robust.

4. Pedagogical Approaches

In cybersecurity courses, students are required to develop conceptual understanding and practical technical skills. Nonetheless, learning activities are often interrupted by three recurring issues: (1) dependence on English-language documentation, (2) fragmented or outdated learning resources, and (3) challenges in installing and configuring specialized cybersecurity tools. Such disruptions divert students’ efforts from reasoning and problem-solving to procedural retrieval, thereby increasing cognitive load and diminishing engagement with core learning objectives.
To mitigate these challenges, this study proposes a pedagogical framework that integrates LLMs within an active learning environment structured around PBL [56] and supported by CT [63]. Rather than viewing LLMs as general instructional aids, the framework conceptualizes them as contextual pedagogical interventions aimed at reducing learning interruptions, supporting reasoning during technical impasses, and maintaining inquiry during complex cybersecurity tasks [52,64].
The framework adheres to principles of pedagogical design that advocate for instructional strategies to be iteratively designed, enacted, observed, and refined in classroom settings. Figure 1 illustrates the interaction among learning theories, LLM support, and cybersecurity tasks.

4.1. Active Learning Foundation

Active learning is central to the framework for addressing the cognitive challenges inherent in cybersecurity practice [16,65]. Accordingly, the framework organizes learning around student activity rather than professorial exposition, distinguishing between individual and team-based engagement [66]. Both types of activity should be deliberately designed to address common challenges in cybersecurity education for Latin American students. Figure 2 shows the active learning foundation model used to sequence activities and support iterative reasoning.

4.1.1. Individual Activities

Individual activities should foster metacognitive awareness, analytical thinking, and key cybersecurity skills. Students should address poorly defined technical problems using incomplete, ambiguous, or inaccurate cybersecurity scenarios, such as vague error messages or contradictory documentation. Working alone, students interpret these problems without external help, mirroring real-world situations where analysts must diagnose issues independently before seeking assistance or consulting external sources. A central aspect of these activities is the development and continual refinement of prompts for interacting with LLMs. Designing prompts should be viewed as a cognitive process that requires students to clearly specify the context, technical constraints, and the LLM’s role. This approach promotes precision in technical communication and encourages reflection on how language, structure, and assumptions influence AI outputs. Students should also validate AI suggestions by cross-referencing system results, empirical tests, and official documentation, fostering critical evaluation skills and preventing uncritical acceptance of AI-generated content, an essential aspect of cybersecurity where inaccuracies can have serious consequences.
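To make the prompt-design process concrete, the following minimal Python sketch shows how a student might assemble a structured prompt that states context, constraints, and the LLM’s role explicitly. The field names and the example scenario are illustrative assumptions, not materials taken from the course.

# Illustrative sketch of a structured prompt builder; all field names
# and example values are hypothetical.
def build_prompt(role: str, context: str, constraints: list[str], question: str) -> str:
    """Combine the elements a well-specified technical prompt should state."""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Act as {role}.\n"
        f"Context: {context}\n"
        f"Constraints:\n{constraint_text}\n"
        f"Task: {question}"
    )

prompt = build_prompt(
    role="a cybersecurity instructor for undergraduate students",
    context="Kali Linux virtual machine, isolated lab network",
    constraints=["explain each command before giving it",
                 "assume no prior experience with the tool"],
    question="Why does my vulnerability scan return no results?",
)
print(prompt)

Making each element an explicit argument forces the writer to notice when context or constraints are missing, which is precisely the reflection this activity targets.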

4.1.2. Team Activities

At the team level, students should collaborate to install and configure cybersecurity tools in Linux-based environments. LLM outputs should be used as provisional guidance rather than authoritative instructions, and installation failures, missing dependencies, or incompatible commands should be intentionally preserved as shared learning moments. Teams should then engage in troubleshooting errors and unexpected outputs by jointly analyzing error messages, comparing LLM recommendations with observed system behavior, and testing alternative solutions. This collaborative diagnostic process should emphasize collective reasoning, the negotiation of interpretations, and peer-supported problem-solving, particularly in situations where online resources are outdated, incomplete, or contradictory.
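As a concrete illustration of treating LLM output as provisional guidance, the short Python sketch below runs a suggested installation command and preserves its exit status and error stream as a shared artifact for team diagnosis. The specific command is a placeholder assumption, not a prescribed course step.

import subprocess

# Run an LLM-suggested command and keep the evidence (exit code, stderr)
# for team discussion instead of assuming the suggestion worked.
suggested = ["sudo", "apt-get", "install", "-y", "zaproxy"]  # placeholder suggestion

result = subprocess.run(suggested, capture_output=True, text=True)
print("exit code:", result.returncode)
if result.returncode != 0:
    # Preserve the failure verbatim; the error text becomes the next
    # object of joint analysis and cross-checking.
    print("stderr:", result.stderr)

Keeping the raw error output, rather than paraphrasing it, lets teams compare the LLM’s claim with the system’s observed behavior line by line.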
By structuring learning around these complementary individual and team activities, the framework should foreground reasoning, validation, and collaborative inquiry as central mechanisms for learning with LLMs in cybersecurity contexts.

4.2. Core Pedagogical Structure

PBL is the core pedagogical structure of the proposed framework because it aligns closely with the practical nature of cybersecurity practice [67]. Cybersecurity work requires interpreting problems, generating hypotheses, and reasoning from evidence rather than merely reproducing procedures [38]. Moreover, PBL emphasizes that knowledge is developed through active engagement with problems situated in real contexts [68]. In this framework, cybersecurity problems, such as installing and configuring tools or clarifying technical concepts, should be deliberately presented without step-by-step guidance [69]. This design choice should support deeper cognitive processing by requiring students to identify what they need to know, select appropriate resources, and evaluate competing sources of information, including LLM-generated responses. From an instructional design perspective, PBL should also serve a critical methodological function by positioning the professor as a facilitator rather than a source of solutions [29,58]. PBL should ensure that observable evidence of learning emerges from students’ reasoning processes: how they formulate questions, interpret failures, revise strategies, and justify decisions [29,54,56]. This focus on process rather than on task completion should be essential for examining how LLMs are used in real-world problem-solving, directly supporting the study’s central research question.

4.2.1. Relevance of PBL

The use of PBL should directly address the study’s central research question: How effectively can LLMs provide personalized assistance when students configure and use cybersecurity tools and when they need clear explanations of specialized technical concepts? By embedding LLM use in real-world problem-solving situations, specifically during configuration failures and conceptual ambiguity, the framework should enable observation of how students actually engage with LLM outputs during learning, rather than relying on perceptions or self-reports. When confronted with breakdowns, students should be prompted to [70,71]:
  • consult LLMs for clarification or procedural guidance;
  • reformulate prompts when outputs are unclear or incorrect;
  • compare LLM responses with authoritative documentation; and
  • test alternative solutions empirically.
These interactions should produce observable evidence (prompts, commands, configurations, outputs, and reports) that form the empirical basis for the case study analysis presented in later sections.

4.2.2. Pillars of PBL Implementation

The framework operationalizes PBL through four interdependent pillars that should directly address the identified instructional challenges [15,55,56]:
Pillar 1: Real problems. Learning activities should begin with realistic cybersecurity scenarios that include incomplete documentation, conflicting outputs, and misconfigurations. These ill-structured problems should initiate inquiry and should require students to construct understanding actively.
Pillar 2: Student-directed learning. Students should be responsible for recognizing their learning needs, choosing resources, and determining how to move forward. This should reflect professional cybersecurity practice, where guidance is often limited or outdated.
Pillar 3: Collaborative knowledge building. Students should work in small teams to analyze problems, compare interpretations, and evaluate alternative strategies. Collaboration should reveal reasoning processes and should support shared validation of LLM-generated suggestions.
Pillar 4: Professor facilitation. The professor should guide inquiry by encouraging reflection, questioning assumptions, and leading critical evaluation of information sources, including LLM outputs. This role should ensure that exploration remains conceptually grounded.

4.3. Learning Activities Design

Each learning activity should be designed in accordance with established PBL guidelines to ensure methodological consistency. Tasks should include [57,72]:
  • explicit learning objectives aligned with course competencies;
  • real, technically rich scenarios;
  • problem scopes that are challenging but achievable;
  • structured collaboration requirements;
  • defined timelines and milestones; and
  • opportunities for guided reflection and feedback.
Across activities, students should be expected to plan technical responsibilities, develop troubleshooting strategies, analyze results using cybersecurity terminology, and document their processes. LLMs should support these activities by clarifying concepts, suggesting possible error causes, and illustrating procedures. However, students should be explicitly required to validate LLM outputs through authoritative documentation and empirical testing.

4.4. Computational Thinking as a Supporting Cognitive Framework

CT functions as a complementary cognitive base that should structure how students engage with complex cybersecurity problems within the proposed framework [68]. Cybersecurity tasks, such as configuring tools, interpreting system outputs, and diagnosing failures, require systematic reasoning processes that align closely with CT practices, including pattern recognition, abstraction, and iterative testing [73].
Within the framework, students should be encouraged to engage in cybersecurity activities that professionals routinely perform to acquire knowledge through experience. These CT practices should provide a coherent reasoning structure to support navigation of ill-defined, technically unstable problem spaces.
LLMs should be integrated as cognitive supports within this CT-oriented process, helping students articulate reasoning steps, explore alternative explanations, and generate candidate troubleshooting strategies. Importantly, LLM outputs should be treated as provisional inputs rather than final solutions. Students should remain responsible for evaluating suggestions, selecting appropriate actions, and validating outcomes through empirical testing and authoritative documentation.
By combining CT with LLMs, the framework should reinforce analytical reasoning and problem-solving skills, ensuring that LLM use amplifies rather than substitutes core professional competencies.

4.5. Role of LLMs Within the Framework

Within the proposed pedagogical framework, LLMs should function as supportive and mediating components that assist students during complex cybersecurity activities, rather than as authoritative sources of knowledge or decision-making [4,6,12,29]. Their integration is intentionally bounded to ensure that student reasoning, verification, and judgment remain central to the learning process. Specifically, LLMs should support students by:
  • clarifying technical and conceptual explanations when documentation is fragmented, inaccessible, or linguistically challenging;
  • suggesting plausible causes of errors or system misconfigurations to prompt further investigation; and
  • offering provisional procedural guidance that must be empirically tested and validated.
This positioning is essential to the framework’s coherence. The pedagogical focus is not on whether LLMs improve instruction in isolation, but on how their use, when constrained by validation and reflection, can support student engagement in problem-solving processes. In this sense, LLMs operate as cognitive supports embedded within PBL and CT, enabling inquiry while preserving student agency, epistemic vigilance, and responsibility for final decisions.

4.6. Learning Progression Model

The learning progression model structures student activity as an iterative cycle centered on real cybersecurity problems, rather than following a linear instruction model. Students move through interconnected phases that reflect real cybersecurity practice and support sustained inquiry. The learning progression model consists of the following steps. Figure 3 shows the cycle.
Problem Encounter. Students are presented with an ill-defined cybersecurity task (e.g., failed tool installation, incomplete documentation, ambiguous scan results). Problems intentionally include language barriers, fragmented resources, or configuration issues.
Initial Interpretation. Students analyze the problem by decomposing system components, identifying constraints, and isolating unknowns. This phase activates computational thinking through abstraction and problem framing.
Prompt Formulation. Students formulate prompts to request explanations, procedural guidance, or troubleshooting hypotheses from the LLM. Prompts are refined iteratively as students recognize gaps, ambiguities, or inaccuracies in responses.
Action and Experimentation. Students apply suggested commands, configurations, or procedures in a Linux environment. This phase includes tool installation, configuration attempts, and execution against predefined targets.
Observation and Error Detection. Students examine system outputs, logs, and error messages. Unexpected behaviors, failures, or contradictions between LLM guidance and system behavior are explicitly documented.
Validation and Cross-Checking. LLM-generated suggestions are validated against empirical results and authoritative sources (official documentation, manuals, trusted repositories). Students assess accuracy, completeness, and contextual suitability.
Reflection and Strategy Refinement. Students reflect on outcomes, revise prompts, adjust configurations, or reformulate hypotheses. Reflection focuses on both cybersecurity reasoning and the effective use of AI.
Iteration or Resolution. The cycle repeats until a functional solution is achieved or the problem space is sufficiently understood. The emphasis of learning remains on the quality of reasoning rather than on task completion alone.
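Read as pseudocode, the cycle can be summarized in the runnable Python skeleton below. The stub functions are illustrative stand-ins for the student actions described above, not an implementation used in the course.

# Runnable skeleton of the learning progression cycle; all stubs are
# hypothetical stand-ins for student activity.
def ask_llm(prompt):             # stub: would query the LLM
    return f"provisional guidance for: {prompt}"

def experiment(guidance):        # stub: would act in the lab VM
    return {"success": False, "log": "E: unable to locate package"}

def validate(guidance, log):     # stub: would cross-check docs and tests
    return "guidance incomplete; missing dependency step"

def progression_cycle(problem, max_iterations=3):
    prompt = f"Explain and help solve: {problem}"   # initial interpretation
    for _ in range(max_iterations):
        guidance = ask_llm(prompt)                  # prompt formulation
        outcome = experiment(guidance)              # action and experimentation
        if outcome["success"]:                      # resolution
            return guidance
        finding = validate(guidance, outcome["log"])            # validation
        prompt += f"\nObserved: {outcome['log']}\nIssue: {finding}"  # reflection
    return "problem space documented for further inquiry"

print(progression_cycle("failed installation of a vulnerability scanner"))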

5. Case Study

This section presents a classroom-based case study examining how the proposed pedagogical framework can function in an undergraduate cybersecurity course. The purpose of the case study is to answer the main research question: How effectively can an LLM provide contextualized assistance when students configure and use cybersecurity tools and when they require clarification of specialized technical concepts?
This question is operationalized through real learning activities in which students encounter recurring challenges in cybersecurity education, such as fragmented and English-language documentation, ambiguous installation procedures, command-line errors, and conceptual misunderstandings during tool configuration. By examining how LLM support mediates student progress at these points of difficulty, the case study aims to provide insight into the pedagogical value and limitations of integrating LLMs into cybersecurity courses.
Methodologically, the case study adopts a practice-based pedagogical research approach, drawing on traditions of design-based research and reflective teaching practice. Instructional strategies were iteratively designed, enacted, observed, and refined across multiple course offerings. Throughout the study, the LLM was deliberately positioned as a supportive cognitive assistant to assist with interpretation, explanation, and troubleshooting, rather than as a source of authoritative answers. This positioning aligns with the proposed pedagogical framework, which emphasizes active student engagement, verification against authoritative sources, and sustained hands-on learning.

5.1. Course Context

The case study was conducted within an undergraduate cybersecurity course offered over three consecutive semesters (the second semester of 2023 and both semesters of 2024) at Universidad Católica del Norte. The course was an elective within the Computing and Informatics Civil Engineering (ICCI) and Computing and Informatics Execution Engineering (IenCI) programs.
Each semester spanned 14 weeks and included three hours of lectures and 1.5 h of workshop-based practical activities per week. The instructional design followed the pedagogical framework described in Section 4, enabling systematic integration of LLM support across individual and collaborative learning activities.
Across the three iterations, course materials, activity sequencing, and LLM usage guidelines were progressively refined in accordance with principles of pedagogical design research. This multi-semester implementation allowed the professor to examine whether observed patterns of LLM-supported learning, such as improved navigation of technical documentation or iterative troubleshooting behaviors, recurred across cohorts, thereby strengthening the analytical observations.

5.2. Research Design

The study employed a practice-based pedagogical analysis [15], focusing on instructional design and classroom implementation rather than on measuring individual student outcomes. The primary unit of analysis was the design and implementation of the learning activity, not the student as a human subject.
Data sources consist exclusively of observable and non-personal instructional evidence, collected consistently across all three semesters, including
  • Task descriptions and assignment specifications;
  • Prompt structures used for LLM interactions;
  • Technical outputs produced during activities (e.g., installation logs, scripts, tool execution results);
  • Structured professor reflective notes documented after each activity.
No personal, behavioral, demographic, or performance-grade data were collected, stored, or analyzed. The study did not involve surveys, interviews, learning analytics dashboards, or controlled comparisons between experimental conditions. Student interactions with the LLM were examined only at the aggregate and task-design levels (e.g., common prompt patterns, recurring technical issues, types of LLM inaccuracies), without linking observations to individual students.
All activities were conducted within predefined simulated environments (e.g., virtual machines and test targets), ensuring that no personal data, institutional credentials, or real-world systems were exposed during LLM use. Consequently, the study remains within the scope of instructional design research and reflective teaching practice, aligning with ethical guidelines for classroom-based educational research that does not involve human-subject data collection.

5.3. Proposed Pedagogical Framework

At the beginning of each semester, students received explicit guidance regarding the intended pedagogical role of the LLM. Consistent with the proposed framework, the LLM was framed as a support mechanism for
  • Clarifying cybersecurity concepts and specialized terminology;
  • Interpreting English-language technical documentation;
  • Suggesting procedural steps during tool installation and configuration;
  • Assisting with troubleshooting when errors or unexpected outputs occurred.
Students were explicitly instructed that LLM-generated outputs required verification through authoritative documentation, empirical testing, and tool execution. The professor’s reflective observations focused on whether LLM support enabled students to move productively through technical impasses, encouraged iterative problem-solving, and supported deeper reasoning rather than passive acceptance of generated instructions.
This framing aligns with the framework’s emphasis on progressing from surface engagement toward more constructive, interactive learning behaviors, while maintaining human oversight and disciplinary rigor.

5.4. Design of Learning Activities

The course included four structured learning activities designed to generate evidence relevant to the research question. These activities combined conceptual inquiry with hands-on technical work and were aligned with different cognitive processes and instructional methods, as summarized in Table 1.

5.4.1. Individual-Based Activities

Individual activities focused on strengthening foundational cybersecurity knowledge, including social engineering, malware versus ransomware, password management, firewall functionality, and risks associated with public Wi-Fi use.
Students used the LLM to generate, refine, and compare definitions, explanations, and examples. A central instructional focus was iterative prompt refinement, allowing the professor to observe how variations in context, specificity, and terminology influenced the accuracy and usefulness of LLM responses. These activities provided evidence of how LLMs support conceptual clarification while also revealing recurring limitations, such as oversimplification or ambiguous explanations.

5.4.2. Team-Based Activities

Team-based activities followed a PBL approach and were conducted in small groups. Students were tasked with installing, configuring, and testing widely used cybersecurity tools on Kali Linux or other Linux distributions (e.g., Debian, Fedora), including tools such as SQLMAP, SQLNINJA, VEGA, w3af, and OWASP ZAP.
  • ZAPROXY (OWASP ZAP): A web application vulnerability scanner used to identify security weaknesses such as cross-site scripting and injection flaws;
  • VEGA: An open-source web security scanner for detecting vulnerabilities in web applications;
  • SQLNINJA: A tool designed to exploit SQL injection vulnerabilities in Microsoft SQL Server environments;
  • w3af: A web application attack and audit framework that supports vulnerability discovery and exploitation;
  • SQLMAP: An automated tool for detecting and exploiting SQL injection vulnerabilities and taking control of database servers.
These tools were selected because they reflect real cybersecurity practice, require precise command-line interaction, and depend heavily on English-language documentation. Students applied the tools to predefined simulated targets, enabling realistic experimentation without risk to real systems.
During these activities, the LLM supported students by suggesting installation commands, explaining error messages, and proposing troubleshooting strategies. However, LLM-generated instructions were frequently incomplete, outdated, or incompatible with specific system configurations. These breakdowns required students to diagnose errors, cross-check documentation, and iteratively adapt commands, thereby providing rich pedagogical moments to examine the affordances and limitations of LLMs as personalized technical assistants.
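One lightweight verification habit consistent with these observations is to check whether an LLM-suggested package actually exists in the distribution’s repositories before attempting installation. The Python sketch below assumes a Debian-based system such as Kali and uses only standard apt tooling; the package names are examples of the kind an LLM might suggest.

import subprocess

def package_available(name: str) -> bool:
    """Return True if apt knows the package on this system."""
    # `apt-cache show` exits non-zero when the package is absent from the
    # index, which catches suggestions naming obsolete or renamed packages.
    result = subprocess.run(["apt-cache", "show", name],
                            capture_output=True, text=True)
    return result.returncode == 0

for suggested in ["zaproxy", "sqlmap", "sqlninja"]:  # names from an LLM answer
    status = "found" if package_available(suggested) else "NOT in repositories"
    print(f"{suggested}: {status}")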

5.5. Analytical Procedure and Bias Mitigation

This study follows a qualitative, practice-based pedagogical design grounded in design-based and action-research traditions. Its objective is not to measure learning gains statistically, but to analyze how LLMs function as pedagogical support tools within real-world cybersecurity learning contexts.
Observations were recorded using a shared analytical protocol applied consistently across semesters. Evidence was examined using descriptive coding focused on
  • Prompt quality and formulation issues;
  • LLM response accuracy and completeness;
  • Procedural adequacy of technical guidance;
  • Student response strategies to LLM limitations.
To reduce subjectivity, analytical claims were derived exclusively from observable evidence and documented task outcomes, rather than from inferred student intentions, perceptions, or affective states. Interpretations are therefore limited to what was directly evidenced through task execution and LLM interaction traces.
To further strengthen transparency, Table A1 provides an explicit mapping between the research question, learning activities, data sources, and resulting findings, demonstrating how conclusions are grounded in instructional design choices rather than anecdotal observation.

6. Findings

This section summarizes the main findings from implementing four LLM-supported learning activities across three consecutive semesters. Rather than sharing isolated classroom stories, the analysis highlights recurring teaching patterns observed in how LLMs facilitated students’ engagement with cybersecurity concepts and technical problem-solving.
The findings are organized by activity type to reflect how LLM support functioned across conceptual understanding, tool configuration, and procedural automation, which together operationalize the central research question.

6.1. Activity 1: Conceptual Understanding of Cybersecurity Threats

This individual activity aimed to strengthen students’ understanding of core cybersecurity concepts (e.g., social engineering, malware vs. ransomware, firewall functionality, password management) while promoting active engagement with LLMs as cognitive support tools rather than as answer generators.
Across semesters, students rarely obtained satisfactory explanations on the first attempt. Initial prompts frequently lacked sufficient context, role definition, or technical precision, resulting in
  • Overly general explanations;
  • Misaligned responses (e.g., quizzes instead of explanations);
  • Occasional conceptual inaccuracies (e.g., conflating denial-of-service attacks with social engineering).
Prompt refinement emerged as a systematic and necessary process, with most students engaging in at least one revision cycle. Improvements in prompt clarity, such as specifying audience level, defining scope, or explicitly requesting examples, consistently led to clearer and more accurate explanations.
LLMs proved effective in providing personalized conceptual assistance when students actively iterated on prompts. The personalization did not stem from adaptive algorithms or student profiling, but from students’ ability to shape the interaction to their immediate knowledge gaps. This aligns with the framework’s emphasis on computational thinking practices, particularly problem decomposition and iterative refinement.
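The refinement pattern can be illustrated with a before/after pair; both prompts are invented examples of the kind of revision observed, not verbatim student prompts.

# Invented example of one prompt-refinement cycle from Activity 1.
initial_prompt = "What is social engineering?"

# The refined version adds audience level, scope, and an explicit request
# for examples: the three adjustments that most improved responses.
refined_prompt = (
    "Act as a cybersecurity instructor. Explain social engineering to a "
    "second-year undergraduate with no security background. Focus on attack "
    "techniques rather than defenses, and include two concrete examples, "
    "such as a phishing email and a pretexting phone call."
)

for label, text in [("initial", initial_prompt), ("refined", refined_prompt)]:
    print(f"{label}: {text}")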

6.2. Activity 2: Tool Installation and Configuration in PBL Contexts

This team-based activity focused on installing, configuring, and testing real-world vulnerability analysis tools in Linux environments. The activity was designed as a PBL task requiring students to diagnose system issues, interpret technical documentation, and collaboratively resolve configuration problems.
Across semesters, the complete installation of all assigned tools was uncommon. Most teams successfully configured one or two tools before encountering technical barriers. LLM-generated guidance frequently
  • Omitted prerequisite steps;
  • Proposed commands incompatible with the specific Linux distribution;
  • Failed to account for missing dependencies.
These limitations consistently triggered productive breakdowns, requiring students to cross-check official documentation, consult external resources, and test alternative solutions.
LLMs were effective as procedural scaffolds, particularly in
  • Explaining error messages;
  • Suggesting diagnostic strategies;
  • Outlining general installation workflows.
However, LLMs were not reliable as autonomous problem solvers. Their pedagogical value emerged when students treated them as interactive assistants within a broader problem-solving ecosystem. This reinforces the framework’s positioning of LLMs as tools to support active inquiry and CT-based troubleshooting, rather than as replacements for domain knowledge or hands-on experimentation.
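The following minimal Python sketch illustrates the verification habit this activity cultivated: checking the actual system state before executing an LLM-suggested command. The Debian/Ubuntu check and the nmap package are assumptions chosen for illustration, not the tools assigned in the course.

    # Sketch of a verification-first workflow for LLM-suggested installs.
    # Assumes a Linux host with /etc/os-release; the apt-get command and
    # the nmap package are illustrative choices, not course-assigned tools.
    import shutil
    import subprocess
    from pathlib import Path

    def detect_distro() -> str:
        # Read /etc/os-release rather than trusting the distribution an
        # LLM assumed when generating the command.
        for line in Path("/etc/os-release").read_text().splitlines():
            if line.startswith("ID="):
                return line.split("=", 1)[1].strip('"')
        return "unknown"

    suggested_cmd = ["sudo", "apt-get", "install", "-y", "nmap"]

    distro = detect_distro()
    if distro not in {"debian", "ubuntu"}:
        # apt-get commands are incompatible with, e.g., Fedora or Arch:
        # the cross-check students learned to perform before executing.
        print(f"Detected '{distro}': consult official documentation first.")
    elif shutil.which("apt-get") is None:
        print("apt-get not found; the suggestion omits a prerequisite.")
    else:
        subprocess.run(suggested_cmd, check=True)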

6.3. Activity 3: Automation and Procedural Reasoning

This individual activity required students to automate software installation processes using LLM-generated instructions, emphasizing procedural abstraction and precision, key components of computational thinking. Three recurring linguistic challenges were identified:
  • Vocabulary ambiguity, leading to incomplete or misleading instructions.
  • Grammatical and syntactic imprecision, reducing command clarity.
  • Semantic inconsistency, producing irrelevant or logically flawed steps.
These challenges appeared consistently across cohorts, indicating that difficulties in technical communication were systemic rather than incidental.
The activity demonstrated that LLMs can support personalized procedural guidance only when students exercise precise control over language. The necessity to debug prompts mirrored the logic of debugging code, reinforcing CT practices such as abstraction, precision, and iterative testing. LLM effectiveness, therefore, depended less on model capability and more on students’ ability to communicate technical intent accurately.
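As a concrete illustration of this prompt-debugging parallel, the sketch below executes an LLM-generated procedure step by step and treats any failing step like a failing test, feeding the error back into prompt revision. The listed commands are illustrative placeholders, not the procedures students automated.

    # Sketch of step-by-step execution of an LLM-generated procedure,
    # where a failing step triggers prompt revision rather than blind
    # retry. The commands are illustrative placeholders.
    import subprocess

    llm_generated_steps = [
        ["sudo", "apt-get", "update"],
        ["sudo", "apt-get", "install", "-y", "wireshark"],
        ["wireshark", "--version"],  # Post-condition: did the install work?
    ]

    def run_procedure(steps):
        for i, step in enumerate(steps):
            try:
                result = subprocess.run(step, capture_output=True, text=True)
            except FileNotFoundError:
                print(f"Step {i} failed: command not found: {step[0]}")
                return False
            if result.returncode != 0:
                # Inspect stderr, then revise the prompt: was a dependency
                # missing, or was the command written for another distro?
                print(f"Step {i} failed: {' '.join(step)}")
                print(result.stderr.strip())
                return False
        return True

    if run_procedure(llm_generated_steps):
        print("Procedure completed; outputs still require manual checks.")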

6.4. Activity 4: Definition and Clarification of Specialized Concepts

This activity focused on the rapid clarification of specialized cybersecurity terminology, supporting students’ ability to engage meaningfully with technical documentation and lectures. LLM-generated definitions varied substantially in clarity and usefulness. Definitions were consistently more usable, and reduced reliance on external resources, when they included
  • Contextual explanations;
  • Concrete examples;
  • Explicit expansion of acronyms.
In contrast, definitions lacking contextual framing often introduced ambiguity or fostered only superficial understanding.
LLMs were highly effective for on-demand conceptual clarification, particularly when prompts explicitly requested context and application. This allowed students to quickly resolve conceptual bottlenecks and reallocate cognitive effort toward higher-level reasoning and problem-solving, aligning with the framework’s active learning objectives.
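Because the useful definitions shared a recognizable structure, that structure can be folded into a reusable request template, as the minimal Python sketch below shows; the template wording and example terms are illustrative assumptions, not materials from the course.

    # Sketch of a reusable definition-request template capturing the
    # features that made LLM definitions usable: context, a concrete
    # example, and acronym expansion. Wording and terms are illustrative.
    DEFINITION_TEMPLATE = (
        "Define the cybersecurity term '{term}' for an undergraduate. "
        "Expand any acronyms, explain the context in which the term is "
        "used, and give one concrete example from practice. "
        "Keep the answer under 120 words."
    )

    def build_definition_prompt(term: str) -> str:
        # Fill the template with the term a student needs clarified.
        return DEFINITION_TEMPLATE.format(term=term)

    for term in ["IDS", "lateral movement", "CVE"]:
        print(build_definition_prompt(term))
        print("-" * 40)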

6.5. Cross-Activity Synthesis: Addressing the Research Question

LLMs can effectively provide personalized assistance in cybersecurity education when embedded within an active learning and PBL-oriented framework, provided that students actively engage in prompt refinement, verification, and iterative problem-solving. In this setting, personalization took the form of
  • Iterative dialog instead of static responses;
  • Alignment with students’ immediate task context;
  • Active verification against authoritative sources.
Concurrently, recurring limitations, including procedural incompleteness, contextual mismatches, and occasional inaccuracies, highlight the importance of human oversight, domain expertise, and critical evaluation. These limitations often reinforce core pedagogical goals by promoting deeper engagement with documentation, collaborative troubleshooting, and computational thinking practices. Consequently, the efficacy of LLMs in this study lies not in delivering correct answers but in cultivating sustained, active, and reflective participation in complex cybersecurity tasks.

7. Discussion

This study investigated the efficacy of LLMs in delivering personalized assistance to students during the configuration and use of cybersecurity tools, and in providing clear explanations of specialized technical concepts, within an active, problem-based cybersecurity course.
The findings from the multi-semester case study indicate that LLMs can serve as contextual pedagogical aids, effectively supporting students in the processes of conceptual clarification and technical problem-solving, contingent on their integration within a meticulously crafted pedagogical framework. Instead of serving as definitive sources of knowledge, LLMs are most effective when regarded as cognitive collaborators that facilitate reasoning, aid in interpreting technical documentation, and encourage iterative refinement of student understanding.
Notably, the effectiveness was not solely attributable to the accuracy or completeness of LLM outputs, but also contingent upon how these outputs were pedagogically mediated through task design, instructor guidance, and verification practices. This underscores that the success of LLMs in education is intrinsically linked to instructional design and professor facilitation, rather than to technological characteristics alone.

7.1. LLMs as Support for Active Learning

Throughout all four educational activities, LLMs facilitated active engagement with cybersecurity material, particularly when students encountered ambiguity, unfamiliar terminology, or procedural disruptions. In conceptual exercises (Activities 1 and 4), LLMs provided swift access to explanations, thereby reducing time spent on unproductive searching and allowing students to devote more attention to comprehension and critical analysis. When prompts were appropriately contextualized, the explanations generated by LLMs consistently enhanced conceptual understanding, especially concerning abstract or English-language technical concepts.
In technical PBL activities (Activities 2 and 3), LLMs supported students during impasses, such as dependency errors, incompatible commands, or unclear documentation, by offering hypotheses, procedural suggestions, and interpretive guidance. However, these activities also revealed clear limits to LLM effectiveness. Incomplete or incompatible instructions occurred repeatedly, prompting students to cross-check official documentation, test alternatives, and reason through system-level constraints.
Rather than undermining learning, these limitations appeared to activate critical thinking processes central to cybersecurity practice. Students were required to evaluate the reliability of AI-generated guidance, identify errors, and reconcile conflicting sources, skills that align closely with professional cybersecurity competencies. In this sense, LLM inaccuracies became pedagogically productive when framed within a verification-oriented learning culture.

7.2. Relevance of Prompt Design

One of the most consistent patterns across semesters was the central role of prompt quality in determining the usefulness of LLM assistance. Activities 1 and 3, in particular, demonstrated that vague, ambiguous, or poorly structured prompts frequently produced misaligned or misleading outputs. In contrast, context-rich, role-defined prompts resulted in more actionable and accurate guidance.
This finding reframes prompt formulation not merely as a technical interaction skill, but as a discipline-specific literacy closely tied to cybersecurity thinking. Writing effective prompts required students to articulate goals precisely, use appropriate technical vocabulary, and anticipate system constraints, abilities that mirror the precision required in command-line operations, configuration files, and security policies.
From a pedagogical perspective, this positions prompt design as an explicit learning objective rather than a peripheral skill. The iterative refinement observed across cohorts suggests that sustained engagement with LLMs can strengthen students’ metacognitive awareness of how language, precision, and context shape technical outcomes.

7.3. Repositioning the Professor as Learning Orchestrator

The case study also highlights an essential shift in the professor’s positionality within AI-augmented cybersecurity education. Instead of serving as the primary source of procedural knowledge, the instructor functioned as a learning orchestrator, designing problem spaces, setting epistemic boundaries, and guiding students in evaluating AI-generated outputs.
Instructor reflections suggest that pedagogical effectiveness is primarily influenced by the careful curation of learning conditions rather than merely controlling the flow of information. This entails defining appropriate roles for LLMs, implementing verification standards, and encouraging reflection when AI guidance proves insufficient. Such insights are consistent with emerging academic research, which posits that AI integration fundamentally reshapes professor–student relationships and redistributes epistemic authority, while preserving the essential role of human judgment.
Most notably, reflective teaching practice provided a stabilizing influence during this transition. The instructor’s iterative process of refining activities over successive semesters, including modifications to prompt guidance, stronger scaffolding of verification practices, and clarifications of expectations, aligns with findings from reflective pedagogy research that underscore the importance of self-awareness, bias recognition, and adaptive instructional design. In this context, LLMs did not supplant pedagogical expertise but rather heightened its importance.

7.4. Personalized Assistance

Although the study deliberately refrained from collecting behavioral, demographic, or performance data, LLMs nonetheless provided functional personalization. Their support was tailored to the task context and the student’s intent, responding flexibly to students’ prompts and the technical challenges they faced.
This indicates that personalization within AI-supported cybersecurity education does not inherently demand intrusive data collection or adaptive profiling. Instead, personalization can be achieved through student-initiated interaction, in which students express their needs in real time and receive guidance tailored to the context. This finding is particularly relevant in ethically constrained educational settings, demonstrating that substantial AI support can align with strict data-minimization principles.

7.5. Contribution to the Literature on AI in Education

This research advances the emerging body of literature on AI in educational contexts in several ways. Firstly, it addresses a well-recognized gap between pedagogical theory and practical application in the classroom by presenting a concrete, multi-semester integration of LLMs within an active PBL cybersecurity course. Unlike speculative or tool-centric studies, the results are anchored in consistent instructional practice.
Secondly, it enhances understanding of human–AI collaboration by conceptualizing LLMs as cognitive collaborators integrated into sociotechnical learning frameworks, rather than merely as tutors or answer-producers. Learning outcomes emerged through reciprocal interactions among students, AI systems, and instructor-designed tasks, reflecting an emerging model of AI-supported learning.
Thirdly, the study underscores cybersecurity education as a highly promising domain for AI-supported pedagogy, given its reliance on problem-solving, ambiguity, validation, and adversarial reasoning. The identified patterns suggest that the limitations of LLMs, often regarded as risks, can be pedagogically leveraged when aligned with disciplinary epistemologies.

7.6. Implications for Adopting LLMs in Cybersecurity Courses

Taken together, the findings suggest that effective LLM integration in cybersecurity education depends on three interdependent conditions:
  • Pedagogical framing that emphasizes active learning, PBL, and critical thinking.
  • Professor orchestration grounded in reflective practice rather than automation.
  • Explicit epistemic norms requiring verification, iteration, and accountability.
When these conditions are satisfied, LLMs can substantially facilitate both conceptual comprehension and technical problem-solving, not by simplifying cybersecurity education, but by rendering its intrinsic complexity more approachable and pedagogically effective.

8. Conclusions

The rapid emergence of LLMs in educational contexts presents both promise and complexity, particularly in practice-oriented disciplines such as cybersecurity. This study contributes to this evolving landscape by proposing a pedagogical framework grounded in active learning, PBL, and CT, in which LLMs are intentionally positioned as supportive cognitive tools within a human-centered learning ecology.
The central contribution of the framework lies in its principled alignment of AI use with pedagogical intent. LLMs are not conceptualized as replacements for instruction or expertise, but as mediating tools that support exploration, reasoning, and iterative problem-solving. By embedding LLMs within carefully designed activities that require verification, refinement, and reflection, the framework ensures that core cybersecurity competencies, such as troubleshooting, abstraction, precision, and critical evaluation, remain firmly anchored in student agency. Across the multi-semester case study, learning gains emerged not from the correctness of AI-generated outputs alone, but from students’ sustained engagement with ambiguity, error, and iterative inquiry.
The professor’s role shifts to that of an orchestrator within this framework. Pedagogical success is contingent upon the careful design and management of learning environments, which encompasses establishing the epistemic function of LLMs, supporting verification techniques, and assisting students in interpreting AI-supported interactions. Reflective teaching practices are essential for navigating this transition effectively, enabling continuous improvement of pedagogical activities and reinforcing the instructor’s role in fostering judgment, ethics, and disciplinary standards. Rather than diminishing pedagogical expertise, the integration of AI underscores the importance of instructional design, ongoing professional reflection, and contextual understanding.
The findings demonstrate that effective personalization can be achieved without invasive data collection or profiling. Instead, personalization is cultivated through interactive dialog, shaped by students’ prompts, objectives, and current task contexts. This approach upholds ethical standards in higher education while providing adaptable and responsive support. Importantly, the limitations of LLMs, such as procedural gaps, mismatched contexts, and occasional errors, are regarded not as pedagogical shortcomings but as valuable opportunities that promote critical thinking and mirror real-world cybersecurity practice.
This study emphasizes that the successful integration of LLMs in cybersecurity education depends more on pedagogical coherence than on technological complexity. When combined with active learning, PBL, and CT, and guided by reflective instructor orchestration, LLMs can enhance both conceptual understanding and technical problem-solving without undermining human judgment or disciplinary standards. Although further empirical validation across various contexts and institutions is necessary, this framework offers a robust and adaptable foundation for the responsible application of LLMs, one that preserves human agency, fosters critical thinking, and acknowledges the intricacies of cybersecurity practice and AI-supported learning environments.

Limitations and Future Directions

Several limitations constrain the interpretation of these findings. This study is qualitative and practice-based, with no controlled comparisons or direct measurement of learning gains. Claims are therefore analytical rather than statistical. Additionally, the findings are situated within a single institutional and cultural context in Latin American higher education, which may limit transferability.
Future research should extend this work using mixed-methods designs, incorporating learning outcome measures, misconception analysis, and longitudinal tracking of skill development. Comparative studies examining courses with and without structured LLM integration would help isolate pedagogical effects. Further investigation into student perceptions, ethical reasoning, and long-term impacts on professional practice would also strengthen the evidence base.

Author Contributions

Conceptualization, R.M.-P., L.J.M., R.O. and V.G.F.; methodology, R.M.-P., V.G.F., V.F., A.G.-P. and H.T.-C.; investigation, V.G.F., A.O.-B., R.O., J.C.R.P. and R.A.F.; resources, A.G.-P., R.M.-P., L.J.M. and V.G.F.; writing, original draft preparation, R.M.-P., A.G.-P., A.O.-B., R.O., J.C.R.P. and R.A.F.; writing, review and editing, V.G.F., V.F., H.T.-C. and L.J.M.; project administration, R.M.-P., V.G.F. and L.J.M.; funding acquisition, R.M.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the UNIVERSIDAD CATOLICA DEL NORTE through the Concurso Fondo de Desarrollo de Proyectos Docentes de Pregrado (FDPD 2023), under grant number DGPRE.N°176/2023.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it examined instructional design and its implementation within regular course activities. No identifiable student data, grades, or evaluative feedback were collected or analyzed, and all activities were part of standard teaching practice.

Informed Consent Statement

Not applicable.

Data Availability Statement

Course materials will be available upon request from the authors.

Acknowledgments

During the preparation of this study, the authors used Grammarly Pro (v1.2.189.1739) for the purpose of improving the quality of writing and ChatGPT 4.1 for the purpose of improving the design of figures.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Reflection of Key Findings.
Research Question Dimension | Learning Activity | Activity Type | Data Sources | Key Findings
How do LLMs support conceptual understanding of cybersecurity concepts? | Activity 1: Conceptual exploration of cybersecurity threats and protection strategies using LLMs | Individual | Student prompts and revisions, LLM responses, instructor observations | LLMs supported conceptual clarification when students actively refined prompts. Iterative prompt improvement strengthened computational thinking practices such as problem decomposition and precision. Conceptual understanding improved when LLMs were used as cognitive aids rather than answer providers.
How do LLMs mediate problem-solving in real cybersecurity tasks? | Activity 2: Installation and configuration of vulnerability analysis tools in Linux environments | Team-based (PBL) | Team reports, configuration logs, troubleshooting notes, instructor field notes | LLMs functioned as procedural scaffolds but were unreliable as autonomous solvers. Incomplete or incompatible guidance triggered verification behaviors, documentation consultation, and collaborative troubleshooting, reinforcing core PBL and cybersecurity problem-solving practices.
How do LLMs influence procedural reasoning and automation skills? | Activity 3: Automation of software installation processes using LLM-generated instructions | Individual | Student-generated procedures, prompt iterations, observed errors and corrections | Effective use of LLMs depended on linguistic precision and procedural abstraction. Prompt debugging mirrored code debugging, reinforcing computational thinking skills such as abstraction, iteration, and accuracy in technical communication.
How do LLMs assist with specialized terminology and technical literacy? | Activity 4: Definition and clarification of specialized cybersecurity concepts | Individual | Student queries, LLM definitions, classroom interactions | LLMs were effective for rapid conceptual clarification when prompts requested context and application. This reduced cognitive bottlenecks and allowed students to redirect effort toward higher-order reasoning, supporting active learning goals.
Under what conditions do LLMs provide personalized assistance in cybersecurity education? | Cross-activity synthesis | Individual and Team-based | Cross-semester observations, instructor reflections, comparative activity outcomes | Personalization emerged through student-driven interaction, prompt refinement, and contextual verification rather than adaptive profiling. LLM effectiveness was contingent on pedagogical framing, instructor orchestration, and explicit epistemic norms emphasizing verification and reflection.

Figure 1. Conceptual interaction of the pedagogical framework.
Figure 2. Active-learning foundation model.
Figure 3. Learning progression model.
Table 1. Type of learning activities.
ID | Activity Description | Type | Cognitive Focus | Methodology | Intended Benefits
A1 | Conceptual exploration of cybersecurity threats and protection strategies using LLM | Individual | Remember, Understand | Experiential learning | Strengthens conceptual clarity and analytical reasoning
A2 | Installation and configuration of vulnerability analysis tools in Linux with LLM support | Team | Apply, Analyze, Evaluate | PBL | Develops collaboration and troubleshooting skills
A3 | Automation of cybersecurity installation using LLM-generated instructions | Individual | Apply, Analyze, Evaluate | PBL | Builds technical autonomy and procedural reasoning
A4 | Definition and clarification of key cybersecurity concepts using LLM | Individual | Remember, Understand | Direct instruction | Supports rapid conceptual understanding