Article

Integrating Generative AI into Live Case Studies for Experiential Learning in Operations Management

by David Ernesto Salinas-Navarro 1,*, Eliseo Vilalta-Perdomo 2, Jaime Alberto Palma-Mendoza 3,* and Martina Carlos-Arroyo 4
1 Facultad de Ingeniería, Universidad Panamericana, Augusto Rodín 498, Mexico City 03920, Mexico
2 Aston Business School, Aston University, Birmingham B4 7ET, UK
3 School of Engineering and Sciences, Tecnologico de Monterrey, Mexico City 14380, Mexico
4 Jefatura de Calidad Académica e Investigación, Universidad del Valle de Atemajac, Zapopan 45050, Mexico
* Authors to whom correspondence should be addressed.
Educ. Sci. 2026, 16(1), 15; https://doi.org/10.3390/educsci16010015
Submission received: 25 November 2025 / Revised: 16 December 2025 / Accepted: 18 December 2025 / Published: 23 December 2025
(This article belongs to the Topic AI Trends in Teacher and Student Training)

Abstract

This research-to-practice study examines how Generative Artificial Intelligence (GenAI) can be integrated into live case studies to enhance experiential learning in higher education. It explores GenAI’s potential as an agent to learn with, scaffolding reflection and engagement, and addresses gaps in existing applications, which often focus narrowly on content generation. To explore GenAI’s agentive potential, the approach was implemented in a UK postgraduate operations management module, where students engaged in a live case study of a local ethnic restaurant to refine its business model and operations. Data sources included module materials, student outputs, and feedback surveys, and thematic analysis was employed to assess how GenAI facilitated experiential learning. The findings suggest that GenAI integration facilitated exploration, reflection, conceptualisation, and experimentation. Students reported that the activity was engaging and relevant, facilitating critical decision-making and a deeper understanding of operations management. However, outcomes varied according to GenAI literacy and student participation: although GenAI-enriched learning is beneficial, human agency and contextual knowledge remain crucial. Overall, this study integrates GenAI as a cognitive partner throughout Kolb’s experiential learning cycle (ELC) and offers a transferable framework for active learning, illustrating how technology can enhance critical and reflective learning in authentic educational contexts. Limitations include uneven student participation and engagement, resource constraints, overreliance on artificial intelligence outputs, differentiated impact on learning outcomes, and reliance on a single case report, all of which must be addressed before the framework can be scaled up. Future research should test the framework through multi-case studies while developing GenAI literacy, measuring GenAI impact, and implementing ethical practices in the field.

1. Introduction

This research-to-practice study examines the integration of Generative Artificial Intelligence (GenAI) with live case studies to support experiential learning in higher education, particularly in operations management education. GenAI tools, with their capacity for human-like content and idea generation, are increasingly seen as a means of creating personalised, interactive, and efficient learning experiences (Demir, 2021; Harry & Sayudin, 2023; Shoikova et al., 2017).
In this regard, GenAI has become a revolutionary tool that uses patterns from extensive datasets rather than merely delivering information or executing predetermined tasks (Schryen et al., 2025). Its value depends on context and adoption (Harry & Sayudin, 2023; Lee & Low, 2024). Current research emphasises the pedagogical integration of GenAI beyond a task-performing tool (Qian, 2025). In operations management (OM) education, where students analyse complex systems and make decisions under uncertainty (Salinas-Navarro et al., 2024b), GenAI should support exploration and problem solving while exposing students to operational challenges. This position calls for GenAI as a cognitive aid that scaffolds analytical and decision-making processes in OM learning (Ho & Li, 2024; Salinas-Navarro et al., 2024c, 2024d).
While promising, most applications of GenAI in education have been limited to content creation, prompting, writing support, or learning tools, with less attention paid to their role in fostering critical thinking, reflection, and problem-solving skills (Demir, 2021; Harry & Sayudin, 2023; Shoikova et al., 2017; Yusuf et al., 2024). This debate signifies a conceptual divide (Crawford et al., 2023; Lee & Low, 2024), highlighting the need to balance the computational and algorithmic strengths of GenAI tools with the implementation of clear guidelines and pedagogical strategies that address their potential inherent risks and limitations (Kurtz et al., 2024; Symeou et al., 2025).
Consider a quality management module in which students improve a manufacturer’s inspection system. Using GenAI tools, students can quickly generate flowcharts, analyses, and control plans. While some critically refine these outputs, others accept them without scrutiny, missing key details. This scenario exposes a core tension: GenAI can enhance problem-solving, but uncritical use may impede higher-order reasoning, underscoring the challenge of balancing GenAI’s capabilities with learning outcomes while avoiding overreliance.
Accordingly, this view recognises GenAI’s underexplored possibilities for pedagogical integration in teaching and learning activities (Michel-Villarreal et al., 2023; Yan et al., 2024). Therefore, this study addresses this void as a research problem by investigating how GenAI can serve as an agent to learn with. In this study, this proposition involves scaffolding student engagement in realistic, immersive learning contexts through GenAI’s human-like interactional and feedback capabilities (Salinas-Navarro et al., 2024c, 2024d). The integration of GenAI into this type of learning scenario calls for a pedagogical framework that goes beyond content generation (Bonwell & Eison, 1991; Markulis, 1985).
In this sense, live case studies and experiential learning have been introduced as pedagogies to expose learners to evolving situations in which they interact with stakeholders and develop practical solutions in real time (Archer et al., 2021; Neubert et al., 2020). GenAI tools are deemed capable of supporting the exploration, reflection, and integration of learning in such scenarios (Belkina et al., 2025; Demir, 2021). Live case studies blend realistic challenges and in-class activities within traditional modules and curriculum designs (Culpin & Scott, 2012; Markulis, 1985). Experiential learning refers to the process of creating knowledge through the transformation of experience (D. Kolb, 1984). Kolb’s experiential learning cycle (ELC) provides a framework for studying learning through an active lens of reflective thinking and realistic action. Although other active learning approaches could incorporate GenAI, pedagogies such as challenge-based learning or service learning require significant instructional adaptations and extensive external engagement, which are beyond the scope of this study.
Despite the growing interest in GenAI applications, empirical evidence on the systematic integration of these tools into active learning frameworks remains lacking. It is essential to understand how GenAI can enhance critical thinking rather than merely automating tasks. This study is among the first to demonstrate how GenAI can be embedded as a cognitive partner in experiential learning and live case studies, thereby contributing to active learning research.
Therefore, the research question (RQ) guiding this study is as follows:
How can GenAI tools be effectively integrated into live case studies using Kolb’s ELC to enhance experiential learning in higher education?
To investigate this question, this study operationalises GenAI integration through Kolb’s ELC within a postgraduate operations management context.
Kolb’s experiential learning cycle (ELC) is proposed as a framework for structuring GenAI-supported live case studies (A. Kolb & Kolb, 2018). Experiential learning emphasises reflection and active engagement, offering a well-established model for designing authentic and motivating tasks across disciplines (A. Kolb & Kolb, 2018; D. Kolb, 1984; Kong, 2021). By positioning GenAI as an agent to learn with in live case studies, this study aims to develop an integration framework that supports experiential and active learning beyond simple technology use.
The working hypothesis is that integrating GenAI into live case studies through Kolb’s ELC enhances interest, motivation, relevance, and learning outcomes by encouraging critical thinking and practical application of knowledge (ElSayary, 2024; Lee & Low, 2024). Therefore, interest, motivation, relevance, and learning outcomes were proposed as indicators of successful GenAI integration into live case studies. Other pedagogical, technological, or learning environment aspects were not directly evaluated, but insights were expected to emerge informally during the study period (Radcliffe, 2009).
To illustrate this approach, a single exploratory case study is presented in this work regarding a postgraduate operations management module, involving the development of a live case study about an ethnic restaurant in the UK. The live case study provided an opportunity to address a realistic situation, achieve learning targets, and guide students through GenAI-enhanced activities within the ELC’s structure.
The remainder of this paper is organised as follows. Section 2 reviews the relevant literature. Section 3 outlines the methodology used in this study. Section 4 presents the results of the postgraduate operations management module. Section 5 discusses the study findings, limitations, and future directions. Section 6 presents the concluding remarks, including an overview of the contributions and their implications.

2. Theoretical Background

2.1. The Promises and Pitfalls of GenAI in Higher Education

The integration of GenAI into education has emerged as a transformative development that reshapes traditional learning practices and offers innovative responses to long-standing challenges (Michel-Villarreal et al., 2023). Tools such as DeepSeek 3.2, ChatGPT 5.1, Copilot Smart, and Gemini 3 are increasingly being adopted in higher education to enhance teaching, learning, and assessments.
GenAI tools commonly generate lecture materials, quizzes, and summaries, streamline instructor resources, and tailor materials to students’ needs (Bahroun et al., 2023; Wang et al., 2025). As intelligent tutoring systems, they offer real-time support and enhance self-paced learning (Bond et al., 2024). In assessments, GenAI aids grading and provides instant feedback (Bahroun et al., 2023). Additionally, it helps sustain collaboration and cooperation and fosters peer learning (Bahroun et al., 2023; Tillmanns et al., 2025).
GenAI can generate authentic assessment tasks that reflect real-world challenges, thereby promoting reflective thinking, problem-solving, and higher-order skills (Salinas-Navarro et al., 2024d; Xia et al., 2024). Additionally, GenAI facilitates personalised learning experiences, simulates practical environments, and offers tailored prompts and resources to support student learning (Lee & Low, 2024; Salinas-Navarro et al., 2024c; Sen & Deng, 2025).
GenAI applications can support knowledge acquisition, enquiry, discussion, practice, collaboration, and outcome production (Belkina et al., 2025). Therefore, GenAI tools can be considered cognitive partners that augment, rather than replace, human learning (Salinas-Navarro et al., 2024d). In this sense, they may serve as agents to learn with or to support the learning process of humans. Consequently, the practical foundations of GenAI in education must emphasise reflection and active engagement (Lee & Low, 2024; Salinas-Navarro et al., 2024d).
As agents to learn with, GenAI tools can engage students in dialogue, generate reflective questions, and scaffold complex tasks. This aligns closely with constructivist and enquiry-based pedagogies (Ruiz-Rojas et al., 2024). As agents to support learning, they can assist educators in designing differentiated instruction, automating formative assessments, and providing feedback in real time (Xia et al., 2024). Therefore, GenAI tools support reflection, problem solving, and knowledge construction, rather than merely providing information.
Regarding operations management education, only a limited number of studies have examined the use of GenAI and experiential learning in this discipline (Ho & Li, 2024; Salinas-Navarro et al., 2024c, 2024d; Sen & Deng, 2025). GenAI provides helpful but limited guidance for problem-solving within the experiential learning framework in operations management education. GenAI tools can outline a logical sequence of activities, suggest plausible courses of action, and provide practical examples; however, when tested on specific problem resolution, they lack specificity and accuracy. Using GenAI tools in operations management therefore requires mediation and the ability to engage in and sustain dialogue to obtain valuable and valid outputs. Accordingly, this study proposes not treating GenAI as a mere analytical tool for redesign or operational analysis. Instead, GenAI must be integrated as a cognitive partner within a structured experiential learning activity, allowing learners to critically evaluate AI-generated suggestions, refine them through contextual understanding, and develop situated solutions. This conceptual choice reflects the pedagogical aim of strengthening students’ reasoning, judgement, and problem-solving capabilities, rather than outsourcing these processes to GenAI.
Moreover, effective integration requires careful consideration of the ethical and practical challenges of its use in the classroom. These include data privacy, algorithmic bias, transparency, and potential misuse (Michel-Villarreal et al., 2023). Another key concern is the risk of overreliance, where students may substitute GenAI outputs for critical thinking, thereby compromising the quality, accuracy, originality, and creativity of their work (Selwyn, 2019; Yan et al., 2024). Ultimately, GenAI tools raise concerns regarding the need for clear pedagogical guidelines (Salinas-Navarro et al., 2024c; Yan et al., 2024).
Scholars have emphasised that these risks must be mitigated through ethical safeguards, transparent use, and the development of GenAI literacy and readiness among students and educators (Michel-Villarreal et al., 2023; Shailendra et al., 2024). Some studies have called for community-wide frameworks to ensure the responsible integration of GenAI into education (W. Holmes et al., 2022). In contrast, others caution against overenthusiasm, which could undermine the essential human dimensions of teaching and learning (Selwyn, 2019).
In summary, GenAI holds significant promise for enhancing creativity, productivity, and learning outcomes in higher education (Lee & Low, 2024). However, its integration must be purposeful, ethical, and pedagogically appropriate to be effective. Experiential learning provides a framework for integrating GenAI tools into live case studies and addressing existing challenges (Salinas-Navarro et al., 2024c).

2.2. Live Case Studies as Authentic Learning Contexts

Live case studies represent a dynamic pedagogical approach that immerses students in realistic, real-time, and unstructured problems, often in collaboration with industry partners and communities (Elam & Spotts, 2004; Markulis, 1985; Schonell & Macklin, 2019). Unlike traditional case studies, which are static and retrospective, live cases are dynamic and unpredictable, mirroring the complexity of professional decision-making contexts that require immediate attention (Culpin & Scott, 2012). They promote authentic, relevant, and impactful learning through engagement with real clients or communities, fostering teamwork, communication, and problem-solving skills (Charlebois & Foti, 2017). Frequently multidisciplinary, live case studies demand the integration of knowledge across fields and serve as integrative projects (Bonwell & Eison, 1991; Doolittle et al., 2023; Elam & Spotts, 2004; Neubert et al., 2020).
Live case studies are also valued for their ability to scale and adapt, making them appropriate for large groups, international participants, and distance learners (Schonell & Macklin, 2019). They provide a viable alternative to internships, utilising fewer resources while offering genuine workplace experience. However, successful execution relies on well-defined learning objectives, organised guidance, and alignment with educational principles (Hofmeister & Pilz, 2023; Vítečková et al., 2025).
Therefore, research highlights three implementation stages: design (definition of objectives, case selection, and incorporation of active pedagogical principles), facilitation (student orientation, partner engagement, and student adaptive support), and assessment (learning outcome evaluation, reflection, and iterative improvement) (Charlebois & Foti, 2017; Elam & Spotts, 2004). These stages provide guidelines for the systematic implementation of this approach.
Overall, live case studies can serve as powerful pedagogical tools that enhance student engagement and improve real-world learning outcomes. Nonetheless, challenges remain, such as live cases that may overwhelm students, require significant time and resources, and vary in pedagogical quality (Markulis, 1985; Neubert et al., 2020). Moreover, the absence of standardised criteria for assessing work-integrated learning in live cases highlights the need for further research and development (Roth & Smith, 2009; Schonell & Macklin, 2019).

2.3. Experiential Learning and Kolb’s ELC Framework

Experiential learning is a pedagogical approach that emphasises learning through reflection and practice (D. Kolb, 1984). Grounded in constructivism, it holds that learners build their understanding through interaction with the world around them. Kolb’s ELC is the most widely recognised framework in this field (A. Kolb & Kolb, 2018).
As shown in Figure 1, the cycle consists of four stages: concrete experience (CE), reflective observation (RO), abstract conceptualisation (AC), and active experimentation (AE). In CE, learners engage in hands-on, real-world activities. In RO, they reflect on these experiences, identifying the patterns and insights that emerge. In AC, reflections are linked to theoretical frameworks. Finally, in AE, learners apply their conceptual understanding to new contexts, testing and refining their approaches (D. Kolb, 1984). The ELC summarises the purposeful integration of reflective thinking, experiencing, and acting in a situation. Three principles characterise experiential learning:
  • Active engagement: learners must participate actively rather than passively receive knowledge (Bonwell & Eison, 1991).
  • Iterative and cyclical progress: the cycle mirrors natural human learning and reinforces adaptability (A. Kolb & Kolb, 2018).
  • Context dependence: learning is most potent when it occurs in authentic, real-world environments (Villarroel et al., 2019).
Experiential learning fosters meaningful understanding through authentic contexts, develops transferable skills such as critical thinking and collaboration, and enhances motivation and engagement (Baker et al., 2002; Kong, 2021). Previous studies on operations management education exemplify the possibilities of this pedagogy in practice (Salinas-Navarro & Garay-Rondero, 2020; Salinas-Navarro & Rodríguez Calvo, 2020). However, despite its benefits, there are challenges associated with its use, and careful planning and alignment with the intended outcomes are required (A. Kolb & Kolb, 2018; D. Kolb, 1984). Inequities in access to resources, networks, and opportunities may limit student participation, and ethical considerations are critical for ensuring fair and supportive experiential opportunities (Bergsteiner et al., 2010; Bradford, 2019).
When applied to live case studies, Kolb’s cycle provides a structured framework for authentic learning (Salinas-Navarro et al., 2024a). Students encounter real-world challenges (CE), reflect on the outcomes (RO), link their insights to theory (AC), and apply new strategies or solutions to evolving scenarios (AE) (Neubert et al., 2020). This alignment strengthens higher-order thinking, supports knowledge transfer, and naturally lends itself to authentic assessment, as students are evaluated on their engagement with real problems, reflective and analytical processes, and their ability to devise and test solutions (Koh, 2017; Villarroel et al., 2019; Wiggins, 1990).
Additionally, this study employed Kolb’s ELC framework to systematically integrate GenAI into the learning process. At each stage, GenAI can serve distinct roles: (i) enriching CE by providing data and clarifying real-world issues and concerns, (ii) supporting RO by asking theory-based probing questions, (iii) enhancing AC by generating alternative frameworks, and (iv) facilitating AE by providing a low-risk space for testing and refining ideas. Therefore, GenAI complements experiential learning by scaffolding processes without replacing authentic engagement, thereby expanding opportunities for iteration, feedback, and applied learning.

2.4. Student Agency in GenAI-Enhanced Learning

Kolb’s experiential learning emphasises active participation, reflection, and contextualisation, with student agency as a vital dimension that enhances this approach. An agent in learning contexts is an autonomous or semi-autonomous entity, either human or computational, that perceives its environment, makes decisions, and takes action to enhance learning (Veletsianos & Russell, 2014). These agents can be pedagogical characters, tutoring systems, or adaptive tools that guide and support learners.
Specifically, in humans, agency refers to the ability to make decisions, critically question, co-construct knowledge, and influence learning processes (Eteläpelto et al., 2013). In live case studies, agency manifests itself at different levels.
  • Concrete Experience: Students choose how to engage with stakeholders and prioritise enquiries.
  • Reflective Observation: Students critically reframe their experiences, challenging dominant biases (Lipponen & Kumpulainen, 2011).
  • Abstract Conceptualisation: Theories are used flexibly to reinterpret findings (Priestley et al., 2012).
  • Active Experimentation: Students propose creative solutions that reflect their values, identity, and commitments (Rajala et al., 2016).
GenAI tools can enhance student agency by providing flexible and low-risk environments for testing alternative approaches. However, they should support rather than replace student autonomy in adapting to and critiquing outputs (Calabrese Barton & Tan, 2020). It is essential to recognise that agency varies according to factors such as interest, motivation, gender, social class, and digital literacy (Crenshaw, 1991). An intersectional approach to pedagogical design is essential for creating inclusive environments that support all students in exercising their agency in the classroom.
Moreover, the exercise of agency in GenAI-supported learning environments is inseparable from ethical reflection (W. Holmes et al., 2022). When students critically evaluate AI outputs by questioning their accuracy, bias, and contextual relevance, they not only demonstrate cognitive autonomy but also develop ethical awareness as part of their learning process. Embedding structured opportunities for students to reflect on the how and why of GenAI use reinforces responsibility, transparency, and intellectual independence, ensuring that agency is practised as both cognitive and ethical know-how (Varela, 1999).
Following this overview, Table 1 synthesises how live case studies, experiential learning, agency, and GenAI tools intersect to create a unified pedagogical framework for conceptualising the study. This framework illustrates the role of GenAI as a cognitive partner in Kolb’s ELC within authentic real-world contexts. The following section outlines the methodological approach used to operationalise this integration.
Despite GenAI’s potential to enhance higher-order thinking skills (Yusuf et al., 2024), the literature lacks insight into its integration into live case studies and experiential learning processes. The methodology of this study aims to bridge this gap by using Kolb’s ELC to structure GenAI-supported live case studies.

3. Methodology

This study employed a mixed-methods methodology (Figure 2) to advance a single exploratory case study (Yin, 2009). The research process encompassed (i) the formulation of the RQ, (ii) the development of the theoretical background, (iii) data collection in a live case study, (iv) data organisation and analysis, and (v) results discussion and reporting (Tharenou et al., 2007b; Timans et al., 2019). The overarching aim was to examine, in a case study, how GenAI could be integrated into live case studies using Kolb’s ELC to support active learning in higher education.
The case study research method was used, as it involves examining a specific situation to elucidate and understand its fundamental elements, rather than focusing on other aspects or general matters (Timans et al., 2019). A case study was selected for its relevance to unique circumstances, locations, groups of people, or events, or for its illustration of the application of novel methods and tools in contexts with limited occurrences. This case study details a singular learning experience (a pedagogical live case study) in a postgraduate operations management module during the first semester of 2024. Consequently, this study did not incorporate comparisons with control groups to draw inferences or generalisations regarding other cases (Crowe et al., 2011; Flyvbjerg, 2006; Tharenou et al., 2007a; Yin, 2009). The case study is summarised in Section 4.

3.1. Data Collection

Data were collected from a postgraduate operations management live case study conducted in a university’s MSc Business and Management programme in the United Kingdom. The case required students (n = 86) to upscale a business model and improve operations through servitisation in collaboration with a local Bangladeshi–Indian restaurant.
The sources included documentary secondary data, such as the module syllabus, assignment briefs, student outputs, and assessment results. In addition, voluntary feedback surveys were distributed at the end of the module. Forty-six students completed the general module evaluation survey (response rate = 53%), while 16 completed a specific survey on the live case study (response rate = 19%). This variation in response rates is important for contextualising the findings.

3.2. Data Organisation and Analysis

The collected data were organised and analysed thematically, aligned with the research question and theoretical framework. A three-stage analysis was conducted.
  • Guided by Kolb’s ELC, deductive codes were assigned to capture evidence for each cycle stage: CE, RO, AC, and AE (Guest et al., 2012). Additionally, assignment-related codes were included to categorise students’ proposals in the live case study. Codes regarding value proposition, creation mechanisms, network, capture strategies, and operational implications allowed the identification of servitisation enhancements to the business model and its operations. To enhance reliability, the student outputs were independently double-coded by two authors. Discrepancies were discussed to achieve consensus by (i) identifying and contrasting differences, (ii) clarifying views and interpretations, (iii) reconciling and integrating descriptions, and (iv) allocating descriptions to the corresponding code.
  • Inductive coding identified emergent themes beyond the theoretical framework (Ryan & Bernard, 2003). This technique involves identifying recurring concepts, highlighting gaps (i.e., “silences”), and capturing unexpected perspectives. The two coders identified and grouped the new themes.
  • Student achievements and feedback were analysed using descriptive statistics to provide an accessible overview of learning outcomes and engagement patterns, appropriate for the exploratory nature and sample size of this single case (Garay-Rondero et al., 2019).
The goal of this approach was to connect theory, data, evidence, and interpretation, ensuring both deductive alignment and inductive openness, as well as a statistical description. Validation was achieved by contrasting the results across three data sources: module documentation, student outputs, and survey responses. This ensured that the convergence and divergence of the findings could be evaluated critically.
Although the study did not aim for full data saturation, thematic consistency across multiple data sources indicated sufficient depth to address the research question within the exploratory scope (Leung, 2015; Yin, 2009).

3.3. Results Discussion and Reporting

The discussion focuses on the interplay between live case studies, GenAI tools, and experiential learning. To enrich the interpretation and provide transparency, illustrative student quotations from survey responses and written outputs were included alongside descriptive statistics. These quotations highlight both positive experiences and challenges, enabling the analysis to move beyond numerical indicators to capture students’ experiences. This discussion sheds light on integrating GenAI tools into a live case study, offering insights into the theoretical and practical implications, limitations, and future work.

3.4. Methodological Scope

Given its exploratory nature, this study should be understood as a pilot investigation designed to illustrate the potential of integrating GenAI tools with live case studies using Kolb’s ELC (de Zeeuw, 1996; Drisko, 2025; Leung, 2015; Vahl, 1997). As no comparisons or control groups were included, causal and broader inferences or claims could not be made. Furthermore, reliance on a single case and voluntary survey data limits generalisability, as it does not provide automatic, probability-based, universal, or truthful applications to similar circumstances (Drisko, 2025). The results are indicative and require replication in other contexts to assess their broader applicability.
Data sources—module materials, outputs, voluntary surveys, and secondary materials—provided valuable insights, but their scope was limited in this study. In terms of validity, the study reflected authentic experiences within a postgraduate module (Leung, 2015; Vahl, 1997). However, the absence of broader data streams limited the findings, as no further cross-validation of the data sources was possible. Validity was assessed by critically evaluating the results against the established tenets of experiential and active learning theories, thereby identifying areas of congruence and deviation.
Regarding reliability, the relatively low number of survey responses may indicate that the results do not consistently reflect the entire student cohort. However, reliability is supported by a systematic and transparent coding process, with recurring patterns and theme convergence demonstrating strong internal consistency (Leung, 2015; Vahl, 1997).
Concerning transferability, this study offers indicative insights and a structured framework that can guide future replications across various disciplines and educational settings, while remaining attentive to variations and novel emergent themes (Drisko, 2025; Vahl, 1997). The descriptive account of learning activities provides helpful guidelines for applying the approach in settings similar to those considered in this study. In this sense, transferability depends on future studies reproducing the approach across diverse disciplines, institutions, and cultural environments.
Overall, several steps were taken to enhance methodological rigour (Leung, 2015; Vahl, 1997):
  • Transparent coding aligned with Kolb’s ELC;
  • Reliability checks via intercoder agreement on a sample of student outputs;
  • Validation across multiple data sources;
  • Inclusion of student quotations to support interpretation.
These measures strengthen the credibility and transferability of the findings while acknowledging the contextual and scope-related constraints.
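Intercoder agreement of the kind listed above is commonly quantified with a chance-corrected statistic such as Cohen’s kappa. The article does not specify which statistic was used, so the following is only an illustrative sketch, with invented codes for two raters labelling a sample of student excerpts against Kolb’s stages (CE, RO, AC, AE):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders on the same items."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Proportion of items on which the two coders agree exactly.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Agreement expected by chance, given each coder's label frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(codes_a) | set(codes_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical codes: two raters assigning Kolb stages to ten excerpts.
rater_1 = ["CE", "RO", "AC", "AE", "CE", "RO", "AC", "AE", "CE", "RO"]
rater_2 = ["CE", "RO", "AC", "AE", "CE", "AC", "AC", "AE", "CE", "RO"]
print(round(cohens_kappa(rater_1, rater_2), 3))
```

Values above roughly 0.8 are conventionally read as strong agreement; disagreements (here, one RO/AC clash) would typically be resolved by discussion before finalising themes.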

3.5. Ethical Considerations

All student data in secondary sources were anonymised (and never available to the tutors and researchers) before, during, and after collection and analysis. Participation in the module surveys was voluntary, with informed consent upon acceptance. Therefore, no identifiable personal data were collected from the participants. All data collection procedures were conducted in accordance with the institutional ethical guidelines.

4. Results

The results of this study are presented in three sections. Section 4.1 outlines the development of the live case study, describing student engagement and the supportive role of GenAI. Section 4.2 reports on GenAI integration across Kolb’s experiential learning stages. Section 4.3 summarises students’ perceptions and outcomes, indicating the effectiveness of the case study. These findings illustrate how GenAI supported students’ experiential learning and proposal development.

4.1. The Live Case Study Development and Design

The live case study was embedded in a postgraduate operations management module (n = 86) during the spring semester of 2024. Students were required to redesign the business model of a local Bangladeshi-Indian restaurant through servitisation, addressing four key dimensions: customer value proposition, value creation mechanisms, value delivery networks, and value capture (Raddats et al., 2019). Students were expected to demonstrate competitive advantages, identify risks, and assess operational implications supported by industry reports, academic literature, and benchmark practices. Weighted at 67% of the final grade, the project provided a concrete experience (CE) aligned with Kolb’s ELC, with GenAI introduced as a reflective and exploratory tool.
The activity emphasised hands-on, real-world engagement. Over ten weeks, students engaged with management, collected data, and developed proposals on customer value, creation, delivery networks, and capture. Site visits and discussions provided insights into the current operations, challenges, and opportunities in the UK Bangladeshi catering sector.
The restaurant, a small business in a competitive catering market, aimed to offer real Indian food through authentic dishes (Maanvi, 2017). It faced significant challenges, including intense competition from diverse catering options (European, Asian, Caribbean, and African), shifting customer expectations of British curry houses, and the prevalence of simplified or unhealthy dishes that undercut authenticity at lower prices.
Students were encouraged to use GenAI tools for exploration, reflection, conceptualisation, and validation. They were also cautioned about the limitations of text generation in addressing context-specific challenges. To this end, the students received:
  • Explanation of the capabilities and limitations of GenAI,
  • A list of prompts for effective GenAI use, and
  • A GenAI-produced assignment exemplar to highlight both the possibilities and shortcomings.
The exemplar clarified expectations but showed that generic, GenAI-generated responses lacked depth and contextual sensitivity. To support effective use, students were guided through an incremental prompting procedure.
  • Input an initial prompt for conceptual clarification (e.g., servitisation, business models, and Indian–Bangladeshi restaurants).
  • Add incremental prompts on specific topics (e.g., Indian–Bangladeshi restaurants in the UK), and gradually provide more background information.
  • Use iterative refinement prompts for content generation (e.g., exploring alternative value propositions, creation mechanisms, delivery networks, and capture requirements).
  • Combine outputs from incremental steps to create comprehensive, contextually rich analyses (e.g., upscaling business models and their operational implications).
  • Refine outputs using feedback from literature, primary data, and staff/researcher input.
This process illustrated how GenAI could enrich the case study while requiring students to exercise judgement, draw on contextual knowledge, and engage in critical evaluation.
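The incremental procedure above is essentially a loop that grows a shared conversation context with each step, so later refinements build on earlier outputs. The sketch below illustrates that structure; the `generate` stub stands in for any GenAI chat API, and all names here are hypothetical rather than part of the module’s materials:

```python
def generate(history):
    """Stub for a GenAI chat call; a real implementation would send
    `history` to a model API and return its reply text."""
    return f"[model reply to: {history[-1]['content']}]"

def incremental_case_analysis(prompts):
    """Staged prompting: each prompt is appended to the conversation so
    the model sees all prior context when producing its next output."""
    history, outputs = [], []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = generate(history)
        history.append({"role": "assistant", "content": reply})
        outputs.append(reply)
    # Combine the incremental outputs into one contextually rich analysis.
    return "\n".join(outputs)

stages = [
    "Clarify servitisation and business models for restaurants.",    # initial prompt
    "Focus on Indian-Bangladeshi restaurants in the UK.",            # incremental context
    "Propose alternative value propositions and delivery networks.", # iterative refinement
    "Combine the above into one upscaled business model analysis.",  # synthesis
]
analysis = incremental_case_analysis(stages)
print(analysis)
```

The design point mirrors the guidance given to students: value comes from accumulating context across prompts and then critically refining the combined output against primary data, rather than from any single one-shot query.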

4.2. Integration of GenAI into the ELC and Student Outputs

GenAI use was scaffolded differently across the four stages of Kolb’s ELC (CE, RO, AC, and AE), demonstrating how GenAI functions as a cognitive partner throughout the process. Deductive thematic analysis of the students’ written assignments yielded the following descriptions of each stage:
  • Concrete experience (CE): GenAI supported students by offering contextual information about the UK Bangladeshi catering sector and helping them prepare targeted questions before engaging with the restaurant. One report noted, “Using GenAI helped understand the market and prepare better questions for the restaurant visit”.
  • Reflective observation (RO): Students used GenAI to structure their reflections and explore alternative interpretations of their observations. GenAI-generated probing questions prompted students to critically examine the underlying causes of operational issues. It was reported that “the AI asked questions that hadn’t been considered—it helped think about why certain problems kept recurring”.
  • Abstract conceptualisation (AC): GenAI assisted students in linking their observations to servitisation and business model frameworks by generating alternative conceptual approaches. Students used these outputs to compare potential strategies before selecting and refining their proposals. One cluster of students reported that “GenAI helped brainstorm solutions and compare different business model ideas before selection”.
  • Active experimentation (AE): In the AE stage, students used GenAI as a low-risk “testing space” to iterate their servitisation proposals and explore their operational implications. GenAI-generated suggestions were refined by comparing them with primary data and tutor feedback. Another cluster noted, “GenAI allowed configuring different scenarios and testing new solutions”.
This structured integration ensured that GenAI was used strategically to enhance each stage of Kolb’s cycle while keeping student judgement, creativity, and contextual knowledge central to the process, as presented in Figure 3.
As the assignment was completed individually, each student generated their own proposal. During the thematic analysis, the students’ outputs were clustered into groups of similar ideas for analytical purposes, and these clusters reflected a range of servitisation-based solutions. It is important to note that these “groups” refer only to thematic clusters emerging from individual submissions rather than collaboratively formed student teams. Students proposed transitioning the restaurant from a traditional food-service model to a servitisation-based approach, with several innovative ideas that directly addressed the challenges of authenticity, competition, and customer engagement.
Overall, student outputs emphasised enhanced customer experiences, improved services and meals, technology-enabled personalisation, and diversified revenue streams. These outputs illustrate how GenAI-supported conceptualisation and experimentation informed students’ design decisions and guided the development of more innovative and contextually responsive solutions. Extended examples of student proposals and detailed components of the business model are provided in Appendix A.

4.3. Student Assessment and Reflection

Survey data indicated generally positive perceptions of the module. Students rated it as interesting (mean = 4.3) and engaging (mean = 4.1), suggesting that the live case study was motivating and relevant to learners. The ratings for clarity of learning outcomes (mean = 4.2) and confidence in achieving them (mean = 4.1) were similarly high, indicating a good alignment between teaching design and student expectations. Scores related to academic support (mean = 3.9–4.0) were slightly lower, suggesting that some students would have welcomed additional guidance. Table 2 summarises these results.
Qualitative feedback showed no significant concerns regarding the live case study or module design. A few students described the live case as “the best part of the module”, while others commented positively on the practical nature of lectures and seminars. Suggestions for improvement varied and mainly reflected preferences for additional interactive activities or recorded sessions rather than issues directly related to the implementation of the live case or GenAI use.
A second survey specifically addressed the live case study (n = 16 participants). Most respondents found the activities interesting (88%) and relevant to their professional development (94%), reinforcing the value of authentic, real-world learning tasks. However, only 69% rated GenAI tools as reasonably or very helpful. This variation suggests that while the live case design was widely appreciated, the role of GenAI was perceived more unevenly, mainly because of differences in GenAI literacy and prompting strategies among the participants. Some students used GenAI effectively for brainstorming and reflection, whereas others struggled to obtain context-specific insights. Table 3 summarises these patterns.
Tutor reflections noted limited seminar attendance (below 50%) and low accomplishment among disengaged students (24% below 66% on progression tasks), with fewer than 10% seeking additional academic support. These patterns were reflected in assessment outcomes: the average mark was 56.3 (Std Dev = 9.05), with 9.5% achieving distinction, 42% merit, and 28% failing the module. The assessment rubrics were based on the module’s learning objectives and intended outcomes. Although mitigation strategies, including reminders, online materials, active in-class tasks, and on-demand tutorials, were implemented, their impact was limited.
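The cohort statistics reported above (mean mark, standard deviation, and the distinction/merit/fail proportions) can be reproduced from a list of raw marks. The sketch below uses invented marks and assumes typical UK postgraduate bands (distinction ≥ 70, merit 60–69, fail < 50), which the article does not state explicitly:

```python
from statistics import mean, pstdev

def cohort_summary(marks):
    """Descriptive statistics and UK-style band proportions for a module.
    Band thresholds are an assumption, not taken from the article."""
    n = len(marks)
    return {
        "mean": round(mean(marks), 1),
        "std_dev": round(pstdev(marks), 2),  # population standard deviation
        "distinction": sum(m >= 70 for m in marks) / n,
        "merit": sum(60 <= m < 70 for m in marks) / n,
        "fail": sum(m < 50 for m in marks) / n,
    }

# Hypothetical marks for a small illustrative cohort (not the study's data).
marks = [72, 65, 63, 58, 55, 54, 52, 48, 45, 60]
print(cohort_summary(marks))
```

Reporting both central tendency and band proportions, as the article does, gives a fuller picture than the mean alone, since a mid-50s average can coexist with a substantial failure rate.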
Overall, these results indicate that the live case study and GenAI integration benefited students who engaged consistently with the experiential activities. Simultaneously, those with lower attendance or minimal participation struggled to use GenAI effectively. The findings underscore the importance of both GenAI literacy and sustained engagement in maximising the pedagogical value of GenAI-supported experiential learning.

5. Discussion

The discussion interprets the findings of this exploratory study in light of the research question and the theoretical framework connecting GenAI, experiential learning, and live case studies. Overall, the integration of GenAI supported students’ exploration, reflection, conceptualisation, and experimentation, but the outcomes varied with engagement and digital literacy. The following sections unpack these results by examining the pedagogical role of GenAI, its relationship with Kolb’s ELC, implications for student performance, and the broader theoretical and ethical contributions of the study.

5.1. Findings on the Use of GenAI in the Live Case Study

This study provides valuable insights into the use of GenAI-assisted problem solving in live case studies as follows.
  • Realistic experiences (Charlebois & Foti, 2017): Students immersed themselves in a real-world scenario, interacting with staff, assessing the restaurant’s challenges and opportunities, crafting servitisation strategies, and proposing actionable solutions. Because of the dynamic and authentic business context, students adapted their thinking and actions to a real-world scenario, mirroring professional challenges.
  • Live case studies enhance the learning process (Culpin & Scott, 2012): Students reported improvements in skills related to the intended learning outcomes, particularly in identifying areas for decision-making, understanding the role of operations management in performance, and recognising its strategic importance. This suggests a contribution to achieving the learning targets.
  • Integration of theory and practice (Elam & Spotts, 2004; Neubert et al., 2020): By collaborating with the restaurant, students effectively connected theoretical concepts to practical applications in situational contexts. This approach enabled them to analyse, critically implement, and understand the fundamental principles of operations management and business strategy.
These insights are linked to the authentic assessment principles of realism, cognitive challenge, and evaluative judgement (Villarroel et al., 2019). Together, live case studies and GenAI integration provide a pedagogical setting for critical thinking, decision-making, and problem-solving with real-world engagement.

5.2. Findings on Differential Effectiveness of GenAI Use

However, the use of GenAI generated mixed experiences. While some students found it helpful for reflection and idea testing, others reported limited value. This variation stems from uneven GenAI literacy, poor engagement with the guidance provided, reduced participation in seminars where prompting strategies were modelled, and the inherent limitations of GenAI in handling complex, context-specific challenges. These patterns highlight the need for scaffolding, effective prompting, and ethical awareness to ensure that GenAI enhances rather than replaces experiential learning. They also indicate the importance of examining the conditions under which GenAI meaningfully supports learning in live case contexts.
The data also revealed several situations in which GenAI proved ineffective. Students frequently struggled with vague and superficial outputs that lacked operational depth or contextual relevance. When prompts were too general, GenAI tended to produce decontextualised recommendations that did not reflect the restaurant’s explicitly communicated capabilities, constraints, or strategic priorities. Some submissions demonstrated an overreliance on unrefined AI responses, with students adopting AI-generated suggestions without questioning inaccuracies—for example, proposing technologies beyond the restaurant’s budget, recommending service expansions inconsistent with the owner’s ethos, or mischaracterising the restaurant’s culinary identity. In other cases, similar phrasing or patterns across assignments suggested that some students relied heavily on generic GenAI outputs rather than critically engaging with live case data.
These examples illustrate that productive GenAI use requires students to craft precise prompts, evaluate the quality and feasibility of AI-generated outputs, and integrate them with contextual knowledge derived from stakeholder interactions and primary data. When these cognitive and evaluative skills are lacking, GenAI becomes a hindrance rather than a cognitive partner, underscoring the need to embed GenAI literacy, critical interrogation, and iterative prompting practices more explicitly within experiential learning activities.
Collectively, these findings indicate that future research should examine how GenAI literacy, prompt design, and authentic learning contexts interact to shape the effectiveness of GenAI as a cognitive partner and how these relationships can inform more robust pedagogical models for technology-enhanced experiential learning.

5.3. Findings on GenAI and Experiential Learning

Additionally, regarding the use of GenAI tools with Kolb’s ELC in the live case study, the results suggest the following:
  • Complemented concrete experience: GenAI helped situate some learners in a real-world problem before and during their direct interactions with the restaurant and its stakeholders, thereby reducing time and effort.
  • Enhanced reflective observation: GenAI-supported reflection helped some students deepen their investigations and make deeper conceptual connections.
  • Facilitated abstract conceptualisation: GenAI accelerated idea generation by offering innovative business models and operational strategies, supporting the transition from observation to conceptual solutions and frameworks.
  • Supported active experimentation: GenAI served as a low-risk environment for testing business model enhancements and obtaining feedback before formalising recommendations.
These roles highlight GenAI’s potential as a platform for observation, reflection, conceptualisation, and experimentation (A. Kolb & Kolb, 2018). However, they underscore the continued need for critical thinking and ethical evaluation (Salinas-Navarro et al., 2024c, 2024d). Students were required to assess the accuracy, relevance, and implications of GenAI-generated outputs, ensuring that GenAI acted as support rather than a liability or substitute for authentic learning.
In this sense, further enquiry concerns the balance between GenAI-driven insights and human creativity. While GenAI can enrich exploration, the most valuable outcomes emerge when students apply creativity, judgement, and contextual understanding to adapt or challenge GenAI’s suggestions to meet their needs. This balance aligns with Kolb’s emphasis on reflective practice and reinforces the importance of positioning GenAI as a complement, not a replacement, for experiential learning.
Furthermore, the live case study survey results suggest that, despite the limited responses, the pedagogical implementation was positive. The results highlight that the live case study was interesting, motivating, and relevant, and that it fostered critical thinking. This aligns with the existing literature on experiential learning (Kong, 2021), indicating its ability to link education with professional experience by building knowledge through the conversion of practice into understanding. In this case, learning activities shift from knowledge provision to experience mediation through a systematic process of interaction with learning materials and dynamic participation, leading to intrinsic motivation and interest in course material. However, no further claims or inferences can be made about the generalisability of these results due to the limitations of the case study method.

5.4. Findings on Student Performance

Student performance outcomes reflect the broader challenge of engagement in UK postgraduate education. Attendance was below 50%, fewer than 10% of students sought extra support, and almost one-third (28%) failed the module, despite overall module and case evaluations being satisfactory. This suggests that while live cases and GenAI integration benefited engaged students, disengaged students missed authentic activities and clarification of GenAI use, thereby reducing the effectiveness of experiential learning and GenAI adoption. Student engagement and participation are prerequisites for live case studies and experiential learning (Bonwell & Eison, 1991). In this study, the high proportion of disengaged students was an unanticipated obstacle to conducting the live case study.
This issue is not unique to the current case. Research highlights multiple drivers of disengagement, including negative perceptions of teaching quality, external pressures such as work and family commitments (Oldfield et al., 2019), and well-being concerns such as stress and loneliness (Berry et al., 2023). These factors are associated with reduced performance, persistence, and satisfaction (Berry et al., 2023; N. Holmes, 2018). Institutional structures further compounded the problem, as attendance was not mandatory, lecturers were not required to monitor it, and disengaged students could not be systematically identified for follow-up.
Additionally, instructional design issues may have hindered the learning process. Assessment focused on summative evaluation rather than formative, continuous evaluation and feedback, which may have disadvantaged disengaged students through the lack of timely marking information. Moreover, the live case study required engagement and participation in specific seminar sessions; students who failed to attend missed information essential to their assignment development. In a disengaged, low-participation educational setting, the results suggest that live case studies are a challenging pedagogical choice.
Therefore, additional mitigation strategies are necessary for future implementation. Some alternatives include lecture recordings and digital media resources for asynchronous learning (Horlin et al., 2024; Limonova et al., 2024; Salman et al., 2025). Assessment methods may also require adaptation to provide continuous feedback and follow-up.
These findings show that GenAI-supported experiential pedagogy requires systemic and instructional responses. Policy measures that promote attendance and participation, strengthen student well-being, and integrate continuous assessment can provide the consistency needed for the success of these approaches (Berry et al., 2023; N. Holmes, 2018). Without this support, even well-designed experiential activities risk leaving disengaged students behind. Lecturers require contextual awareness, pedagogical responsiveness, and broader educational support to effectively fulfil their teaching responsibilities. This situation requires instructional design to balance pedagogical and learning objectives with situational limitations and student preparedness.

5.5. Theoretical and Practical Implications

The theoretical contribution of this work lies in demonstrating a tripartite integration: (i) live case studies as authentic, dynamic pedagogical contexts, (ii) experiential learning as a structuring and reflective process, and (iii) GenAI as a cognitive partner that enhances, rather than replaces, human judgement. This synthesis advances the theoretical discourse by demonstrating how technology, pedagogy, and real-world engagement can be effectively aligned to cultivate critical, creative, and context-sensitive learners.
The study shows that live case studies can provide authentic, dynamic environments in which the potential use of GenAI becomes pedagogically meaningful. In realistic contexts with incomplete information and evolving stakeholder needs, students must critically obtain, evaluate, and apply AI outputs. This authenticity suggests deeper cognitive engagement, ensuring that GenAI-supported learning is grounded in real-world complexity rather than hypothetical tasks.
This study also advances the conceptualisation of GenAI by identifying its affordances across Kolb’s experiential learning cycle. In CE, GenAI provides contextual information and prepares students for stakeholder engagement. In RO, GenAI serves as a scaffold, generating questions that prompt students to examine their assumptions. In AC, GenAI functions as an ideation engine, producing frameworks that students can compare with theory. In AE, GenAI provides a validation environment in which students can test their ideas before conducting real-world interventions. This suggests that GenAI’s value extends beyond content generation; it shapes how learners perceive, reflect, theorise, and act throughout the cycle.
Additionally, GenAI can function as a cognitive partner to develop higher-order skills. Through the use of GenAI in live case studies within the ELC framework, learners can create, analyse, apply, and evaluate knowledge to solve complex problems, make informed judgments, and connect ideas. This approach involves human thinking and judgement alongside the use of AI as an agent to learn with.
Therefore, these insights advance the theory by positioning GenAI as an active participant in contextualised experiential learning, capable of augmenting but never replacing the interpretive work that underpins meaningful learning. Furthermore, our analysis highlights the importance of integrating experiential learning and GenAI to enhance student agency (Eteläpelto et al., 2013). The findings suggest that while engaged students utilised GenAI as a cognitive partner, those with limited participation or digital readiness had fewer opportunities to exercise their agency. This reveals a structural challenge, as agency is influenced by individual autonomy, sociocultural factors, and external constraints (Calabrese Barton & Tan, 2020; Crenshaw, 1991). Recognising agency in experiential learning reframes GenAI from a mere technological tool into a potential catalyst for augmenting (or diminishing) agency, contingent on intentionally designed and equitable conditions, supportive learning environments, and suitable pedagogies for active, experiential learning.
Regarding the ethical aspects of GenAI in live case studies, authentic assessments in learning activities helped mitigate academic integrity issues (Villarroel et al., 2019; Wiggins, 1990). As explained, students were provided with GenAI assignment resolutions and a prompting procedure in advance, highlighting the possibilities and limitations. Moreover, live case studies help address ethical concerns by requiring students to engage with realistic, context-specific developments that pose cognitive challenges and evaluative judgments (Villarroel et al., 2019). Consequently, the ethical and responsible use of GenAI can rely on authentic instructional design and teacher guidance by incorporating activities that address the technology’s potential, risks, and limitations.
Referring to the practical implications of this study, we propose the following guidelines for integrating GenAI into live case studies.
  • Clarify objectives: Align GenAI use with higher-order skills (reflection, critical thinking, and application).
  • Develop GenAI literacy: Provide orientation on prompting strategies and critical evaluation of outputs.
  • Design structured prompts: Scaffold iterative use of GenAI across Kolb’s stages (CE, RO, AC, and AE).
  • Balance GenAI and judgement: Encourage students to adapt to or challenge suggestions from GenAI.
  • Embed ethical use: Explicitly address bias, originality, and academic integrity within the pedagogical and instructional design.
  • Integrate reflection: Require students to document how GenAI supports (or limits) their learning.
  • Plan for scalability: Anticipate resource demands and adapt the approach to different class sizes and disciplines.
  • Anticipate agency barriers: Scaffold agency support by offering equitable conditions and autonomy-supportive teaching.
These recommendations offer actionable steps for educators seeking to design authentic, technology-enhanced learning environments. They also emphasise the need to position GenAI as a complement to, rather than a substitute for, student creativity and contextual judgement. Finally, this study demonstrates that ethical concerns regarding GenAI can be mitigated through authentic instructional design. Students were shown both the possibilities and limitations of GenAI outputs, ensuring that reflective evaluation remained central to the learning process (Villarroel et al., 2019).

5.6. Limitations and Future Work

This study acknowledges the limitations that shape the interpretation of the findings. A central challenge was the limited student involvement and lecture attendance, which restricted the depth of engagement with the live case study. Experiential learning and live case studies depend heavily on active participation; without it, the effectiveness of the learning cycle diminishes (Kong, 2021). Learners who were unable or unwilling to participate fully were less likely to benefit from the activity’s hands-on, reflective nature.
The second limitation concerns the variable use of GenAI tools. Although students were introduced to prompting strategies and warned about the risks of over-reliance, those who did not attend the orientation sessions lacked the literacy needed to employ GenAI effectively. Consequently, the technology’s potential as a reflective and exploratory partner was not fully realised.
Third, the resource-intensive nature of live case designs represents a constraint. The teaching team was required to coordinate with external partners, scaffold learning activities, and provide technical guidance. This raises questions about the scalability of such approaches across programmes and institutions, where resources and staff time may be more limited.
Fourth, methodological constraints must be acknowledged (de Zeeuw, 1996; Drisko, 2025; Leung, 2015; Vahl, 1997). Data collection relied on secondary sources, student outputs, and voluntary feedback surveys, which limited the validity of the findings. The small number of survey responses relative to class size raises concerns about reliability, as the data may not reflect the full spectrum of student experiences. Moreover, the findings are deeply contextualised in a single postgraduate module involving a UK-based ethnic restaurant; they should be seen as indicative and cannot be generalised to broader settings. As this study was conducted with a single cohort engaging in one live case, the findings reflect the characteristics of this specific context, and future studies should employ multiple cases to enhance generalisability. Nevertheless, this specificity also enriches the study’s exploratory nature, and the structured, integrated framework it provides illustrates how the approach could be transferred to other disciplines, cultural contexts, and educational levels.
Fifth, the low response rate (19%) to the live case study survey limited the robustness of the findings regarding students’ perceptions of GenAI integration. Because participation was voluntary, the respondents likely represent the more engaged students, introducing potential selection bias and excluding the views of disengaged students; the results therefore cannot be assumed to represent the entire cohort. Within this subset, students favourably evaluated their learning outcomes, the relevance of the live case study, and GenAI use. Hence, promoting student engagement is essential to enhance learning experiences and results while surfacing challenges and opportunities. Nevertheless, the survey data should be interpreted with caution, and future studies should aim for higher response rates to strengthen the validity of findings on student experiences.
Another limitation is the study’s inability to isolate GenAI’s specific contribution to learning outcomes. As the live case design and GenAI activities were implemented together, improvements in engagement, reflection, or understanding cannot be attributed solely to either component. As an exploratory case, this study cannot make causal claims about the effectiveness of GenAI, and the findings should be considered indicative. Future research using comparative designs, such as cohorts completing cases with and without GenAI, would help clarify the distinct pedagogical contributions of GenAI.
Finally, a limitation concerns the high module failure rate (28%). Although disengagement and low attendance were evident early in the semester, the data could not determine whether these outcomes were influenced by students’ GenAI literacy or integration skills. Without systematic data on students’ GenAI use relative to performance, we cannot draw conclusions about the relationship between GenAI integration and assessment outcomes. Future studies should examine how GenAI readiness, engagement, and participation in activities influence learning performance.
Future research should extend this study in several ways. (i) Expanding to multiple case studies across disciplines, institutions, and cultural contexts would test the transferability of the proposed framework. (ii) Employing mixed-methods designs that combine surveys, interviews, and direct observations would improve validity and reliability. (iii) Comparative studies between cohorts with and without structured GenAI integration may provide clearer evidence of its pedagogical impact. (iv) Simultaneously, longitudinal research could investigate whether the skills developed through GenAI-supported live cases are effectively applied in professional practice. (v) Regarding experiential learning, the evaluation of GenAI’s pedagogical contribution at each ELC stage should assess specific activity enhancements. (vi) Future research should also investigate the differential effectiveness of GenAI use by examining how variations in GenAI literacy, prompting strategies, and contextual engagement influence learning outcomes in authentic experiential settings. Finally, (vii) further investigation is needed into the best practices for enhancing GenAI literacy, risk-mitigation strategies, and scaffolding to ensure that diverse student populations can engage with GenAI in equitable, responsible, and effective ways.
Addressing these limitations in future studies will strengthen the evidence base for GenAI-supported experiential learning and inform institutional strategies for embedding authentic, technology-enhanced pedagogies in higher education.

6. Conclusions

This study examined the integration of GenAI into live case studies through experiential learning in higher education. By aligning live case designs with Kolb’s ELC, this study showed how GenAI can enrich concrete experiences, scaffold reflective observation, support abstract conceptualisation, and provide a low-risk environment for experimentation. The live case format ensured authentic engagement with real-world business challenges, enabling students to bridge the gap between theory and practice in their operations management module.
GenAI tools served as agents to learn with, complementing students’ judgement and creativity, although the benefits were uneven and shaped by students’ literacy and engagement levels. These outcomes highlight the opportunities and limitations of combining GenAI with experiential learning, underscoring the need for scaffolding and critical evaluation.
This study shows that GenAI-supported live case studies can cultivate transferable skills, such as critical thinking, problem solving, and adaptability. However, challenges, including uneven participation, resource demands, and overreliance on generic GenAI outputs, must be addressed before scaling. Although the evidence comes from a single postgraduate case and voluntary module surveys, the framework offers transferable insights to guide future replication across disciplines and contexts.
This study extends previous research on active learning by embedding GenAI across the stages of Kolb’s learning cycle and applying it within an authentic, dynamic pedagogical context: a live case with an external partner. In doing so, it advances both the theoretical understanding of human–artificial intelligence collaboration in learning and the practical design of authentic, technology-enhanced pedagogies, positioning GenAI as a cognitive partner that enhances, rather than replaces, human judgement. Future research should test this framework through multi-case and longitudinal studies while developing best practices for GenAI literacy and ethical use.
Ultimately, this study illustrates how integrating GenAI can strengthen authenticity, reflection, and applied learning in higher education when positioned as a cognitive partner rather than a substitute for human judgement. The study also highlights that student agency is both a condition and an outcome of GenAI-supported experiential learning, requiring autonomy, ethical awareness, and critical engagement. Supporting this agency demands structured guidance and sensitivity to sociocultural factors that influence participation and digital readiness.
Therefore, this study contributes a transferable pedagogical framework that combines live case studies, experiential learning, and GenAI to promote inclusive, technology-enhanced, and reflective learning experiences. The proposed framework can inform institutional strategies to embed GenAI literacy and ethical use into curricula, ensuring that students develop the cognitive and moral competencies necessary for responsible innovation in higher education.

Author Contributions

Conceptualization, D.E.S.-N.; methodology, D.E.S.-N.; validation, D.E.S.-N. and E.V.-P.; formal analysis, D.E.S.-N. and E.V.-P.; investigation, D.E.S.-N.; resources, D.E.S.-N.; data curation, D.E.S.-N.; writing—original draft preparation, D.E.S.-N.; writing—review and editing, D.E.S.-N., E.V.-P., J.A.P.-M. and M.C.-A.; visualisation, D.E.S.-N.; supervision, D.E.S.-N.; project administration, D.E.S.-N.; funding acquisition, J.A.P.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Institutional Review Board Statement

Ethical review and approval were not required for this study, as it was based solely on retrospective documentary research methods and techniques, conceptual analysis, and publicly available literature, and did not involve human participants, animals, or any form of personal or sensitive data. All sources have been appropriately cited to ensure academic integrity and transparency of this study.

Informed Consent Statement

This study used retrospective documentary research methods to analyse previously collected data from the module survey responses. Before data collection, informed consent was formally obtained to access and use anonymised institutional student feedback for research and quality enhancement. No personal or identifying information was collected or disclosed at any stage of the study. All data were handled in accordance with institutional ethical guidelines and data protection regulations. The study posed no risk to the participants, and the findings are reported solely in an aggregated, non-identifiable form.

Data Availability Statement

The data supporting the findings of this study are available upon request from the corresponding author (D.E.S.-N.). These data are not publicly available because they contain information that can compromise the privacy of the academic records.

Acknowledgments

The APC was funded by the Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, Mexico.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this manuscript.

Abbreviations

The following abbreviations are used in this manuscript.
AI: Artificial intelligence
AC: Abstract conceptualisation
AE: Active experimentation
CE: Concrete experience
ELC: Experiential Learning Cycle
F: Cumulative frequency
GenAI: Generative Artificial Intelligence
n: Number of respondents
RO: Reflective observation
RQ: Research Question
Std Dev: Standard deviation

Appendix A

Appendix A.1. Example 1: Subscription-Based Culinary Experience Model

One student group proposed transforming the restaurant into an experience-oriented service provider by introducing a subscription-based dining model. This approach bundles regular meals with value-added cultural and educational components, such as:
  • Hands-on cooking workshops led by the restaurant’s chefs
  • Virtual cooking tutorials offered through a digital platform
  • Cultural storytelling events themed around Bangladeshi–Indian cuisine and traditions
Through this model, the restaurant becomes more than a food provider; it functions as a cultural and community learning space. The subscription mechanism also generates recurring revenue, strengthens customer loyalty, and differentiates the restaurant from competitors that offer standard dine-in services.

Appendix A.2. Example 2: Technology-Enabled Personalisation Model

A second group emphasised the use of digital technologies to strengthen the customer experience. Their proposal included the development of a mobile application featuring:
  • Personalised menu recommendations based on customer preferences
  • A data-driven loyalty and rewards programme
  • Pre-ordering and table-planning functionalities
  • A customer feedback dashboard to inform operational improvements
This model positions the restaurant as a technologically responsive service provider. It leverages data analytics to increase operational efficiency while delivering an enhanced, personalised dining experience. Students argued that such digitalisation could help the restaurant compete effectively in a saturated market where convenience and personalisation are increasingly valued.

Appendix A.3. Example 3: Sustainability-Driven Business Model

A third group developed a proposal centred on sustainability and ethical dining. Their recommendations included the following:
  • Partnering with local organic suppliers to increase ingredient quality and reduce food miles
  • Introducing eco-friendly packaging for takeaway services
  • Offering a sustainable menu tier with environmentally responsible meal options
  • Promoting sustainability practices through marketing and community engagement
The group proposed that sustainability could serve as a differentiating value proposition, positioning the restaurant as a premium option that appeals to environmentally conscious consumers. This model combines operational adjustments with strategic identity building through sustainability credentials.

Appendix A.4. Cross-Cutting Elements Across Student Proposals

Despite the variety of ideas, several shared themes emerged across the groups.
  • Value Proposition. Students consistently aimed to enhance the customer experience by offering the following:
    • Authentic and immersive dining experiences
    • Personalised and technology-enhanced services
    • Cultural and educational components
    • Health-conscious or sustainability-oriented offerings
  • Value-Creation Mechanisms. Proposals involved the following:
    • Integrating digital tools for personalisation and efficiency
    • Developing staff skills to deliver enhanced services
    • Leveraging culinary expertise to create unique experiences
    • Establishing partnerships with suppliers and cultural organisations
  • Value Network. Students recommended forming alliances with:
    • Technology providers
    • Local organic producers
    • Cultural associations
    • Community influencers
    These partnerships broaden operational capabilities and deepen community engagement.
  • Value-Capture Strategies. The revenue diversification strategies included:
    • Subscription plans
    • Event hosting and workshops
    • Premium dining packages
    • Virtual classes and branded products
  • Operational Implications. Students identified several operational adjustments necessary to support the redesigned business model:
    • Strengthened supply chain management and collaboration
    • Use of data analytics for operations and customer insights
    • Staff training for service expansion and digital adoption
    • Scalable services to meet variable demand
    • Integration of sustainable practices in sourcing and packaging
    • Enhanced customer-centric processes

Appendix A.5. Summary

These extended examples demonstrate how student groups applied GenAI-supported reasoning to conceptualise and experiment with new business-model configurations. The proposals reflect diverse yet coherent applications of servitisation principles, supported by GenAI-assisted brainstorming, framework generation, and scenario testing. This appendix provides additional details to complement the condensed discussion of student outputs in Section 4.2.

Figure 1. Integration of Experiential Learning into Live Case Studies. The arrows describe flows between the stages of the learning cycle (authors’ elaboration).
Figure 2. Integrating GenAI into Live Case Studies through Experiential Learning. Arrows describe the research-process flow, including a deductive, an inductive, and a final integration path (authors’ elaboration).
Figure 3. Kolb’s experiential learning cycle applied to the live case study with GenAI as a cognitive partner. The arrows of the two concentric cycles describe flows between the stages of the experiential cycle (authors’ elaboration).
Table 1. Conceptual Framework: Integrating Live Case Studies, Experiential Learning, Agency, and GenAI (authors’ elaboration).
Dimension | Key Characteristics | Contribution to Learning | Role of GenAI
Live Case Studies | Real-time, evolving, authentic problems; interaction with external stakeholders; situated in real-world contexts | Fosters authentic assessment, decision-making under uncertainty, teamwork, and applied knowledge | Provides background information, suggests resources, generates scenario variations, and scaffolds problem framing
Experiential Learning | Four stages: CE, RO, AC, and AE | Develops higher-order thinking, critical reflection, transferable skills, and contextualised knowledge | Scaffolds each stage: CE (contextual data), RO (probing questions), AC (alternative frameworks), AE (safe testing environment)
Learner Agency | Capacity to decide, critically question, co-construct, and transform one’s own learning, shaped by sociocultural and intersectional factors | Enhances autonomy, empowerment, creativity, and critical engagement in real-world contexts | Acts as a catalyst: GenAI expands possibilities, while students exercise agency by adapting, critiquing, filtering, or reorienting GenAI outputs
Table 2. Module Survey Results (authors’ elaboration).
Views on the Module | Respondents (n) | Mean | Mode | Std Dev
The module is interesting | 46 | 4.3 | 4 | 0.7
I am clear what the learning outcomes are for the module | 46 | 4.2 | 4 | 0.6
I feel I can achieve the module learning outcomes | 46 | 4.1 | 4 | 0.6
The mode of delivery works well for the module content | 46 | 3.9 | 4 | 0.7
I have received sufficient academic support when I asked for it | 43 | 3.9 | 4 | 0.6
I know how to get academic support if I need it | 46 | 4.2 | 4 | 0.6
I have engaged well with this module | 46 | 4.1 | 4 | 0.7
Responses were provided by 46 of the 86 students. Scale: 5 = Definitely agree, 1 = Definitely disagree.
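The per-item summaries in Table 2 (mean, modal rating, and standard deviation of 5-point Likert responses) can be reproduced with Python's standard library. The response vector below is purely hypothetical, chosen only to illustrate the calculation; the study's raw survey data are not published here.

```python
import statistics

# Hypothetical 5-point Likert responses for one survey item
# (illustrative only; not the study's actual data)
responses = [4, 4, 5, 3, 4]

mean = statistics.mean(responses)   # arithmetic mean of the ratings
mode = statistics.mode(responses)   # most frequent rating
sd = statistics.pstdev(responses)   # population standard deviation

print(round(mean, 1), mode, round(sd, 1))  # → 4.0 4 0.6
```

Note that `statistics.pstdev` treats the respondents as the whole population; if the table instead reported the sample standard deviation, `statistics.stdev` would be the appropriate call.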
Table 3. Live Case Study Survey Results (authors’ elaboration).
Questions | Mode | n-Mode | Cumulative F (3–5)
How INTERESTING was carrying out your Live Case Study learning activities? | 3 | 7 | 14 (88%)
How MOTIVATING was carrying out your Live Case Study learning activities? | 3 | 6 | 13 (81%)
How RELEVANT was the Live Case Study to develop skills for your studies and professional practice? | 3, 4 | 6 | 15 (94%)
How helpful were the GenAI tools in supporting the Live Case Study activities? | 3 | 5 | 11 (69%)
Improvement in the ability to critically identify key decision-making areas in operations management and apply them to different contexts ¹ | 4 | 8 | 14 (88%)
Improvement in the ability to critically examine the contribution of operations management to organisational performance ¹ | 4 | 7 | 12 (75%)
Improvement in the ability to critically discuss the strategic importance of operations management ¹ | 4 | 7 | 12 (75%)
Responses were provided by 16 of the 86 students. Scale: 5 = Extremely/Great, 4 = Very/Considerably, 3 = Reasonably/Moderately, 2 = Slightly, 1 = Not at all. ¹ Module’s intended learning outcomes.
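Table 3 reports, for each question, the modal rating, the number of respondents at the mode (n-Mode), and the cumulative frequency of ratings 3–5. A minimal sketch of these calculations in Python follows; the raw ratings vector is hypothetical, constructed only to be consistent with the first row of the table (n = 16), since the individual responses are not published.

```python
from collections import Counter

# Hypothetical raw ratings for one question (n = 16), constructed to
# match the reported first row of Table 3; not the study's actual data
ratings = [3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 5, 5, 5, 5, 2, 1]

counts = Counter(ratings)
mode_rating = max(counts, key=counts.get)            # modal rating
n_mode = counts[mode_rating]                         # respondents at the mode
cum_f = sum(v for r, v in counts.items() if r >= 3)  # ratings 3 to 5
pct = round(100 * cum_f / len(ratings))

print(mode_rating, n_mode, f"{cum_f} ({pct}%)")  # → 3 7 14 (88%)
```

The same three summary values can be read directly against each row of the table; for bimodal items (e.g. the RELEVANT question, with modes 3 and 4), `Counter.most_common` would surface both top ratings.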
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Salinas-Navarro, D.E.; Vilalta-Perdomo, E.; Palma-Mendoza, J.A.; Carlos-Arroyo, M. Integrating Generative AI into Live Case Studies for Experiential Learning in Operations Management. Educ. Sci. 2026, 16, 15. https://doi.org/10.3390/educsci16010015


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.

