Article

Curriculum to Immersion: A Conceptual Framework of Artificial Intelligence-Assisted Scenario Generation in Extended Reality for Primary and Secondary Education

by Tudor-Mihai Ursachi 1,* and Maria-Iuliana Dascalu 2,*
1 Faculty of Automatic Control and Computers, National University of Science and Technology Politehnica Bucharest, 060042 Bucharest, Romania
2 Faculty of Engineering in Foreign Languages, National University of Science and Technology Politehnica Bucharest, 060042 Bucharest, Romania
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(24), 4955; https://doi.org/10.3390/electronics14244955
Submission received: 30 September 2025 / Revised: 24 November 2025 / Accepted: 15 December 2025 / Published: 17 December 2025

Abstract

In this paper, we present a conceptual design framework for developing immersive learning experiences at scale with generative AI and extended reality (XR) for primary and secondary education. Based on a synthesis of the current literature, our framework proposes a practical five-step pipeline: curriculum ingestion, AI-powered blueprinting, asset assembly, educator review, and classroom deployment with formative assessment. The model is designed to be flexible, focusing on narrative and gamification for primary students and progressing to sophisticated simulations and analytical activities for secondary students. We situate this framework within recent developments in generative 3D models, bridging fundamental technical and ethical gaps between concept and classroom practice. Finally, we summarize a prioritized research agenda around evaluation, access, and teacher workflows to enable near-term pilot studies. This work is intended to inform educators, researchers, and stakeholders interested in implementing effective AI-XR solutions in schools in a pedagogically sound way.

Graphical Abstract

1. Introduction

Primary and secondary (K-12) education sits at a technological nexus created by the convergence of two powerful forces: the maturing of Artificial Intelligence (AI) and the emergence of Extended Reality (XR). Together, these technologies promise a paradigm shift in pedagogy by creating learning environments that are deeply personalized as well as immersive, interactive, and engaging. However, a substantial gap separates this technological promise from widespread application. The central challenge is the content-creation bottleneck: producing high-quality, curriculum-aligned XR experiences is a complex, expensive, and time-consuming process that is out of reach for most educators and institutions.
This paper addresses this gap by presenting a conceptual framework for systematically applying generative AI to translate formal educational curricula into dynamic, immersive XR scenarios. We argue that the answer lies in the synergy between these two technologies. AI has evolved considerably, from intelligent tutoring systems that have long personalized student feedback to recent, powerful generative models that can create new educational content on the fly. While the potential of this evolution is immense, it also raises serious pedagogical and ethical questions about student autonomy, data privacy, and algorithmic bias, which can only be addressed through a human-centric focus.
At the same time, Extended Reality provides unprecedented opportunities for situated and experiential learning, and there is an emerging evidence base showing that it is effective in enhancing learning outcomes. For example, meta-analyses have demonstrated that immersive virtual environments can have a substantially positive effect on student achievement when compared to traditional classroom approaches [1,2]. Yet, its adoption is critically hampered by the enormous challenge and expense of creating high-quality digital content.
The framework also embeds a differentiated pedagogical focus, with narrative and gamification at the forefront for primary levels and complex simulations at the forefront for secondary levels. This conceptual model is not put forward as a final product, but as a strategic framework for future research and development. Its importance lies in providing a realistic and scalable structure that can enable educators to move from being consumers of content to creators of personalized immersive experiences. Ultimately, the divide between technological promise and classroom reality must be closed to accelerate the responsible integration of these transformative tools, enriching learning and preparing students for a complex future.
The research design of this study is non-experimental and conceptual, which is appropriate when the goal is to synthesize existing knowledge and propose new theoretical models in a rapidly developing technological field [3,4]. Such a framework is a strategic necessity in the emerging domain of generative AI and XR in education, where a guiding model is a precondition for subsequent empirical research. The main objective is to construct a theoretical object that can be experimentally tested.
Throughout the construction of the framework, we considered three research questions, which we sought to answer during its validation.
  • RQ1: How can generative Artificial Intelligence be systematically applied to transform formal educational curricula into immersive Extended Reality (XR) learning experiences?
  • RQ2: What are the key components and processes required to ensure that such AI-driven scenario generation remains pedagogically sound, ethically responsible, and adaptable across K-12 educational contexts?
  • RQ3: To what extent can the proposed conceptual framework address the primary content-creation bottleneck that limits large-scale XR adoption in schools?
The remainder of this paper is organized as follows. Section 2 reviews the related literature and methodological foundations that support the development of the proposed conceptual framework. Section 3 presents the structure and operational components of the Artificial Intelligence-assisted Scenario Generation Framework, detailing its four interconnected layers: Curriculum, AI, Extended Reality, and Adaptive Learning. Section 4 describes the validation process conducted through an expert-based Delphi review and summarizes the key modifications derived from expert feedback. Section 5 discusses the pedagogical and technological implications of the framework, its comparative advantages over traditional approaches, and its ethical and practical limitations. Finally, Section 6 concludes the paper by summarizing its main contributions, outlining implications for K-12 education, and proposing directions for future research.

2. Related Work

To develop the conceptual framework of the study, a scoping review was chosen as the most suitable methodology. Since the research field is broad and constantly evolving, a scoping review is appropriate for defining essential concepts, clarifying meanings, and highlighting gaps in the literature [5]. This choice contrasts with a systematic review, which requires narrowly defined questions, and a narrative review, which usually lacks a replicable methodology [6].
The review was guided by Arksey and O'Malley's five-stage framework and the PRISMA-ScR checklist [7]. It sought to answer the following questions: (1) What are the existing applications of AI in educational content creation? (2) What is XR used for in scenario-based learning? (3) What models can be used to relate pedagogy to technology-enhanced learning environments? Initial literature discovery was performed using AI-powered research assistants such as Research Rabbit and Elicit, but all screening, data extraction, and synthesis were performed by human researchers to maintain academic integrity [8,9].
A thorough search of databases including the ACM Digital Library, IEEE Xplore, Scopus, and Web of Science identified 1582 records. A small number of practitioner blog posts were also considered. After duplicates were removed, 1127 unique records remained, which were screened by title and abstract, narrowing the pool to 152 articles for full-text review. A thorough evaluation against the eligibility criteria reduced the final set to 25 studies for thematic synthesis, as presented in Table 1.
Thematic analysis of the charted data revealed a major methodological gap: the absence of integrated frameworks that systematically inform the translation of formal curriculum standards into AI-generated immersive scenarios. This is the gap that the conceptual framework presented in this paper addresses.
The data sources used in our study allowed us to anchor the proposed framework in the realities of curriculum and instruction. There are two major data categories that we used:
  • Official Curriculum Documents: Written statements issued by education authorities that state learning objectives and competencies [10].
  • Pedagogical Case Studies: Case studies in teaching that are published and peer reviewed [11].
A stringent selection process was applied to ensure the quality of these inputs, as shown in Table 2.

3. The Conceptual Framework of Artificial Intelligence-Assisted Scenario Generation

The main concept of the framework is the translational process of mapping abstract curriculum standards onto the practical elements of an immersive scenario. This procedure is organized according to existing instructional design (ID) models, which serve as a scaffold for developing successful learning experiences [12]. Such formalization transforms the declarative "what" of a standard into the procedural "doing" of a scenario.
A combination of two complementary frameworks is used as a hybrid ID model:
  • Merrill's Principles of Instruction (MPI): Chosen as the general framework due to its emphasis on problem-centered learning, which aligns naturally with scenario-based education [13]. The mapping process guarantees the incorporation of Merrill's five core principles into each scenario: Problem-Centered, Activation, Demonstration, Application, and Integration [14].
  • Bloom's Revised Taxonomy: A precision tool used to ensure that the cognitive demand of the standard (e.g., analyze, evaluate) matches the interactive activities in the scenario. For example, an "analyze" objective is mapped to tasks where students compare conflicting evidence in the simulation, rather than recite facts.

3.1. Framework Components

The rapid emergence of generative artificial intelligence (AI) and extended reality (XR) shows significant potential for education [15]. Nonetheless, these technologies are frequently deployed without alignment to the core principles of curriculum design and pedagogy [16]. To address this disconnect, the current study proposes a holistic four-layer conceptual framework designed to bridge the structural gap identified [17]. The framework's architecture is systematic and sequential, starting with pedagogical intent (Curriculum), proceeding through intelligent content creation (AI) and immersive delivery (XR), and ending with dynamic learner interaction and feedback [18]. This process-based structure ensures that technologically rich learning tools remain anchored to established learning goals [19].
The architecture responds directly to the need for more systematic approaches in educational technology research that can adapt to fast-paced technological change without losing theoretical rigor [19]. It provides a structured path from problem identification to a model that can be implemented in practice, simplifying the development process while preserving pedagogical congruence. The novelty of the framework lies in its synthesis of different research paradigms into an interdisciplinary pipeline:
  • Layer 1 (Curriculum) is based on curriculum theory and known instructional-design models.
  • Layer 2 (AI) integrates computer science concepts and the most recent developments in multimodal generative models.
  • Layer 3 (XR) incorporates human–computer interaction (HCI) and cognitive psychology.
  • Layer 4 (Learner Interaction) is based on learning analytics and intelligent-tutoring systems.
The framework represented in Figure 1 explicitly connects these domains, holding that the pedagogical objectives established in the first layer directly constrain AI generation procedures in the second; that the AI-generated content in the second layer feeds the immersive environment in the third; and that the interactions the learner has with the third layer supply the data that drives the analytics and adaptive processes in the fourth, in turn creating a feedback loop that can dynamically adjust the experience.

3.1.1. Curriculum and Learning Objectives Layer

The base layer of the framework provides the pedagogical and ethical standards within which the whole scenario generation is conducted. It is mainly dedicated to the task of converting high-level educational policy into a machine-readable format.
Input: The resources fed into this layer include formal educational standards such as the ISTE Standards for technology integration, the Next Generation Science Standards (NGSS) for science education, and the Common Core State Standards (CCSS) for English Language Arts and Mathematics [20]. The purpose of the layer is to ensure that every scenario is verifiably aligned with relevant learning outcomes. Such grounding is essential because the curriculum must capture the most essential knowledge and skills that students should learn [21].
Process: Semantic Analysis and Objective Structuring
This layer uses the methods of Natural Language Processing (NLP) to perform a thorough analysis of standards documents, breaking them down into machine-readable parts: target concepts, required skills (marked by action verbs), context, and performance criteria [22].
  • Target Concepts: The main areas of knowledge or concepts (e.g., biodiversity, carrying capacity, historical point of view).
  • Required Skills: The mental activities students are expected to carry out, which are usually determined by looking at the action verbs (e.g., analyze, compare, use mathematical representations).
  • Context: The concrete circumstances or environments where the skills are to be used (e.g., ecosystems of various sizes, primary and secondary sources).
  • Performance Criteria: The criteria according to which mastery is evaluated.
One of the most important roles of this layer is to act as an ethical gatekeeper. Generative AI models can learn and amplify biases present in society and, as a result, generate unfair content [23]. To avert this risk, the Curriculum Layer conducts a critical interpretive review, interrogating language and implied contexts to ensure they are inclusive and representative, before directing objectives to the AI. This ethical audit is the main channel through which inequity in the creation of learning scenarios can be addressed.
The resulting output is an ordered, machine-readable list of learning objectives (e.g., in JSON format). These specifications define the variables that both inform and constrain the AI Content Generation Layer, thus providing consistency and pedagogical fidelity.
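To make this output concrete, the sketch below models one possible shape for such a machine-readable objective as C# serializable types, with an equivalent JSON payload in the comments. The field names are illustrative assumptions based on the four components described above, not a prescribed schema.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical data model for one machine-readable learning objective
// produced by the Curriculum Layer. An equivalent JSON payload might be:
// {
//   "standardId": "HS-LS2-2",
//   "targetConcepts": ["biodiversity", "carrying capacity"],
//   "requiredSkills": ["analyze", "use mathematical representations"],
//   "context": "ecosystems of various sizes",
//   "performanceCriteria": ["supports claims with quantitative evidence"]
// }
[Serializable]
public class LearningObjective
{
    public string standardId;                 // identifier of the source standard
    public List<string> targetConcepts;       // core knowledge areas
    public List<string> requiredSkills;       // cognitive actions (action verbs)
    public string context;                    // setting in which skills are applied
    public List<string> performanceCriteria;  // how mastery is evaluated
}
```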

3.1.2. Artificial Intelligence-Driven Content Generation Layer

This layer is the creative engine of the framework, converting the structured pedagogical instructions provided by the Curriculum Layer into multimodal stories and interactive elements.
Input: The input is the machine-readable learning objectives that give the AI an unambiguous set of constraints and objectives necessary to generate relevant educational content.
Processes: There are multiple stages involved in the process of content generation, and they are interrelated:
  • Semantic Mapping: The AI examines the structured objective and maps its concepts and skills onto an internal knowledge graph built from academic and curated materials. This mapping identifies the relevant principles, events, or problems that form the thematic basis of the scenario [24]. For example, an objective involving ecosystem resilience can be mapped to the concepts of keystone species, trophic cascades, and the effects of invasive species.
  • Multimodal Generative AI Modeling: The essence of this layer is that a set of specialized generative models is deployed to generate a wide range of assets that result in a highly sensory learning experience.
  • Text Generation: The textual backbone of the scenario is generated using large language models such as GPT-4, Llama-3, or Claude-3. This includes the main storyline, character backgrounds, interactive conversations, instructional text, and problem sets.
  • Image Generation: Models such as DALL-E 3 or Midjourney are prompted to generate 2D visual components, such as environment concept art, user-interface elements, character portraits, or textures for 3D models.
  • Audio Generation: AI audio-synthesis models produce character voice-overs from the written text, ambient sound effects, and background music that matches the tone of the scenario.
  • 3D Model Generation: Initial geometric meshes of objects, props, and other environmental features can be created with emerging text-to-3D models, then refined and optimized for import into a real-time engine [25,26].
  • Adaptive Personalization: The system applies personalization algorithms that use the learner profile to adjust the complexity and language of the generated content. From a single learning objective, multiple versions of a scenario can be created: one with simplified language and more explicit instructions for novice learners, and another with more complex vocabulary and open-ended challenges for advanced learners. This ensures that resources sit at the right level of difficulty and accessibility for a wide range of students [27].
Output: The deliverable is a set of structured scenario templates—detailed digital blueprints of the XR experience—which include the entire narrative script, dialog trees, descriptions of assets, interactive event triggers, and logic behind adaptive feedback loops.
The AI layer is model-agnostic and informed by the current state-of-the-art multimodal generative tools, including:
  • Large Language Models (LLMs): (e.g., GPT-4, Claude 3) to create text such as dialog, scripts, and in-world documents.
  • Text-to-Image/3D Models: (e.g., Midjourney, DALL-E 3) to produce visual assets.
  • Text-to-Speech/Audio Models: To make voiceovers and ambient soundscapes.
The core technique of this layer is systematic prompt engineering, treated as a rigorous design practice that encodes pedagogical intent into a format an AI can execute [15]. This is what keeps AI-generated content pedagogically sound within the framework. The prompt structure is divided into major parts, as shown in Table 3.
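As a hedged illustration only (Table 3 defines the actual prompt parts, which are not reproduced here), the sketch below assembles a prompt from the LearningObjective type sketched in Section 3.1.1; the section headings in the string are assumptions for the sketch.

```csharp
using System.Text;

// Illustrative prompt assembly from a LearningObjective (defined earlier).
// The section headings below are assumptions, not the actual template
// from Table 3.
public static class PromptBuilder
{
    public static string Build(LearningObjective obj, string gradeBand)
    {
        var sb = new StringBuilder();
        sb.AppendLine("ROLE: You are an instructional designer creating an XR learning scenario.");
        sb.AppendLine($"TARGET CONCEPTS: {string.Join(", ", obj.targetConcepts)}");
        sb.AppendLine($"REQUIRED SKILLS: {string.Join(", ", obj.requiredSkills)}");
        sb.AppendLine($"CONTEXT: {obj.context}");
        sb.AppendLine($"AUDIENCE: {gradeBand} students; adjust vocabulary and scaffolding accordingly.");
        sb.AppendLine("CONSTRAINTS: content must be inclusive, age-appropriate, and factually grounded.");
        sb.AppendLine("OUTPUT FORMAT: a single JSON blueprint with storyline, dialogs, datasets, and 3D asset tags.");
        return sb.ToString();
    }
}
```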

3.1.3. Extended Reality Integration Layer

The Extended Reality (XR) Integration Layer serves as the core construction node where abstract scenario templates are translated into concrete, interactive, and immersive learning environments with a heavy focus on cognitive science and human–computer interaction.
Input: The main input to this layer is a collection of structured scenario templates produced by the AI Content Generation Layer.
Processes: The integration process combines automated processes with required human control:
  • Integration into XR Environments: Scenario templates are integrated into a real-time 3D game engine such as Unreal Engine or Unity [28,29]. Part of this step is automated with custom scripts that read the template data (e.g., a JSON file) and construct the virtual scene. The scripts automatically insert 3D models, apply generated textures, fill dialog systems with scripted conversations, and set up the underlying logic that drives interactive components.
  • Integration of the Interactive Components: Developers program the interaction system: physics-based object manipulation, intuitive user-interface (UI) controls for virtual tools and menus, and scripted non-player character (NPC) behaviors.
  • Alignment with Cognitive and Usability Principles: This design stage is critical for ensuring that the XR experience suits the target learners, especially children. Cognitive Load Theory (CLT) guides the design to keep the learning process effective and manageable [30]:
  • Intrinsic Cognitive Load Management: Complex tasks outlined in the scenario are broken down into smaller consecutive steps. Environmental scaffolding, such as indicating the next action or providing step-by-step instructions, mitigates the inherent complexity of the learning content.
  • Reduction in Extraneous Cognitive Load: The user interface and environment are designed for simplicity and clarity. Screen clutter is deliberately avoided, on-screen text is not duplicated when accompanied by narration, and interaction friction is reduced by preferring onPress events to onRelease events, in line with users' natural interaction patterns.
  • Maximizing Germane Cognitive Load: The design deliberately promotes deep processing. Students' attention is directed toward the core learning activity, and supporting features encourage reflection, self-explanation, and the application of knowledge in the immersive environment.
Output: The resulting learning experience is comprehensive, immersive, and interactive, and packaged to be available on school-provided XR devices, including VR headsets and AR-enabled tablets.

3.1.4. Adaptive Learning and Personalization Mechanisms Layer

This last layer completes the circle, making the XR scenario not a one-dimensional experience, but a dynamic, responsive, and personalized learning environment.
Input: The input comprises rich real-time data streams of learner interactions, including interaction data (object manipulation, paths taken), performance data (correctness of answers), behavioral data (gaze tracking), and physiological data (heart rate or electrodermal activity as proxies for cognitive load).
Processes: This layer works based on two fundamental AI-based processes:
  • AI-based Learning Analytics: A state-of-the-art analytics engine works with real-time data streams. It builds a dynamic profile of the current state of the learner, models the change in knowledge, identifies certain misconceptions, and tracks the level of engagement by leveraging machine learning methods [31].
  • Real-Time Adaptive Scaffolding: Based on the analytics, the system provides real-time adaptive scaffolding, a pillar of successful intelligent tutoring systems. This support is dynamic and depends on the learner's competencies: it may take the form of hints and prompts, difficulty modulation, or alternative narrative options, taking a struggling student through a remedial loop or a high-achieving student through an extension activity [32] (a minimal decision sketch follows this list).
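The sketch below illustrates such a decision rule; the thresholds, inputs, and intervention names are assumptions for illustration rather than the framework's specification.

```csharp
// Illustrative adaptive-scaffolding rule: choose an intervention from the
// learner model. Thresholds and categories are assumptions for this sketch.
public enum Scaffold { None, Hint, SimplifyTask, RemedialLoop, ExtensionActivity }

public static class ScaffoldingPolicy
{
    public static Scaffold Select(float masteryEstimate, int consecutiveErrors, bool lowEngagement)
    {
        if (masteryEstimate > 0.85f) return Scaffold.ExtensionActivity; // challenge high achievers
        if (consecutiveErrors >= 3)  return Scaffold.RemedialLoop;      // persistent difficulty
        if (lowEngagement)           return Scaffold.Hint;              // re-engage with a prompt
        if (masteryEstimate < 0.40f) return Scaffold.SimplifyTask;      // reduce intrinsic load
        return Scaffold.None;
    }
}
```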
Output: The main deliverable is a dynamically customized, learner-centered immersive experience. An essential secondary output is an educator feedback mechanism: a teacher-facing dashboard with actionable information about overall class progress and the most prevalent misconceptions, supporting timely, human-centered intervention.

3.2. Framework Use Case

To illustrate the practical implementation of the conceptual framework, we further present a use case in STEM: Ecosystem Dynamics and Biodiversity. The workflow in Figure 2 illustrates how the framework systematically translates the Next Generation Science Standard HS-LS2-2 (NGSS) [33] (Layer 1) into a multimodal blueprint (Layer 2), generates a dynamic VR simulation (Layer 3), and uses real-time data to provide adaptive scaffolding (Layer 4), completing the instructional loop.
The four-layer simulation pipeline, in which each layer contributes a distinct function from curriculum encoding to adaptive feedback, forms an end-to-end learning framework. Table 4 summarizes each layer's role, main technical functionality, inputs and outputs, and the key technologies used to implement it.
Curriculum Layer Input: This scenario is based on the NGSS, which states that students should be able to use mathematical representations to support and revise evidence-based explanations about factors that influence biodiversity and populations in ecosystems of various sizes. This layer extracts the key concepts (biodiversity, carrying capacity, ecosystem resilience) and skills (data analysis, evidence-based explanation) of the topic, implementing the Standard by encoding its main ideas and abilities in a machine-readable form. Python 3.14 was used to create a structured JSON file ("layer1_ngss.json") that breaks the NGSS statement down into individual conceptual, procedural, and performance units.
The script encodes the main ecological concepts, including biodiversity, carrying capacity, ecosystem resilience, and trophic cascades, alongside scientific practices such as data analysis, evidence-based reasoning, and modeling. By capturing this semantic representation programmatically, the curriculum layer provides a transferable, reproducible data representation that feeds directly into the AI generation process, as shown in Figure 3.
AI Layer Output: The AI creates a scenario template for a virtual forest ecosystem simulation. It generates a storyline placing the learner in the role of a field biologist studying a declining gray wolf population, dialog with an AI research assistant, simulated longitudinal data on species populations and environmental factors (such as annual rainfall and human development encroachment) over 20 years, and a library of 3D models of the related flora and fauna. In this second layer, the AI model decodes the Layer 1 JSON and generates a multimodal blueprint as a JSON file ("layer2_ai.json"). This blueprint defines the narrative, the 3D asset library, and the simulation parameters in the form of population data and environmental variables. Each scenario entity, such as the wolf and deer, is linked to a Unity-compatible asset path. Using the following prompt in Claude, we interpreted the structured data generated in Layer 1: "Using this JSON curriculum structure, generate a virtual forest ecosystem scenario where the learner investigates declining wolf populations. Include: storyline, dataset, dialogs, environment description, and 3D asset tags."
The resulting JSON serves as the point of connection between conceptual information and visualization. Its schema contains asset libraries, variants, and datasets, which Unity can read without manual configuration. This demonstrates a semantic hand-off between the AI and XR layers, in which concepts defined at the curriculum level are automatically translated into manipulable 3D objects in the learning environment, as shown in Figure 4.
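For illustration, a minimal C# data model for such a blueprint might look like the following sketch. The exact schema of "layer2_ai.json" is defined by the authors' files, so the field names here are assumptions chosen to match the description above (asset libraries, variants, and datasets).

```csharp
using System;
using System.Collections.Generic;

// Hypothetical model for the Layer 2 blueprint ("layer2_ai.json"). Field
// names are illustrative; Unity's JsonUtility can deserialize this shape.
[Serializable]
public class ScenarioBlueprint
{
    public string storyline;                 // narrative framing for the learner
    public List<AssetEntry> assetLibrary;    // 3D assets and their Unity paths
    public List<PopulationSeries> datasets;  // simulated longitudinal data
}

[Serializable]
public class AssetEntry
{
    public string tag;              // e.g., "wolf", "deer", "aspen"
    public string prefabPath;       // e.g., "Prefabs/Wolf" under Resources/
    public List<string> variants;   // optional visual variants for heterogeneity
}

[Serializable]
public class PopulationSeries
{
    public string species;          // e.g., "gray wolf"
    public float[] countsByYear;    // 20-year population counts
}

// Usage: var bp = JsonUtility.FromJson<ScenarioBlueprint>(jsonText);
```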
XR Layer Implementation: The template is integrated into a high-fidelity VR experience. The student can navigate the forest and access data and a simulation tool through a virtual tablet, manipulating variables (e.g., reintroducing wolves) and running the simulation to see the predicted impact on the ecosystem.
Layer 3 translates the AI blueprint into an interactive 3D ecosystem. Immersive VR interaction was enabled in the Unity project (Unity 6.2 + XR Interaction Toolkit v3.2.2) with OpenXR support. The prefab assets were arranged in a hierarchical folder under Assets/Resources/Prefabs so that every model could be instantiated on demand through the Resources.Load() method.
The Unity environment was created by starting a new HDRP 3D project, after which the XR Plugin Management and the XR Interaction Toolkit were installed through the Package Manager to add extended-reality capability. OpenXR support was then activated for compatibility across contemporary head-mounted displays. The default Main Camera was deleted and replaced by the XR Origin (XR Rig) prefab, which provides locomotion and controller-based interaction throughout the virtual environment. To give the ecosystem simulation a naturalistic spatial setting, a Terrain object and a Sun light source were added to represent ground topography and ambient lighting, respectively, as shown in Figure 5.
A custom C# script (DataManager.cs) reads the AI blueprint and instantiates the referenced prefabs around the learner's initial position. Objects are randomly placed within an adjustable view radius (e.g., 8 m) of the main camera and oriented to face the user, giving the impression of an open observation clearing in the woods. This design ensures that students encounter the relevant organisms as soon as they enter the simulation, as shown in Figure 6.
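Since DataManager.cs is described but not reproduced in the paper, the following is a minimal reconstruction of the described behavior under stated assumptions: it parses the blueprint (using the ScenarioBlueprint sketch above), resolves prefabs via Resources.Load(), and scatters instances within the view radius, facing the user.

```csharp
using UnityEngine;

// Minimal reconstruction of the described DataManager behavior: load the
// blueprint, resolve each prefab via Resources.Load, and scatter instances
// around the learner, rotated to face the camera.
public class DataManager : MonoBehaviour
{
    public TextAsset blueprintJson;   // layer2_ai.json assigned in the Inspector
    public float spawnRadius = 8f;    // adjustable placement radius (meters)

    void Start()
    {
        var blueprint = JsonUtility.FromJson<ScenarioBlueprint>(blueprintJson.text);
        var cam = Camera.main.transform;

        foreach (var entry in blueprint.assetLibrary)
        {
            var prefab = Resources.Load<GameObject>(entry.prefabPath);
            if (prefab == null)
            {
                Debug.LogWarning($"Missing prefab: {entry.prefabPath}"); // non-blocking
                continue;
            }
            // Random point on a circle around the camera, kept at ground level.
            Vector2 offset = Random.insideUnitCircle.normalized * spawnRadius;
            Vector3 pos = cam.position + new Vector3(offset.x, 0f, offset.y);
            var go = Instantiate(prefab, pos, Quaternion.identity);
            go.transform.LookAt(new Vector3(cam.position.x, pos.y, cam.position.z));
        }
    }
}
```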
Coordinate sampling and optional terrain detection are used to give the scene ecological authenticity: prefabs blend naturally with the ground surface and the environment's lighting. Where asset diversity is needed, the script supports array variants in the JSON, allowing randomization among several visual variants for environmental heterogeneity (e.g., different wolf or tree textures).
The project uses an XR Origin (XR Rig) configured with two ray interactors for teleportation and UI manipulation. Students move through the simulation and operate the data instruments via a Tablet UI implemented as a world-space canvas. The tablet holds two sliders representing ecosystem variables, such as the wolf and deer populations. The sliders' events are attached to the UpdateSimulation() method of the SimulationManager, providing immediate feedback between the learner's input and the simulation state, as shown in Figure 7 and Figure 8.
A TextMeshPro component on the tablet serves as the adaptive feedback display. The AdaptiveFeedback script interprets the current slider values and generates contextual prompts (e.g., noting that a stable deer population alongside declining aspen groves indicates overgrazing). This is the adaptive scaffolding feature that guides learners toward higher-level ecological interrelations, such as trophic cascades.
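A hedged reconstruction of this wiring is sketched below. The names SimulationManager, UpdateSimulation(), and the TextMeshPro display come from the description above, while the feedback rule and thresholds are illustrative assumptions; the separate AdaptiveFeedback script is collapsed into one class for brevity.

```csharp
using UnityEngine;
using UnityEngine.UI;
using TMPro;

// Sketch of the described slider-to-feedback loop. UpdateSimulation is wired
// to each slider's OnValueChanged event in the Inspector.
public class SimulationManager : MonoBehaviour
{
    public Slider wolfSlider;
    public Slider deerSlider;
    public TMP_Text feedbackDisplay;   // TextMeshPro component on the tablet

    public void UpdateSimulation()
    {
        float wolves = wolfSlider.value;
        float deer = deerSlider.value;

        // Illustrative feedback rule: low predation pressure with a stable
        // deer herd suggests overgrazing of aspen groves (a trophic cascade).
        if (wolves < 0.2f && deer > 0.6f)
            feedbackDisplay.text =
                "The deer population is stable, yet the aspen groves are not " +
                "recovering. What might deer grazing be doing to the rest of the ecosystem?";
        else
            feedbackDisplay.text = "Adjust the populations and observe the ecosystem response.";
    }
}
```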
For added realism, the prefabs were textured with hand-created or imported materials stored in Resources/Materials. Texture assignment was performed either through the Unity material system or programmatically via name-based matching scripts. For FBX imports lacking textures, the geometry was cleaned in Blender to eliminate non-manifold polygons, normals were recalculated, and the models were re-exported for use with Unity's rendering pipeline (see Figure 9).
Adaptive Layer in Action: The system monitors the student's hypotheses. If a student focuses solely on the predator–prey relationship and fails to stabilize the ecosystem, the AI assistant gives a scaffolded prompt: "Your data indicates that, despite the stable deer population, the aspen groves are not recovering. Have you wondered what the grazing habits of the deer could be doing to the rest of the ecosystem?" This leads the student toward the more complex concept of a trophic cascade.
The fourth layer completes the instructional loop by capturing learner interactions and adaptive states. The AdaptiveDataManager script copies the AI blueprint JSON at runtime, and changes to the variables (e.g., population levels, rainfall) are saved in a mutable file (adaptive_state.json) under Unity's persistent data path. This allows continuous state tracking without modifying the fixed AI blueprint, preserving reproducibility. Every slider change automatically updates this runtime JSON, which can later be reloaded to restore the simulation to a previous state. The mechanism provides an empirical trace of learner decisions and supports analysis of adaptive feedback efficacy over time, as shown in Figure 10 and Figure 11.
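A minimal sketch of this persistence mechanism follows, assuming a simple serializable state object; the actual adaptive_state.json schema is not reproduced in the paper.

```csharp
using System;
using System.IO;
using UnityEngine;

// Sketch of runtime-state persistence: slider changes are written to a
// mutable adaptive_state.json under Application.persistentDataPath, leaving
// the original AI blueprint untouched.
[Serializable]
public class AdaptiveState
{
    public float wolfPopulation;
    public float deerPopulation;
    public float annualRainfall;
}

public class AdaptiveDataManager : MonoBehaviour
{
    private string statePath;
    private AdaptiveState state = new AdaptiveState();

    void Awake()
    {
        // Loading in Awake() ensures state exists before slider callbacks fire.
        statePath = Path.Combine(Application.persistentDataPath, "adaptive_state.json");
        if (File.Exists(statePath))
            state = JsonUtility.FromJson<AdaptiveState>(File.ReadAllText(statePath));
    }

    public void OnVariableChanged(float wolves, float deer, float rainfall)
    {
        state.wolfPopulation = wolves;
        state.deerPopulation = deer;
        state.annualRainfall = rainfall;
        File.WriteAllText(statePath, JsonUtility.ToJson(state, prettyPrint: true));
    }
}
```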
Several points in the pipeline are verified through error logs and checking mechanisms: missing prefabs or materials raise non-blocking warnings; initialization-order conflicts were resolved by loading the runtime JSONs in Awake(), before the slider callbacks fire; and geometry import warnings (e.g., self-intersecting polygons) were handled through mesh cleanup in Blender together with Unity's Optimize Mesh and Recalculate Normals options. Performance is optimized by constraining dynamic instantiation to the camera's vicinity and enabling GPU instancing of materials.
The developed system offers a working example of how NGSS-based learning objectives can be computationally translated into immersive XR experiences. By combining AI-generated blueprints with procedural Unity instantiation, the workflow removes manual design bottlenecks and enables reproducible, data-driven ecosystem simulations. Every learner interaction is recorded at the adaptive layer, making it possible to empirically test instructional scaffolds and cognitive-load factors. To ensure reproducibility, both the code (C#, Python) and the asset directory layouts are documented, and all runtime data output is human-readable JSON. This end-to-end transparency supports replication of the framework's pedagogical side as well as validation of its technical feasibility.

4. Validation

To assess the framework's conceptual validity, credibility, and potential usefulness, a preliminary expert review was carried out, following examples from the literature [34].
An adapted Delphi approach was used, a method well suited to eliciting expert opinion on complex issues and testing conceptual frameworks. Ten experts were recruited through purposive sampling, including educational-technology researchers, K-12 curriculum specialists, AI engineers, and XR developers [17]. The framework was reviewed in an asynchronous two-round format:
  • Round 1 (Open-Ended Exploration): The panelists were given a detailed document on the four-layer framework, the theoretical foundation of the framework, and the two case scenarios. They were requested to give open, qualitative feedback on the perceived strengths, weaknesses, pragmatic feasibility, innovation, and possible challenges of applying the framework to real-world educational contexts.
  • Round 2 (Consensus Building and Refinement): Thematic analysis was used to examine the qualitative data collected in Round 1, identifying consistent themes and areas of disagreement. The themes were consolidated into a set of 15 statements concerning the key attributes of the framework. In Round 2 of the Delphi process, the experts indicated their level of agreement with each statement on a 5-point Likert scale (1 = Strongly Disagree, 5 = Strongly Agree) and provided a brief justification. A consensus threshold of 80% (rating 4 or 5) was predefined. This iterative process enabled the refinement of ideas and made clear both areas of high agreement and areas needing further attention.
The review showed strong agreement on the framework's innovativeness and pedagogical strength, together with constructive criticism for its improvement. Table 5 summarizes the key findings.
The expert panel's response provides solid initial endorsement of the framework's conceptual integrity. The identified areas of improvement were incorporated into the framework's design, substantially strengthening the model for the subsequent stages of prototyping and empirical testing.

5. Discussion

5.1. How Artificial Intelligence Improves Scenario Diversity and Relevance

A major strength of the proposed framework is its ability to utilize generative AI to produce a wide range of educational content at scale. Traditional scenario design suffers from the time, budget, and creative-bandwidth limitations of human design teams, yielding a limited set of standardized experiences. By contrast, generative AI can create an immense number of assets and storylines aligned with teaching requirements.
The AI layer of the framework uses a set of multimodal generative services to achieve this diversity. Large Language Models (LLMs) such as GPT-4 and Claude-3 can be used to develop branching narratives, character conversations, and in-world texts, adjusting the tone, complexity, and vocabulary to different grade levels or reading skills [25,26].
Furthermore, text-to-image, audio, and 3D models can populate these scenarios with rich visual and auditory assets ranging from re-creating historical scenes to visualizing complex biological processes. Automated generation of immersive and contextualized multimodal content overcomes the static nature of traditional textbooks and e-learning modules and promises to enable more pedagogically engaging and culturally responsive educational materials. Because much of the content development is automated in the framework, educators and designers can focus on the higher-level effort of ensuring pedagogical soundness and alignment to learning goals [35].

5.2. Comparison to Traditional Approach to Scenarios

When compared to conventional instructional design, the AI-enabled framework offers a paradigm shift in efficiency, scalability, and personalization. Traditional curriculum development is a labor-intensive manual process that can be slow to adapt to new educational standards or learner needs. Millions of students are subjected to a "one size fits all" model that caters to the average learner and leaves slower-paced and differently thinking students behind [36].
The AI-assisted framework comes with several distinct advantages:
  • Speed and Scalability: Where traditional design can take months to produce a single learning module, AI can produce initial drafts of scenarios, assets, and assessments in minutes, enabling fast prototyping and iteration. Furthermore, AI-based platforms are inherently scalable and can provide tailored learning services to many students simultaneously without a proportional increase in human resources.
  • Personalization: Conventional education offers little personalization because of its rigid structure. The AI layer of the framework, by contrast, is personalized by design: it adapts educational content to each student's style and pace of learning, accommodating individual strengths, weaknesses, and interests. This is a shift away from a static curriculum toward a living learning ecosystem.
  • Cost-Efficiency: While there is an initial cost associated with implementing AI and XR, the long-term potential cost savings are substantial. By automating content and administrative processes such as grading, AI can lower the resources needed to create and deliver high-quality instruction, making cutting-edge educational tools more accessible.
However, traditional design retains a major advantage: it relies on human empathy and nuanced understanding. AI may not comprehend context or craft the emotionally resonant stories that a human instructional designer can produce. Thus, the proposed framework is not a substitute for human designers but a complement, a "thought partner" that takes on the routine aspects while humans direct the creative and strategic aspects that make learning meaningful [35].
Beyond comparing AI-XR with traditional teaching, it is also important to consider its value relative to simpler digital tools such as video-based lessons or LMS-embedded simulations. While these lower-cost innovations improve access and engagement, they rarely achieve the same depth of experiential learning or adaptive personalization. The proposed framework delivers a higher pedagogical return on investment (ROI) by automating content creation, promoting knowledge transfer through immersion, and enabling differentiated learning at scale—benefits that justify the higher initial investment in hardware and training.

5.3. Ability to Adapt in Real Time to Student Needs

Perhaps the most powerful potential of the framework is the fourth layer: the ability to adapt in real time. Traditional educational resources are fixed; after construction, there is no way for them to respond to a learner’s immediate difficulties or successes. Combining AI-based analytics with an XR environment results in a successful feedback loop that adapts the learning process moment to moment. XR environments are a rich source of data on learner behavior, which includes patterns of interaction, gaze tracking, and performance metrics.
The framework's analytics engine can take this data and build a moving picture of the learner's state: what a student knows, how they are learning, where they are stalling, and what misconceptions they may be developing [31,37]. This real-time knowledge enables a powerful form of adaptive scaffolding. Grounded in Vygotsky's theory of the Zone of Proximal Development (ZPD) [38], the system provides learners with scaffolded guidance to help them accomplish tasks just beyond their current level, with scaffolding gradually reduced as proficiency increases. This support can be provided in many ways within the immersive scenario:
  • Context-Sensitive Hints: If the system detects a struggling student, it can offer a subtle hint or, where needed, a more explicit prompt from a virtual tutor.
  • Difficulty Modulation: The AI can adjust problem difficulty in real time, offering more challenging tasks to high-performing students and simplifying tasks for struggling ones.
  • Branching Pathways: The narrative can branch based on the student's performance, routing a learner through a remedial exercise or offering more advanced enrichment.
This degree of flexibility and individualized attention keeps students in an appropriate zone of challenge, which has been shown to have a considerable impact on learning and engagement [39]. It transforms the educational process from passive absorption of information into an interactive dialog between the learner and the learning environment.

5.4. Limitations, Ethical, and Pedagogical Considerations

Nevertheless, the AI-assisted framework has limitations and challenges that must be addressed to enable responsible use of AI in educational settings. These range from the viability of the technology itself and the practical realities of implementation to the more troubling ethical and pedagogical concerns its use provokes.
First, the quality and reliability of AI-generated content are a major issue. Generative models are prone to producing factually erroneous content (hallucinations) or reflecting the biases inherent in their training data [40]. Algorithmic bias is a significant ethical problem, since AI models trained on historical data may reproduce and even amplify existing social disparities. For example, automated formative-assessment systems have disproportionately penalized low-income students, and AI detectors have incorrectly labeled their texts as AI-written [41]. Without human controls in place, the framework could propagate false information and create inequitable learning experiences.
Beyond the technology itself, equity and access are key obstacles to mass adoption. The framework relies on state-of-the-art infrastructure: high-performance computing to provide the AI and dedicated XR hardware for delivery. Schools, particularly in underserved areas, frequently lack the financial means to acquire and maintain such technology, deepening the existing digital divide. Moreover, the framework offers no guarantees about the investment in teacher training and professional development required to make it effective; teachers must not only be technically proficient but also adopt new pedagogical methods for integrating these tools into their teaching.
The ethical issue of greatest concern is data security and privacy. The framework's adaptive layer operates by collecting extensive student data, including performance, behavioral, and potentially biometric data from XR sensors. This raises significant concerns about data ownership, use, and protection against misuse and attack. Without appropriate policies, such systems can become surveillance tools that erode student autonomy and trust. Alarm has already been raised on this matter: surveys indicate that nearly seven out of ten parents oppose making student data available to AI software [42].
Pedagogically, over-reliance on AI risks leaving students' critical-thinking and problem-solving skills underdeveloped [43]. Students who receive immediate feedback from an AI tutor may also lose the appetite for productive struggle, a crucial component of deep learning. Likewise, increased engagement with automated systems can reduce human interaction, diminishing the social and emotional dimension of learning that is essential for development [44]. These difficulties underscore that the human teacher's role cannot be replaced: the model is meant to supplement teachers, not substitute for them [45]. By handling routine tasks, AI can free teachers to focus on their strengths: sparking curiosity, mentoring, fostering teamwork, and providing the subtle human-to-human support that computers cannot. The future of learning will be neither purely AI-driven nor purely traditional, but an integration of both [46].
Another implementation barrier arises from recent educational policies in several countries that limit digital device use in classrooms; this “low-tech turn” can be mitigated through carefully timed, teacher-guided XR sessions integrated with offline collaborative learning.

5.5. System Scalability, Hardware and Cost Considerations

The proposed framework was also designed for scalability and accessibility, accommodating a variety of hardware options and levels of institutional resources. The proof-of-concept was built in Unity 2022 LTS with the High-Definition Render Pipeline (HDRP) for maximum realism; nevertheless, the framework's modular design allows it to run in low-resource environments using the Universal Render Pipeline (URP). Typical system requirements for full-fidelity mode are a desktop computer with a mid-range graphics card (e.g., an RTX 3060 with 16 GB of RAM) or a standalone VR device (such as the Meta Quest 3 or Pico Neo 4). This design balances visual fidelity, interactivity, and cost, so the approach can be implemented by both research laboratories and regular classrooms without significant modification. To overcome cost-related barriers, the framework supports a tiered deployment model (Figure 12):
  • Tier 1 (High-Fidelity XR): Fully immersive VR that adapts to user feedback and generates rich narratives, proposed for specialized studies or research projects.
  • Tier 2 (Desktop/Tablet Mode): A reduced-fidelity yet fully interactive 3D simulation suited to conventional school computers or tablets.
  • Tier 3 (Web Preview Mode): A non-interactive 2D version for demonstration and accessibility compliance. This tiered architecture allows schools with varying financial and technological resources to adopt the framework. With local caching of assets and asynchronous data collection, network requirements are kept to a minimum, accommodating bandwidth-limited environments.
Figure 12. Scalability tiers.

5.6. Cross-Disciplinary Implementation Scenarios

Although the described prototype addresses ecosystem dynamics in biology, the four-layer architecture can be adapted to various academic fields. In physics, for example, the Curriculum Layer would encode NGSS-aligned standards on attractive forces in the solar system, and the AI Layer would generate a virtual planetary system. In the XR Layer, students would work with mass, velocity, and orbital parameters, while the Adaptive Layer would give feedback on the principles of energy conservation.
The same approach could be applied to history, where the system might recreate complex social situations, e.g., the Boston Tea Party or the Renaissance, with the AI Layer generating historically accurate conversations and artifacts. Students learn through encounters with virtual historical figures and settings, practicing perspective-taking and contextual analysis.
In environmental science, such scenarios might incorporate real-time open-repository data on deforestation or climate impact (e.g., NASA Earthdata or NOAA). Adaptive feedback would direct learners toward the statistical correlation between human activities and their ecological effects. These examples show the framework's versatility across both the humanities and the sciences, pointing to its significance as a cross-curricular learning infrastructure rather than a discipline-specific instrument.

5.7. Risks and Barriers

The potential risks and barriers to implementing the framework were systematically examined to ensure its practical applicability (Table 6). The greatest obstacles are hardware cost, teacher preparedness, data privacy, and AI bias.

6. Conclusions

6.1. Summary of Key Contributions

The main contribution of this research is a comprehensive, process-based conceptual framework that systematizes the design of AI-generated immersive learning experiences. Its major contributions may be summarized as follows:
  • A Pedagogy-First, Interdisciplinary Model: The framework's most notable contribution is its insistence on a curriculum-first approach, making the Curriculum and Learning Objectives Layer the entry point so that technology is applied with pedagogical intent rather than the other way around. This directly answers the common criticism that learning technology is often introduced without a clear linkage to learning outcomes [19]. The framework also offers a distinctive synthesis of separate disciplines, combining concepts from curriculum theory, instructional design, computer science, human–computer interaction, and learning analytics into one coherent pipeline.
  • Systematization of a Complex Process: Producing immersive learning content has been a resource-heavy and largely ad hoc process. The proposed framework provides a methodological system that tames this complexity by disaggregating the process into four manageable layers. Such systematization promotes consistency and reproducibility in scenario design, shifting it toward a more professional engineering practice [2].
  • Operationalizing AI for Pedagogical Alignment: The framework moves beyond general discussion of AI's potential by operationalizing its application through the AI-driven Content Generation Layer. It describes a systematic prompt-engineering approach that offers a practical way to map structured learning goals onto pedagogically sound scenario elements. This deliberate practice is essential for directing AI to produce content that is not merely plausible but also didactically valid and ethically sound.
  • Adaptive Feedback Loop Integration: The Adaptive Learning and Personalization Mechanisms Layer closes the instructional loop, turning the XR scenario into an active environment capable of providing personalized support in real time [27]. This convergence of learning analytics and adaptive scaffolding is essential to achieving truly personalized learning at scale.
Compared with both traditional and simpler digital solutions, the framework promises a positive pedagogical ROI by combining deeper engagement, scalable content generation, and sustained learning outcomes. It can also bridge resource gaps by enabling schools without physical laboratories to conduct virtual experiments, thereby democratizing access to authentic STEM learning experiences.
In relation to the guiding research questions (RQ1–RQ3), the study’s findings indicate that the generative capability of AI can effectively mitigate the XR content-creation bottleneck when combined with a curriculum-aligned and educator-validated workflow. The four-layer framework provides a systematic process for curriculum translation (addressing RQ1), a clear methodological structure balancing human and AI roles (addressing RQ2), and practical evidence from expert validation supporting its feasibility (addressing RQ3). Therefore, the original hypothesis—that AI can directly alleviate the primary limitation of XR in education—is supported at a conceptual level. Nevertheless, further empirical research is required to evaluate its effectiveness in authentic classroom environments.

6.2. Implications for Primary and Secondary Education

The framework's versatility has specific implications for both primary and secondary education, underpinning differentiated learning across developmental stages.
For primary school, the multi-sensory, interactive, and playful learning environments created by the framework are especially valuable. XR technologies can make abstract concepts tangible, letting young learners experience phenomena that would otherwise remain out of reach, such as an ancient civilization or the solar system [47]. Through the AI layer, these experiences can be delivered in age-appropriate language with simplified interfaces, fostering curiosity and engagement while building foundational knowledge and digital literacy skills.
For secondary school, the framework supports the shift toward more complex and probing inquiry. The STEM and humanities examples show how AI-driven XR can help learners think critically, analyze information, and evaluate multiple viewpoints, skills that are vital to college and career readiness. By constructing realistic, problem-based simulations and challenges, the framework can help students close the gap between theory and practice and prepare them for higher education and the contemporary labor market [48]. Its adaptive capabilities play a critical role at this stage, accommodating the expanding range of student knowledge and abilities, supplying targeted help to those who struggle and challenges to those ready for them.
The framework’s viability depends not only on technical readiness but also on cultural acceptance within school systems that are increasingly cautious about technology use. Future work should explore hybrid instructional models that combine limited, high-impact XR engagement with traditional instruction to reconcile innovation with emerging well-being policies.

6.3. Future Research Directions

As a conceptual paper, this study offers theoretical rather than empirical validation. The framework is presented as a falsifiable model whose efficacy must be tested through rigorous empirical research. Future work should proceed along several major lines of inquiry:
  • Empirical Validation in Classroom Settings: The most important next step is to implement and test the framework in real-world educational settings. Classroom studies are required to evaluate how the generated scenarios influence measurable learning outcomes, student engagement, and cognitive load [49].
  • Comparative Effectiveness Studies: Studies should directly compare learning experiences and outcomes between AI-assisted XR scenarios and traditional instructional methods. Such comparisons would provide key evidence of the framework’s effectiveness and help identify which pedagogical strategies benefit most from this technology.
  • Longitudinal Studies of Adaptive Personalization: While real-time adaptation is promising, its long-term effects are not well understood. Longitudinal research is needed to determine how extended use of AI-based personalization affects learner growth, metacognitive skills, and the risk of overdependence on technological support [50].
  • Aligning AI with Pedagogy and Ethics: Future research should strive to make generative AI more reliable and more teachable, minimizing factual errors, reducing algorithmic bias, and ensuring that AI-generated content is inclusive and culturally responsive [42].
  • Developing Teacher Training Models: Implementation at scale requires effective models of teacher professional development. Research should explore how best to equip educators with the technical expertise and pedagogical methods needed to act as expert human-in-the-loop validators and facilitators of these sophisticated learning experiences.

6.4. Evaluation Methodology for Validation

To test the pedagogical performance of the proposed AI-XR ecosystem, a mixed-methods research design should be used, combining quantitative and qualitative learning analytics to capture both the cognitive and affective dimensions of learning (Figure 13).
The quantitative phase would use a quasi-experimental design comparing an experimental condition (the adaptive XR system) with a control condition (static digital simulations). Dependent variables would include pre/post content test scores, task completion times, and cognitive load (e.g., NASA-TLX). In parallel, the qualitative phase would involve student reflection journals, semi-structured interviews, and teacher observations to gauge engagement, presence, and perceived learning value.
Interaction data recorded in Unity (gaze tracking, logs of variable manipulation, and decision-making traces) would be processed to identify trends in conceptual learning and exploration; a minimal analysis sketch is shown below. A follow-up randomized controlled trial (RCT) could extend this evaluation to long-term retention effects.
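As an illustration only, the sketch below computes Hake’s normalized learning gain from hypothetical pre/post scores and counts variable manipulations in a mocked-up Unity interaction log. The field names (`event`, `variable`, `t`) and the log structure are assumptions for demonstration; the study’s actual instruments and export format may differ.

```python
from statistics import mean

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre) if max_score > pre else 0.0

# Hypothetical pre/post content-test scores for one class (out of 100).
scores = [(55, 78), (62, 70), (40, 66), (71, 85)]
gains = [normalized_gain(pre, post) for pre, post in scores]
print(f"Mean normalized gain: {mean(gains):.2f}")

# Hypothetical Unity interaction log entries, as exported JSON records.
# The ecosystem-themed targets echo the paper's HS-LS2-2 use case.
log = [
    {"event": "slider_change", "variable": "wolf_population", "t": 12.4},
    {"event": "gaze", "target": "deer_herd", "t": 15.1},
    {"event": "slider_change", "variable": "vegetation", "t": 21.7},
]
manipulations = [e for e in log if e["event"] == "slider_change"]
print(f"Variable manipulations: {len(manipulations)}")
```

Simple counts like these would feed the behavioral engagement indicators, while the normalized gains support the pre/post comparison between conditions.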
Student outcomes will be evaluated using triangulated indicators:
  • Conceptual Mastery: Assessed through topic-based knowledge tests and evidence-based reasoning tasks.
  • Metacognitive Awareness: Assessed using reflective questionnaires.
  • Behavioral Engagement: Measured through interaction metrics and XR analytics.
The evaluation strategy aligns directly with the research questions developed in the Introduction, as each hypothesis corresponds to a measurable construct of learning or engagement.
The authors acknowledge that, while the present study establishes a formal conceptual model, its practical feasibility and pedagogical impact must be empirically verified. Future work will therefore focus on conducting pilot implementations of the framework in collaboration with experienced teachers across multiple curricular areas. Such studies will test its adaptability, usability, and effect on student learning outcomes, providing the empirical evidence needed to refine and validate the model’s real-world applicability.

Author Contributions

Conceptualization: T.-M.U. and M.-I.D.; methodology: T.-M.U.; software: T.-M.U.; validation: T.-M.U. and M.-I.D.; formal analysis: T.-M.U.; investigation: T.-M.U.; resources: T.-M.U.; data curation: T.-M.U.; writing—original draft preparation: T.-M.U.; writing—review and editing: T.-M.U. and M.-I.D.; visualization: T.-M.U.; supervision: M.-I.D.; project administration: T.-M.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Acknowledgments

During the preparation of this manuscript, we used Google Gemini (version 2025 release) to identify and filter relevant literature sources for the background review. ChatGPT (GPT-5, OpenAI, 2025 release) was used for language refinement, text structuring, and idea clarification. We have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
CCSS: Common Core State Standards
CLT: Cognitive Load Theory
HCI: Human-Computer Interaction
LLM: Large Language Model
NGSS: Next Generation Science Standards
NLP: Natural Language Processing
NPC: Non-Player Character
RCT: Randomized Controlled Trial
ROI: Return on Investment
STEM: Science, Technology, Engineering, Mathematics
TLX: Task Load Index
UI: User Interface
XR: Extended Reality

References

  1. Sage Research Methods Community. Quantitative Research with Nonexperimental Designs. Available online: https://researchmethodscommunity.sagepub.com/blog/quantitative-research-with-non-experimental-designs (accessed on 24 August 2025).
  2. Tusquellas, N.; López-Villanueva, D.; Palau, R.; Santiago, R. Educational Conceptual Model Design Research Methodology; UTE Teaching & Technology (Universitas Tarraconensis): Pembroke Pines, FL, USA, 2025; p. e4103.
  3. Pound, P.; Campbell, R. Exploring the feasibility of theory synthesis: A worked example in the field of health related risk-taking. Soc. Sci. Med. 2015, 124, 57–65.
  4. Paperpal. What is a Conceptual Framework? How to Make It (with Examples). Paperpal Blog. Available online: https://paperpal.com/blog/academic-writing-guides/what-is-a-conceptual-framework-how-to-make-it-with-examples (accessed on 24 August 2025).
  5. Why a Scoping Review. Available online: https://jbi-global-wiki.refined.site/space/MANUAL/355862553/10.1.1+Why+a+scoping+review%3F (accessed on 24 August 2025).
  6. Green, B.N.; Johnson, C.D.; Adams, A. Writing narrative literature reviews for peer-reviewed journals: Secrets of the trade. J. Chiropr. Med. 2006, 5, 101–117.
  7. Stringer, L.R.; Lee, K.M.; Sturm, S.; Giacaman, N. A scoping review of research exploring teachers’ experiences with Digital Technologies curricula. J. Res. Technol. Educ. 2024, 56, 733–751.
  8. Office of Teaching, Learning, and Technology—The University of Iowa. AI-Assisted Literature Reviews. Available online: https://teach.its.uiowa.edu/news/2024/03/ai-assisted-literature-reviews (accessed on 24 August 2025).
  9. Leung, T.I.; de Azevedo Cardoso, T.; Mavragani, A.; Eysenbach, G. Best Practices for Using AI Tools as an Author, Peer Reviewer, or Editor. J. Med. Internet Res. 2023, 25, e51584.
  10. Groth, R.E. Applying Design-Based Research Findings to Improve the Common Core State Standards for Data and Statistics in Grades 4–6. J. Stat. Educ. 2019, 27, 29–36.
  11. Emerald Publishing. Write a Teaching Case Study. Available online: https://www.emeraldgrouppublishing.com/how-to/authoring-editing-reviewing/write-a-teaching-case-study (accessed on 24 August 2025).
  12. Cathy Moore. 6 Popular Instructional Design Models Every Pro Should Know. Training Design. Available online: https://blog.cathy-moore.com/popular-instructional-design-models/ (accessed on 24 August 2025).
  13. Gutierrez, K. A Quick Guide to Four Instructional Design Models—Shift E-Learning. Available online: https://www.shiftelearning.com/blog/top-instructional-design-models-explained (accessed on 24 August 2025).
  14. Instructional Design Australia. Principles of Instructional Design. Available online: https://discoverlearning.com.au/2021/06/how-to-apply-merrills-instructional-design-principles/ (accessed on 24 August 2025).
  15. Nextra. Prompt Engineering Guide. Available online: https://www.promptingguide.ai/ (accessed on 24 August 2025).
  16. Child, S.; Shaw, S. A Conceptual Approach to Validating Competence Frameworks. Res. Matters 2023, 35, 27–40.
  17. Malkawi, A.; Dahalin, Z. Review of the Delphi Method in The Higher Educational Research. Kongzhi Yu Juece/Control Decis. 2023, 38, 777–790.
  18. McIntyre-Hite, L. A Delphi study of effective practices for developing competency-based learning models in higher education. J. Competency-Based Educ. 2016, 1, 157–166.
  19. Yang, S.; Taylor-Griffiths, F.; Taylor-Guy, P.; Saubern, R. Empowering Teaching and Learning with Educational Technology: Literature Review; Australian Council for Educational Research: Camberwell, Australia, 2025.
  20. ISTE Standards. 2025. Available online: https://iste.org/standards (accessed on 30 August 2025).
  21. Stanovich, P.J.; Stanovich, K.E. Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research to Make Curricular & Instructional Decisions. Available online: https://www.nichd.nih.gov/publications/pubs/using_research_stanovich (accessed on 30 August 2025).
  22. Mahamuni, N. Natural Language Processing in EdTech: A Deep Dive into the Future of Learning. Quixl. Available online: https://www.quixl.ai/blog/natural-language-processing-in-edtech-future-of-learning/ (accessed on 30 August 2025).
  23. Ethical Considerations for AI Use in Education. Available online: https://www.enrollify.org/blog/ethical-considerations-for-ai-use-in-education (accessed on 30 August 2025).
  24. Silk Data. AI-Powered Tool for Semantic Mapping. Available online: https://silkdata.tech/semantic-map (accessed on 30 August 2025).
  25. Center for Teaching Innovation. Appendix A: State of the art in Generative AI. Available online: https://teaching.cornell.edu/generative-artificial-intelligence/report-generative-artificial-intelligence-education-and-0 (accessed on 24 August 2025).
  26. Mittal, U.; Sai, S.; Chamola, V.; Sangwan, D. A Comprehensive Review on Generative AI for Education. IEEE Access 2024, 12, 142733–142759.
  27. AI in the Classroom: Personalized Learning and the Future of Education. Workday Blog. Available online: https://blog.workday.com/en-us/ai-in-the-classroom-personalized-learning-and-the-future-of-education.html (accessed on 30 August 2025).
  28. Unreal Engine. The Most Powerful Real-Time 3D Creation Tool. Available online: https://www.unrealengine.com/en-US/home (accessed on 30 August 2025).
  29. Unity. Unity AI: AI Game Development Tools & RT3D Software. Available online: https://unity.com/products/ai (accessed on 30 August 2025).
  30. Gkintoni, E.; Antonopoulou, H.; Sortwell, A.; Halkiopoulos, C. Challenging Cognitive Load Theory: The Role of Educational Neuroscience and Artificial Intelligence in Redefining Learning Efficacy. Brain Sci. 2025, 15, 203.
  31. ISM. The Power of AI: Transforming VR and XR Training Experiences. Available online: https://ismguide.com/ai-transform-vr-and-xr-training/ (accessed on 30 August 2025).
  32. Wang, F.; Zhou, X.; Li, K.; Cheung, A.C.K.; Tian, M. The effects of artificial intelligence-based interactive scaffolding on secondary students’ speaking performance, goal setting, self-evaluation, and motivation in informal digital learning of English. Interact. Learn. Environ. 2025, 33, 4633–4652.
  33. Next Generation Science Standards. HS-LS2-2 Ecosystems: Interactions, Energy, and Dynamics. Available online: https://www.nextgenscience.org/pe/hs-ls2-2-ecosystems-interactions-energy-and-dynamics (accessed on 12 November 2025).
  34. Blieck, Y.; Ooghe, I.; Zhu, C.; Depryck, K.; Struyven, K.; Laer, H.V. Validation of a Conceptual Quality Framework for Online and Blended Learning with Success Factors and Indicators in Adult Education: A Qualitative Study. Turk. Online J. Educ. Technol. 2017, 16, 162–182.
  35. Disco. AI for Instructional Design Using the ADDIE Model (2025 Edition). Available online: https://www.disco.co/blog/ai-for-instructional-design-using-the-addie-model (accessed on 30 August 2025).
  36. AI vs. Traditional Teaching Methods: The Future of Education. Available online: https://www.timesofai.com/industry-insights/ai-vs-traditional-teaching-methods/ (accessed on 30 August 2025).
  37. Carter, R. AI & Immersive Learning: Accelerating Skill Development with AI and XR; XR Today. Available online: https://www.xrtoday.com/mixed-reality/ai-immersive-learning-accelerating-skill-development-with-ai-and-xr/ (accessed on 17 August 2025).
  38. AI in the E-Learning Ecosystem: Adaptability, Co-Agents, and Ethical Pathways. Available online: https://publish.illinois.edu/online-grad-innovation/ai-in-the-e-learning-ecosystem-adaptability-co-agents-and-ethical-pathways/ (accessed on 30 August 2025).
  39. Liu, V.; Latif, E.; Zhai, X. Advancing Education through Tutoring Systems: A Systematic Literature Review. arXiv 2025, arXiv:2503.09748.
  40. Walden University. 5 Pros and Cons of AI in the Education Sector. Available online: https://www.waldenu.edu/programs/education/resource/five-pros-and-cons-of-ai-in-the-education-sector (accessed on 30 August 2025).
  41. García-López, I.M.; Trujillo-Liñán, L. Ethical and regulatory challenges of Generative AI in education: A systematic review. Front. Educ. 2025, 10, 1681252.
  42. The Times of India. AI in K-12 Schools: Reports Show Nearly 70% of Parents Oppose Sharing Student Data with Artificial Intelligence. Available online: https://timesofindia.indiatimes.com/education/news/ai-in-k-12-schools-reports-show-nearly-70-of-parents-oppose-sharing-student-data-with-artificial-intelligence/articleshow/123530598.cms (accessed on 30 August 2025).
  43. Elshall, A.S.; Badir, A. Balancing AI-assisted learning and traditional assessment: The FACT assessment in environmental data science education. Front. Educ. 2025, 10, 1596462.
  44. College of Education. AI in Schools: Pros and Cons. Available online: https://education.illinois.edu/about/news-events/news/2024/10/24/ai-in-schools--pros-and-cons (accessed on 30 August 2025).
  45. Nandhini. AI Replace Traditional Teaching? Pros and Cons of AI in Education. ColorWhistle. Available online: https://colorwhistle.com/ai-education-pros-cons/ (accessed on 30 August 2025).
  46. Rochelle, S.; Sushith, D. Exploring the AI Era: A Comparative Analysis of AI-Driven Education and Traditional Teaching Methods. Int. J. Multidiscip. Res. 2024, 6, 1–9.
  47. UON. University of Northampton. Extended Reality (XR) in Primary Education. Available online: https://www.northampton.ac.uk/research-blogs/extended-reality-xr-in-primary-education/ (accessed on 31 August 2025).
  48. The Case Method. 2025. Available online: https://citl.illinois.edu/citl-101/teaching-learning/resources/teaching-strategies/the-case-method (accessed on 25 August 2025).
  49. Luan, H.; Geczy, P.; Lai, H.; Gobert, J.; Yang, S.J.; Ogata, H.; Baltes, J.; Guerra, R.; Li, P.; Tsai, C.C. Challenges and Future Directions of Big Data and Artificial Intelligence in Education. Front. Psychol. 2020, 11, 580820.
  50. University of Minnesota. Talking AI and the future of education with the University of Minnesota. Available online: https://twin-cities.umn.edu/news-events/talking-ai-and-future-education-university-minnesota (accessed on 31 August 2025).
Figure 1. Conceptual framework layered flow diagram.
Figure 2. Conceptual framework, layered flow diagram of the proposed framework use case.
Figure 3. Translation of Next Generation Science Standard HS-LS2-2 into machine-readable form.
Figure 4. AI-generated multimodal blueprint.
Figure 5. Unity scene hierarchy.
Figure 6. Simulation snippet.
Figure 7. Interactive tablet UI.
Figure 8. Console log with slider updates.
Figure 9. Deer-embedded materials.
Figure 10. Adaptive rules.
Figure 11. JSON runtime initialization.
Figure 13. Mixed methods evaluation.
Table 1. Inclusion and exclusion criteria for study selection.
Inclusion Criteria | Exclusion Criteria
Publication Type: Peer-reviewed journal articles, full conference papers. | Publication Type: Editorials, opinion pieces, marketing materials, dissertations.
Timeframe: Published between January 2020 and the present. | Timeframe: Published before January 2020.
Language: Full text available in English. | Language: Not published in English.
Focus: Primary focus on the application or theory of AI or XR in education. | Focus: Purely technical papers with no discussion of pedagogy.
Context: K-12 or higher education learning environments. | Context: Primarily corporate, military, or industrial applications.
Content: Substantive discussion of pedagogy or instructional design. | Content: Full text unavailable.
Table 2. Data sources and selection criteria.
Data Source Type | Selection Criteria | Justification
Curriculum Standards | Authority: Published by a recognized educational body. Currency: Currently in effect or recently updated. Clarity: Unambiguous learning objectives. Scope: Covers a diverse range of subjects and skills. | Ensures the framework aligns with current, official educational policy, making it relevant to formal schooling contexts.
Pedagogical Case Studies | Authenticity: Based on documented, real-world situations. Richness: Provides sufficient narrative context and detail. Pedagogical Relevance: Illustrates a specific learning challenge or principle. Peer-Reviewed: Ensures quality and rigor. | Provides grounded examples that bridge the gap between abstract theory and classroom practice.
Table 3. Prompt engineering framework.
Prompt Component | Description | Example (History: Boston Tea Party)
Role and Goal | Assigns a persona and pedagogical objective to the AI. | “You are an expert instructional designer and historian. Your goal is to generate components for a VR scenario where a student analyzes the causes of the Boston Tea Party.”
Context | Provides learner profile, subject matter, and the curriculum standard. | “The target is a 10th-grade student. The standard is ‘Analyze multiple and complex causes and effects of events in the past.’ The event is the Boston Tea Party.”
Input Data | Provides deconstructed elements from the conceptual mapping process. | “Learning Objective (Bloom’s): Analyze. Key Concepts: Taxation without representation, Tea Act. Core Task (Merrill’s): Examine virtual primary sources and NPC dialogs.”
Task and Constraints | Delivers explicit step-by-step instructions for content generation. | “1. Generate a branching dialog script between a Loyalist and a Son of Liberty. 2. Generate text for three virtual primary source documents. 3. Create three descriptive prompts for a text-to-image model to generate key visual assets.”
Output Format | Specifies the desired structure (e.g., JSON) for integration into an XR pipeline. | “Provide the entire output in a single JSON object with keys for ‘dialogueScript’, ‘primarySources’, and ‘assetPrompts’.”
Table 4. Layers’ functionalities.
Layer | Name | Role | Functionality | Key Technology
Layer 1 | Curriculum Input | Defines the educational goals | Converts official NGSS learning standards into machine-readable JSON (“layer1_ngss.json”) specifying concepts, skills, and learning objectives. | Python
Layer 2 | AI Layer | Generates learning content dynamically | Uses LLMs (e.g., GPT/Claude) to produce scenario blueprints, dialogs, task descriptions, and metadata for 3D assets, stored as “layer2_ai.json”. | GPT/Claude
Layer 3 | XR Layer | Builds the immersive environment | Implemented in Unity (C#), it loads 3D scenes and prefabs to create the interactive simulation itself. | Unity (C#)
Layer 4 | Adaptive Layer | Personalizes the learner experience | Uses real-time behavioral data (e.g., actions, response times) with rules in “adaptive_rules.json” and optional Python logic to deliver adaptive feedback and difficulty scaling. | Unity + optional Python
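To suggest how Layer 4 might consume such a file, the minimal sketch below evaluates a small rule set of the kind “adaptive_rules.json” could contain against a learner’s runtime behavioral data. The rule schema (`metric`, `op`, `threshold`, `action`) is an assumption for illustration, not the framework’s defined format.

```python
import json

# A minimal rule set of the kind "adaptive_rules.json" might hold (assumed schema).
RULES_JSON = """
[
  {"metric": "response_time_s", "op": "gt", "threshold": 30,
   "action": "show_hint"},
  {"metric": "correct_streak", "op": "gt", "threshold": 3,
   "action": "increase_difficulty"}
]
"""

# Supported comparison operators; easy to extend without touching the rules file.
OPS = {"gt": lambda value, threshold: value > threshold,
       "lt": lambda value, threshold: value < threshold}

def evaluate_rules(rules: list, behavior: dict) -> list:
    """Return the adaptive actions whose conditions match the learner's data."""
    actions = []
    for rule in rules:
        value = behavior.get(rule["metric"])
        if value is not None and OPS[rule["op"]](value, rule["threshold"]):
            actions.append(rule["action"])
    return actions

rules = json.loads(RULES_JSON)
learner = {"response_time_s": 42, "correct_streak": 1}
print(evaluate_rules(rules, learner))  # -> ['show_hint']
```

Keeping the rules in a data file rather than in code is what allows the “lighter”, pre-calculated tier proposed in Table 5: educators can edit thresholds and actions without rebuilding the Unity project.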
Table 5. Thematic synthesis of initial expert review.
Theme | Perceived Strengths (Representative Comments) | Areas for Refinement (Representative Comments) | Proposed Action/Modification to Framework
Pedagogical Soundness | “Grounding the entire process in formal curriculum standards (Layer 1) is a major strength. It prevents the technology from becoming a solution in search of a problem.” | “The leap from broad standards to specific, machine-readable objectives is non-trivial. The framework needs to detail the ‘human-in-the-loop’ role in validating the AI’s semantic interpretation.” | Add a “Human Validation Checkpoint” sub-process within Layer 1 to ensure pedagogical experts approve the structured objectives before they are passed to the AI generation layer.
Technical Feasibility | “The modular, four-layer design is logical. The use of multimodal generative AI is forward-thinking and aligns with current technological trajectories.” | “The computational cost of real-time, AI-driven adaptive scaffolding within a high-fidelity XR environment could be prohibitive for many school systems. Scalability is a major concern.” | Acknowledge scalability as a limitation. Propose a “tiered” implementation model where a “lighter” version uses pre-calculated branching paths instead of real-time AI analytics for less-resourced environments.
Innovation and Contribution | “This framework uniquely integrates four critical domains (curriculum, AI, XR, analytics) that are often discussed in isolation. Its primary contribution is this synthesis.” | “The ethical implications of AI-driven personalization and student data collection are mentioned but need to be more explicitly integrated into the framework’s structure.” | Expand the “Ethical Gatekeeper” function in Layer 1. Add an “Ethical and Privacy Protocol” to Layer 4 that defines data handling, anonymization, and consent procedures.
Learner Experience | “The explicit focus on Cognitive Load Theory and usability for children is crucial and often overlooked in technically focused frameworks. This enhances its potential for real-world efficacy.” | “The framework assumes a baseline of digital literacy. More detail is needed on how the system will onboard and support learners with varying levels of comfort with XR interfaces.” | Incorporate an adaptive “onboarding module” as the initial part of any generated XR scenario. This module will assess the user’s familiarity with XR controls and provide tailored tutorials as needed.
Table 6. Risks and barriers analysis.
Risk | Description | Mitigation Strategy
Hardware Costs | VR headsets and GPUs may be unaffordable for some schools. | Adopt a three-tier system; offer low-fidelity desktop and web versions using open-source assets.
Teacher Training Gaps | Educators may lack familiarity with XR or AI systems. | Provide modular professional development kits and in-app tutorials.
AI Bias and Reliability | AI-generated dialog or content may misrepresent scientific facts. | Integrate human-in-the-loop validation at the Curriculum (Layer 1) stage and maintain editable AI blueprints.
Student Data Privacy | Adaptive logs could expose sensitive behavioral data. | Use anonymized local storage (e.g., Unity Persistent Data Path) and comply with GDPR/FERPA standards (a minimal anonymization sketch follows this table).
System Maintenance | Updating prefabs, textures, or AI components may require technical expertise. | Employ version-controlled repositories (Git) with structured documentation.
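To illustrate the Student Data Privacy mitigation, the sketch below hashes student identifiers before appending events to an anonymized local log. The salting scheme, file name, and event fields are hypothetical; a production system would follow the data handling, anonymization, and consent procedures defined in Layer 4’s Ethical and Privacy Protocol.

```python
import hashlib
import json
import os

def anonymize_id(student_id: str, salt: str) -> str:
    """One-way hash so logs cannot be traced back to a named student."""
    return hashlib.sha256((salt + student_id).encode()).hexdigest()[:16]

def write_log(entry: dict, path: str = "adaptive_log.jsonl") -> None:
    # Append-only local storage; no network transmission of behavioral data.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# The salt stays on the device (here read from an assumed environment variable),
# so even identical student IDs hash differently across installations.
salt = os.environ.get("LOG_SALT", "classroom-local-salt")
write_log({"student": anonymize_id("maria.pop", salt),
           "event": "hint_shown", "t": 73.2})
```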
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
