Article

Digital-Twin-Based Ecosystem for Aviation Maintenance Training

Engineering Faculty, Transport and Telecommunication Institute, Lauvas 2, LV-1019 Riga, Latvia
Information 2025, 16(7), 586; https://doi.org/10.3390/info16070586
Submission received: 7 June 2025 / Revised: 24 June 2025 / Accepted: 3 July 2025 / Published: 8 July 2025

Abstract

The increasing complexity of aircraft systems and the growing global demand for certified maintenance personnel necessitate a fundamental shift in aviation training methodologies. This paper proposes a comprehensive digital-twin-based training ecosystem tailored for aviation maintenance education. The system integrates three core digital twin models: the learner digital twin, which continuously reflects individual trainee competence; the ideal competence twin, which encodes regulatory skill benchmarks; and the learning ecosystem twin, a stratified repository of instructional resources. These components are orchestrated through a real-time adaptive engine that performs multi-dimensional competence gap analysis and dynamically matches learners with appropriate training content based on gap severity, Bloom’s taxonomy level, and content fidelity. The system architecture uses a cloud–edge hybrid model to ensure scalable, secure, and latency-sensitive delivery of training assets, ranging from computer-based training modules to high-fidelity operational simulations. Simulation results confirm the system’s ability to personalize instruction, accelerate competence development, and support continuous regulatory readiness by enabling closed-loop, adaptive, and evidence-based training pathways in digitally enriched environments.


1. Introduction

The global aviation industry is undergoing a rapid digital transformation, driven by increasing demands for safety, operational efficiency, and real-time decision-making. Amid this evolution, the role of aviation maintenance technicians has grown more complex, requiring proficiency not only in traditional mechanical procedures but also in data interpretation, system-level diagnostics, and adaptive learning. According to forecasts by leading aviation stakeholders, over 700,000 new maintenance professionals will be needed by 2043 to support fleet expansion and technological modernization [1]. Existing training models, largely reliant on static courseware and infrequent practical assessments, struggle to deliver the depth, adaptability, and individualization needed in today’s data-rich aviation environment.
Digital twin (DT) technology, originally developed in the manufacturing sector, refers to a dynamic digital representation of a physical system or process that enables real-time monitoring, simulation, and decision-making across its lifecycle. This concept emphasizes the value of virtual replicas in enhancing operational efficiency and system understanding [2]. Building upon this, the architecture of Digital Twin as a Service was formalized within Industry 4.0 frameworks to support scalable, cloud-integrated, and service-oriented applications across sectors [3]. These developments have led to the emergence of DT ecosystems that can replicate, predict, and adapt to complex system behaviors, making them particularly suitable for aviation training, where real-time diagnostics, predictive maintenance, and regulatory alignment are critical.
In response to these challenges, this study proposes a digital-twin-based training (DTBT) ecosystem specifically designed for aviation maintenance. The system builds on three foundational concepts: learner digital twins (LDTs) that dynamically reflect a trainee’s evolving competence profile; ideal competence twins (ICTs) that encode the regulatory and operational skill benchmarks derived from aviation normative documents; and a four-level learning ecosystem twin (LET) that curates training resources across a fidelity spectrum from traditional computer-based training (CBT) to high-fidelity operational twins.
The emergence of digital twin technology has opened innovative pathways for transforming educational methodology. Within digital twin educational environments, learners can gain hands-on experience that mirrors physical learning contexts by operating tangible objects or their digital counterparts concurrently [4]. This concurrent operation not only strengthens students’ practical skills but also deepens their comprehension and application of knowledge [5]. Nevertheless, most existing studies remain at the proof-of-concept or initial deployment stage, and their practical impact and suitability for widespread adoption require further confirmation [6]. With regard to interactivity, learners can use wearable technology to engage with elements of digital twin spaces or explore them immersively as virtual avatars [7]. This mode of engagement strongly supports pervasive, geographically distributed learning and productive team collaboration [8].
However, incorporating these technologies into practice in ways that guarantee meaningful learning experiences and tangible outcomes presents numerous obstacles [9]. As computational technology has progressed, the implementation of digital twin technology in education has steadily broadened. Contemporary research mainly emphasizes the development of digital twin educational environments and instructional support frameworks [10]. Diverse digital twin learning contexts have been established, including intelligent digital twin learning spaces [11], digital twin systems [12], and immersive architectural prototype twin environments [13].
The instantaneous engagement, virtual–physical integration, and comprehensive understanding delivered by digital twins provide learners with environments and materials that are viewable, experiential, manipulable, testable, and evolutionary [14]. Additionally, digital twin technology has enabled the creation of cognitive digital twins [15] and digital-twin-supported instructional systems [16]. These frameworks deliver customized learning approaches and educational blueprints through continuous data evaluation and feedback, illustrating the capacity of digital twin technology to improve instructional quality and effectiveness.
Digital twins are being actively researched in relation to various areas of aviation. A paper [17] introduces a holistic framework for applying digital twins in aircraft lifecycle management, emphasizing the use of data-driven models to improve decision-making and operational performance. The research presented in [18] introduces an innovative digital twin framework tailored for twin-spool turbofan engines, aiming to enhance accuracy by integrating the strengths of both mechanism-based models and data-driven approaches. In a broader fleet-wide context, a study [19] proposes a comprehensive monitoring and diagnostics framework for aircraft health management.
The term DTBT ecosystem refers to a structured educational framework that integrates dynamic digital replicas of learners, competence models, and learning environments to enable real-time monitoring, personalized instruction, and regulatory alignment in technical education. This ecosystem relies on continuous feedback loops and fidelity-stratified content delivery to adapt instruction according to evolving learner profiles. Several technology providers have already explored the application of digital twin technology to address aviation training challenges. For instance, PTC has developed digital twin solutions integrated with its platforms, allowing interactive procedural training via augmented reality and 3D system simulations [20]. Similarly, IBM has implemented digital twin-based predictive analytics and procedural training modules through its Maximo platform, focusing on system diagnostics and asset performance in aviation maintenance contexts [21].
All major aviation companies have been actively developing platforms to predict component wear and optimize maintenance strategies. Some of the most significant advancements towards digital twins in the aviation sector include Aviatar (Lufthansa Technik, Hamburg, Germany) [22], Skywise (Airbus, Blagnac, France) [23], Predix (General Electric, San Ramon, CA, USA) [24], PROGNOS (Air France Industries and KLM Engineering & Maintenance, Paris, France) [25], AnalytX (Boeing, Crystal City, VA, USA) [26], and others.
Innovative approaches to aviation maintenance training include the use of virtual reality (VR) and adaptive game-based environments. A paper [27] investigates the effectiveness of virtual simulation-based training for aviation maintenance technicians, concluding that VR technology can enhance training outcomes when combined with traditional methods. Additionally, a study [28] discusses maintenance training based on an adaptive game-based environment using a pedagogic interpretation engine, which dynamically adapts training scenarios to maximize effectiveness.
Virtual maintenance training offers interactive 3D simulations for effective skill development. A study [29] developed an aircraft maintenance virtual reality system for training students in the aviation industry, demonstrating its effectiveness in improving training outcomes. A comprehensive analysis of the application and training of artificial intelligence in aviation is presented in [30].
Despite significant progress in the application of digital twin technologies and immersive environments for training, existing studies in aviation maintenance education primarily remain at the conceptual or prototype stage. These approaches often lack robust orchestration mechanisms that adapt training content dynamically based on individual learner profiles, real-time performance data, and regulatory compliance metrics. Moreover, current systems do not fully integrate fidelity-stratified content assignment or closed-loop feedback for competence tracking, which limits their scalability, auditability, and instructional precision.
This article addresses these critical gaps by introducing a fully realized digital-twin-based ecosystem for aviation maintenance training that combines LDT, ICT, and LET within a modular, cloud–edge hybrid architecture. The main contribution lies in the design and experimental validation of an adaptive orchestration engine capable of real-time gap analysis, fidelity-matched content delivery, and comprehensive validation logging aligned with European Union Aviation Safety Agency (EASA) requirements.
The remainder of this paper is structured as follows: Section 2 presents the conceptual framework, system architecture, and mathematical models underlying the ecosystem; Section 3 describes the simulation setup and evaluates system behavior and learning outcomes; Section 4 discusses scalability, regulatory readiness, and lessons from deployment; and Section 5 concludes with insights into limitations and future research directions.

2. Materials and Methods

2.1. Conceptual Framework

The proposed digital-twin-based training ecosystem is grounded in a multi-layered conceptual framework that integrates real-time learner monitoring, regulatory competence modeling, and adaptive content delivery. This framework orchestrates the interaction of three interdependent digital twin models—LDT, ICT, and LET—within a continuous feedback loop (Figure 1).
The learner digital twin serves as a real-time digital replica of the trainee’s evolving competence profile. It continuously captures individual learning activities, assessment outcomes, interaction patterns, and behavioral markers. The LDT reflects not only the current level of mastery across technical domains but also temporal patterns such as learning speed, error recurrence, and decision-making latency. This enables personalized tracking and adaptive interventions over the course of the training program.
The ideal competence twin functions as a normative reference model, encoding the required knowledge, skills, and performance standards derived from international aviation maintenance regulations and instructional frameworks. Specifically, it integrates the structure of EASA Part-66 training modules [31], organizes content by Air Transport Association (ATA) e-Business Program chapters [32], and applies Bloom’s Taxonomy [33] to define expected levels of cognitive, psychomotor, and affective learning. The ICT thus serves as the benchmark against which each LDT is periodically compared to identify competence gaps at a granular level.
The learning ecosystem twin is a structured repository of all instructional assets available within the training environment. These include static resources such as manuals and CBTs, interactive simulations, VR-based procedural walkthroughs, and sliced versions of full operational digital twins derived from actual aircraft telemetry and maintenance logs. The LET not only catalogues assets by topic and fidelity level but also annotates each resource with metadata such as learning objectives, fidelity rating, expected duration, and regulatory alignment.
Together, these three twins are continuously aligned through an orchestration engine that identifies deviations between the LDT and ICT, ranks the magnitude of detected gaps, and selects an appropriate training resource from the LET. The orchestration logic is governed by predefined thresholds that classify gaps into high, medium, or low severity, each mapped to a corresponding content fidelity tier. Once a resource is deployed to the learner, all interactions are streamed back in real time to update the LDT and inform the next cycle of training decisions.
This conceptual framework transforms traditional maintenance training into a closed-loop, evidence-based learning system, where instructional content dynamically adapts to learner needs, and competence progression is continuously benchmarked against formal regulatory expectations. The ecosystem ensures both pedagogical relevance and regulatory compliance across all phases of technical education by anchoring the learning process in validated digital twins.

2.2. System Architecture and Data Flow

The architecture of the proposed aviation maintenance training ecosystem is designed to support dynamic learner modeling, personalized training orchestration, and traceable validation within a modular and scalable framework (Figure 2). The system integrates multiple interacting components within a cloud–edge hybrid infrastructure that facilitates both real-time responsiveness and resource-intensive simulation delivery.
At the core of the architecture is the adaptive orchestration engine, which serves as the decision-making hub. It processes data from three digital twin layers: the LDT, which maintains an up-to-date representation of the learner’s competence profile; the ICT, which encodes regulatory skill requirements and serves as the benchmark for gap analysis; and the LET, which categorizes all available training resources by fidelity, Bloom’s taxonomy level, domain relevance, and regulatory linkage.
Central to this architecture is a secure, event-driven infrastructure that connects the three core digital twin layers of LDT, ICT, and LET via a centralized orchestration engine and a publish–subscribe message backbone based on Apache Kafka [34] or Message Queuing Telemetry Transport (MQTT) [35] protocols.
The system initiates with the ingestion of heterogeneous data streams from multiple sources. These include operational telemetry from aircraft digital twins, xAPI-formatted learning activity logs (xAPI is an eLearning specification that makes it possible to collect data about the wide range of experiences a person has within online and offline training activities [36]), and document updates such as revised maintenance manuals or airworthiness directives. All inputs are encrypted and serialized before being published to their respective message topics on the streaming backbone. A schema registry ensures that data structures remain consistent across different training modules, learner groups, and content sources.
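As a concrete illustration of this ingestion step, the sketch below builds a minimal xAPI-style statement and hands it to a publish stub standing in for the Kafka/MQTT client. The topic name, field subset, and example URLs are illustrative assumptions, not part of the specification; in a real deployment the publish call would be wrapped with the encryption and schema-registry validation described above.

```python
import json
import time
import uuid

def make_xapi_statement(learner_id: str, verb: str, activity_id: str, scaled_score: float) -> dict:
    """Build a minimal xAPI-style statement for one training event (illustrative subset of the spec)."""
    return {
        "id": str(uuid.uuid4()),
        "actor": {"account": {"homePage": "https://lms.example.org", "name": learner_id}},
        "verb": {"id": f"http://adlnet.gov/expapi/verbs/{verb}"},
        "object": {"id": activity_id},
        "result": {"score": {"scaled": scaled_score}},
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

def publish(topic: str, payload: str) -> None:
    """Placeholder for the MQTT/Kafka publish call used by the streaming backbone."""
    print(f"-> {topic}: {payload[:80]}...")

stmt = make_xapi_statement("learner-0001", "completed",
                           "https://lms.example.org/cbt/ata32-lg5", 0.72)
publish("training/xapi/events", json.dumps(stmt))
```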
The presence of two “Message Broker” elements in Figure 2 reflects their dual operational context within the cloud–edge hybrid infrastructure. Although they share the same functional role (streaming and routing of data), they are deployed separately at the edge and cloud layers to manage data locality, latency, and security.
The upper message broker operates within the orchestration layer, handling the routing of incoming telemetry, training events, and document updates toward the orchestration engine. In contrast, the lower cloud-symbol-associated broker represents the backend cloud-level message exchange layer that supports inter-institutional synchronization, long-term data storage, and asynchronous delivery of training records across distributed edge nodes.
This architectural split reflects the system’s design principle of decoupling latency-sensitive training orchestration from backend analytics and archival services. The directional flow between these brokers, shown as a one-way arrow, illustrates the push of raw data from the edge environment toward centralized orchestration logic while maintaining modular separation of real-time orchestration and long-term analytics. This dual-broker pattern ensures scalability, modularity, and robustness in multi-institutional deployments while supporting real-time responsiveness for training orchestration at the learner-facing edge.
At the heart of the system, the orchestration engine subscribes to these real-time data streams and performs dynamic alignment between each learner’s current competence profile (LDT) and the regulatory and operational expectations encoded in the ICT. By comparing multi-dimensional competence vectors, the engine identifies gaps, ranks their severity based on threshold deltas across cognitive depth, domain coverage, and operational importance, and selects appropriate instructional resources from the LET. This selection process considers multiple factors, including gap size, prior learner behavior, instructional metadata, and resource fidelity.
Once a resource is selected, it is deployed to the learner through a front-end delivery interface, which may include a traditional learning management system (LMS), virtual reality (VR) or augmented reality systems, or portable CBT platforms. To manage performance and accessibility, the ecosystem employs a hybrid deployment model: latency-sensitive and lightweight resources such as CBT modules are delivered via local edge nodes, while GPU-intensive simulations and VR training modules are rendered through elastic cloud environments. This model ensures scalability, responsiveness, and compatibility with diverse training scenarios and locations.
During each training session, all learner interactions, ranging from diagnostic decisions to simulation paths and time-based performance metrics, are captured and streamed back to the orchestration engine in real time. These interactions are used to update the LDT, refine the learner’s competence vector, and inform the next iteration of gap analysis. Simultaneously, all training events are logged into a secure validation matrix that includes metadata such as scenario identifiers, asset versions, session outcomes, and cryptographic hashes to guarantee audit integrity. This matrix can be queried by instructors, quality managers, or regulatory auditors to validate training relevance and compliance with EASA Part-66 standards.
Security and data governance are integral to the system. All communications are protected via TLS encryption, and role-based access control (RBAC) ensures that users only access the data relevant to their function—be it learner, instructor, auditor, or system administrator. To protect proprietary data, especially in operational twin slices used for training, digital rights management (DRM) is enforced. Furthermore, all sensitive component identifiers are scrubbed during the transformation of full operational twins into training-ready digital slices, thus preserving behavior-critical dynamics while safeguarding OEM intellectual property.
Through this architectural design, the training ecosystem achieves a high degree of automation, adaptability, and regulatory accountability. It enables continuous alignment of instructional resources with both individual learner needs and evolving operational contexts, thereby transforming maintenance training into a responsive and evidence-based process.
Table 1 summarizes the key components of the proposed DTBT ecosystem, along with their core functions, enabling technologies, and roles within the data flow pipeline.

2.3. Four-Level Learning Architecture of the Digital Twin Ecosystem

The training ecosystem is built on a four-level learning architecture that enables progressive, personalized, and regulation-aligned development of aviation maintenance competencies. This structure is embedded within the LET and allows the orchestration engine to deliver content dynamically, based on the severity of detected competence gaps and the learner’s evolving profile. The levels represent a fidelity gradient from static theoretical instruction to full-system digital twin simulations mapped to increasing Bloom’s taxonomy stages and operational complexity (Figure 3).
At the foundation is Level 1, comprising static and prescriptive learning resources. These include traditional CBT modules, digital manuals, procedural checklists, and multimedia tutorials. Content at this level is designed to support fundamental knowledge acquisition, particularly for learners exhibiting high-severity gaps in low-order cognitive domains such as remembering and understanding. Delivery typically occurs via learning management systems, mobile apps, or downloadable documents. Digital twins are absent or represented symbolically, serving primarily a referential function.
Level 2 introduces scripted simulations and fault scenarios with medium fidelity. Learners engage in guided diagnostic sequences or procedural walk-throughs using reduced-function digital twins. These assets simulate predictable faults, procedural errors, or system states, enabling learners to practice applying knowledge in controlled environments. The focus is on mid-level Bloom’s taxonomy objectives, such as applying and analyzing, and the delivery may take the form of interactive browser-based modules or tablet-based tools. The digital twin slices used at this level allow limited input manipulation and visual feedback.
In Level 3, the training experience incorporates operational digital twin playback combined with fault injection. These high-fidelity modules simulate real-world system behavior using synchronized telemetry from actual aircraft, augmented with realistic anomalies. Learners observe and interact with system states under semi-structured fault conditions, practicing higher-order competencies such as system evaluation and procedural validation. This level supports immersive learning through VR-enabled environments or advanced simulators and represents a transition to near-operational realism.
At the highest tier, Level 4, learners interact with adaptive, full-system digital twins that simulate complex operational environments in real time. These environments embed dynamic fault progression, decision consequences, timing tolerances, and behavioral scoring. The scenarios are designed to assess mastery-level performance, with learners required to synthesize and apply procedural knowledge in realistic, time-sensitive situations. The orchestration engine adapts the scenario parameters based on the learner’s inputs, enabling a closed-loop instructional cycle. Delivery takes place in virtual or augmented reality settings, often using headsets or high-performance simulation workstations.
Learners do not progress through these levels in a linear fashion. Instead, the orchestration engine selects the appropriate level for each identified gap, based on gap severity, operational criticality, learner history, and regulatory alignment. For instance, a significant gap in hydraulic actuation understanding may begin with Level 1 theory-based content, proceed through Level 2 fault simulation, and culminate in a Level 4 VR-based full-system diagnostic scenario. This adaptive logic ensures that instructional resources are matched precisely to learner needs while optimizing cost, engagement, and outcome fidelity.
This four-level architecture enables modular deployment, targeted remediation, and measurable skill development. It supports scalability across institutions, traceability through the xAPI-powered validation matrix, and regulatory compliance via alignment with EASA Part-66 modules and Bloom’s-taxonomy-level competence standards. As such, it provides a pedagogically rigorous and technically robust framework for aviation maintenance training in digitally transformed learning environments.

2.4. Content Fidelity Stratification

A critical feature of the proposed training ecosystem is its ability to tailor instructional delivery based on the severity of identified competence gaps through a structured fidelity stratification model (Figure 4). This model categorizes learning assets within LET into three tiers of increasing realism and complexity: low fidelity, medium fidelity, and high fidelity. The selection of content is governed by the orchestration engine, which dynamically aligns training asset fidelity with the learner’s current proficiency level and the nature of the learning objective.
Low-fidelity content is designed to address broad or foundational skill gaps, typically aligned with lower-order cognitive objectives such as recall and comprehension. These assets include traditional CBT modules, digital documentation, instructional videos, and quizzes. For example, in ATA Chapter 24 (Electrical Power), a low-fidelity module may introduce the basic components of the AC power distribution system, common wiring standards, and safety procedures using annotated diagrams and narrated slides. This type of content is particularly effective for new trainees or those who require recertification on standard system knowledge before progressing to interactive tasks.
Medium-fidelity content is suitable for addressing moderate competence gaps, where the learner exhibits partial understanding or inconsistent procedural application. These resources include reduced-function digital twins and semi-interactive simulations that model selected system behaviors with scripted inputs and faults. For instance, in ATA Chapter 29 (Hydraulic Power), a medium-fidelity simulation may allow the learner to operate virtual hydraulic pumps, manipulate selector valves, and identify procedural faults (e.g., loss of pressure due to actuator leaks) within a predefined scenario. This level of fidelity helps reinforce applied diagnostic logic, procedural flow, and cause–effect relationships in moderately complex tasks.
High-fidelity content targets learners with minimal gaps who are preparing for system-level mastery, especially in operationally critical or safety-sensitive areas. These assets are derived from sliced operational digital twins that mirror the behavior of actual aircraft subsystems in real-world conditions, based on telemetry and maintenance records. For example, in ATA Chapter 36 (Pneumatic Systems), a high-fidelity training twin might simulate dynamic pressure changes across multiple bleed air zones during different phases of flight. The scenario would include real-time sensor data, cascading failures, and time-constrained decision points, requiring the learner to conduct a full diagnostic sweep using onboard indications and fault isolation procedures. Such content is typically delivered via immersive virtual or augmented reality interfaces and supports Bloom’s taxonomy’s highest levels—evaluation and synthesis.
Each content tier is associated with metadata tags that define its coverage, fidelity, scenario type, and regulatory mapping. The orchestration engine uses these attributes, along with the learner’s historical performance and current LDT profile, to assign content that is educationally appropriate and computationally efficient.
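To make this tagging concrete, the following sketch models one LET asset as a small record; the field names are illustrative assumptions rather than the system’s actual schema, and the module code reuses the M09.02 example from Section 2.7.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LetAsset:
    """Metadata tags for one LET learning asset (field names are illustrative)."""
    asset_id: str
    fidelity: str                      # "L", "M", or "H"
    skill_targets: tuple[float, ...]   # alignment with the n skill dimensions, each in [0, 1]
    bloom_level: int                   # 1..6
    duration_min: int                  # expected duration in minutes
    regulatory_module: str             # e.g., an EASA Part-66 module code
    version: str = "1.0.0"

# A hypothetical low-fidelity CBT module targeting the emergency-extension skill:
cbt_lg5 = LetAsset("cbt-ata32-lg5", "L", (0.0, 0.0, 0.0, 0.0, 1.0, 0.0),
                   bloom_level=2, duration_min=45, regulatory_module="M09.02")
```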
Learning assets are structured to align with gap severity and ATA-specific training objectives, ensuring targeted, scalable, and regulation-compliant instruction. This approach to fidelity stratification enhances both personalization and resource efficiency while reinforcing the pedagogical connection between simulated behavior and real-world aircraft operations.

2.5. Validation and Learning Record Management

A foundational requirement of aviation maintenance training is the ability to demonstrate regulatory compliance, instructional validity, and traceable learner progression. The proposed training ecosystem addresses this requirement through a robust validation and learning record management framework, designed to ensure that each instructional interaction is verifiable, auditable, and pedagogically aligned (Figure 5).
At the core of this framework is the validation matrix, a structured and immutable ledger that captures the complete metadata of every training session. For each learning event, whether initiated through a CBT module, reduced-fidelity simulator, or high-fidelity operational twin, the system records key attributes including learner ID, scenario ID, gap classification, selected LET asset, resource version, interaction timestamps, performance metrics, and result outcomes. Additionally, digital hash values are appended to ensure content integrity and support forensic-level audits.
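A minimal way to realize the appended hash values is to chain each record to its predecessor with SHA-256, as sketched below; this is an illustrative integrity mechanism under stated assumptions, not the system’s exact ledger format.

```python
import hashlib
import json

def sealed_record(event: dict, prev_hash: str) -> dict:
    """Chain a training record to its predecessor via SHA-256 so that any
    retroactive edit breaks every subsequent hash (tamper evidence)."""
    body = json.dumps({**event, "prev_hash": prev_hash}, sort_keys=True)
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    return {**event, "prev_hash": prev_hash, "hash": digest}

genesis = "0" * 64
r1 = sealed_record({"learner_id": "L2", "scenario_id": "ata32-s07", "result": 0.58}, genesis)
r2 = sealed_record({"learner_id": "L2", "scenario_id": "ata32-s09", "result": 0.66}, r1["hash"])
```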
All learner interactions are tracked using the xAPI standard, which enables granular logging of behaviors such as clicks, decisions, timing, resource transitions, and task completions. These records are continuously streamed into the LDT, updating the trainee’s competence profile in real time. Each completed session triggers an automated evaluation process that recalculates the competence vector, repositions the learner within the ideal skill space, and identifies new or unresolved gaps for the next orchestration cycle.
Instructors and administrators can access these records via role-based dashboards that visualize training progression, fidelity history, assessment scores, and readiness levels per ATA chapter or regulatory module. For quality assurance personnel and auditors, the system enables targeted queries across the validation matrix, for example, retrieving all training sessions tied to ATA Chapter 32 (Landing Gear) that used a specific simulation version or verifying that a cohort has completed all required EASA Part-66 modules at the specified Bloom’s taxonomy level.
The ecosystem supports version control and traceability for all LET assets. Each learning resource carries a version identifier, instructional metadata, and update history. When a resource is revised, due to a regulatory update, OEM change notice, or pedagogical improvement, its new version is registered and tracked. Historical records maintain the linkage between learner activity and the exact content version used, ensuring that all competence assessments are contextualized and valid for their timeframe.
To safeguard data privacy and regulatory integrity, the system enforces strict security and governance protocols. All record streams are encrypted using TLS, stored in tamper-resistant databases, and governed by digital-rights management rules. Learner records are anonymized for research or cross-fleet analytics and are accessible only through role-based permissions compliant with GDPR and industry data standards.
This validation and learning record management architecture transforms the training process from a sequence of isolated events into a continuous, traceable learning trajectory. It empowers stakeholders to track progress, audit instructional quality, and document compliance with aviation safety standards, thereby reinforcing trust, accountability, and operational readiness across the training ecosystem.
Table 2 summarizes the core components of the validation and learning record management framework. It identifies each element’s primary function, enabling technologies, and contribution to the overall integrity and auditability of the training ecosystem.

2.6. Adoption Strategy and Experimental Scope

The adoption of a DTBT ecosystem in aviation maintenance requires a phased, evidence-driven strategy that supports institutional readiness, regulatory compliance, and instructional effectiveness. To ensure manageable deployment and measurable outcomes, the proposed approach emphasizes modular onboarding, iterative validation, and scalable expansion across learning cohorts and technical domains.
The initial stage focuses on pilot implementation within a single ATA chapter or system domain, such as ATA 32 (Landing Gear) or ATA 36 (Pneumatics), where learning assets and operational digital twin slices are already available or easily acquired. A small group of trainees, selected based on prior training history or cohort diversity, is enrolled in a controlled trial using the full orchestration pipeline, including LDT tracking, ICT comparison, adaptive LET content delivery, and validation matrix logging. This limited-scope deployment allows institutions to assess technical feasibility, user engagement, and gap closure rates before scaling to additional systems or learner populations.
Instructional outcomes from the pilot are analyzed across multiple dimensions, including learning efficiency, competence vector progression, resource utilization, and training impact measured against Bloom’s-taxonomy-level attainment and EASA Part-66 thresholds. Feedback is gathered from trainees, instructors, and quality managers through surveys, dashboard analytics, and regulatory audits. Any issues related to fidelity matching, orchestration logic, or platform usability are addressed prior to broader rollout.
In subsequent phases, the ecosystem is expanded across additional ATA chapters, with each new module validated through structured onboarding protocols. As the learning asset library grows, high-fidelity training twins are migrated to the cloud layer, enabling greater resource elasticity and centralized version control. Institutions may begin to adopt blockchain-secured digital credentials or badges to document learner progression and competence gap closure, making qualifications more portable and transparent.
Throughout the adoption cycle, cross-cohort and cross-system analytics are used to identify recurring skill deficits, bottlenecks in instructional delivery, or correlations between learner profiles and training outcomes. These insights inform decisions at both the curriculum level (e.g., which systems require additional fidelity) and the strategic level (e.g., workforce planning or OEM feedback loops).
This strategy allows organizations, whether training academies, airline MRO divisions, or regulatory authorities, to adopt the ecosystem in measured increments, minimizing disruption while maximizing visibility into its instructional and operational value. Rather than replacing legacy systems all at once, this model supports coexistence and gradual integration, ensuring that transformation proceeds at a pace aligned with institutional capacity, instructor acceptance, and regulatory approval cycles.

2.7. Mathematical Framework of Competence Gap Analysis and Content Matching

To operationalize the orchestration logic that underpins the adaptive training loop, this study formalizes a mathematical framework for real-time competence gap detection and content fidelity selection. The framework ensures that the alignment between the LDT and the ICT is not only rule-based, but also quantitatively transparent, scalable, and auditable.
Let $C_L \in \mathbb{R}^n$ represent the learner competence vector, where each element $c_{L,i} \in [0,1]$ denotes the normalized mastery level of a specific skill, as derived from the LDT, and let $C_T \in \mathbb{R}^n$ represent the target competence vector (ICT), where $c_{T,i} \in [0,1]$ defines the regulatory or operational requirement for the $i$-th competence unit. The notation $\mathbb{R}^n$ denotes the $n$-dimensional real vector space, i.e., a vector with $n$ real-valued components.
The competence gap vector is then defined as
$$G = C_T - C_L$$
Each component $g_i$ is classified into a gap severity level based on threshold parameters:
  • High severity if $g_i > \theta_H$
  • Medium severity if $\theta_M < g_i \le \theta_H$
  • Low severity if $0 < g_i \le \theta_M$
  • No gap if $g_i \le 0$
where $\theta_H$ and $\theta_M$ are empirically derived from Bloom’s-taxonomy-level mappings or prior training data (e.g., $\theta_H = 0.5$, $\theta_M = 0.2$).
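A direct translation of this classification into code, using the example thresholds above, might look as follows.

```python
THETA_H, THETA_M = 0.5, 0.2  # example thresholds from the text

def classify_gap(g: float, theta_h: float = THETA_H, theta_m: float = THETA_M) -> str:
    """Map one gap component g_i to its severity class."""
    if g > theta_h:
        return "high"
    if g > theta_m:
        return "medium"
    if g > 0:
        return "low"
    return "none"
```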
For each identified gap $g_i$, the orchestration engine selects a content asset $R_j \in R$ from the learning ecosystem twin (LET), where $R$ denotes the set of available training resources, each covering one or more skill dimensions. Each resource is tagged with metadata:
  • Fidelity level $f_j \in \{L, M, H\}$ (low, medium, high)
  • Skill target vector $s_j \in \mathbb{R}^n$
  • Bloom’s taxonomy level $b_j \in \{1, 2, \ldots, 6\}$
  • Duration estimate $d_j$, which is important for time budgeting and scheduling
  • Regulatory mapping $\rho_j$, which is important for ensuring that content satisfies regulatory constraints.
Only resources whose regulatory mapping matches that of the targeted skill are eligible:
$$R_j \in R_{g_i} \text{ such that } \rho_j = \rho_i$$
where $\rho_i$ is the regulatory module associated with skill $i$ (e.g., M09.02 for the emergency landing gear extension).
Optionally, the constraint
$$d_j \le \Delta t_{\mathrm{available}}$$
applies if the learner has time constraints (e.g., session limits, device availability).
The resource selection function $S$ aims to minimize a multi-criteria cost function:
$$S(g_i) = \arg\min_{R_j \in R} \left( \alpha \cdot \lVert s_j - e_i \rVert_2 + \beta \cdot f_j + \gamma \cdot b_j \right)$$
where $e_i$ is the unit vector for the targeted competence gap $g_i$, and the weights $\alpha, \beta, \gamma \in [0,1]$ are assigned based on instructional policy (e.g., learning outcome priority vs. fidelity cost).
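The minimization can be sketched as below. Since $f_j$ is categorical, a numeric fidelity cost is assumed here via a lookup table, the weight values are arbitrary examples, and the regulatory filter $\rho_j = \rho_i$ is presumed to have been applied to the candidate list beforehand.

```python
import numpy as np

FIDELITY_COST = {"L": 0.0, "M": 0.5, "H": 1.0}  # assumed numeric encoding of f_j

def select_resource(gap_index: int, n_skills: int, candidates: list[dict],
                    alpha: float = 0.6, beta: float = 0.3, gamma: float = 0.1) -> dict:
    """Return the candidate minimizing alpha*||s_j - e_i||_2 + beta*f_j + gamma*b_j.
    Each candidate dict is assumed to carry 'skill_vector', 'fidelity', and 'bloom_level'."""
    e_i = np.zeros(n_skills)
    e_i[gap_index] = 1.0  # unit vector for the targeted competence gap
    def cost(r: dict) -> float:
        return (alpha * float(np.linalg.norm(np.asarray(r["skill_vector"]) - e_i))
                + beta * FIDELITY_COST[r["fidelity"]]
                + gamma * r["bloom_level"])
    return min(candidates, key=cost)
```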
The selected resource $R_j$ is then streamed to the learner, and its effectiveness is measured by observing the post-intervention competence vector $C_L^{\mathrm{new}}$, with
$$\Delta C = C_L^{\mathrm{new}} - C_L$$
This framework enables quantitative tracking of training impact, automated resource matching, and regulatory justification for instructional pathways. It also lays the foundation for simulation-driven validation and model-based audit querying, as all interactions and gap resolutions are recorded within the validation matrix using scenario and vector metadata.

2.8. Experimental Methodology

To evaluate the performance and behavioral characteristics of the proposed digital-twin-based training ecosystem, a simulation-based experimental setup was constructed using a controlled learner cohort and a well-defined training domain. The objective was to replicate realistic orchestration cycles under varying gap severities and learner responsiveness profiles, and to assess how effectively the system adapts, personalizes content, and tracks outcomes across multiple iterations.
The simulation focused on ATA Chapter 32 (Landing Gear Systems), with six key skill areas serving as the competence vector dimensions. A cohort of five virtual learners was created, each initialized with a unique competence profile reflecting plausible variations in baseline knowledge and skill. These profiles were generated using a bounded normal distribution and normalized to a [0, 1] scale. The target competence vector, representing regulatory and operational training expectations, remained fixed for all learners.
During each simulation cycle, learners were evaluated against the target profile to identify competence gaps. These gaps were then classified into severity levels, which guided the orchestration engine in selecting content assets from the LET. The resources varied in fidelity and instructional design, ranging from low-complexity CBT modules to high-fidelity operational digital twin scenarios. Content assignment was dynamically adapted in each cycle based on the learner’s evolving profile and prior training outcomes.
The simulation was run over eight discrete iterations for each learner. At each step, the learner engaged with the assigned content, and their updated competence was recorded based on a weighted progression model influenced by content fidelity and individual responsiveness. All training transactions, including the learner ID, skill domain, assigned content, pre- and post-competence scores, and scenario metadata, were logged in a validation matrix that ensured traceability and auditability.
This methodology allowed for precise observation of how the ecosystem’s orchestration logic performs over time, how competence gaps evolve with fidelity-matched training, and how the system scales personalization within a structured, regulation-aligned framework. The results, detailed in Section 3, demonstrate both quantitative improvements in learner competence and qualitative evidence of adaptive behavior aligned with instructional intent.

3. Results

3.1. Initial Learner Competence Profiling

To evaluate the practical application of the proposed DTBT training ecosystem, a simulated learner cohort was created based on a representative training scenario involving ATA Chapter 32: Landing Gear Systems. This chapter was selected due to its high operational relevance and structured procedural content, which aligns well with the modular design of the ideal competence twin.
The target competence vector $C_T$ was defined across six key skill areas, each normalized to a scale between 0 and 1, reflecting the expected level of mastery for regulatory compliance and operational safety. These areas included the following:
  • LG-1: System structure and component recognition.
  • LG-2: Retraction/extension sequencing logic.
  • LG-3: Hydraulic actuation mechanisms.
  • LG-4: Landing gear position indication and sensor logic.
  • LG-5: Manual override and emergency extension procedures.
  • LG-6: Typical failure modes and diagnostic interpretation.
Each trainee’s initial profile was generated to simulate realistic variance in baseline skill levels. Five virtual learners, $L_1$ through $L_5$, were assigned individual competence vectors $C_L^{(k)} \in \mathbb{R}^6$, representing their current knowledge and skill status across the six areas. These values were derived as random samples from a bounded normal distribution $\mathcal{N}(\mu = 0.45, \sigma = 0.15)$, clipped to $[0,1]$, simulating partial or incomplete learning.
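Under these assumptions, the cohort initialization reduces to a few lines of numpy; the fixed seed is added here only for reproducibility and is not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed for reproducibility (an assumption)

def initial_profiles(n_learners: int = 5, n_skills: int = 6,
                     mu: float = 0.45, sigma: float = 0.15) -> np.ndarray:
    """Sample learner competence vectors from N(mu, sigma), clipped to [0, 1]."""
    return np.clip(rng.normal(mu, sigma, size=(n_learners, n_skills)), 0.0, 1.0)

C_L = initial_profiles()  # rows: learners L1..L5; columns: skills LG-1..LG-6
```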
Table 3 presents the normalized competence vectors for five simulated learners across six skill domains in ATA Chapter 32 (Landing Gear). The target vector represents ideal mastery levels, while each learner vector shows the initial competence profile used in simulation.
This profile illustrates the multi-dimensional competence landscape against which each learner’s readiness is evaluated. Across the cohort, notable weaknesses were observed in LG-3 (hydraulics) and LG-5 (emergency extension), indicating systemic knowledge gaps in subsystems involving actuation and abnormal scenarios. These gaps became the primary focus of intervention during the orchestration cycles.

3.2. Gap Detection and Classification

Once learner competence vectors were initialized, the system’s orchestration engine proceeded to identify discrepancies between the learner’s current state and the predefined target competence profile. These discrepancies, referred to as competence gaps, serve as the foundation for adaptive content delivery within the digital-twin-based training ecosystem. For each skill dimension under consideration, a gap value was computed by subtracting the learner’s score from the corresponding target threshold. A positive result indicated a shortfall in competence, while zero or negative values reflected full or surplus mastery.
To enable instructional differentiation, each gap was categorized according to its severity. The classification thresholds were empirically defined: a gap greater than 0.5 was deemed high severity, values between 0.2 and 0.5 were considered medium, and values from 0 to 0.2 were labeled low severity. If the learner’s score exceeded the target, the system registered no gap. This classification schema was used to match the learner’s developmental needs with the most appropriate content fidelity level, ensuring that training remained efficient, targeted, and context sensitive.
The result of this analysis was a multi-dimensional gap profile for each learner, indicating not just the magnitude of skill deficits but also their relative instructional urgency. For example, a high-severity gap in emergency extension procedures (LG-5) would prompt foundational remedial training, whereas a low-severity gap in component recognition (LG-1) would trigger advanced content to reinforce and consolidate knowledge. The gap profiles were stored as intermediate vectors in the orchestration logic and fed forward into the adaptive content assignment module.
In addition to informing content selection, the classified gaps were logged into the validation matrix alongside learner ID, skill domain, and timestamp metadata. This ensured that every instructional decision made by the system could be traced to a quantifiable learning need. By structuring this process as a formalized detection and classification phase, the system preserved consistency in how instructional pathways were initiated, enabling both learner personalization and downstream auditing of training efficacy.
Figure 6 presents a chart that summarizes the distribution of initial competence gaps across five simulated learners in six critical skill domains under ATA Chapter 32. Each bar is subdivided by severity level as determined by the system’s classification thresholds.
This severity breakdown serves as a diagnostic foundation for the orchestration engine, allowing the system to allocate training assets in a gap-sensitive and learner-specific manner. It also facilitates instructional planning and regulatory oversight by offering a transparent snapshot of readiness levels prior to training intervention.

3.3. Adaptive Content Assignment

Following gap detection and severity classification, the orchestration engine dynamically assigned training resources tailored to each learner’s specific competence profile. The core objective was to optimize instructional effectiveness by aligning the severity of each skill gap with the fidelity of the training asset chosen from the LET. This process accounts for not only the magnitude of skill shortfall but also the skill’s regulatory relevance and learner history.
Each assignment decision was governed by a resource matching function that prioritized content along three dimensions: (1) instructional fidelity, (2) alignment with the targeted skill, and (3) efficiency with respect to learner time and system resources. High-severity gaps triggered low-fidelity, foundational content such as CBT modules, designed to build conceptual understanding. Medium-severity gaps were matched with reduced-fidelity simulation twins, often containing scripted fault trees to facilitate diagnostic reasoning. Low-severity gaps, reflecting partial or near-complete mastery, were addressed using high-fidelity operational digital twins that replay live aircraft data in a scenario-driven format.
Table 4 presents the assignment outcomes for five simulated learners across six skill domains. Each cell indicates the specific content type allocated based on the corresponding gap severity. For example, Learner 1, who exhibited high-severity gaps in LG-1 and LG-5, received CBT content for those skills. Conversely, Learner 4, with only low-severity gaps in LG-2 and LG-6, was assigned operational digital twin modules for experiential reinforcement.
Figure 7 illustrates the decision logic used by the orchestration engine to select appropriate training content based on competence gap severity. The process begins with a learner’s current skill vector being evaluated against the target profile, followed by classification into severity levels. Based on the severity, the engine routes the learner to one of three fidelity tiers: low (CBT), medium (simulations), or high (operational twins). Each assignment is then logged into the validation matrix to support traceability and audit compliance.

3.4. Simulation Setup and Process Flow

To evaluate the effectiveness of the digital-twin-based orchestration engine, a discrete-time simulation was developed to model learner progression over multiple training cycles. This simulation captures the iterative nature of the ecosystem’s feedback loop, where competence gaps are continuously assessed, content is assigned, and learning outcomes are recorded into the validation matrix.
The simulation setup is based on the normalized competence framework introduced in Section 2.7 and uses the learner vectors established in Section 3.1. The process is implemented in a modular structure consisting of five key stages executed in a loop across multiple iterations.
Step 1. Initialization.
Each learner $L_k$ is assigned an initial competence vector $C_L^{(k)} = \left( c_{L,1}^{(k)}, \ldots, c_{L,n}^{(k)} \right) \in [0,1]^n$, representing mastery in $n = 6$ skill areas within ATA Chapter 32 (Landing Gear). These vectors are compared against a predefined target vector $C_T$, which encodes the ideal competence thresholds drawn from EASA Part-66 modules and Bloom’s-taxonomy-level mappings.
Step 2. Gap computation and severity classification.
For each skill domain $i$, the competence gap is computed as
$$g_i^{(k)} = c_{T,i} - c_{L,i}^{(k)}$$
Gap values are classified into four categories based on the severity thresholds: high ($g_i > 0.5$), medium ($0.2 < g_i \le 0.5$), low ($0 < g_i \le 0.2$), and no gap ($g_i \le 0$). This classification determines the fidelity level of training content to be delivered in the next step.
Step 3. Resource assignment via orchestration engine.
Based on gap severity, each learner is assigned content from the LET. Each training asset includes a fidelity multiplier $\phi_{f_i} \in \{0.3, 0.5, 0.8\}$ for low, medium, and high fidelity, respectively. These values influence the magnitude of learning gains during competence updates.
Step 4. Competence update.
After simulated engagement with the assigned content, the learner’s competence in skill domain i is updated using
$$c_{L,i}(t+1) = c_{L,i}(t) + \lambda_k \cdot \phi_{f_i} \cdot g_i(t)$$
where $\lambda_k \in [0.1, 0.5]$ is a responsiveness coefficient specific to each learner, $\phi_{f_i}$ is the fidelity multiplier, and $g_i(t)$ is the gap for skill $i$ at iteration $t$.
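Expressed in code, one update cycle over all six skill domains could look like this minimal sketch; gaps are floored at zero so that surplus mastery triggers no change, consistent with the “no gap” category, and the result is clipped to [0, 1] (an implementation assumption).

```python
import numpy as np

FIDELITY_MULT = {"L": 0.3, "M": 0.5, "H": 0.8}  # phi values from Step 3

def update_competence(c: np.ndarray, c_target: np.ndarray,
                      fidelity: list[str], lam: float) -> np.ndarray:
    """Apply c_i <- c_i + lambda_k * phi_{f_i} * g_i for every skill domain at once."""
    g = np.maximum(c_target - c, 0.0)                     # only positive gaps drive learning
    phi = np.array([FIDELITY_MULT[f] for f in fidelity])  # per-domain fidelity multiplier
    return np.clip(c + lam * phi * g, 0.0, 1.0)
```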
Step 5. Logging and validation.
Each iteration produces an xAPI-compliant training record, which is logged in the validation matrix. Logged metadata includes the following:
  • Scenario and learner IDs
  • Gap severity classification
  • Resource ID and fidelity tier
  • Pre- and post-training competence scores
  • Completion time and confidence interval (if simulated)
This information is used for downstream auditing, analytics, and performance forecasting.
The simulation loop (Figure 8) runs for eight iterations per learner, allowing observation of convergence trends, learning velocity, and system responsiveness. Subsequent sections will analyze these results numerically and visually, demonstrating how competence gaps are progressively reduced and personalized instruction dynamically evolves.
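Combining the pieces sketched above (classify_gap from Section 2.7 and update_competence from Step 4), the whole loop can be outlined as follows. The severity-to-fidelity mapping encodes the policy described in Section 3.3, where larger gaps receive lower-fidelity foundational content; the log structure is a simplified stand-in for the full validation matrix record.

```python
import numpy as np

SEVERITY_TO_FIDELITY = {"high": "L", "medium": "M", "low": "H"}  # policy from Section 3.3

def run_simulation(C: np.ndarray, c_target: np.ndarray, lambdas: np.ndarray,
                   iterations: int = 8) -> list[dict]:
    """Run the Step 1-5 loop and return one validation-matrix row per training event."""
    C = C.copy()
    log = []
    for t in range(iterations):
        for k in range(C.shape[0]):
            gaps = c_target - C[k]
            # "none" gaps get an arbitrary tier; their update term is zero anyway
            fidelity = [SEVERITY_TO_FIDELITY.get(classify_gap(g), "L") for g in gaps]
            pre = C[k].copy()
            C[k] = update_competence(C[k], c_target, fidelity, lambdas[k])
            log.append({"iteration": t, "learner": k, "fidelity": fidelity,
                        "pre": pre.round(3).tolist(), "post": C[k].round(3).tolist()})
    return log
```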

3.5. Learning Progression Results

The discrete-time simulation described in the previous section was executed over five orchestration cycles per learner, with content assignments dynamically adjusted at each step according to gap severity. This section presents the results of the simulation, highlighting how the learners’ competence vectors evolved over time under the influence of fidelity-matched instructional content.
Each learner’s competence vector $C_L^{(k)}$ was updated iteratively based on the assigned content’s fidelity multiplier $\phi_{f_i}$, the magnitude of the initial gap $g_i(t)$, and an individual responsiveness factor $\lambda_k$. The responsiveness coefficient was randomly assigned per learner from the interval $[0.2, 0.5]$ to reflect individual variation in learning effectiveness.
The primary performance metric was the competence gap norm, computed as the Euclidean distance between the learner’s evolving competence vector and the target vector:
$$\lVert C_T - C_L(t) \rVert_2 = \sqrt{ \sum_{i=1}^{n} \left( c_{T,i} - c_{L,i}(t) \right)^2 }$$
where $C_T$ is the target competence vector, $C_L(t)$ is the learner’s current competence vector at iteration $t$, and $\lVert \cdot \rVert_2$ is the Euclidean ($L_2$) norm, which computes the distance between the two vectors.
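This metric is a one-liner in numpy:

```python
import numpy as np

def gap_norm(c_target: np.ndarray, c_learner: np.ndarray) -> float:
    """Euclidean (L2) distance between the target and current competence vectors."""
    return float(np.linalg.norm(c_target - c_learner))
```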
This metric captures overall proximity to the regulatory competence standard across all six skill domains. Figure 9 displays the trajectory of each learner’s gap norm over five iterations. All learners showed a consistent downward trend in their gap norms, indicating progressive closure of identified competence gaps.
The rate of learning varied among individuals, reflecting differences in initial competence, responsiveness λ k , and content fidelity assigned. Learner 1, who started with the highest overall gaps, required multiple iterations with low- and medium-fidelity content before being assigned high-fidelity digital twin scenarios in iterations 4 and 5. In contrast, Learner 5, who began with relatively minor deficits, progressed rapidly and was transitioned to consolidation scenarios by the third cycle.
The simulation also logged the distribution of content fidelity levels used per learner. Learners with initially high-severity gaps received a greater share of low- and medium-fidelity interventions in early cycles, while low-severity learners engaged primarily with high-fidelity operational twins. This supports the effectiveness of the adaptive orchestration engine in allocating resources efficiently and pedagogically appropriately.
Moreover, all training events and associated outcomes were captured in the validation matrix, including content version hashes, learner interaction timestamps, and post-assessment scores. This ensures full auditability and enables downstream analysis of training pathway effectiveness, instructional efficiency, and regulatory traceability.

3.6. System Behavior and Validation Matrix Output

To complete the simulation cycle, the training ecosystem logs every instructional interaction, decision, and outcome into a structured validation matrix. This matrix serves as the central audit mechanism for regulatory compliance, system transparency, and instructional traceability. It captures the contextual and performance metadata associated with each orchestration decision, forming a verifiable training history that can be queried by instructors, auditors, and certification bodies.
The validation matrix is a structured, tamper-evident data store that captures detailed logs of every training interaction executed through the orchestration engine. It acts as the digital audit trail of the system, providing visibility into what was taught, to whom, using what resource, at what time, and with what outcome. This matrix plays a central role in verifying instructional alignment with regulatory standards (e.g., EASA Part-66), facilitating continuous improvement, and enabling third-party audits.
Each row in the matrix represents a single, discrete training transaction—a learner engaging with a content asset to address a specific competence gap. Columns (fields) in the matrix include the following elements:
  • Learner ID—Pseudonymized identifier.
  • Skill Area—ATA-coded domain (e.g., LG-3: Hydraulic Actuation).
  • Gap Severity—Classification at the time of content selection.
  • Resource ID—Assigned instructional asset with version hash.
  • Fidelity Level—Low, medium, or high.
  • Pre/Post Scores—Normalized competence values before and after training.
  • Compliance Flags—Tags for EASA Part-66 module mapping.
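For concreteness, a single row could be serialized as the following record; every value here is an illustrative assumption, not output from the reported simulation.

```python
row = {
    "learner_id": "L2-7f3a",                        # pseudonymized identifier
    "skill_area": "LG-3",                           # hydraulic actuation mechanisms
    "gap_severity": "medium",
    "resource_id": "sim-ata32-hyd@sha256:9b1c...",  # asset ID with (truncated) version hash
    "fidelity": "M",
    "pre_score": 0.41,
    "post_score": 0.58,
    "compliance_flags": ["EASA-Part-66:M11"],       # hypothetical module mapping tag
}
```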
A sample output from the simulation is shown in Table 5, which records five selected events from Learner 2 across different skill domains and iterations. These entries illustrate how the orchestration engine dynamically adjusts training fidelity and records measurable gains aligned with the learner’s competence trajectory.
This matrix provides multiple benefits:
  • Every decision, asset, and learner outcome is verifiable with a version-controlled record.
  • Regulators can query training activity by module, scenario, or skill area to confirm compliance.
  • System designers and instructors can identify which resources yield the highest learning gains or where instructional strategies may need revision.
  • The matrix can be used for cohort-level skill gap heatmaps, training efficiency dashboards, or federated reporting across institutions.
From a systems perspective, the simulation confirms that the orchestration engine behaves as designed (a simplified sketch of this severity-to-fidelity logic follows the list):
  • Learners with larger gaps receive foundational resources.
  • Gains are progressively achieved in a personalized and trackable manner.
  • All training interactions are logged in a format suitable for automated review and continuous improvement.
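For concreteness, the sketch below classifies a target-normalized gap and maps it to the content categories of Table 4. The 0.2 and 0.4 thresholds and the one-to-one severity mapping are assumptions; the actual engine also weighs Bloom’s taxonomy level and scenario complexity, so Tables 4 and 5 will not follow this rule in every cell.

```python
def classify_severity(c_target: float, c_learner: float) -> str:
    """Classify a gap normalized by the regulatory target value.

    The 0.2 / 0.4 thresholds are illustrative assumptions, not the
    engine's calibrated values.
    """
    g = max(c_target - c_learner, 0.0) / c_target
    if g >= 0.4:
        return "High"
    if g >= 0.2:
        return "Medium"
    return "Low"

# Assumed mapping: severe gaps start on foundational low-fidelity content,
# while near-target learners consolidate on high-fidelity operational twins.
FIDELITY_BY_SEVERITY = {
    "High": "CBT Module",
    "Medium": "Reduced-Fidelity Twin",
    "Low": "Operational Twin",
}

# Learner 1, skill LG-1 (Table 3): gap (0.90 - 0.40) / 0.90 = 0.56 -> "High"
severity = classify_severity(0.90, 0.40)
print(severity, "->", FIDELITY_BY_SEVERITY[severity])  # High -> CBT Module
```

Applied to Learner 1’s LG-1 entry in Table 3, the sketch reproduces the CBT Module assignment shown in Table 4.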
With this validation mechanism in place, the DTBT ecosystem not only adapts dynamically to learner needs but also provides the infrastructure for data-driven certification, enabling aviation organizations to meet future regulatory expectations in a digitally transformed training environment.
To ensure that real-time orchestration and validation processes remain computationally efficient, the system architecture integrates both edge computing elements and latency-aware task scheduling. Low-latency operations such as competence vector updates and fidelity tier assignments are performed on-site or via local server clusters, while higher-complexity analytics (e.g., session trace audits, performance forecasting) are routed through cloud services during non-critical intervals. This hybrid cloud–edge topology minimizes bandwidth consumption and prevents bottlenecks during high-frequency user interaction.
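A minimal sketch of this latency-aware routing policy is shown below; the task names, latency budgets, and one-second threshold are assumptions chosen to illustrate the edge/cloud split, not measured system parameters.

```python
from enum import Enum

class Tier(Enum):
    EDGE = "edge"    # on-site or local server cluster, latency-critical work
    CLOUD = "cloud"  # off-site analytics run during non-critical intervals

# Assumed per-task latency budgets in milliseconds (illustrative values)
LATENCY_BUDGET_MS = {
    "competence_vector_update": 50,
    "fidelity_tier_assignment": 50,
    "session_trace_audit": 60_000,
    "performance_forecasting": 60_000,
}

def route_task(task: str, edge_threshold_ms: int = 1_000) -> Tier:
    """Send tasks with tight latency budgets to the edge; defer the rest to cloud."""
    return Tier.EDGE if LATENCY_BUDGET_MS[task] <= edge_threshold_ms else Tier.CLOUD

assert route_task("competence_vector_update") is Tier.EDGE
assert route_task("performance_forecasting") is Tier.CLOUD
```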
Furthermore, data transmissions use event-driven protocols (e.g., MQTT) and compressed xAPI log formats to reduce overhead without sacrificing granularity. The orchestration engine operates on pre-processed metadata rather than full simulation logs, further optimizing runtime responsiveness. Preliminary deployment benchmarks indicate that orchestration decisions are executed within sub-second latency even in multi-user sessions, supporting practical classroom use without perceived delay.
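The fragment below sketches how a gzip-compressed xAPI statement might be published over MQTT using the paho-mqtt client (the version 2.x constructor is assumed); the broker hostname, topic name, and statement fields are placeholders, not the deployment’s actual configuration.

```python
import gzip
import json
import paho.mqtt.client as mqtt

def publish_xapi(client: mqtt.Client, statement: dict) -> None:
    """Publish one gzip-compressed xAPI statement to cut payload overhead."""
    payload = gzip.compress(json.dumps(statement).encode("utf-8"))
    client.publish("dtbt/xapi/statements", payload, qos=1)  # hypothetical topic

statement = {  # minimal xAPI-style actor/verb/object triple
    "actor": {"account": {"name": "L2"}},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
    "object": {"id": "urn:dtbt:resource:R_SIM_203.v1"},
}

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.connect("broker.local", 1883)                    # assumed edge broker
publish_xapi(client, statement)
```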

4. Discussion

4.1. Instructional Outcomes and Design Implications

The instructional outcomes of this study underscore the potential of the proposed digital-twin-based training ecosystem to deliver adaptive, evidence-driven learning experiences within the regulated context of aircraft maintenance education. Unlike conventional training platforms that offer static instructional sequences, the developed system dynamically constructs personalized learning pathways by aligning learner competence gaps with content fidelity and scenario complexity.
This orchestration is not merely a technical routing mechanism, but an instructional logic grounded in pedagogical precision. The system’s capacity to differentiate between procedural, conceptual, and psychomotor deficits and to assign appropriate training content enables targeted remediation that supports skill mastery and knowledge retention. Moreover, the competence gap normalization strategy embedded in the orchestration engine ensures that learning progression is calibrated to both learner profile and curriculum structure, enhancing instructional equity across varied cohorts.
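One plausible formalization of this normalization, consistent with the severity classes used in Section 3 and with the Python sketch above, is the target-relative form below; the functional form and the thresholds τ₁, τ₂ are assumptions, not an equation stated by the system specification.

```latex
g_k = \frac{\max\left(C_{T,k} - C_{L,k},\, 0\right)}{C_{T,k}}, \qquad
\operatorname{severity}(g_k) =
\begin{cases}
\text{Low},    & g_k < \tau_1,\\
\text{Medium}, & \tau_1 \le g_k < \tau_2,\\
\text{High},   & g_k \ge \tau_2,
\end{cases}
```

so that learners are compared on the fraction of the regulatory target still outstanding, which keeps gaps commensurable across skill domains with different target values.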
From a design perspective, the modular layering of the ecosystem supports scalability and institutional integration. The four-level architecture allows training centers to implement the system incrementally, beginning with competence tracking and low-fidelity assets, and expanding toward fully integrated scenario orchestration and validation. Importantly, the validation matrix not only serves a monitoring function but also acts as a data source for instructional analytics, enabling instructors to fine-tune delivery and training design based on actual learning dynamics.
These outcomes signal a shift toward a competence-centric training paradigm in aviation education, where instructional design is increasingly informed by real-time performance data, digital twin representations, and fidelity-aware decision support. As such, the framework offers not just an instructional tool, but a foundation for pedagogical transformation aligned with evolving regulatory expectations and operational demands.

4.2. Deployment Model

The deployment model proposed in this study reflects a shift from traditional linear content delivery toward orchestrated, data-driven learning ecosystems. Central to this model is the ability to deploy training resources dynamically based on competence diagnostics, scenario fidelity, and learner response history. This structure enables training institutions to optimize their instructional resources without compromising regulatory or pedagogical rigor.
Rather than imposing a monolithic infrastructure, the framework supports modular integration into existing Part-147 training environments. Figure 10 illustrates how the system can be incrementally deployed through hybrid configurations combining on-premise learning assets with cloud-based orchestration engines. This hybrid architecture allows institutions with varying digital maturity to gradually adopt orchestration logic, starting with scenario assignment matrices and validation tracking, before scaling up to include real-time learner twin modeling.
One of the most impactful design innovations is the orchestration model’s fidelity-sensitive resource allocation. Figure 11 visualizes how learners are dynamically routed to appropriate simulation tiers based on gap severity and learning objectives. This logic enables the selective deployment of high-fidelity simulations only when pedagogically necessary, addressing both cost-efficiency and instructional relevance. Such stratification not only conserves scarce VR resources but also aligns with the principles of adaptive learning and cognitive load optimization.
Deployment proceeds in four phases. Phase 1 establishes a pilot use case with full orchestration logic on a limited module. Phase 2 evaluates training effectiveness through gap reduction, learner feedback, and regulatory alignment. Phase 3 expands the system to additional ATA chapters and cohorts, introducing full-scale analytics and asset control. Phase 4 focuses on cross-cohort learning analysis, digital credentialing, and long-term curriculum optimization.
Beyond technical implementation, the deployment model emphasizes traceability, transparency, and interoperability. The system’s validation matrix functions as both an audit trail and a feedback mechanism, ensuring that every learning interaction can be monitored, reviewed, and optimized. This model anticipates future demands from aviation authorities for evidence-based validation of simulation-based training and provides a technical foundation for integration with digital credentialing platforms and regulatory oversight tools.

4.3. Limitations, Challenges, and Future Research Directions

While the DTBT ecosystem introduced in this study marks an advancement in aviation maintenance education, several limitations and challenges emerged throughout its conceptualization and experimental deployment.
One of the primary challenges lies in the creation and maintenance of high-fidelity training assets. While fidelity stratification allows efficient allocation of content, the development of VR simulations, interactive digital twins, and context-rich scenarios remains resource-intensive. This issue is particularly acute for smaller institutions that lack the infrastructure or OEM partnerships necessary to produce scalable, immersive training assets. Similarly, while the orchestration engine automates many instructional decisions, it introduces a steep onboarding curve for instructors. Educators must shift from conventional lesson planning to interpreting real-time competence vectors, managing dynamic gap feedback, and understanding orchestration logic.
Another limitation concerns regulatory and cultural variability across international jurisdictions. While the system was aligned with EASA Part-66 standards, global deployment may necessitate localized competence models and audit frameworks to comply with different regulatory bodies or training cultures. Moreover, the integration of real aircraft telemetry and maintenance records into the learning ecosystem presents cybersecurity and privacy risks. Despite anonymization procedures and digital rights management, additional governance mechanisms will be necessary to prevent intellectual property exposure or unauthorized use of sensitive data.
The current validation was conducted on a limited cohort of learners and within a restricted set of ATA chapters. Although results demonstrated promising trends in adaptive learning and gap closure, broader generalizability will require multi-institutional deployments across diverse content areas, aircraft systems, and learner populations. Only then can the system’s scalability and pedagogical effectiveness be fully assessed under operational conditions.
Considering these limitations, several avenues for future research have emerged. One direction is the development of federated learning frameworks that allow multiple training organizations to share competence analytics without exposing raw learner data. This would enable large-scale curriculum optimization and benchmarking across institutions. Another promising direction is the evolution of dynamic competence modeling, where skill vectors are treated as temporally evolving profiles. Such modeling could account for learning decay, transfer effects, and readiness forecasting, opening pathways for just-in-time retraining and recertification.
The ecosystem also stands to benefit from improved cross-fleet simulation integration, enabling training twins built for one aircraft type to be modularly adapted to others. Combined with digital credentialing backed by blockchain technologies, this would support tamper-proof certification and decentralized skill verification. Additionally, future work should explore how instructors can collaborate with the orchestration engine through intelligent dashboards that translate real-time learning data into actionable insights. This would transform instructors from content deliverers to adaptive learning strategists.

5. Conclusions

This study presents a comprehensive digital-twin-based training ecosystem designed to address the evolving demands of aviation maintenance education in a digitally transformed operational landscape. By integrating learner digital twins, ideal competence twins, and a learning ecosystem twin into a unified orchestration framework, the system enables real-time gap analysis, personalized content delivery, and continuous alignment with regulatory standards such as EASA Part-66. The ecosystem advances current training practices by replacing static instructional models with a dynamic, data-driven architecture that adapts to individual learner profiles and instructional needs.
A key strength of the proposed system lies in its ability to stratify training content by fidelity and match it precisely to the severity of identified competence gaps. This ensures that learners receive pedagogically appropriate and resource-efficient training while progressing toward mastery across technical domains. The inclusion of a validation matrix and xAPI-based logging framework provides full auditability of instructional decisions, supporting regulatory compliance, quality assurance, and cross-institutional benchmarking.
The simulation-based evaluation demonstrated the system’s capacity to accelerate learning progression, efficiently allocate content resources, and support fine-grained instructional personalization. Learners showed measurable improvement across competence domains, with training pathways dynamically adjusted based on their real-time performance data. Moreover, the cloud–edge hybrid deployment model ensures that the ecosystem can scale across training institutions of varying sizes and technological capabilities without sacrificing performance or data security.
The DTBT ecosystem transforms aviation maintenance training into a closed-loop, adaptive, and evidence-based process. It not only enhances instructional effectiveness and learner engagement but also meets the growing need for traceable, competence-aligned, and scalable training solutions in the aviation sector.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Boeing. 2023 Pilot and Technician Outlook; Boeing Commercial Airplanes: Seattle, WA, USA, 2023. Available online: https://www.boeing.com/commercial/market/pilot-technician-outlook (accessed on 6 June 2025).
  2. Grieves, M. Digital Twin: Manufacturing Excellence Through Virtual Factory Replication; Whitepaper; University of Central Florida: Orlando, FL, USA, 2015.
  3. Aheleroff, S.; Xu, X.; Zhong, R.Y.; Lu, Y. Digital Twin as a Service (DTaaS) in Industry 4.0: An Architecture Reference Model. Adv. Eng. Inform. 2021, 47, 101225.
  4. Shin, M.H. Effects of Project-Based Learning on Students’ Motivation and Self-Efficacy. Engl. Teach. 2018, 73, 95–114.
  5. Kwok, P.K.; Yan, M.; Qu, T.; Lau, H.Y. User Acceptance of Virtual Reality Technology for Practicing Digital Twin-Based Crisis Management. Int. J. Comput. Integr. Manuf. 2021, 34, 874–887.
  6. Geng, R.; Li, M.; Hu, Z.; Han, Z.; Zheng, R. Digital Twin in Smart Manufacturing: Remote Control and Virtual Machining Using VR and AR Technologies. Struct. Multidiscip. Optim. 2022, 65, 321.
  7. Madni, A.M.; Erwin, D.; Madni, A. Exploiting Digital Twin Technology to Teach Engineering Fundamentals and Afford Real-World Learning Opportunities. In Proceedings of the 2019 ASEE Annual Conference & Exposition, Tampa, FL, USA, 16–19 June 2019.
  8. Krupas, M.; Kajati, E.; Liu, C.; Zolotova, I. Towards a Human-Centric Digital Twin for Human–Machine Collaboration: A Review on Enabling Technologies and Methods. Sensors 2024, 24, 2232.
  9. Hänggi, R.; Nyffenegger, F.; Ehrig, F.; Jaeschke, P.; Bernhardsgrütter, R. Smart Learning Factory–Network Approach for Learning and Transfer in a Digital & Physical Set Up. In Proceedings of the PLM 2020, Rapperswil, Switzerland, 5–8 July 2020; Springer: Cham, Switzerland, 2020; pp. 15–25.
  10. Shi, T. Application of VR Image Recognition and Digital Twins in Artistic Gymnastics Courses. J. Intell. Fuzzy Syst. 2021, 40, 7371–7382.
  11. Zaballos, A.; Briones, A.; Massa, A.; Centelles, P.; Caballero, V. A Smart Campus’ Digital Twin for Sustainable Comfort Monitoring. Sustainability 2020, 12, 9196.
  12. Ahuja, K.; Shah, D.; Pareddy, S.; Xhakaj, F.; Ogan, A.; Agarwal, Y.; Harrison, C. Classroom Digital Twins with Instrumentation-Free Gaze Tracking. In Proceedings of the 2021 CHI Conference, Yokohama, Japan, 8–13 May 2021; pp. 1–9.
  13. Bevilacqua, M.G.; Russo, M.; Giordano, A.; Spallone, R. 3D Reconstruction, Digital Twinning, and Virtual Reality: Architectural Heritage Applications. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Christchurch, New Zealand, 12–16 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 92–96.
  14. Halverson, L.R.; Graham, C.R. Learner Engagement in Blended Learning Environments: A Conceptual Framework. Online Learn. 2019, 23, 145–178.
  15. Zheng, X.; Lu, J.; Kiritsis, D. The Emergence of Cognitive Digital Twin: Vision, Challenges and Opportunities. Int. J. Prod. Res. 2022, 60, 7610–7632.
  16. Hazrat, M.A.; Hassan, N.M.S.; Chowdhury, A.A.; Rasul, M.G.; Taylor, B.A. Developing a Skilled Workforce for Future Industry Demand: The Potential of Digital Twin-Based Teaching and Learning Practices in Engineering Education. Sustainability 2023, 15, 16433.
  17. Kabashkin, I. Digital Twin Framework for Aircraft Lifecycle Management Based on Data-Driven Models. Mathematics 2024, 12, 2979.
  18. Wang, Z.; Wang, Y.; Wang, X.; Yang, K.; Zhao, Y. A Novel Digital Twin Framework for Aeroengine Performance Diagnosis. Aerospace 2023, 10, 789.
  19. Zaccaria, V.; Stenfelt, M.; Aslanidou, I.; Kyprianidis, K.G. Fleet Monitoring and Diagnostics Framework Based on Digital Twin of Aeroengines. In Proceedings of the ASME Turbo Expo, Oslo, Norway, 11–15 June 2018; Volume 6.
  20. PTC. Next-Gen Solutions for Aerospace and Defense. 2023. Available online: https://www.ptc.com/en/industries/aerospace-and-defense (accessed on 30 June 2025).
  21. IBM. Transforming Aviation with IBM Maximo Digital Twin Exchange. Available online: https://www.ibm.com/products/maximo (accessed on 30 June 2025).
  22. Alasim, F.; Almalki, H. Virtual Simulation-Based Training for Aviation Maintenance Technicians: Recommendations of a Panel of Experts. SAE Int. J. Adv. Curr. Pract. Mobil. 2021, 3, 1285–1292.
  23. Charles River Analytics. Maintenance Training Based on an Adaptive Game-Based Environment Using a Pedagogic Interpretation Engine (MAGPIE). Available online: https://cra.com/blog/maintenance-training-based-on-an-adaptive-game-based-environment-using-a-pedagogic-interpretation-engine-magpie/ (accessed on 6 June 2025).
  24. Wu, W.-C.; Vu, V.-H. Application of Virtual Reality Method in Aircraft Maintenance Service—Taking Dornier 228 as an Example. Appl. Sci. 2022, 12, 7283.
  25. Lufthansa Technik. AVIATAR. Available online: https://www.lufthansa-technik.com/de/aviatar (accessed on 6 June 2025).
  26. Airbus. Skywise. Available online: https://aircraft.airbus.com/en/services/enhance/skywise (accessed on 6 June 2025).
  27. GE Digital. PREDIX Analytics Framework. Available online: https://www.ge.com/digital/documentation/predix-platforms/afs-overview.html (accessed on 6 June 2025).
  28. AFI KLM E&M. PROGNOS—Predictive Maintenance. Available online: https://www.afiklmem.com/en/solutions/about-prognos (accessed on 6 June 2025).
  29. Boeing Global Services. Enhanced Digital Solutions Focus on Customer Speed and Operational Efficiency. Available online: https://investors.boeing.com/investors/news/press-release-details/2018/Boeing-Global-Services-Enhanced-Digital-Solutions-Focus-on-Customer-Speed-and-Operational-Efficiency/default.aspx (accessed on 6 June 2025).
  30. Kabashkin, I.; Misnevs, B.; Zervina, O. Artificial Intelligence in Aviation: New Professionals for New Technologies. Appl. Sci. 2023, 13, 11660.
  31. European Union Aviation Safety Agency (EASA). Part-66—Maintenance Certifying Staff. Available online: https://www.easa.europa.eu/en/acceptable-means-compliance-and-guidance-material-group/part-66-maintenance-certifying-staff (accessed on 6 June 2025).
  32. iSpec 2200: Information Standards for Aviation Maintenance, Revision 2024.1. Available online: https://publications.airlines.org/products/ispec-2200-information-standards-for-aviation-maintenance-revision-2024-1 (accessed on 6 June 2025).
  33. Bloom, B.S.; Engelhart, M.D.; Furst, E.J.; Hill, W.H.; Krathwohl, D.R. Taxonomy of Educational Objectives: The Classification of Educational Goals. In Handbook I: Cognitive Domain; David McKay: New York, NY, USA, 1956.
  34. Apache Kafka. Available online: https://kafka.apache.org/ (accessed on 6 June 2025).
  35. MQTT: The Standard for IoT Messaging. Available online: https://mqtt.org/ (accessed on 6 June 2025).
  36. xAPI Solved and Explained. Available online: https://xapi.com/ (accessed on 6 June 2025).
Figure 1. Conceptual framework of the digital-twin-based training ecosystem.
Figure 2. System architecture and data flow of the DTBT ecosystem.
Figure 3. Four-level learning architecture.
Figure 4. Stratification of training content by fidelity level.
Figure 5. Validation and learning record management workflow.
Figure 6. Severity-level breakdown per skill per learner.
Figure 7. Adaptive logic flow for content assignment.
Figure 8. Simulation pipeline diagram for one full orchestration cycle.
Figure 9. Learner competence gap norm reduction over iterations.
Figure 10. Deployment model for aviation maintenance DTBT ecosystem.
Figure 11. Roadmap of adaptation strategy.
Table 1. Summary of system architecture and data flow components.

| Component | Description | Key Technologies | Output/Role |
|---|---|---|---|
| Aircraft Telemetry | Real-time operational data from aircraft systems | IoT sensors, APIs | Informs LET and contextualizes training scenarios |
| Training Events | Learner interaction data from CBTs, VR, diagnostics, etc. | xAPI | Feeds LDT updates and orchestration engine |
| Document Updates | Revised manuals, bulletins, and regulations | Document management | Updates ICT and LET content alignment |
| Message Broker | Streaming backbone for real-time data ingestion and routing | Apache Kafka/MQTT | Delivers structured data to orchestration engine |
| Orchestration Engine | Core logic module that analyzes learner gaps and selects instructional resources | Rule engine, AI logic | Aligns LDT and ICT, controls LET content flow |
| Learner Digital Twin | Real-time digital replica of individual trainee’s skills and learning status | xAPI records, vector DB | Stores evolving learner profile |
| Ideal Competence Twin | Benchmark model of required regulatory and operational competence | EASA Part-147, ATA, Bloom | Serves as reference for gap analysis |
| Learning Ecosystem Twin | Repository of learning resources with metadata on fidelity, scope, and relevance | CBTs, simulations, twins | Provides content for targeted skill development |
| Adaptive Content Delivery | Multi-platform training interfaces (LMS, VR, mobile) | Cloud/edge architecture | Delivers matched content based on learner needs |
| Validation Matrix | Structured logging of training sessions for auditing and regulatory compliance | Audit DB, metadata logs | Enables verification of training quality and compliance |
Table 2. Key components of the validation and learning record management framework.

| Component | Function | Key Technologies | Contribution to Ecosystem |
|---|---|---|---|
| Learning Interactions | Capture user actions during training sessions | CBT platforms, VR simulators, LMS | Generates behavioral and assessment data |
| xAPI | Log, structure, and transmit learning activities | Experience API (xAPI) | Provides a standardized learning record format |
| Learner Digital Twin | Maintain real-time competence profile of each trainee | Vector database, analytics engine | Enables personalized adaptation and longitudinal tracking |
| Validation Matrix | Store session outcomes with audit metadata | Tamper-proof databases, hash verification | Ensures traceability, compliance, and data integrity |
| Dashboards | Visualize training performance and validation results | Web-based analytics tools | Supports instructor decision-making and regulatory audits |
Table 3. Target vs. learner competence vectors.

| Skill Area | Description | C_T | C_L(1) | C_L(2) | C_L(3) | C_L(4) | C_L(5) |
|---|---|---|---|---|---|---|---|
| LG-1 | Component recognition | 0.90 | 0.40 | 0.60 | 0.30 | 0.45 | 0.55 |
| LG-2 | Sequencing logic | 0.85 | 0.68 | 0.66 | 0.59 | 0.60 | 0.69 |
| LG-3 | Hydraulic actuation | 0.88 | 0.60 | 0.67 | 0.64 | 0.70 | 0.71 |
| LG-4 | Position indication and sensors | 0.92 | 0.75 | 0.70 | 0.66 | 0.74 | 0.73 |
| LG-5 | Emergency/manual extension | 0.87 | 0.38 | 0.62 | 0.29 | 0.58 | 0.64 |
| LG-6 | Failure modes and diagnostics | 0.90 | 0.65 | 0.72 | 0.60 | 0.67 | 0.66 |
Table 4. Gap-to-resource assignment matrix for learners.

| Learner | LG-1 | LG-2 | LG-3 | LG-4 | LG-5 | LG-6 |
|---|---|---|---|---|---|---|
| L1 | CBT Module | Reduced-Fidelity Twin | Reduced-Fidelity Twin | Operational Twin | CBT Module | Reduced-Fidelity Twin |
| L2 | Reduced-Fidelity Twin | Operational Twin | Operational Twin | Operational Twin | Reduced-Fidelity Twin | Operational Twin |
| L3 | CBT Module | Reduced-Fidelity Twin | Operational Twin | Operational Twin | CBT Module | Reduced-Fidelity Twin |
| L4 | Reduced-Fidelity Twin | Operational Twin | Operational Twin | Operational Twin | Reduced-Fidelity Twin | Operational Twin |
| L5 | Reduced-Fidelity Twin | Operational Twin | Operational Twin | Operational Twin | Reduced-Fidelity Twin | Operational Twin |
Table 5. Sample validation matrix output (Learner 2—selected events).

| Learner ID | Skill Area | Gap Severity | Resource ID | Fidelity | Pre Score | Post Score | Module Tag |
|---|---|---|---|---|---|---|---|
| L2 | LG-3 | Medium | R_SIM_203.v1 | Medium | 0.50 | 0.61 | M07.03 |
| L2 | LG-5 | High | R_CBT_105.v3 | Low | 0.42 | 0.53 | M09.02 |
| L2 | LG-1 | Low | R_TWIN_401.v2 | High | 0.60 | 0.73 | M06.01 |
| L2 | LG-4 | Low | R_SENSOR_311.v2 | High | 0.65 | 0.79 | M08.01 |
| L2 | LG-2 | Medium | R_DIAG_221.v4 | Medium | 0.45 | 0.56 | M07.02 |