Article

Development of Digital Training Twins in the Aircraft Maintenance Ecosystem

Engineering Faculty, Transport and Telecommunication Institute, Lauvas 2, LV-1019 Riga, Latvia
Algorithms 2025, 18(7), 411; https://doi.org/10.3390/a18070411
Submission received: 15 June 2025 / Revised: 1 July 2025 / Accepted: 2 July 2025 / Published: 3 July 2025

Abstract

This paper presents an integrated digital training twin framework for adaptive aircraft maintenance education, combining real-time competence modeling, algorithmic orchestration, and cloud–edge deployment architectures. The proposed system dynamically evaluates learner skill gaps and assigns individualized training resources through a multi-objective optimization function that balances skill alignment, Bloom's cognitive level, fidelity tier, and time efficiency. A modular orchestration engine incorporates reinforcement learning agents for policy refinement, federated learning for privacy-preserving skill analytics, and knowledge graph-based curriculum models for dependency management. Simulations were conducted on the ATA 36 Pneumatic Systems training module. The system's validation matrix provides full-cycle traceability of instructional decisions, supporting regulatory audit-readiness and institutional reporting. The digital training twin ecosystem offers a scalable, regulation-compliant, and data-driven solution for next-generation aviation maintenance training, with demonstrated operational efficiency, instructional precision, and extensibility for future expansion.


1. Introduction

The increasing digitalization of aircraft systems and the corresponding shift in aviation maintenance practices have transformed the skill requirements for future technicians. With growing fleets, evolving regulatory frameworks, and the proliferation of data-rich onboard systems, training programs must evolve into highly responsive, context-aware, and audit-ready learning environments. Traditional training methods, including static courseware and generic simulations, often fail to reflect the dynamic operational realities and individualized learning trajectories required for modern aviation maintenance professionals.
Recent advancements in immersive simulation [1,2] and cyber–physical modeling [3,4] have introduced digital twin concepts into education. However, most implementations to date emphasize system representation rather than learner dynamics or intelligent training orchestration. The potential of digital twins to model not just physical assets but also cognitive development and skill progression remains underutilized, particularly when it comes to applying algorithmic personalization and optimization within regulated training environments.
This paper proposes a novel training ecosystem based on the concept of digital training twins (DTTs), which integrates the learner’s evolving skill state, the pedagogical logic governing scenario progression, and the operational complexity of real-world aircraft systems. Unlike prior digital twin-based educational frameworks, the proposed approach is structured around algorithmic decision-making engines that dynamically adapt learning content, pathway selection, and competence evaluation based on multi-dimensional feedback from virtual training activities.
The emergence of digital twin technology has introduced new opportunities for transforming educational methodologies. By enabling learners to interact simultaneously with both physical objects and their digital representations within digital twin-based educational environments, students gain hands-on experience that mirrors real-world contexts, thereby deepening their understanding and enhancing the application of knowledge [5,6]. However, many existing studies remain at the stage of conceptual frameworks or early implementations, and the widespread practical impact of digital twin adoption in education still requires further validation [7].
To facilitate robust interactivity, learners can employ wearable devices to engage directly with digital twin components or participate as virtual avatars within immersive digital twin educational platforms [8]. This capability supports pervasive, geographically distributed learning and fosters productive team-based collaboration [9]. Despite these advantages, numerous challenges remain in fully integrating digital twin technologies into educational practice while ensuring tangible learning outcomes for students [10]. As computational capabilities continue to advance, research has increasingly focused on developing digital twin educational environments and instructional support systems [11]. Such developments include the creation of intelligent digital twin learning spaces [12], digital twin systems for education [13], and immersive architectural prototype twin environments [14].
Through real-time interaction, integration of virtual and physical learning elements, and dynamic system adaptability, digital twin technologies provide learners with flexible environments where materials can be observed, manipulated, tested, and iteratively refined [15]. The evolution of cognitive digital twins [16] and digital-twin-supported instructional systems [17] has further enabled personalized learning strategies and adaptive educational designs that respond continuously to learners’ progress, improving both instructional quality and learning outcomes.
In aviation, digital twin technology is being actively explored across multiple domains. The study [18] introduces a comprehensive framework for applying digital twins to aircraft lifecycle management, leveraging data-driven models to enhance decision-making and operational efficiency. A specific framework for twin-spool turbofan engines is proposed in [19], where a hybrid approach combining mechanism-based and data-driven models is employed for improved accuracy. At the fleet-wide level, a diagnostic and health management framework utilizing digital twins for comprehensive aircraft monitoring is presented in [20].
Major aviation companies have also made significant advancements in digital twin platforms to support predictive maintenance and optimize operational strategies. Notable industry initiatives include Aviatar (Lufthansa Technik) [21], Skywise (Airbus) [22], Predix (General Electric) [23], PROGNOS (Air France Industries and KLM Engineering & Maintenance) [24], and AnalytX (Boeing) [25].
In aviation maintenance training, digital twin technologies are complemented by virtual reality (VR) and adaptive game-based systems. The study [26] evaluates virtual simulation-based training for aviation maintenance technicians, demonstrating that VR-enhanced training can significantly improve skill acquisition when integrated with conventional training methods. Another study [27] explores adaptive game-based maintenance training, utilizing a pedagogic interpretation engine that adjusts scenarios dynamically to maximize training effectiveness.
Interactive 3D virtual maintenance training systems have also been developed to enhance practical skills in aviation maintenance. A study [28] presents a virtual reality training system for aviation students, showing its effectiveness in improving training outcomes. The comprehensive analysis provided in [29] reviews the applications and training challenges of integrating artificial intelligence into aviation, highlighting both opportunities and barriers to its effective deployment.
Beyond aviation and education, recent advances in digital twin and AI-driven predictive maintenance are transforming asset management in other industrial sectors. A systematic literature review on intelligent maintenance in the mining industry underscores the growing adoption of digital twins, deep learning, and hybrid AI models to address challenges of fault detection and operational efficiency under extreme conditions [30]. Similarly, a hybrid approach for remaining useful life prediction of rolling element bearings combines digital twin modeling with long short-term memory networks and particle swarm optimization, achieving high prediction accuracies [31]. This fusion of simulation precision and data-driven forecasting supports proactive maintenance and health management, reinforcing the applicability of digital twin architectures across high-stakes, reliability-driven domains. These recent studies expand the technological landscape from which aviation education and maintenance training systems may draw, encouraging cross-domain integration of AI-enhanced, twin-based frameworks.
In contemporary aviation education research, there remains a substantial gap between the theoretical capabilities of digital twin technologies and their practical implementation in competence-based training ecosystems. While prior studies have successfully demonstrated the applicability of digital twins in simulating aircraft systems, predictive maintenance, and immersive learning environments, these approaches often neglect several critical dimensions required for full educational integration. Specifically, existing frameworks tend to focus predominantly on static system representations without incorporating adaptive learner modeling, dynamic competence tracking, or personalized instructional orchestration based on real-time performance metrics. Moreover, limited attention has been given to the formalization of orchestration algorithms that simultaneously address multi-dimensional instructional objectives such as skill alignment, cognitive complexity, resource fidelity, and regulatory compliance.
This study aims to address these research gaps through the development and validation of a DTT ecosystem that incorporates real-time competence modeling, algorithmic orchestration grounded in multi-objective optimization, federated learning for privacy-preserving analytics, and knowledge graph-based curriculum structuring. In contrast to earlier conceptual or domain-limited implementations, the proposed framework provides a fully integrated, auditable, and regulation-compliant system architecture capable of supporting scalable individualized aviation maintenance training.
The remainder of this paper is organized as follows. Section 2 presents the architecture, algorithms, and orchestration methodology of the proposed DTT ecosystem. Section 3 reports the simulation-based validation results, highlighting learner convergence behavior, fidelity allocation, personalization metrics, and auditability features. Section 4 provides a detailed discussion of the system’s instructional efficiency, personalization accuracy, transparency, and the role of integrated algorithmic modules, while outlining future research directions. Section 5 concludes the paper, summarizing key contributions.

2. Materials and Methods

The proposed DTT ecosystem introduces an intelligent training platform capable of real-time learner modeling, competence gap detection, and adaptive instructional delivery within aviation maintenance education. By integrating competence vector modeling, algorithmic orchestration, simulation-driven feedback, and modular cloud–edge deployment, the system addresses both pedagogical and regulatory demands while optimizing training efficiency. Its architecture supports adaptive scenario allocation, personalized skill development, and audit-compliant data tracking.

2.1. Digital Training Twin Model

The core of the proposed system is the digital training twin, which provides a real-time, data-driven representation of each learner’s evolving competence profile and supports personalized, adaptive training within an algorithmically controlled environment. The DTT ecosystem integrates multiple subsystems that track learner progress, analyze competence gaps, and dynamically assign appropriate training assets based on an intelligent orchestration process that operates across a hybrid cloud–edge infrastructure. Figure 1 illustrates the high-level architecture of the DTT ecosystem.
The system architecture consists of eight key functional components. The Cloud Twin Server serves as the centralized knowledge base, hosting GPU-intensive simulations and immersive Extended Reality (XR) scenarios, including both virtual reality (VR) and augmented reality (AR), alongside the primary Training Database (DB). These XR-based training twins allow highly detailed, scalable simulation of complex aircraft maintenance tasks. The cloud infrastructure also ensures consistent access to up-to-date training resources and version-controlled instructional content across all deployment sites.
At the algorithmic core of the system operates the orchestration engine (OE), responsible for continuously analyzing each learner’s current skill profile, calculating competence gaps, classifying gap severity, and selecting appropriate training resources. The OE applies cost-optimization models that balance instructional relevance, training fidelity, regulatory compliance, and session duration. This allows the system to orchestrate non-redundant, efficient learning pathways tailored to individual learner needs.
The Learner DTT Vectors Database stores real-time competence vectors for every learner, representing mastery levels across various technical domains. These vectors are dynamically updated after each training session, incorporating performance data collected from simulations, assessments, and practical exercises. Target competence levels are defined in the competence profiles, which reflect external regulatory standards such as those defined by EASA Part-66 [32]. These profiles define the knowledge and skill levels that learners are required to achieve across specific aviation system modules, including ATA chapters [33].
To support detailed performance tracking and auditability, the system captures learning interactions through xAPI logs using the Experience API (xAPI) standard [34]. These logs include rich data describing simulation paths, learner decisions, scenario outcomes, completion times, and error frequencies, all of which feed directly into updating learner vectors and validating training progress.
The available instructional content is organized in the Fidelity-Tagged Content Repository, where each resource is pre-annotated with structured metadata describing its instructional target (skill domain vector), Bloom’s Taxonomy level, associated regulatory module, estimated duration, and fidelity classification (low, medium, or high). This structured tagging enables the orchestration engine to make precise, data-driven assignments for each learner at every stage of training.
The edge device/XR delivery layer serves as the front-end interface, delivering training content directly to learners based on their competence state and local device capabilities. This layer supports Computer-Based Training (CBT), simplified diagnostic simulations, and full XR-based immersive scenarios. The edge deployment allows learners to complete foundational or remedial training even under limited network conditions while using the cloud for high-fidelity simulations as required.
To maintain continuous data synchronization across the distributed architecture, the system employs a message backbone based on Apache Kafka [35] and MQTT [36]. Apache Kafka supports high-throughput event streaming between orchestration modules, simulation servers, and analytics dashboards, while MQTT provides a lightweight protocol optimized for low-latency messaging between edge devices and cloud nodes in bandwidth-constrained environments.
This tightly integrated architecture supports a continuous real-time loop of observation, orchestration, and feedback. The orchestration engine processes incoming learner performance data, selects optimal training content, deploys resources through the cloud–edge network, and updates competence profiles based on observed outcomes. The hybrid deployment structure ensures scalable operation across both centralized institutional training centers and decentralized or remote educational environments, while maintaining regulatory compliance, instructional efficiency, and data security.
The cloud component of the DTT ecosystem is built upon a containerized microservices architecture using Docker and orchestrated through Kubernetes. This allows for scalable deployment of simulation engines, orchestration services, and learner data analytics modules. High-performance computing tasks such as federated model aggregation and reinforcement learning policy evaluation are executed on GPU-enabled nodes hosted in the cloud, utilizing platforms such as NVIDIA CUDA and TensorFlow. The edge layer supports real-time content delivery through lightweight clients deployed on local workstations or XR headsets. These clients are implemented using Unity or Unreal Engine for immersive training modules, and standard HTML5/JavaScript frameworks for CBT delivery. Secure communication between cloud and edge layers is maintained via TLS-encrypted MQTT for real-time telemetry and Kafka for high-throughput streaming of xAPI logs and orchestration events. This hybrid deployment model ensures seamless interaction between centralized orchestration and decentralized learning environments while maintaining performance, scalability, and data protection.

2.2. Algorithmic Orchestration Workflow

The orchestration engine operates as the central decision-making unit within the DTT ecosystem. Its primary function is to analyze each learner’s evolving competence state, classify existing skill gaps, and assign personalized training resources through a mathematically optimized multi-criteria selection process. The orchestration cycle continuously monitors learner progression and dynamically adapts content allocation to ensure that each learner receives efficient, regulation-compliant instruction fully aligned with their current capabilities.
The learner’s competence profile is mathematically modeled as a normalized vector:
$$C_{\mathrm{learner}} = [c_1, c_2, \ldots, c_n] \in [0,1]^n$$
where each element $c_i$ reflects the learner's mastery of skill domain $i$. The regulatory training standards are encoded in a target competence vector:
$$C_{\mathrm{target}} = [c_1^*, c_2^*, \ldots, c_n^*] \in [0,1]^n$$
The competence gap vector is computed as follows:
$$G = C_{\mathrm{target}} - C_{\mathrm{learner}} = [g_1, g_2, \ldots, g_n]$$
For each skill $i$, the orchestration engine classifies gap severity by applying predefined thresholds $\theta_H$ (high) and $\theta_M$ (medium). This severity level influences both the fidelity and complexity of the instructional content selected for the learner.
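For concreteness, the following minimal Python sketch computes the gap vector and assigns severity labels. The threshold values are illustrative assumptions; the framework treats $\theta_H$ and $\theta_M$ as configurable parameters.

```python
import numpy as np

# Illustrative thresholds (assumed values; the framework leaves these configurable)
THETA_H, THETA_M = 0.30, 0.15

def classify_gaps(c_learner: np.ndarray, c_target: np.ndarray) -> list[str]:
    """Compute G = C_target - C_learner and label each skill gap by severity."""
    gaps = np.clip(c_target - c_learner, 0.0, 1.0)
    return ["high" if g >= THETA_H else "medium" if g >= THETA_M else "low"
            for g in gaps]

# One learner across the six PN skill domains (example values)
c_learner = np.array([0.62, 0.41, 0.48, 0.66, 0.39, 0.55])
c_target = np.full(6, 0.80)
print(classify_gaps(c_learner, c_target))
# -> ['medium', 'high', 'high', 'low', 'high', 'medium']
```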
Each training resource $R_j \in \mathcal{R}$ is described by a structured metadata vector, including the following:
  • $s_j$—the skill focus vector;
  • $f_j$—fidelity tier (low, medium, high);
  • $d_j$—expected duration (minutes);
  • $b_j$—Bloom's Taxonomy cognitive level (1 to 6) [37];
  • $\phi_j$—instructional effectiveness coefficient;
  • $\rho_j$—regulatory compliance tag for module alignment.
To assign the most appropriate training asset for each identified gap g i , the system employs a multi-objective optimization function that balances skill alignment, instructional cost, cognitive load, and time efficiency. The resource selection function is defined as follows:
$$S(G_i) = \arg\min_{R_j \in \mathcal{R}} \left[ \alpha \cdot \lVert s_j - e_i \rVert^2 + \beta \cdot \mathrm{cost}(f_j) + \gamma \cdot d_j + \delta \cdot \psi(b_j) \right]$$
where
$e_i$ is the unit vector isolating skill domain $i$;
$\mathrm{cost}(f_j)$ reflects fidelity assignment penalties, defined as follows:
$$\mathrm{cost}(f_j) = \begin{cases} 1 & \text{for low-fidelity CBT modules,} \\ 2 & \text{for medium-fidelity tablet simulations,} \\ 3 & \text{for high-fidelity XR simulations.} \end{cases}$$
$\psi(b_j)$ is the normalized Bloom complexity factor, expressed as follows:
$$\psi(b_j) = \frac{b_j - 1}{5} \in [0,1]$$
$\alpha, \beta, \gamma, \delta \in [0,1]$ are tunable hyperparameters controlling the weighting of skill relevance, fidelity cost, session duration, and cognitive demand, respectively.
This selection function ensures that training assignments remain skill-targeted while minimizing instructional redundancy, cognitive overload, and unnecessary use of high-cost simulation resources.
Following each training session, the learner’s competence vector is incrementally updated to reflect newly acquired skills:
$$c_i(t+1) = c_i(t) + \lambda_i \cdot \phi_j \cdot g_i(t)$$
where $\lambda_i \in [0,1]$ represents the learner-specific responsiveness coefficient capturing variability in individual learning rates, and $\phi_j$ denotes the training effectiveness factor associated with the assigned resource.
Let the learner's competence at iteration $t$ be represented by the vector $C_t \in \mathbb{R}^n$; that is, the competence profile is a vector over $n$ skills, and each skill score is a real number (typically normalized between 0 and 1). Each element $C_t^{(k)} \in [0,1]$ indicates skill mastery in domain $k$. After training with resource $r$, the updated competence is computed as follows:
$$C_{t+1} = C_t + \alpha_i \cdot E_r \cdot G_r$$
where $\alpha_i \in [0,1]$ is the learner's responsiveness coefficient, $E_r \in (0,1]$ is the effectiveness of resource $r$, and $G_r \in \mathbb{R}^n$ is the skill gain vector of the resource.
To select the best resource, the system minimizes a cost function:
$$J(r) = w_1 \cdot D(C_t, T_r) + w_2 \cdot \phi_r + w_3 \cdot \beta_r + w_4 \cdot \tau_r$$
where $T_r \in \mathbb{R}^n$ is the target skill vector associated with resource $r$, $D(C_t, T_r) = \lVert T_r - C_t \rVert_2^2$ is the squared Euclidean distance measuring the skill mismatch, $\phi_r \in \{0, 1, 2\}$ is the fidelity cost (e.g., 0 = low, 1 = medium, 2 = high), $\beta_r \in [0,1]$ is the normalized Bloom cognitive complexity score (scaled from levels 1 to 6), $\tau_r$ is the session duration, and $w_1, w_2, w_3, w_4$ are configurable weights.
The optimal training resource $r^*$ is then as follows:
$$r^* = \arg\min_{r \in \mathcal{R}} J(r)$$
This ensures that resource selection is competence-driven, cost-aware, and pedagogically efficient.
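A compact sketch of the cost function $J(r)$, the $\arg\min$ selection, and the subsequent competence update is given below. The `Resource` record layout, the default weights, and the responsiveness value are illustrative assumptions rather than values prescribed by the framework.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Resource:
    rid: str
    target: np.ndarray    # T_r: target skill vector
    gain: np.ndarray      # G_r: skill gain vector
    fidelity: int         # phi_r: 0 = low, 1 = medium, 2 = high
    bloom: float          # beta_r: normalized Bloom complexity in [0, 1]
    duration: float       # tau_r: session duration (minutes)
    effectiveness: float  # E_r in (0, 1]

def cost(r: Resource, c_t: np.ndarray, w=(1.0, 0.2, 0.2, 0.01)) -> float:
    """J(r) = w1*||T_r - C_t||^2 + w2*phi_r + w3*beta_r + w4*tau_r."""
    mismatch = float(np.sum((r.target - c_t) ** 2))
    return w[0] * mismatch + w[1] * r.fidelity + w[2] * r.bloom + w[3] * r.duration

def select_and_update(c_t: np.ndarray, resources: list, alpha: float = 0.4):
    """Pick r* = argmin J(r), then apply C_{t+1} = C_t + alpha * E_r * G_r."""
    best = min(resources, key=lambda r: cost(r, c_t))
    c_next = np.clip(c_t + alpha * best.effectiveness * best.gain, 0.0, 1.0)
    return best, c_next

# Two-resource catalogue over two skills (values assumed for illustration)
catalog = [
    Resource("CBT-01", np.array([0.8, 0.5]), np.array([0.3, 0.1]), 0, 0.2, 30.0, 0.5),
    Resource("XR-01",  np.array([0.9, 0.9]), np.array([0.2, 0.4]), 2, 0.8, 45.0, 0.8),
]
best, c_next = select_and_update(np.array([0.5, 0.4]), catalog)
print(best.rid, c_next)
```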
The target competence vector $T_r \in \mathbb{R}^n$ represents the ideal skill profile that a given training resource aims to reinforce. These vectors are defined during the system configuration phase by instructional designers in consultation with subject matter experts. Each dimension corresponds to a specific competence (e.g., diagnosing electrical faults, performing torque checks), and its value indicates the desired mastery level on a normalized $[0,1]$ scale. This allows the model to account for different depths of knowledge, including basic recognition, procedural application, and problem-solving capabilities.
To reflect hierarchical cognitive depth, each training resource is also assigned a Bloom-level complexity score $\beta_r$ normalized to the interval $[0,1]$, where lower values correspond to simple recall tasks (e.g., knowledge) and higher values represent synthesis or evaluation (e.g., diagnosis, troubleshooting). These values are assigned based on existing instructional taxonomies and learning outcome frameworks commonly used in aviation maintenance training requirements, such as those defined by EASA Part-66 [32].
The orchestration engine uses $T_r$ to calculate competence misalignment and $\beta_r$ to penalize inappropriate cognitive overload for lower-skilled learners. This dual consideration enables adaptive training paths that match both the domain-specific competence gaps and the cognitive maturity of each learner.
This orchestration process operates as a closed-loop control system: after each competence update, new gaps are re-evaluated, and subsequent training resources are re-optimized accordingly. The system iterates until all gaps are sufficiently minimized below an acceptable tolerance, thus ensuring complete regulatory compliance while optimizing the learning trajectory for each individual learner.
Figure 2 presents the conceptual training flow diagram, which illustrates the core sequence of operations in the orchestration logic.
Each training cycle begins with the initialization or update of the learner’s digital profile. This profile is represented as a competence vector containing normalized values across multiple skill domains (e.g., aircraft hydraulic systems, sensor diagnostics, failure recovery procedures). These values are derived from xAPI logs and prior simulation results.
The system determines whether additional training is required by comparing the learner’s current profile against predefined regulatory or operational target vectors. If the learner’s performance already meets or exceeds the expected thresholds across all competencies, no further training is assigned.
For skills where gaps exist, the orchestration engine computes the magnitude of the discrepancy. The comparison yields a multi-dimensional gap vector, highlighting which areas fall short and by how much.
Each skill gap is classified into severity categories—typically low, medium, or high—based on empirically defined thresholds. These categories directly inform the fidelity level of content that will be assigned in the next step. For example, a high-severity gap might trigger a foundational CBT module, whereas a low-severity gap may prompt an operational digital twin scenario.
Based on the severity classification, the system selects a training asset from the Fidelity-Tagged Content Repository. The selection considers multiple factors, including regulatory compliance (e.g., EASA Part-66 module alignment), Bloom’s Taxonomy level, expected training duration, and prior learner interaction history.
The assigned training is delivered through the appropriate interface (CBT, tablet-based simulation, or immersive XR module) via the edge or cloud layer. After the session, learner performance is logged, competence vectors are updated, and the loop begins again if needed.
This closed-loop orchestration approach ensures that learners receive targeted, scalable, and regulation-compliant instruction throughout the training process. The integration of dynamic learner modeling, severity-based gap analysis, and fidelity-matched content delivery represents a significant advancement over static training models, enabling real-time personalization and continuous competence validation.
To enhance procedural clarity and support implementation reproducibility, the following summary outlines the core orchestration logic as a step-by-step workflow. While the preceding section has already provided the mathematical foundations and detailed descriptions of each component, this condensed summary highlights the sequential decision-making process employed by the orchestration engine; a compact code sketch follows the steps.
Step 1. Analyze the learner’s current competence vector and compare it to the regulatory target vector to compute the skill gap.
Step 2. Classify each gap as high, medium, or low severity based on threshold values.
Step 3. Filter candidate training resources based on alignment with the most significant competence gaps.
Step 4. Evaluate each candidate resource using a multi-objective cost function that considers skill alignment, fidelity penalty, Bloom-level complexity, and session duration.
Step 5. Select the resource with the minimum cost as the optimal assignment for the learner.
Step 6. Deliver the assigned training via the appropriate interface (e.g., CBT, simulation, or XR), depending on device capability and fidelity requirement.
Step 7. After the training session, update the learner’s competence vector using a gain function based on responsiveness and instructional effectiveness.
Step 8. Log all orchestration actions in the validation matrix to support traceability and compliance.
Step 9. Repeat the orchestration cycle until all competence gaps fall below acceptable thresholds.
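The nine steps condense into a single control loop. The sketch below strings them together around the `select_and_update` helper from the previous sketch; the convergence tolerance and cycle cap are assumed values.

```python
import numpy as np

TOLERANCE = 0.05   # assumed acceptable residual gap per skill
MAX_CYCLES = 20    # assumed safety cap on orchestration cycles

def orchestrate(c_learner, c_target, resources, select_and_update):
    """Closed-loop orchestration following Steps 1-9 of the workflow summary."""
    validation_matrix = []                                       # Step 8 log
    for cycle in range(MAX_CYCLES):
        gaps = np.clip(c_target - c_learner, 0.0, None)          # Step 1
        if np.all(gaps < TOLERANCE):                             # Step 9 exit test
            break
        # Steps 2-7: classify, filter, cost-evaluate, select, deliver, update
        resource, c_learner = select_and_update(c_learner, resources)
        validation_matrix.append({
            "cycle": cycle,
            "resource": resource.rid,
            "gap_norm": float(np.linalg.norm(gaps)),
        })
    return c_learner, validation_matrix
```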

2.3. Cloud–Edge Deployment Architecture

The architecture of the DTT system employs a modular cloud–edge hybrid design that supports scalable, latency-sensitive, and secure training delivery. This hybrid deployment model allows the orchestration engine to operate seamlessly across decentralized institutional networks, remote training centers, and mobile learning environments while maintaining transparent auditability and regulatory readiness.
The overall deployment model of the digital training twin system is summarized in Figure 3, which illustrates the hierarchical relationship between cloud infrastructure, message broker layer, and edge-based delivery nodes.
In the cloud layer, the orchestration engine operates as the central control logic, continuously evaluating learner profiles, selecting training resources, and generating assignment decisions based on real-time competence state updates.
At the local training sites, edge nodes manage direct learner interaction and training delivery. Each edge node hosts both CBT modules, which include foundational knowledge instruction, procedural videos, and theoretical assessments, and Simulation Clients, which deliver mid-fidelity diagnostic scenarios and task-specific simulation exercises that operate with lower computational overhead than full XR environments. This design allows stable training continuity even when cloud connectivity is limited, while preserving centralized orchestration control for competence gap evaluation and resource allocation decisions.
The hybrid cloud–edge deployment model employs Apache Kafka for high-throughput event streaming between orchestration modules, simulation engines, and analytics services, ensuring consistent and scalable data flow. For real-time, low-latency communication between edge devices and the cloud, the system uses the MQTT protocol, secured via TLS encryption to protect learner data. A bi-directional synchronization controller manages updates of competence vectors and xAPI logs between edge nodes and the central database, supporting offline operation and ensuring data consistency across distributed environments.
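To illustrate the edge-to-cloud telemetry path, the sketch below publishes a competence-vector update over TLS-secured MQTT with the paho-mqtt client (1.x API). The broker endpoint, topic scheme, and payload layout are assumptions for illustration; the system does not prescribe them.

```python
import json
import paho.mqtt.client as mqtt

# Hypothetical broker endpoint and topic scheme
BROKER, PORT = "dtt-broker.example.org", 8883
TOPIC = "dtt/edge/learner-007/competence"

payload = json.dumps({
    "learner_id": "learner-007",
    "iteration": 3,
    "competence": [0.71, 0.58, 0.63, 0.74, 0.55, 0.66],
})

client = mqtt.Client()
client.tls_set()                       # TLS-encrypted channel, per the architecture
client.connect(BROKER, PORT)
client.publish(TOPIC, payload, qos=1)  # at-least-once delivery for telemetry
client.disconnect()
```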
The hybrid cloud–edge deployment model was specifically selected to optimize training delivery under real-world operational constraints, such as variable network quality, hardware diversity, and learner mobility. The cloud layer hosts centralized, resource-intensive modules—including reinforcement learning agents, federated aggregation servers, and immersive XR simulations—to ensure high computational efficiency and centralized policy control. In contrast, edge nodes act as lightweight execution clients, deployed on local workstations, tablets, or XR headsets, enabling real-time training continuity even in disconnected or low-bandwidth environments. This distributed setup enables seamless load balancing, latency minimization for learner interactions, and localized data processing for GDPR-compliant privacy. Furthermore, orchestration decisions made in the cloud are propagated to edge nodes through encrypted Kafka and MQTT protocols, ensuring low-latency synchronization while maintaining full traceability through xAPI event logs and competence vector updates. This layered infrastructure ensures robustness, scalability, and regulatory readiness in diverse institutional and field-based aviation training contexts.

2.4. Algorithms for Orchestration and Analytics

The orchestration engine of the DTT ecosystem is supported by an integrated ensemble of algorithmic modules that enable adaptive scenario assignment, skill analytics, privacy-preserving data management, and knowledge-graph-based learning path optimization. These modules operate in coordination to form the system’s adaptive intelligence core, ensuring that training pathways dynamically adjust to individual learner profiles while maintaining regulatory compliance and operational efficiency.
At the foundation of the orchestration process is a combination of rule-based agents and reinforcement learning (RL) strategies. In the initial deployment stage, rule-based policies govern scenario assignment by applying deterministic logic based on competence gap severity and predefined fidelity allocation thresholds. These rules ensure reliable system performance under regulatory constraints. However, the architecture is designed to support progressive evolution toward reinforcement learning agents that optimize training pathways over time through reward-based learning cycles. In this configuration, the RL agent observes learner state transitions, selects optimal training scenarios, and adjusts its policy through trial-and-reward updates where reward functions are based on competence gap reduction and time-to-convergence metrics.
To extend personalization while preserving learner privacy, the system incorporates federated learning (FL) protocols. FL enables distributed model training across multiple edge nodes without exposing individual learner data to centralized servers. Each node trains its local model on learner-specific performance data, transmitting only encrypted model updates to the cloud-based aggregator. These aggregated models enable system-wide skill analytics and predictive performance modeling while ensuring compliance with data protection regulations such as GDPR and institutional data governance policies.
In parallel, the system uses a knowledge graph curriculum model to represent the hierarchical structure of aviation maintenance competencies. Each skill domain is modeled as a node, while prerequisite relationships between skills are encoded as directed edges. This graph-based representation enables the orchestration engine to enforce dependency constraints during content sequencing, ensuring that learners are only exposed to advanced scenarios after mastering prerequisite competencies. The knowledge graph also enables gap clustering, early bottleneck detection, and real-time path re-optimization based on dynamic learner performance. As an example, the knowledge graph of the ATA 36 Pneumatic (PN) Systems curriculum with six modules (PN-1 to PN-6) is shown in Figure 4.
Each node represents a skill domain, and directed edges indicate prerequisite relationships. This structure allows the orchestration engine to ensure learners acquire foundational knowledge before advancing to dependent concepts like Leak Detection Procedures or Safety Relief Systems.
This visual emphasizes the sequential and conditional structure of the learning path, making it suitable for integration into orchestration algorithms and learner progression planning.
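A minimal sketch of such a prerequisite graph, built with the networkx library, is shown below. The edge set is an illustrative assumption; the authoritative dependency structure is the one depicted in Figure 4.

```python
import networkx as nx

# Assumed prerequisite edges for the ATA 36 curriculum (illustration only)
curriculum = nx.DiGraph([
    ("PN-1", "PN-2"), ("PN-1", "PN-3"),
    ("PN-2", "PN-4"), ("PN-3", "PN-4"),
    ("PN-4", "PN-5"), ("PN-4", "PN-6"),
])

def unlocked(graph: nx.DiGraph, mastered: set[str]) -> set[str]:
    """Skills not yet mastered whose prerequisites are all mastered."""
    return {
        node for node in graph.nodes
        if node not in mastered
        and all(pred in mastered for pred in graph.predecessors(node))
    }

print(list(nx.topological_sort(curriculum)))   # a valid global learning order
print(unlocked(curriculum, {"PN-1", "PN-2"}))  # -> {'PN-3'}
```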
The integration of these algorithmic subsystems is illustrated in Figure 5, which summarizes their functional roles and interactions within the orchestration process.
In this architecture, the orchestration engine operates as the central hub, receiving inputs from both reinforcement learning agents and Rule-Based Scenario Selection Modules for adaptive decision-making. Simultaneously, federated skill analytics modules aggregate encrypted local learning models into institution-wide analytics without violating data privacy, while the knowledge graph curriculum model informs structural learning dependencies to ensure valid training progression. This multi-layered architecture enables the digital training twin system to operate as a fully adaptive, privacy-compliant, and audit-ready training framework that dynamically evolves with both learner performance and regulatory demands.
The reinforcement learning agent shown in Figure 5 operates in a discrete action space, where each action corresponds to the selection of a specific training resource $r$ for a given learner profile $C_t$. The environment returns a scalar reward signal $R_t$ after each action based on the learner's updated competence and the efficiency of the training session.
The reward function is defined as follows:
$$R_t = \lambda_1 \cdot \Delta_{\mathrm{gap}} - \lambda_2 \cdot \Delta_{\mathrm{time}}$$
where $\Delta_{\mathrm{gap}} = \lVert C_t - T_r \rVert - \lVert C_{t+1} - T_r \rVert$ measures the improvement in alignment with the target competence vector, and $\Delta_{\mathrm{time}}$ represents the session duration or training cost. Coefficients $\lambda_1$ and $\lambda_2$ are tunable parameters that balance pedagogical gains and time/resource efficiency. This reward structure encourages the reinforcement learning agent to recommend resources that maximize learning gains with minimal resource expenditure.
As system usage grows and learner diversity increases, the Q-learning policy can be extended to a Deep Q-Network that handles continuous state representations and enables policy generalization across a broader range of competence profiles.
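To ground the reward formulation, the sketch below pairs $R_t$ with a tabular Q-learning update. The coarse state discretization and all hyperparameter values are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

ALPHA_Q, GAMMA = 0.1, 0.9       # assumed learning rate and discount factor
LAMBDA_1, LAMBDA_2 = 1.0, 0.01  # assumed reward coefficients
q_table = defaultdict(float)    # Q[(state, action)] -> estimated value

def discretize(c: np.ndarray) -> tuple:
    """Coarse state abstraction: bucket each skill score into thirds."""
    return tuple(np.digitize(c, [1/3, 2/3]))

def reward(c_t, c_next, t_r, session_minutes):
    """R_t = lambda1 * (||C_t - T_r|| - ||C_{t+1} - T_r||) - lambda2 * time."""
    gap_drop = np.linalg.norm(c_t - t_r) - np.linalg.norm(c_next - t_r)
    return LAMBDA_1 * gap_drop - LAMBDA_2 * session_minutes

def q_update(s, a, r, s_next, actions):
    """One-step Q-learning backup for the scenario-selection policy."""
    best_next = max(q_table[(s_next, a2)] for a2 in actions)
    q_table[(s, a)] += ALPHA_Q * (r + GAMMA * best_next - q_table[(s, a)])
```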
For federated learning, the system adopts a horizontal FL architecture, where each edge node trains a local model on learner interaction data (such as xAPI logs and competence updates). Local model weights are periodically aggregated on the central cloud aggregator using the FedAvg algorithm. All communication between edge nodes and cloud infrastructure is conducted over a secure Kafka-MQTT messaging backbone, ensuring compliance with data privacy regulations such as GDPR, while enabling global model refinement without direct sharing of raw learner data.
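The FedAvg aggregation step itself reduces to a sample-size-weighted average of local model parameters, as in the minimal sketch below; the node parameter vectors and sample counts are assumed example data.

```python
import numpy as np

def fedavg(local_weights: list[np.ndarray], n_samples: list[int]) -> np.ndarray:
    """FedAvg: weight each node's parameters by its share of training samples."""
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(local_weights, n_samples))

# Three edge nodes holding different amounts of local learner data
nodes = [np.array([0.2, 0.5]), np.array([0.4, 0.3]), np.array([0.1, 0.6])]
counts = [120, 80, 200]
print(fedavg(nodes, counts))  # global model update
```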

2.5. Simulation Methodology

To validate the orchestration engine, a discrete-time simulation was conducted using the ATA 36 Pneumatic Systems as the domain scope. Six simulated learners were initialized with randomized competence vectors sampled from a truncated normal distribution. Competence vectors evolved over six iterations as the system assigned individualized training resources based on gap magnitude, resource fidelity, and estimated instructional efficiency.
Each training session updated the learner’s competence according to the previously defined update rule, with responsiveness coefficients and instructional gain factors sampled from empirically plausible ranges. The orchestration logic dynamically adjusted resource selection based on gap severity and scenario suitability at each iteration. Performance metrics such as competence gap norm reduction, convergence speed, and instructional precision were recorded for quantitative evaluation.
The simulation results, discussed in the next section, demonstrate the orchestration engine’s ability to drive efficient competence convergence, minimize redundant instruction, and optimize resource utilization across diverse learner profiles.
The simulation parameters were carefully selected to reflect realistic learner variation and training dynamics. Initial competence vectors were sampled from a truncated normal distribution with mean $\mu = 0.55$, standard deviation $\sigma = 0.08$, and bounds $[0.4, 0.7]$ to ensure variability while maintaining plausible pre-training skill levels. The responsiveness coefficients $\alpha_i$ for learners were uniformly drawn from the interval $[0.3, 0.6]$, capturing individual differences in learning rates. Instructional effectiveness coefficients $E_r$ for training resources were assigned values between 0.3 and 0.8, depending on fidelity level and resource complexity. Each orchestration cycle involved competence gap evaluation, resource selection, and profile update using the defined mathematical model. The simulation was executed over six iterations, sufficient for observing convergence behavior and system responsiveness without overfitting to predefined target vectors.
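These parameter choices translate directly into sampling code. The sketch below uses scipy.stats.truncnorm with the stated mean, deviation, and bounds (truncnorm takes its clip points in standardized units); the size of the resource catalogue is an assumption.

```python
import numpy as np
from scipy.stats import truncnorm

MU, SIGMA, LOW, HIGH = 0.55, 0.08, 0.40, 0.70
N_LEARNERS, N_SKILLS = 6, 6
rng = np.random.default_rng(42)

# truncnorm expects clip points in standard-deviation units from the mean
a, b = (LOW - MU) / SIGMA, (HIGH - MU) / SIGMA
initial_competence = truncnorm.rvs(
    a, b, loc=MU, scale=SIGMA, size=(N_LEARNERS, N_SKILLS), random_state=42,
)

# Responsiveness alpha_i ~ U[0.3, 0.6]; effectiveness E_r in [0.3, 0.8]
responsiveness = rng.uniform(0.3, 0.6, size=N_LEARNERS)
effectiveness = rng.uniform(0.3, 0.8, size=10)  # assumed 10-resource catalogue
print(initial_competence.round(2))
```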

3. Results

This section presents the simulation-based evaluation of the DTT system applied to the domain of ATA 36 Pneumatic Systems. The simulation validates how the orchestration engine dynamically adapts scenario assignment and personalizes training trajectories across learners with varying initial skill profiles.

3.1. Learner Initialization and Competence Profiles

The simulation targeted six critical skill domains under ATA 36 Pneumatic Systems:
  • PN-1: Air Supply Control;
  • PN-2: Duct Pressure Regulation;
  • PN-3: Engine Bleed Monitoring;
  • PN-4: Isolation Valve Logic;
  • PN-5: Leak Detection Procedures;
  • PN-6: Safety Relief Systems.
Six virtual learners were initialized with individualized competence vectors, generated from a truncated normal distribution to simulate realistic variation in pre-existing skills. The regulatory target competence vector was held constant for all learners, representing full regulatory compliance. Table 1 presents the initial competence vectors for all six learners.

3.2. Orchestration Cycles and Convergence Behavior

Each learner progressed through six orchestration cycles, during which competence gaps were evaluated, resources assigned, and vectors updated based on the personalized learning model. The orchestration engine selected content by jointly minimizing skill misalignment, fidelity cost, training duration, and Bloom-level complexity. Figure 6 presents the convergence trajectories of competence gap norms across all learners.
Figure 6 presents the competence gap norm trajectories observed across all six simulated learners during the orchestration cycles applied to Pneumatic System training. The trajectories illustrate the progressive reduction in competence gaps as the orchestration engine dynamically assigned individualized training resources. Initial gap norms represent significant heterogeneity in learners’ starting skill profiles. Throughout the six adaptive training iterations, all learners demonstrated rapid and monotonic reductions in their competence gaps. These results confirm the effectiveness of the orchestration framework in accelerating skill convergence while highlighting the potential for further refinement through additional training cycles or enhanced personalization strategies.
Figure 7 presents the mean competence gap norm across all learners at each orchestration cycle, accompanied by 95% confidence intervals. The narrow intervals around each mean indicate low inter-learner variability and confirm the system’s ability to guide all learner profiles toward consistent convergence trajectories. The smooth downward trend also reaffirms the stability of the competence update process across multiple simulation iterations.
Figure 8 provides a boxplot representation of the competence gap norms after the sixth orchestration cycle. The distribution shows that all learners achieved low gap values below the instructional threshold, with minimal spread and no outliers. This supports the conclusion that the system effectively reduces individual learning deficits in a reliable and uniform manner, regardless of initial competence variability.

3.3. Fidelity Allocation Behavior

Throughout the simulation, the orchestration engine dynamically adjusted content fidelity according to gap severity, ensuring resource-efficient learning. Fidelity distribution results are summarized in Figure 9.
The system prioritized medium-fidelity simulations for the majority of assignments, followed by low-fidelity CBT resources and high-fidelity XR scenarios. This fidelity allocation reflects the orchestration logic’s ability to reserve computationally expensive immersive content for learners operating near convergence thresholds while using lower-fidelity remediation for larger competence gaps.

3.4. Domain-Specific Competence Gains

The orchestration engine dynamically allocated training resources across the six skill domains of ATA 36 Pneumatic Systems, resulting in domain-specific competence improvements that closely reflected the initial gap severity profiles for each learner. Skill domains with larger initial deficits received more intensive remediation and therefore demonstrated greater cumulative competence gains.
Figure 10 presents the average competence gains achieved across all learners for each domain.
As shown in Figure 10, the largest competence gains were recorded in the following:
  • PN-2: Duct Pressure Regulation—average gain +0.34;
  • PN-5: Leak Detection Procedures—average gain +0.31;
  • PN-3: Engine Bleed Monitoring—average gain +0.30.
These domains exhibited the largest initial skill gaps across the learner population and therefore triggered more extensive remedial assignments during the orchestration cycles. In contrast, skill domains where learners entered with higher baseline competence—such as PN-1: Air Supply Control and PN-4: Isolation Valve Logic—required fewer instructional cycles to meet target thresholds, yielding comparatively smaller competence gains of +0.29 and +0.25, respectively.
The orchestration engine’s competence-gap-driven assignment logic ensured that remediation resources were efficiently targeted to the most critical deficiencies for each individual learner, while minimizing unnecessary repetition for skills that were already well-developed. This adaptive prioritization directly supports training efficiency by reducing redundant exposure and focusing instructional effort where it yields the highest competence improvement.

3.5. Personalization Accuracy Metrics

To quantitatively evaluate the orchestration engine’s ability to personalize training delivery, two internal precision metrics were monitored throughout the simulation cycles: the redundancy ratio and the overreach ratio. Both metrics serve as indicators of how accurately the orchestration engine aligned training resource assignments with real-time learner competence levels.
The redundancy ratio reflects the proportion of training assignments allocated to skill domains where the learner had already reached or exceeded the regulatory target competence level. In an ideal scenario, no resources would be assigned to already-mastered skills; thus, a lower redundancy ratio indicates more efficient, non-redundant orchestration behavior.
The overreach ratio tracks the proportion of high-fidelity training resources (i.e., immersive XR-based digital twins) assigned prematurely to learners whose competence levels were not yet sufficient to fully benefit from advanced scenarios. Excessive overreach may indicate pedagogical inefficiency or premature exposure to complex training tasks.
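Both ratios can be computed directly from the orchestration assignment log, as in the minimal sketch below; the record layout and the XR-readiness flag are illustrative assumptions.

```python
def personalization_metrics(assignments: list[dict]) -> tuple[float, float]:
    """Redundancy: share of assignments to already-mastered skills.
    Overreach: share of high-fidelity assignments to unready learners."""
    total = len(assignments)
    redundant = sum(1 for a in assignments if a["gap"] <= 0.0)
    overreach = sum(1 for a in assignments
                    if a["fidelity"] == "high" and not a["ready_for_xr"])
    return redundant / total, overreach / total

log = [
    {"gap": 0.20, "fidelity": "medium", "ready_for_xr": False},
    {"gap": 0.00, "fidelity": "low",    "ready_for_xr": True},   # redundant
    {"gap": 0.10, "fidelity": "high",   "ready_for_xr": False},  # overreach
]
print(personalization_metrics(log))  # -> (0.333..., 0.333...)
```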
Simulation results demonstrate that both personalization metrics remained well within acceptable boundaries, confirming the orchestration engine’s high precision. The observed redundancy ratio was 3.2%, while the overreach ratio was 1.1%, as summarized in Figure 11.
A detailed audit of the orchestration assignment statistics is presented in Table 2, which summarizes the simulation event counts that yielded these metrics.

3.6. System Traceability and Auditability

Beyond dynamic orchestration and adaptive personalization, the DTT system incorporates a robust internal traceability framework through the implementation of a structured validation matrix. This matrix functions as a real-time logging mechanism that captures all orchestration decisions, learner competence updates, and training resource allocations. The resulting dataset enables comprehensive auditability for regulatory certification, instructor oversight, institutional reporting, and advanced learning analytics.
For each orchestration cycle, the validation matrix records detailed metadata describing the entire instructional transaction. Specifically, the following parameters are logged (a minimal logging sketch follows the list):
  • Learner ID—uniquely identifies each learner within the system;
  • Iteration—sequential orchestration cycle;
  • Skill Domain—specific competence dimension addressed during the session;
  • Pre-Training Gap—calculated skill gap prior to resource assignment;
  • Resource ID—internal identifier of the training asset assigned;
  • Fidelity Tier—resource fidelity classification (low, medium, high);
  • Bloom Level—cognitive complexity level based on Bloom’s Taxonomy (levels 1–6);
  • Session Duration—scheduled or actual training time for the assignment;
  • Instructional Effectiveness Coefficient—resource-specific learning gain multiplier;
  • Post-Training Competence—updated competence value following session completion.
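One way to materialize this record structure is a typed row appended to a CSV-backed matrix, sketched below. The field names mirror the list above, while the file path and example values are assumptions.

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class ValidationRow:
    learner_id: str
    iteration: int
    skill_domain: str
    pre_training_gap: float
    resource_id: str
    fidelity_tier: str      # low / medium / high
    bloom_level: int        # Bloom's Taxonomy levels 1-6
    session_duration: float
    effectiveness: float    # instructional effectiveness coefficient
    post_competence: float

def append_row(row: ValidationRow, path: str = "validation_matrix.csv") -> None:
    """Append one orchestration decision to the audit log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(ValidationRow)])
        if f.tell() == 0:  # new file: write the header first
            writer.writeheader()
        writer.writerow(asdict(row))

append_row(ValidationRow("learner-003", 2, "PN-5", 0.28,
                         "RES-117", "medium", 3, 25.0, 0.55, 0.67))
```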
This architecture ensures that every adaptive decision is fully documented and reproducible, allowing for granular reconstruction of individual learner trajectories and institutional compliance reviews. The matrix further supports real-time auditing by mapping orchestration behavior directly to competence progression pathways, offering fully transparent evidence for external certification bodies such as EASA or institutional accreditation boards.
An illustrative extract from the validation matrix is provided in Table 3.
The structure of the validation matrix supports both horizontal auditing (across learners and scenarios) and vertical auditing (over time for each learner), enabling direct verification that each training assignment was pedagogically justified, competence-driven, and regulation-compliant. Additionally, the matrix serves as a foundation for institutional learning analytics, future orchestration algorithm calibration, and continuous improvement of training resource allocation strategies.

4. Discussion

The simulation results demonstrate the feasibility, efficiency, and audit-readiness of the proposed DTT ecosystem for adaptive aviation maintenance training. By combining competence gap modeling, multi-objective orchestration algorithms, and real-time data-driven personalization, the system effectively overcomes key limitations of traditional static training methodologies, such as redundancy, under-adaptation, and inefficient resource utilization.

4.1. Integrated Evaluation of Orchestration Efficiency

The simulation results provide evidence of the operational feasibility, adaptive performance, and regulatory auditability of the proposed DTT framework. The algorithmic orchestration engine demonstrated a consistent capacity to drive rapid competence convergence across a heterogeneous learner cohort. All simulated learners achieved substantial reductions in competence gap norms within six orchestration cycles, despite significant variability in their initial skill profiles. This competence-driven adaptive allocation ensured that instructional resources were concentrated on critical deficiencies, thereby minimizing redundant exposure to previously mastered content while accelerating skill acquisition in underdeveloped domains.
The orchestration function’s multi-objective optimization framework—incorporating competence gap magnitude, Bloom’s cognitive complexity levels, training fidelity tiers, and instructional efficiency—enabled precise balancing between cognitive workload, regulatory compliance, and resource utilization. The integration of Bloom-level penalties effectively regulated the introduction of higher-order cognitive challenges, ensuring that learners were progressively exposed to complex training content only when their competence levels justified such advancement. Concurrently, the fidelity allocation model prevented unnecessary overuse of resource-intensive XR-based immersive simulations, preserving these high-cost assets for advanced remediation scenarios where they delivered the greatest pedagogical value.
Quantitative personalization accuracy was further validated through the monitoring of redundancy and overreach metrics throughout the orchestration process. These metrics reflect the orchestration engine’s capacity to maintain instructional efficiency while simultaneously preserving pedagogical safety, avoiding both unnecessary repetition and premature cognitive overload.
Beyond instructional optimization, the DTT system architecture incorporates a fully integrated validation matrix, enabling comprehensive auditability and traceability of instructional decisions. This matrix records detailed metadata for each orchestration cycle, including learner identifiers, competence gaps, assigned resources, fidelity tiers, Bloom levels, session durations, and post-training competence updates. The resulting data structure provides complete longitudinal records of individual learner trajectories and supports both real-time instructional oversight and retrospective institutional audits. This traceability capability is particularly critical for compliance with external regulatory standards such as EASA Part-66, ensuring that each training decision can be transparently reconstructed and pedagogically justified in both certification and accreditation contexts.
The modular integration of reinforcement learning agents, federated learning protocols, and knowledge graph-based curriculum models further extends the DTT system’s adaptability, scalability, and regulatory readiness. The reinforcement learning component enables progressive refinement of orchestration policies as longitudinal learner data accumulate. Federated learning mechanisms support institution-wide analytics while preserving data privacy, thus ensuring compliance with data protection regulations such as GDPR. Simultaneously, the knowledge graph curriculum structure enforces prerequisite mastery constraints and dynamically manages dependency relationships within complex aviation maintenance training domains.
The simulation results confirm not only the convergence of learner competence vectors but also the robustness and adaptability of the orchestration engine under diverse training conditions. One important implication is that the system consistently reduces the competence gap even when learners begin with varying skill levels and responsiveness rates. This supports the suitability of digital training twin systems for scalable deployment in environments where learners exhibit heterogeneous profiles, such as in airline maintenance academies or vocational institutions with mixed-experience cohorts.
Moreover, the low variance in final competence outcomes—supported by the confidence intervals and boxplot visualizations—demonstrates the effectiveness of algorithmic orchestration in reducing instructor dependency and supporting semi-autonomous training ecosystems. This has implications for instructor workload optimization and the operational flexibility of training centers, especially in distributed or resource-constrained environments.
The modular design of the reinforcement learning and federated learning layers also enables extensibility. For example, institutions could incorporate additional constraints such as certification timelines, economic costs, or regulatory constraints into the orchestration logic. Future real-world deployments could leverage this flexibility to dynamically adjust training plans in response to aircraft maintenance events, technician availability, or changing safety requirements, aligning digital twin ecosystems more closely with operational realities.
Scalability is an important consideration for real-world deployment. The modular architecture of the proposed system supports horizontal scaling across larger learner populations by distributing orchestration logic to edge nodes and using containerized cloud services for model aggregation and analytics. The use of MQTT for low-bandwidth communication and Apache Kafka for high-throughput data streaming ensures that system performance remains stable even with increased concurrency and data flow.
The competence modeling framework is domain-agnostic and can be extended to support multiple skill domains by augmenting the vector dimensionality and updating the gain profiles of new training resources. The orchestration logic, cost function, and reinforcement learning models are generalizable, provided that domain-specific taxonomies (e.g., Bloom levels, fidelity scores) are appropriately defined. These features position the system for use not only in aviation maintenance but also in other structured training environments such as medical simulation, logistics, or industrial equipment maintenance.

4.2. Ethical and Regulatory Considerations

The implementation of a digital training twin ecosystem that collects, processes, and analyzes learner data must adhere to strict ethical standards and regulatory requirements. The present system design incorporates safeguards focused on data privacy, security, informed consent, and compliance with regional and international regulations, most notably the General Data Protection Regulation (GDPR).
To ensure data privacy, all learner-related data—including competence vectors, session logs, and behavioral metrics—is processed locally on edge devices. No raw personal data is transmitted to the cloud. Instead, federated learning enables only encrypted model updates to be shared, preventing central aggregation of sensitive data. All communications use TLS-encrypted channels over MQTT and Kafka protocols, ensuring confidentiality and integrity during transfer.
Informed consent is a foundational element of system deployment. Learners must be provided with clear, accessible information outlining what data is being collected, for what purpose, how long it will be stored, and who has access to it. Consent must be explicit, recorded, and revocable, with procedures in place to fulfill right-to-erasure or data access requests, in compliance with Articles 7 and 17 of the GDPR.
Regarding data minimization and purpose limitation (Articles 5 and 6 of the GDPR), the system collects only the minimum data necessary for competence assessment and orchestration. No biometric, location-based, or unrelated personal identifiers are stored. All analytical processes are bound to a clearly defined educational purpose.
To support fairness and transparency, all orchestration decisions can be traced via audit logs, and learners are entitled to explanations of how their training paths are determined (Article 22 of the GDPR). The orchestration engine is regularly reviewed for potential algorithmic bias across demographic or skill-based learner segments.
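The shape of such an audit trail follows directly from the validation matrix fields (cf. Table 3). The sketch below serializes one orchestration decision as an append-only JSON line; the storage backend and serialization format are assumed choices.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class OrchestrationLogEntry:
    """One validation-matrix record; field names follow Table 3."""
    learner_id: str
    iteration: int
    skill_domain: str
    pre_training_gap: float
    resource_id: str
    fidelity_tier: str
    bloom_level: int
    session_duration_min: int
    effectiveness: float
    post_training_competence: float
    timestamp: float = 0.0

entry = OrchestrationLogEntry("L1", 1, "PN-2", 0.40, "R102", "Low",
                              2, 20, 0.3, 0.62, timestamp=time.time())
# Append-only JSON line: supports audits and Article 22 explanation requests.
print(json.dumps(asdict(entry)))
```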
Additionally, institutions deploying the system are encouraged to conduct Data Protection Impact Assessments (DPIAs) prior to launch, especially in scenarios involving sensitive learner groups, high-volume data collection, or cross-border model sharing. Integration with federated unlearning modules in future versions will further empower users with control over their data footprint.

4.3. Limitations and Future Research Directions

While the simulation results validate the system’s core functionality, several areas warrant future investigation:
  • The current competence update model assumes fixed responsiveness coefficients λ_i across learners. Incorporating adaptive learning-rate estimation based on real-world learner behavior could further improve personalization precision (a minimal sketch follows this list).
  • The simulation presently focuses on a single ATA domain (ATA 36 Pneumatic Systems); expansion to multi-domain, cross-ATA orchestration remains an important next step.
  • Integration of real-world learner data into the federated learning modules will enable predictive modeling for skill gap evolution and facilitate early intervention strategies.
  • Extending orchestration algorithms to incorporate real-time instructor feedback may enable hybrid human-in-the-loop personalization models that balance algorithmic optimization with expert pedagogical judgment.
  • While federated learning was implemented in a horizontal architecture for privacy-preserving updates, no adversarial scenarios (e.g., model poisoning or node dropout) were simulated. Future work should explore robust federated learning techniques, including differential privacy, secure aggregation, and federated unlearning, to improve resilience and trust in decentralized training environments.
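As a sketch of the adaptive responsiveness estimation raised in the first bullet, the following updates a learner's λ_i from the ratio of observed to expected competence gain using an exponential moving average; the smoothing factor and prior are assumptions.

```python
def update_lambda(lam, observed_gain, expected_gain, alpha=0.2):
    """EMA estimate of learner responsiveness lambda_i, clipped to (0, 1]."""
    if expected_gain <= 0:
        return lam                          # no evidence from this session
    ratio = min(observed_gain / expected_gain, 1.0)
    return (1.0 - alpha) * lam + alpha * ratio

# Example: a learner who consistently realizes ~70% of the predicted gain
lam = 0.5                                   # assumed prior
for observed, expected in [(0.07, 0.10), (0.06, 0.09), (0.08, 0.11)]:
    lam = update_lambda(lam, observed, expected)
print(f"estimated lambda_i = {lam:.2f}")
```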
Future research should focus on validating the proposed framework through pilot deployments in actual training centers. This includes collecting empirical learner data, evaluating long-term learning retention, and assessing instructor feedback. Furthermore, integrating multimodal inputs—such as eye-tracking, biometric feedback, or voice-based interactions—could improve personalization and real-time adaptation of training pathways.

5. Conclusions

This study introduced and validated an algorithmically orchestrated DTT framework for adaptive aviation maintenance training, with application demonstrated in the ATA 36 Pneumatic Systems domain. The system integrates dynamic competence gap modeling, multi-criteria resource selection algorithms, federated learning analytics, and knowledge graph-based curriculum structuring to deliver fully personalized, regulation-compliant, and resource-efficient training experiences.
Simulation results confirmed that the orchestration engine successfully guided learners with diverse initial competence profiles to full regulatory proficiency within six adaptive training cycles. The competence-driven orchestration logic ensured targeted remediation of individual skill gaps while minimizing instructional redundancy and preventing premature exposure to advanced high-fidelity simulations.
The validation matrix architecture demonstrated full-cycle traceability of instructional decisions, supporting both real-time learning analytics and transparent auditability for regulatory certification bodies. The orchestration engine’s multi-objective optimization framework—jointly balancing skill alignment, instructional fidelity, cognitive complexity, and time efficiency—proved effective in dynamically assigning training resources while preserving pedagogical safety and instructional rigor.
The modular integration of reinforcement learning agents, federated skill analytics, and knowledge graph curriculum models provides a flexible foundation for future system expansion. These algorithmic modules enable both ongoing orchestration policy refinement and scalable institutional deployment across diverse aviation maintenance training contexts.
Future work will extend this framework to multi-domain orchestration, cross-fleet training scenarios, adaptive learning rate modeling, and hybrid human-in-the-loop personalization strategies. The DTT ecosystem thus offers a practical route to next-generation aviation maintenance education, combining methodological innovation with direct applicability to industry practice.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Dewan, M.H.; Godina, R.; Chowdhury, M.R.K.; Noor, C.W.M.; Wan Nik, W.M.N.; Man, M. Immersive and Non-Immersive Simulators for the Education and Training in Maritime Domain—A Review. J. Mar. Sci. Eng. 2023, 11, 147.
  2. Muzata, A.R.; Singh, G.; Stepanov, M.S.; Musonda, I. Immersive Learning: A Systematic Literature Review on Transforming Engineering Education Through Virtual Reality. Virtual Worlds 2024, 3, 480–505.
  3. Vachálek, J.; Šišmišová, D.; Vašek, P.; Fiťka, I.; Slovák, J.; Šimovec, M. Design and Implementation of Universal Cyber-Physical Model for Testing Logistic Control Algorithms of Production Line’s Digital Twin by Using Color Sensor. Sensors 2021, 21, 1842.
  4. Vodyaho, A.; Abbas, S.; Zhukova, N.; Chervoncev, M. Model Based Approach to Cyber–Physical Systems Status Monitoring. Computers 2020, 9, 47.
  5. Shin, M.H. Effects of Project-Based Learning on Students’ Motivation and Self-Efficacy. Engl. Teach. 2018, 73, 95–114.
  6. Kwok, P.K.; Yan, M.; Qu, T.; Lau, H.Y. User Acceptance of Virtual Reality Technology for Practicing Digital Twin-Based Crisis Management. Int. J. Comput. Integr. Manuf. 2021, 34, 874–887.
  7. Geng, R.; Li, M.; Hu, Z.; Han, Z.; Zheng, R. Digital Twin in Smart Manufacturing: Remote Control and Virtual Machining Using VR and AR Technologies. Struct. Multidiscip. Optim. 2022, 65, 321.
  8. Madni, A.M.; Erwin, D.; Madni, A. Exploiting Digital Twin Technology to Teach Engineering Fundamentals and Afford Real-World Learning Opportunities. In Proceedings of the 2019 ASEE Annual Conference & Exposition, Tampa, FL, USA, 16–19 June 2019.
  9. Krupas, M.; Kajati, E.; Liu, C.; Zolotova, I. Towards a Human-Centric Digital Twin for Human–Machine Collaboration: A Review on Enabling Technologies and Methods. Sensors 2024, 24, 2232.
  10. Hänggi, R.; Nyffenegger, F.; Ehrig, F.; Jaeschke, P.; Bernhardsgrütter, R. Smart Learning Factory–Network Approach for Learning and Transfer in a Digital & Physical Set Up. In Proceedings of the PLM 2020, Rapperswil, Switzerland, 5–8 July 2020; Springer: Cham, Switzerland, 2020; pp. 15–25.
  11. Shi, T. Application of VR Image Recognition and Digital Twins in Artistic Gymnastics Courses. J. Intell. Fuzzy Syst. 2021, 40, 7371–7382.
  12. Zaballos, A.; Briones, A.; Massa, A.; Centelles, P.; Caballero, V. A Smart Campus’ Digital Twin for Sustainable Comfort Monitoring. Sustainability 2020, 12, 9196.
  13. Ahuja, K.; Shah, D.; Pareddy, S.; Xhakaj, F.; Ogan, A.; Agarwal, Y.; Harrison, C. Classroom Digital Twins with Instrumentation-Free Gaze Tracking. In Proceedings of the 2021 CHI Conference, Yokohama, Japan, 8–13 May 2021; pp. 1–9.
  14. Bevilacqua, M.G.; Russo, M.; Giordano, A.; Spallone, R. 3D Reconstruction, Digital Twinning, and Virtual Reality: Architectural Heritage Applications. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Christchurch, New Zealand, 12–16 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 92–96.
  15. Halverson, L.R.; Graham, C.R. Learner Engagement in Blended Learning Environments: A Conceptual Framework. Online Learn. 2019, 23, 145–178.
  16. Zheng, X.; Lu, J.; Kiritsis, D. The Emergence of Cognitive Digital Twin: Vision, Challenges and Opportunities. Int. J. Prod. Res. 2022, 60, 7610–7632.
  17. Hazrat, M.A.; Hassan, N.M.S.; Chowdhury, A.A.; Rasul, M.G.; Taylor, B.A. Developing a Skilled Workforce for Future Industry Demand: The Potential of Digital Twin-Based Teaching and Learning Practices in Engineering Education. Sustainability 2023, 15, 16433.
  18. Kabashkin, I. Digital Twin Framework for Aircraft Lifecycle Management Based on Data-Driven Models. Mathematics 2024, 12, 2979.
  19. Wang, Z.; Wang, Y.; Wang, X.; Yang, K.; Zhao, Y. A Novel Digital Twin Framework for Aeroengine Performance Diagnosis. Aerospace 2023, 10, 789.
  20. Zaccaria, V.; Stenfelt, M.; Aslanidou, I.; Kyprianidis, K.G. Fleet Monitoring and Diagnostics Framework Based on Digital Twin of Aeroengines. In Proceedings of the ASME Turbo Expo, Oslo, Norway, 11–15 June 2018; Volume 6.
  21. Alasim, F.; Almalki, H. Virtual Simulation-Based Training for Aviation Maintenance Technicians: Recommendations of a Panel of Experts. SAE Int. J. Adv. Curr. Prac. Mobil. 2021, 3, 1285–1292.
  22. Charles River Analytics. Maintenance Training Based on an Adaptive Game-Based Environment Using a Pedagogic Interpretation Engine (MAGPIE). Available online: https://cra.com/blog/maintenance-training-based-on-an-adaptive-game-based-environment-using-a-pedagogic-interpretation-engine-magpie/ (accessed on 12 June 2025).
  23. Wu, W.-C.; Vu, V.-H. Application of Virtual Reality Method in Aircraft Maintenance Service—Taking Dornier 228 as an Example. Appl. Sci. 2022, 12, 7283.
  24. Lufthansa Technik. AVIATAR. Available online: https://www.lufthansa-technik.com/de/aviatar (accessed on 12 June 2025).
  25. Airbus. Skywise. Available online: https://aircraft.airbus.com/en/services/enhance/skywise (accessed on 12 June 2025).
  26. GE Digital. PREDIX Analytics Framework. Available online: https://www.ge.com/digital/documentation/predix-platforms/afs-overview.html (accessed on 12 June 2025).
  27. AFI KLM E&M. PROGNOS—Predictive Maintenance. Available online: https://www.afiklmem.com/en/solutions/about-prognos (accessed on 12 June 2025).
  28. Boeing Global Services. Enhanced Digital Solutions Focus on Customer Speed and Operational Efficiency. Available online: https://investors.boeing.com/investors/news/press-release-details/2018/Boeing-Global-Services-Enhanced-Digital-Solutions-Focus-on-Customer-Speed-and-Operational-Efficiency/default.aspx (accessed on 12 June 2025).
  29. Kabashkin, I.; Misnevs, B.; Zervina, O. Artificial Intelligence in Aviation: New Professionals for New Technologies. Appl. Sci. 2023, 13, 11660.
  30. Rojas, L.; Peña, Á.; Garcia, J. AI-Driven Predictive Maintenance in Mining: A Systematic Literature Review on Fault Detection, Digital Twins, and Intelligent Asset Management. Appl. Sci. 2025, 15, 3337.
  31. Lu, Q.; Li, M. Digital Twin-Driven Remaining Useful Life Prediction for Rolling Element Bearing. Machines 2023, 11, 678.
  32. European Union Aviation Safety Agency (EASA). Part-66 Maintenance Certifying Staff. Available online: https://www.easa.europa.eu/en/acceptable-means-compliance-and-guidance-material-group/part-66-maintenance-certifying-staff (accessed on 12 June 2025).
  33. iSpec 2200: Information Standards for Aviation Maintenance, Revision 2024.1. Available online: https://publications.airlines.org/products/ispec-2200-information-standards-for-aviation-maintenance-revision-2024-1 (accessed on 12 June 2025).
  34. xAPI Solved and Explained. Available online: https://xapi.com/ (accessed on 12 June 2025).
  35. Apache Kafka. Available online: https://kafka.apache.org/ (accessed on 12 June 2025).
  36. MQTT: The Standard for IoT Messaging. Available online: https://mqtt.org/ (accessed on 12 June 2025).
  37. Bloom, B.S.; Engelhart, M.D.; Furst, E.J.; Hill, W.H.; Krathwohl, D.R. Handbook I: Cognitive Domain. In Taxonomy of Educational Objectives: The Classification of Educational Goals; David McKay: New York, NY, USA, 1956.
Figure 1. System architecture of the digital training twin ecosystem.
Figure 2. Training flow diagram illustrating the simulation loop for competence-based orchestration.
Figure 3. Cloud–edge deployment architecture for digital training twin delivery.
Figure 4. Knowledge graph representation of Pneumatic Systems curriculum.
Figure 5. Integrated algorithms supporting orchestration and analytics in the DTT system.
Figure 6. Convergence trajectories for Pneumatic System training.
Figure 7. Convergence trajectories with 95% confidence interval.
Figure 8. Distribution of final competence gaps.
Figure 9. Distribution of training content by fidelity level.
Figure 10. Average competence gains by skill domain.
Figure 11. Personalization accuracy metrics.
Table 1. Initial competence vectors for ATA 36 Pneumatic Systems.

| Skill Domain | Target | Learner 1 | Learner 2 | Learner 3 | Learner 4 | Learner 5 | Learner 6 |
|---|---|---|---|---|---|---|---|
| PN-1: Air Supply Control | 0.92 | 0.55 | 0.66 | 0.61 | 0.61 | 0.64 | 0.58 |
| PN-2: Duct Pressure Reg. | 0.88 | 0.48 | 0.47 | 0.56 | 0.64 | 0.66 | 0.67 |
| PN-3: Engine Bleed Mon. | 0.90 | 0.52 | 0.54 | 0.49 | 0.55 | 0.56 | 0.61 |
| PN-4: Isolation Valve Logic | 0.89 | 0.62 | 0.54 | 0.63 | 0.67 | 0.52 | 0.68 |
| PN-5: Leak Detection Proc. | 0.91 | 0.44 | 0.50 | 0.42 | 0.50 | 0.55 | 0.51 |
| PN-6: Safety Relief Systems | 0.93 | 0.49 | 0.60 | 0.46 | 0.53 | 0.63 | 0.52 |
Table 2. Detailed statistics of training assignments and fidelity allocation events.

| Metric | Value | Interpretation |
|---|---|---|
| Initial Competence Gap Norm | 0.38–0.60 | Significant variability in baseline learner profiles |
| Final Competence Gap Norm | <0.10 | Full regulatory convergence achieved in all learners |
| Convergence Iterations | ≤6 iterations | Rapid adaptation within limited training cycles |
| Redundancy Ratio | 3.2% | Minimal unnecessary assignments |
| Overreach Ratio | 1.1% | Very few inappropriate high-fidelity assignments |
| High-Fidelity Usage (XR) | 24% | Immersive twins used sparingly for refinement |
| Low-Fidelity Usage (CBT) | 44% | Broadly used for gap remediation |
| Regulatory Threshold Achievement | 100% | Full compliance across entire learner cohort |
Table 3. Example of validation matrix log.

| Learner ID | Iteration | Skill Domain | Pre-Training Gap | Resource ID | Fidelity Tier | Bloom Level | Session Duration (min) | Effectiveness | Post-Training Competence |
|---|---|---|---|---|---|---|---|---|---|
| L1 | 1 | PN-2 | 0.40 | R102 | Low | 2 | 20 | 0.3 | 0.62 |
| L1 | 2 | PN-5 | 0.25 | R205 | Medium | 3 | 35 | 0.5 | 0.72 |
| L2 | 1 | PN-3 | 0.38 | R315 | Low | 2 | 25 | 0.3 | 0.75 |
| L3 | 1 | PN-2 | 0.42 | R102 | Low | 2 | 20 | 0.3 | 0.60 |
| L4 | 3 | PN-5 | 0.18 | R205 | High | 5 | 50 | 0.8 | 0.88 |