1. Introduction
The transition toward Industry 4.0 has fundamentally transformed industrial systems by integrating cyber–physical systems, advanced automation, and data-driven decision-making [1,2,3,4]. Manufacturing environments increasingly rely on interconnected technologies such as robotics, artificial intelligence (AI), and the Internet of Things (IoT) to support real-time monitoring, optimization, and predictive analytics. Within this technological shift, digital twin (DT) technology has emerged as a key enabler for improving system visibility, operational efficiency, and lifecycle management [1,2].
Digital twins extend traditional simulation models by maintaining continuous synchronization between physical assets and their virtual representations. This capability enables real-time feedback, predictive maintenance, and informed decision-making throughout the system lifecycle [2,3]. As industrial systems become more complex and interconnected, digital twins provide a mechanism for managing uncertainty, reducing downtime, and improving overall system performance.
Modern production systems increasingly exhibit characteristics of a System of Systems (SoS), defined as collections of autonomous yet interdependent constituent systems that interact across organizational, technological, and geographical boundaries [5,6,7]. Such systems introduce challenges in coordination, control, emergent behavior, and system-level optimization that cannot be adequately addressed by reductionist engineering approaches. Addressing these challenges requires holistic perspectives grounded in systems engineering and systems thinking principles [8,9].
Recent research highlights the importance of systemic quality management frameworks in SoS environments. Agmon et al. demonstrate that global quality management systems grounded in systems thinking are essential for addressing interdependencies and dynamic evolution in complex, distributed systems [8]. Further work shows that integrating quality management with systems engineering principles enhances coherence, adaptability, and decision-making across multi-organizational systems [9].
Within this complex systems landscape, digital twins function not only as technological tools but also as system-level integrators that support learning, coordination, and continuous improvement [10,11,12]. From a systems modeling perspective, digital twins enable closed-loop synchronization between physical and digital states, supporting predictive insights and adaptive control [10]. Comprehensive literature reviews emphasize their role in managing complexity and integrating heterogeneous subsystems in manufacturing environments [11,12].
Robotic fastening systems represent a compelling example of complex industrial systems operating under these conditions. These systems involve tight coupling between mechanical components, control algorithms, production workflows, and quality requirements. They are particularly sensitive to variability, faults, and downtime, making them suitable candidates for digital twin-based enhancement. Ensuring robustness, reliability, and long-term performance in such systems requires an integrated systems engineering approach that combines technical, operational, and managerial perspectives. While recent studies have advanced reliability-focused fault diagnosis, including signal processing methods for non-stationary fault detection in rotating machinery [13] and multi-sensor data fusion frameworks for bearing diagnostics [14], a critical gap remains: existing approaches address isolated analytical techniques without integrating multiple systems engineering tools into a unified, end-to-end design and evaluation framework. This study addresses that gap by embedding QFD, AHP, RAMST, and SPC within a Digital Twin-driven architecture applied to robotic fastening systems.
Despite significant advances in Digital Twin technologies and systems engineering methodologies, the existing literature reveals several critical gaps. First, while Digital Twins have been extensively applied in the aerospace and automotive industries for lifecycle monitoring, their integration with structured systems engineering frameworks (QFD, AHP, RAMST) during the design and manufacturing phases remains underexplored, particularly for robotic fastening applications. Second, most studies focus either on digital modeling or on decision-making tools in isolation, lacking a holistic methodology that systematically combines stakeholder requirements translation, competitive analysis, reliability assessment, and real-time process control within a unified framework. Third, although predictive maintenance using Digital Twins is well documented, few studies demonstrate the integration of real-time torque monitoring, fault detection, and statistical process control (SPC) for fastening quality assurance in production environments. This research addresses these gaps by proposing an integrated systems engineering framework that embeds Digital Twin capabilities within a structured decision-making process, specifically tailored for robotic screw-fastening systems.
Accordingly, this study proposes an integrated framework for enhancing robotic fastening systems through digital twin technology, aligned with systems engineering and systems thinking principles. The framework integrates QFD, AHP, RAMST, and SPC within a unified digital twin environment. By embedding these methodologies within a system-oriented framework, the proposed approach addresses the multidimensional complexity of Industry 4.0 manufacturing systems and supports the transition toward more adaptive, resilient, and data-informed industrial operations.
3. Methodology
The methodology employed in this research integrates various specialized tools designed to achieve the defined research objectives and system goals. Initially, data collection was conducted through structured requirement questionnaires to elicit stakeholder needs. Subsequently, the Quality Function Deployment (QFD) [29] method facilitated the characterization of system requirements based on these user inputs, ensuring alignment with customer expectations. Additionally, the Analytic Hierarchy Process (AHP) [29] provided a robust decision-analysis framework for comparative evaluation against competitors. These two methods operate synergistically in a hierarchical decision-making framework: QFD first establishes weighted technical priorities from stakeholder requirements (e.g., downtime reduction weight: 0.11, torque accuracy: 0.10), which then directly inform the criteria and weightings used in the AHP competitor evaluation, ensuring that system selection maintains explicit traceability from customer needs to design decisions. This sequential integration embodies systems engineering principles in which the customer voice (QFD output) becomes the input for architectural trade-off analysis (AHP input), creating a coherent requirements-to-design transformation rather than isolated tool application. MATLAB R2024a software (MathWorks, Natick, MA, USA) was employed to create digital twin models and perform screw fastening simulations, complemented by Statistical Process Control (SPC) [29] techniques executed through JMP 17 software (SAS Institute, Cary, NC, USA) for real-time data analysis and process monitoring. This carefully selected toolset, chosen based on quality, efficiency, and reliability criteria, significantly reduced operational errors and enhanced production quality, thereby promoting a flexible, well-integrated approach tailored to specific operational and technical demands.
3.1. System Engineering Tools
Quality Function Deployment (QFD) was utilized to translate customer requirements into measurable technical specifications. A questionnaire distributed to 30 stakeholders identified key needs, including reduced downtime, improved system reliability, and accurate torque control. These needs were prioritized, and their relative importance was mapped against system features, identifying critical areas for improvement. QFD is a structured methodology that employs the “House of Quality” matrix framework to systematically convert qualitative customer demands into quantifiable engineering parameters. In this study, stakeholders comprised production engineers, maintenance technicians, and quality managers from industrial automation facilities. The comprehensive questionnaire addressed three primary categories: operational usage aspects, performance characteristics, and general system requirements. A total of 18 customer requirements were identified and rated on a 1–10 importance scale, yielding a cumulative rating of 129.3 points. The analysis revealed that stakeholder priorities were dominated by: maximum payload capacity of 5 kg (rating: 9.7, 8% relative importance), torque and screw assembly data monitoring (rating: 9.3, 8% importance), accurate torque control within the 1–10 lbf-in range (rating: 9.0, 7% importance), system cost not exceeding ₪500,000 (rating: 8.0, 7% importance), and minimal downtime of 0–5 min (rating: 7.0, 6% importance). These weighted priorities directly informed the Digital Twin architecture design, specifically guiding sensor selection for real-time torque monitoring, data acquisition system specifications for multi-parameter tracking, and predictive maintenance algorithm development to minimize system downtime.
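The normalization of importance ratings into relative weights can be sketched as follows. This is an illustrative Python snippet, not the study's software; the requirement names are shortened, and the one-decimal percentages it prints differ slightly from the integer percentages quoted above, which reflect the study's own rounding.

```python
# Illustrative QFD weighting: each requirement's rating is divided by the
# cumulative rating over all 18 requirements (129.3 points, as reported).
# Only the five top-rated requirements from the text are listed here.
TOTAL_RATING = 129.3

top_requirements = {
    "payload capacity (5 kg)": 9.7,
    "torque/assembly data monitoring": 9.3,
    "torque control (1-10 lbf-in)": 9.0,
    "system cost cap": 8.0,
    "minimal downtime (0-5 min)": 7.0,
}

weights = {name: rating / TOTAL_RATING for name, rating in top_requirements.items()}
for name, w in weights.items():
    print(f"{name}: {w:.1%}")
```

The weights of the remaining 13 requirements (not listed) account for the rest of the unit sum.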
3.2. AHP-Based Competitor Analysis
This phase facilitated competitor analysis by comparing criteria such as price, robotic capabilities, downtime, and screw control across leading companies. Pairwise comparison matrices were constructed, and weights were assigned to determine the most competitive solution. The DTwin Screw Robot consistently scored highest, demonstrating superior cost-efficiency and downtime minimization. The Analytic Hierarchy Process (AHP) is a multi-criteria decision-making methodology that decomposes complex decisions into hierarchical pairwise comparisons using a 1–9 importance scale. AHP was employed to objectively evaluate the DTwin Screw Robot against three commercial competitors (Desoutter Industrial Tools (Saint-Herblain, France), Kawasaki Robotics (Kobe, Japan), and Atlas Copco (Stockholm, Sweden)) across four criteria: machine downtime during programming, screw control accuracy, system price, and robotic capabilities. Seven industry experts with a minimum of 10 years of experience conducted pairwise comparisons through structured interviews, ensuring consistency ratios below 0.10. Each expert was briefed on the CR threshold prior to the interview session. In cases where an individual expert’s initial comparison set yielded CR ≥ 0.10 (computed as CR = CI/RI, where CI = (λ_max − n)/(n − 1) and RI = 0.90 for n = 4), the expert was asked to review and revise the inconsistent judgments in a follow-up session. The matrices reported in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 represent the final post-revision consensus values, all satisfying the CR < 0.10 requirement. The calculation employed the geometric-mean approach: for each row i, V_i = (a_i1 × a_i2 × a_i3 × a_i4)^(1/4), followed by normalization (Weight_i = V_i/ΣV_i) to obtain priority weights. All pairwise judgments were expressed using Saaty’s standard 1–9 integer scale; reciprocal values (e.g., 1/3, 1/5) were entered as their decimal equivalents (0.33, 0.20) and are understood to represent the exact integer reciprocals of the corresponding scale values. All decimal values (normalized weights) are reported rounded to two decimal places; entries displayed as 0.00 reflect non-zero weights whose exact values are less than 0.005 after normalization, and entries displayed as 0.01 or 0.02 reflect similarly small but non-negligible contributions. Final scores were synthesized through weighted aggregation of criterion-specific performance evaluations. The complete expert-consensus pairwise comparison matrices, constituting the full record of survey inputs, are presented in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 of the main text. These tables represent the post-revision consensus values following the iterative CR validation process described above. Individual expert-level response data are available from the corresponding author upon reasonable request.
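The geometric-mean weighting and the CR check described above can be sketched in a few lines. This is an illustrative Python implementation, and the 4 × 4 matrix below is hypothetical, not one of the study's consensus matrices.

```python
import math

def ahp_weights(A):
    """Row geometric-mean method: V_i = (prod_j a_ij)^(1/n), normalized to sum to 1."""
    n = len(A)
    V = [math.prod(row) ** (1.0 / n) for row in A]
    total = sum(V)
    return [v / total for v in V]

def consistency_ratio(A, RI=0.90):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1); lambda_max is
    estimated from the weighted-sum vector (the standard AHP approximation).
    RI = 0.90 is the random index for n = 4, as used in the study."""
    w = ahp_weights(A)
    n = len(A)
    Aw = [sum(a * wj for a, wj in zip(row, w)) for row in A]
    lam_max = sum(x / wi for x, wi in zip(Aw, w)) / n
    CI = (lam_max - n) / (n - 1)
    return CI / RI

# Hypothetical pairwise matrix on Saaty's 1-9 scale (NOT the study's data);
# reciprocals are entered as 1/3, 1/5, etc.
A = [[1,   3,   5,   7],
     [1/3, 1,   3,   5],
     [1/5, 1/3, 1,   3],
     [1/7, 1/5, 1/3, 1]]

print([round(w, 3) for w in ahp_weights(A)])
print(round(consistency_ratio(A), 3))  # must be < 0.10 to be accepted
```

A perfectly consistent matrix (a_ij = w_i/w_j) recovers its generating weights exactly and yields CR ≈ 0, which is a useful sanity check on any implementation.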
3.3. Morphological Tables and Pugh Matrix
To address design challenges, morphological tables identified potential solutions to issues such as screw feeding, obstacle detection, and system safety. Each solution was evaluated with reference to risk and performance. A Pugh Matrix further refined these options, systematically comparing alternatives to select the optimal design concept. Morphological Analysis is a systematic design exploration method that deconstructs complex problems into independent functional parameters and identifies all possible solution variants for each parameter, creating a comprehensive solution space. The Pugh Matrix is a structured concept selection tool that evaluates design alternatives against multiple criteria using a baseline reference (datum), employing a simple scoring system (+1 for better, 0 for equivalent, −1 for worse) to identify optimal configurations. The purpose in this research was to systematically explore and evaluate design alternatives for critical subsystems of the Digital Twin-enhanced robotic fastening system, ensuring comprehensive coverage of the solution space while maintaining objective design decisions. Implementation involved a multidisciplinary team decomposing the system into eight functional parameters (screw feeding, obstacle detection, actuation, torque sensing, safety interlocks, communication protocol, end-effector, power transmission), generating multiple variants per parameter, then using the Pugh Matrix with eight weighted criteria derived from QFD results to iteratively refine and select the optimal design concept through structured evaluation workshops.
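The weighted Pugh scoring used for concept selection can be sketched as below. The criterion names, weights, and ratings are hypothetical placeholders, not the study's workshop data; only the +1/0/−1 scoring rule follows the text.

```python
# Illustrative weighted Pugh-matrix scoring: each concept is rated +1 (better),
# 0 (equivalent), or -1 (worse) relative to the datum on each criterion, and
# the weighted ratings are summed. The datum itself scores 0 by definition.
def pugh_score(ratings, weights):
    """ratings/weights: dicts keyed by criterion; ratings must be in {-1, 0, +1}."""
    assert set(ratings) == set(weights)
    assert all(r in (-1, 0, 1) for r in ratings.values())
    return sum(weights[c] * ratings[c] for c in ratings)

weights   = {"reliability": 3, "automation": 2, "safety": 3, "scalability": 1}
concept_a = {"reliability": +1, "automation": 0, "safety": +1, "scalability": -1}
concept_b = {"reliability": 0, "automation": +1, "safety": -1, "scalability": +1}

print(pugh_score(concept_a, weights))  # 3 + 0 + 3 - 1 = 5
print(pugh_score(concept_b, weights))  # 0 + 2 - 3 + 1 = 0
```

In the study the eight criterion weights were derived from the QFD results rather than assigned ad hoc, which is what ties the concept selection back to stakeholder priorities.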
3.4. RAMST
In this study, the selection of the optimal system was based on the criterion of improving system reliability and safety, without conducting analyses of availability, maintainability, and testability. This focus aligns with the primary stakeholder priorities identified through QFD, where reliability (24% importance) and downtime reduction (28% importance) were ranked highest, while routine maintenance requirements were considered secondary. It is noted as a capability observation, not as an analyzed output of this study, that the Digital Twin’s remote diagnostics and predictive maintenance alerts have the potential to reduce mean time to repair (MTTR) and support condition-based maintenance scheduling in future operational phases; however, the formal modeling of Maintainability, Availability, and Testability lies outside the defined scope of this work. The system’s reliability was rigorously assessed through a comprehensive failure analysis, incorporating Fault Tree Analysis (FTA) to systematically identify primary failure modes and their cascading effects, alongside Failure Modes and Effects Analysis (FMEA), which evaluated and ranked potential risks according to their severity, frequency, and detectability, employing Pareto Analysis to prioritize corrective actions effectively. The FMEA employs a custom weighted, non-linear S-O-D scale adapted for safety-critical robotic systems. Severity (S) uses a power-of-two scale: 32 = Human Safety risk; 16 = Mechanism Safety; 8 = General system failure; 4 = Local failure; 2 = Reduced performance; 1 = No operational impact. Occurrence (O) uses five levels: 1 = very low, 2 = low, 3 = medium, 4 = high, 6 = very high. Detection (D) uses three levels: 1 = fully detectable (automated monitoring), 3 = partially detectable, 9 = impossible to detect; a higher D value increases the RPN, consistent with standard FMEA convention.
The theoretical maximum RPN under this scale is 1728 (32 × 6 × 9); the total cumulative RPN across all 21 analyzed items is 1456. Furthermore, a System Lifecycle Mapping approach was utilized to thoroughly examine operational states spanning from installation through maintenance to eventual scrapping, thereby enabling the prediction and mitigation of potential vulnerabilities within the system. RAMST is a comprehensive systems engineering framework evaluating five interdependent operational characteristics: Reliability (probability of failure-free operation), Availability (proportion of operational readiness time), Maintainability (ease and speed of restoration), Safety (hazard mitigation), and Testability (diagnostic effectiveness). This research strategically focused exclusively on Reliability and Safety dimensions, as these directly addressed the dominant stakeholder priorities identified in QFD analysis (system reliability: 24% importance, downtime reduction: 28% importance) and represented the primary value proposition of Digital Twin integration for predictive fault prevention. The deliberate scoping decision to exclude detailed Availability, Maintainability, and Testability analyses was justified by: (1) resource constraints requiring focused allocation to highest-impact areas, and (2) Digital Twin capabilities being most directly applicable to real-time reliability enhancement and safety monitoring rather than physical maintenance operations; Availability, Maintainability, and Testability were therefore entirely excluded from the analytical scope of this study and no outputs were generated for these dimensions. The analyzed RAMST deliverables produced in this study are explicitly limited to the Reliability and Safety dimensions. 
For Reliability, the analysis incorporated Fault Tree Analysis (FTA) to systematically identify primary failure modes and their cascading effects, Failure Modes and Effects Analysis (FMEA) to evaluate and rank potential risks by severity, frequency, and detectability with Pareto Analysis applied to prioritize corrective actions, and System Lifecycle Mapping to examine operational states from installation through maintenance to decommissioning. For Safety, the delivery consisted of hazard identification and mitigation analysis integrated within the Digital Twin real-time monitoring framework, targeting the prevention of fault propagation and unsafe operating conditions. No deliverables were produced for Availability, Maintainability, or Testability within this study.
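The custom S-O-D scale and the RPN arithmetic described above can be expressed directly. This is a minimal Python sketch of the scale definitions given in the text; the two example modes are taken from the reported figures.

```python
# The study's custom non-linear S-O-D scale (values as defined in the text).
SEVERITY   = {1, 2, 4, 8, 16, 32}  # power-of-two severity levels (32 = human safety)
OCCURRENCE = {1, 2, 3, 4, 6}       # five occurrence levels (6 = very high)
DETECTION  = {1, 3, 9}             # three detection levels (9 = impossible to detect)

def rpn(s, o, d):
    """Risk Priority Number = Severity x Occurrence x Detection."""
    assert s in SEVERITY and o in OCCURRENCE and d in DETECTION
    return s * o * d

print(rpn(32, 6, 9))  # theoretical maximum under this scale: 1728
print(rpn(32, 3, 3))  # the torque-control failure mode reported below: 288
```

Because the scale is non-linear, a single human-safety severity level (S = 32) dominates the RPN even at low occurrence, which is the intended bias for a safety-critical system.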
3.5. Digital Twin Development in MATLAB
The digital twin of the robotic screw-driving system was constructed in the MATLAB Simscape environment, following a structured approach with the following key steps. First, during the system modeling phase, detailed 3D models of the robot, screwdriver, screw feeder, and electronic card assembly were created to replicate the physical system’s geometry and kinematics; these models provided a visual representation of the system and enabled the simulation of movements and interactions. Second, the screwing process was designed within the digital twin, incorporating a detailed model that captured critical parameters such as screw angle, closing torque, and operation time, facilitating analysis of process dynamics and enabling optimization of screwing parameters. Third, sensor integration was carried out by embedding virtual sensors in the digital twin, mirroring the precise placement and functionality of the physical sensors; these provided simulated data streams corresponding to screw angle, closing torque, robot joint acceleration, and force, enabling the digital twin to accurately reflect the real-time behavior of the physical system. Finally, real-time sensor data were processed and visualized within the digital twin, using Statistical Process Control (SPC) charts and custom dashboards to provide comprehensive insights into system performance, effectively aiding in the detection of anomalies and the prediction of potential faults.
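The anomaly-flagging step can be illustrated with a simple individuals-type control chart. The study implemented SPC in JMP and MATLAB dashboards; the Python sketch below assumes plain 3-sigma limits on a baseline torque sample and is not the study's implementation. The torque values are invented for illustration.

```python
import statistics

# Minimal SPC sketch: estimate control limits from an in-control baseline,
# then flag live readings that fall outside them.
def control_limits(baseline, k=3.0):
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)  # sample standard deviation
    return mean - k * sigma, mean + k * sigma

baseline_torque = [5.02, 4.98, 5.01, 4.99, 5.00, 5.03, 4.97, 5.01]  # lbf-in
lcl, ucl = control_limits(baseline_torque)

live_stream = [5.00, 5.02, 5.45, 4.99]  # 5.45 is an injected out-of-control point
alarms = [x for x in live_stream if not (lcl <= x <= ucl)]
print(alarms)
```

A production chart would normally use moving-range-based limits and Western-Electric-style run rules rather than a raw standard deviation, but the detection principle is the same.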
3.6. Validity and Reliability
To ensure the validity and reliability of the research findings, several measures were taken. Experts in robotics, automation, and manufacturing reviewed the research design, data collection instruments, and analysis methodologies. This external scrutiny ensured the research’s rigor and scientific soundness. Digital twin simulations and data analysis were conducted multiple times to verify the consistency and repeatability of the results. This iterative approach enhanced the reliability of the findings and minimized the potential for bias. To address potential noise originating from the model itself rather than the measurements, validation efforts focused on repeated simulations, cross-verification with real-time sensor data, and expert reviews. Data collected from multiple sources, including customer questionnaires, competitor analysis, and real-time sensor data, were triangulated to corroborate findings and enhance the robustness of the conclusions. The complete methodological framework integrating all validation measures across the six research phases is presented in Figure 1, which illustrates the systematic application of quality assurance checkpoints and feedback mechanisms throughout the investigation.
4. Results
The methodology outlined in this research serves as the foundation for understanding the role of digital twin technology in enhancing the efficiency and quality of a robotic screwdriving system. This section presents the results of applying the proposed methods, including customer needs analysis, QFD, AHP, morphological analysis, digital twin simulation, and RAMST-based evaluation.
To reflect the system-oriented and lifecycle-driven nature of the proposed framework, the results are organized across three system levels: (1) requirements alignment and design decisions, (2) operational integration through the digital twin, and (3) system-level performance and learning outcomes. This structure supports explicit traceability between stakeholder needs, engineering decisions, and operational performance, consistent with the integrative perspective adopted in this study.
4.1. Requirement Alignment and Design Outcomes
The first level of results addresses how stakeholder requirements were translated into coherent design and configuration decisions, combining customer needs analysis, QFD, AHP-based competitor evaluation, and architectural trade-off analysis. This integration ensured that design choices were not derived from isolated technical considerations, but from a structured systems engineering process that maintained traceability from stakeholder expectations to system architecture.
Findings of the customer needs analysis revealed several critical requirements. An average of 5–8 screws per product was reported by 63% of respondents, while 90% identified a desired system downtime of under five minutes as essential. A minimum warranty period of five years was required by 80% of respondents, and 90% emphasized the need for precise torque control within the range of 1–10 lbf-in. These requirements directly shaped the prioritization of system features and performance targets and are summarized in the QFD matrix.
The QFD matrix translated these customer needs into technical specifications, revealing high priorities for minimizing downtime and ensuring accurate torque control. System-level weights were assigned to each requirement, with downtime reduction (weight: 0.11) and torque accuracy (weight: 0.10) emerging as dominant drivers of the system architecture. These requirements were addressed through the integration of real-time fault detection mechanisms, high-accuracy torque sensors, and secure remote operation interfaces, ensuring consistency between stakeholder expectations and engineering solutions.
To support these design decisions, an AHP-based competitor analysis was conducted across four criteria: machine shutdown during programming, screw control, price, and robotic capabilities. The relative importance of these criteria is shown in Table 1, where the Decimal Value for “Robotic capabilities” appears as 0.00 due to two-decimal rounding of the exact normalized weight (Vi = 0.01; exact weight = 0.0009). Similarly, several sub-criteria Decimal Values across Table 2, Table 3, Table 4, Table 5 and Table 6 display as 0.00 or 0.01, reflecting small but arithmetically non-zero contributions that are negligible relative to the dominant competitors on each criterion. All such entries result from the normalization procedure (Weight = Vi/ΣVi) applied to the geometric-mean products, followed by rounding to two decimal places; the sum of all Decimal Values in each table equals 1.00, as required. The pairwise comparisons for each criterion are presented in Table 2, Table 3, Table 4, Table 5 and Table 6. The near-zero weight assigned to “Robotic capabilities” (decimal value ≈ 0.00, Vi = 0.01) reflects a deliberate and context-specific expert judgment rather than a methodological artifact. Within the bounded operational context of a dedicated precision-fastening cell, all four candidate systems (Desoutter, Kawasaki Robotics, Atlas Copco, and the DTwin Screw Robot) fulfill the minimum threshold of robotic manipulation capability required for the screw-tightening task. Since all alternatives adequately satisfy this criterion, it does not serve as a differentiating factor in the selection decision. The three criteria that do distinguish the alternatives are machine downtime during programming, screw-torque control accuracy, and total acquisition cost, which directly determine production continuity, product quality, and return on investment. The weight assigned to “Robotic capabilities” therefore accurately reflects its role as a threshold condition, a capability that all systems meet equally, rather than a competitive dimension. The resulting weighted comparison (Table 7 and Figure 2) demonstrates the DTwin Screw Robot’s superior performance in minimizing downtime and overall lifecycle cost, providing quantitative justification for the selected system configuration. To assess the robustness of this outcome with respect to the low weight of the “Robotic capabilities” criterion, a sensitivity analysis was conducted in which the weight w4 was systematically varied from its baseline value (0.001) up to 0.30, while the weights of the remaining three criteria were rescaled proportionally to preserve the unit-sum constraint. The composite weighted scores were recalculated for each scenario using the sub-criteria decimal values from Table 2, Table 4, Table 5 and Table 6. As shown in Table 8, the DTwin Screw Robot retains the highest composite score across all tested weight scenarios. A break-even analysis further reveals that Kawasaki Robotics, the competitor with the highest score on the “Robotic capabilities” criterion (decimal value = 0.96, Table 6), could overtake the DTwin Screw Robot only if w4 reached 0.429, meaning “Robotic capabilities” would need to account for 43% of the total decision weight, with a simultaneous reduction in the “Machine Shutdown” criterion from 58.1% to approximately 33%. This hypothetical re-weighting is inconsistent with the operational priorities of a precision-fastening application and is not supported by the expert panel’s judgment. The AHP ranking is therefore robust to perturbations of the criterion weights. The methodological integration between QFD and AHP is evident in the results: QFD-derived weights for downtime reduction (0.11) and torque accuracy (0.10) directly influenced the AHP criteria prioritization, where “machine shutdown during programming” received 58.09% importance and “screw control” received 20.91%, demonstrating how customer priorities (QFD outputs) cascade into technical evaluation criteria (AHP inputs). This interface ensures that the DTwin Screw Robot’s superior AHP score (0.7157) is not merely a technical comparison but a reflection of how well the system addresses weighted stakeholder needs, exemplifying the value of integrated systems engineering tools over isolated method application.
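The proportional rescaling used in the sensitivity analysis can be sketched as follows. This Python snippet illustrates only the weight-rescaling step; the composite-score recalculation is not reproduced. The price weight in the baseline vector is an assumption (inferred so the four weights sum to 1), since only the shutdown (58.09%), screw-control (20.91%), and robotic-capabilities (0.001) weights are reported.

```python
# Set criterion idx to new_value and scale the other weights by a common
# factor so the vector still sums to 1 (the unit-sum constraint).
def rescale_weights(weights, idx, new_value):
    old = weights[idx]
    factor = (1.0 - new_value) / (1.0 - old)
    out = [w * factor for w in weights]
    out[idx] = new_value
    return out

# [shutdown, screw control, price (assumed), robotic capabilities]
baseline = [0.5809, 0.2091, 0.2090, 0.0010]

for w4 in (0.10, 0.30, 0.429):  # 0.429 is the reported break-even point
    scenario = rescale_weights(baseline, 3, w4)
    print([round(w, 3) for w in scenario], "sum =", round(sum(scenario), 6))
```

At the break-even value w4 = 0.429, the shutdown weight rescales to roughly 0.33, matching the "58.1% to approximately 33%" reduction quoted above.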
At the architectural level, morphological analysis and the Pugh matrix were used to evaluate alternative design concepts against system-level criteria such as reliability, automation, safety, and scalability. The morphological alternatives and selected solutions are presented in Table 9, while the comparative evaluation of design concepts is summarized in the Pugh matrix (Table 10). The selected configuration optimized electro-mechanical screw feeding, torque-controlled fastening, hybrid communication, and dual-layer safety mechanisms, ensuring alignment between functional requirements, architectural decisions, and operational constraints.
Together, these results demonstrate that the proposed digital twin framework enables structured, traceable, and defensible design decisions at the system level, rather than isolated component optimization.
The RAMST-based analysis, focusing on reliability and safety dimensions, provides a structured framework to evaluate the performance and robustness of the robotic fastening system integrated with digital twin technology.
The FMEA (Table 11) identified 21 potential failure modes across six critical subsystems: the XYZ robot, the Digital Twin simulation developed in MATLAB, the screw feeder, the safety curtain, the screwdriver, and the controllers. Risk Priority Numbers (RPN), calculated as RPN = Severity × Occurrence × Detection, ranged from 8 to 288.
Table 11 presents the three highest-risk failure modes, ranked by RPN from highest to lowest. The complete 21-item FMEA table is available from the corresponding author upon request. Three critical high-priority failure modes requiring immediate mitigation were identified:
(1) Screwdriver torque control failure (Risk #13, RPN = 288): Undesired operations including incorrect torque application and rotation jumps caused by communication interference, with Severity = 32 (product quality failure), Occurrence = 3 (medium frequency), and Detection = 3 (partially detectable). Improvement measures implemented: Installation of shielded communication cables and electromagnetic interference (EMI) filters to eliminate signal noise, integration of real-time torque monitoring in the Digital Twin with ±2% tolerance thresholds triggering immediate alerts, and implementation of Statistical Process Control (SPC) charts for continuous torque trend analysis. Post-mitigation RPN reduction: Occurrence reduced from 3 to 1 (very low) through noise elimination and Detection improved from 3 to 1 (full detectability) through Digital Twin monitoring, resulting in RPN reduction to 32 (89% reduction).
(2) Safety curtains stop mechanism failure (Risk #9, RPN = 192): Failure of emergency stop mechanism due to electrical/software interference, with Severity = 32 (human safety hazard), Occurrence = 6 (high frequency), and Detection = 1 (fully detectable). Improvement measures implemented: Redundant safety circuit architecture with dual-channel verification, preventive maintenance schedule every 500 operational hours with mandatory stop mechanism functional testing, and Digital Twin virtual safety zone monitoring providing predictive alerts 5 s before potential safety breach events. Post-mitigation RPN reduction: Occurrence reduced from 6 to 2 (low) through redundancy and preventive maintenance, resulting in RPN reduction to 64 (67% reduction).
(3) Screwdriver uncontrolled operation (Risk #14, RPN = 192): Motor operates without control due to software bugs, with Severity = 32 (product failure), Occurrence = 2 (low), and Detection = 3 (partially detectable). Improvement measures implemented: Comprehensive software testing protocol including unit tests, integration tests, and Hardware-in-the-Loop (HIL) simulation before deployment, code review procedures with minimum two-engineer approval requirement, and Digital Twin predictive fault detection using machine learning algorithms trained on historical failure patterns. Post-mitigation RPN reduction: Detection improved from 3 to 1 (full detectability) through Digital Twin predictive monitoring, resulting in RPN reduction to 64 (67% reduction).
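The reported pre- and post-mitigation figures can be verified arithmetically. The Python snippet below recomputes the three RPN reductions from the S-O-D values quoted in items (1)–(3).

```python
# Pre- and post-mitigation (S, O, D) triples as reported for the three
# highest-priority failure modes; RPN = S x O x D.
modes = {
    "torque control failure (#13)": ((32, 3, 3), (32, 1, 1)),
    "safety-curtain stop (#9)":     ((32, 6, 1), (32, 2, 1)),
    "uncontrolled operation (#14)": ((32, 2, 3), (32, 2, 1)),
}

reductions = []
for name, (pre, post) in modes.items():
    rpn_pre = pre[0] * pre[1] * pre[2]
    rpn_post = post[0] * post[1] * post[2]
    pct = 100 * (rpn_pre - rpn_post) / rpn_pre
    reductions.append(pct)
    print(f"{name}: {rpn_pre} -> {rpn_post} ({pct:.0f}% reduction)")

print(f"average reduction: {sum(reductions) / len(reductions):.0f}%")  # 74%
```

The computed reductions (89%, 67%, 67%) and their 74% average agree with the figures reported in the text.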
The Pareto analysis of all 21 failure modes revealed that 7 critical risks (33% of failure modes) accounted for 75.8% of cumulative risk, validating the 80/20 principle and enabling focused resource allocation. The systematic mitigation strategies, integrated with Digital Twin real-time monitoring and predictive capabilities, achieved an average RPN reduction of 74% across the three highest-priority failure modes, demonstrating measurable risk mitigation effectiveness and enhanced system reliability through proactive design modifications.
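The Pareto screening used above can be sketched as follows; since the full RPN list for all 21 failure modes is not reproduced here, the example uses hypothetical values to illustrate the cumulative-share cutoff.

```python
def pareto_critical(rpns, threshold=0.758):
    """Return the highest-RPN failure modes whose cumulative share
    of total risk first reaches `threshold` (illustrative sketch)."""
    ranked = sorted(rpns.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(rpns.values())
    critical, cum = [], 0.0
    for name, value in ranked:
        critical.append(name)
        cum += value / total
        if cum >= threshold:
            break
    return critical, cum

# Hypothetical RPN values for four failure modes (not the study's data)
example = {"FM-A": 288, "FM-B": 192, "FM-C": 60, "FM-D": 20}
names, share = pareto_critical(example, threshold=0.75)
print(names, round(share, 2))
```

With the study's actual 21 RPN values, this procedure yields the 7 critical risks covering 75.8% of cumulative risk reported above.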
4.2. Operational Integration Through the Digital Twin
The second level of results focuses on the operational integration enabled by the digital twin, which connects physical processes, control logic, and quality monitoring within a unified digital environment. The MATLAB Simscape-based digital twin replicated the automatic screw fastening process while continuously synchronizing virtual models with operational data from the physical system. To ensure real-time operation, the model makes several practical simplifications: rigid-body dynamics (neglecting minor elastic deformations), constant friction coefficients, and omission of thermal effects under normal operating conditions. Model validation compared simulation results against physical fastening experiments across multiple torque ranges, yielding acceptable agreement for cycle-time prediction and torque accuracy. The simulation achieves near-real-time performance, enabling live synchronization with the physical robotic system during operation.
Within this environment, the digital twin served as an operational coordination layer rather than a standalone simulation tool. The Threading Control Screen (
Figure 3) provided real-time visibility into critical process variables, including closing angle, torque, and cycle time, while Statistical Process Control (SPC) charts (
Figure 3) supported trend analysis, anomaly detection, and early maintenance alerts. Process data were continuously logged to an SQL database, enabling systematic sampling, historical analysis, and performance comparison across production cycles.
Specifically, closing angle values were recorded after each fastening operation and monitored for sustained deviations beyond control limits, triggering alerts following seven consecutive out-of-control events. Closing torque measurements were similarly tracked, with SPC charts (
Figure 4) identifying deviations from target levels that could indicate tool wear or calibration drift. Closing time data were logged at the completion of each program cycle, allowing the system to detect abrupt efficiency changes and emerging operational instabilities.
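A minimal sketch of the run-rule logic described above, which raises an alert after seven consecutive out-of-control events (function and parameter names are illustrative, not the system's actual API):

```python
def sustained_deviation(values, lcl, ucl, run_length=7):
    """Return True once `run_length` consecutive samples fall
    outside the control limits [lcl, ucl]."""
    streak = 0
    for v in values:
        streak = streak + 1 if not (lcl <= v <= ucl) else 0
        if streak >= run_length:
            return True
    return False

# A single in-control sample resets the streak; only a sustained run alerts.
angles = [1500, 1510, *([1750] * 7)]            # 7 consecutive above UCL
print(sustained_deviation(angles, 1300, 1700))  # True
```

The same rule applies unchanged to closing torque and cycle time by substituting the corresponding control limits.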
Through this integration, the digital twin reduced the latency between deviation detection and corrective action, enabling coordinated responses across control, maintenance, and quality functions. These results demonstrate how the digital twin functions as an operational integrator that supports continuous monitoring, proactive intervention, and stable system performance under real production conditions.
The Motion Control Screen collects data on the robot’s acceleration and joint forces and displays SPC graphs to help identify potential robot faults. The Maintenance Screen displays all faults reported by motion controllers and robots, as well as closing data. An analysis algorithm (
Figure 5) provides recommendations for immediate or future actions. The rationale is that the system learns online and continuously triggers alerts (
Figure 6).
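The decision logic of the analysis algorithm (Figure 5) is not reproduced in the text; the sketch below illustrates one plausible rule structure, mapping fault severity and SPC trend state to immediate versus scheduled actions. All thresholds and names are assumptions, not the deployed rules.

```python
def recommend(fault_severity, spc_drifting):
    """Illustrative recommendation rule (hypothetical thresholds):
    severe faults, or moderate faults with a drifting SPC trend,
    call for immediate action; otherwise schedule or keep monitoring."""
    if fault_severity >= 8 or (fault_severity >= 5 and spc_drifting):
        return "immediate action"
    if spc_drifting:
        return "scheduled maintenance"
    return "continue monitoring"
```

In the actual system such rules would be refined online as fault patterns accumulate, consistent with the learning behavior described above.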
4.3. System-Level Performance and Learning Outcomes
Beyond localized performance metrics, the integrated digital twin enabled observable system-level improvements. These results are grounded in a simulation dataset of n = 1050 screwing operations logged over a 35-month observation window (January 2022–November 2024), generated from the Digital Twin model under realistic production conditions as presented in
Table 12. Fastening quality was defined and measured across three parameters: (1) torque compliance—the proportion of operations within the specification range [4.7, 5.2] N·m; (2) angle compliance—operations within [1300°, 1700°]; and (3) cycle-time compliance—operations within [4.0, 6.0] seconds. Across the full dataset, mean torque was 5.16 ± 0.34 N·m (95% CI: [5.14, 5.18] N·m), mean rotation angle was 1534 ± 89° (95% CI: [1529°, 1539°]), and mean cycle time was 5.12 ± 0.78 s (95% CI: [5.07, 5.17] s). Angle compliance and cycle-time compliance both reached 95.2% over the full period. The 15% fastening-quality improvement reflects the increase in combined parameter compliance relative to the pre-integration baseline derived from the simulation. The 30% downtime reduction is based on the reduction in programming-related machine shutdown time quantified in the AHP analysis (
Table 3): baseline downtime of 1 h per program across an expected 256 programs (256 h total) was reduced by approximately 77 h through Digital Twin-enabled offline programming and predictive maintenance, consistent with a ~30% reduction in total scheduled and unscheduled downtime. System downtime was reduced by approximately 30%, while fastening quality improved by 15%, reflecting the cumulative effect of coordinated decision-making rather than isolated process optimization.
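The compliance rates and confidence intervals reported above follow standard formulas; a brief sketch, in which the normal-approximation CI reproduces the reported torque interval:

```python
def compliance(values, lo, hi):
    """Fraction of operations inside the specification range."""
    return sum(lo <= v <= hi for v in values) / len(values)

def ci95(mean, sd, n):
    """Normal-approximation 95% confidence interval for the mean."""
    half = 1.96 * sd / n ** 0.5
    return (mean - half, mean + half)

# Reported torque statistics: 5.16 +/- 0.34 N*m over n = 1050 operations
lo, hi = ci95(5.16, 0.34, 1050)
print(f"[{lo:.2f}, {hi:.2f}]")  # [5.14, 5.18], matching the text
```

The angle and cycle-time intervals reported above follow from the same formula with their respective means, standard deviations, and n = 1050.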
In addition to these quantitative gains, the system demonstrated learning behavior over repeated operational cycles. SPC thresholds were refined, fault patterns became more distinguishable, and maintenance recommendations shifted from reactive to preventive actions. This evolution reflects the system’s ability to adapt operational responses based on accumulated data, supporting continuous improvement without manual reconfiguration.
Furthermore, the digital twin facilitated system-level coordination across multiple vendors and subsystems, supporting managerial independence and reducing integration friction. This capability is essential for manufacturing environments that exhibit Systems-of-Systems characteristics, where performance emerges from interactions among autonomous yet interdependent elements.
4.4. Traceability Across the System Lifecycle
A key outcome of the proposed framework is the establishment of explicit traceability across the system lifecycle. Customer requirements defined in the QFD phase were directly linked to torque tolerances, sensor configurations, SPC rules, and maintenance actions within the digital twin environment. This traceability enabled transparent evaluation of how design decisions propagated to operational behavior and performance outcomes.
Such lifecycle traceability is essential for safety-critical and regulated manufacturing systems, as it supports accountability, auditability, and informed system evolution over time.
5. Discussion and Analysis
As industrial systems evolve toward increasing autonomy, connectivity, and complexity, the central challenge is no longer the development of isolated technologies but the integration of heterogeneous technical and organizational elements into coherent, resilient systems. This study responds to that challenge by advancing digital twin (DT) research and positioning the digital twin as a system-level integrator rather than merely an operational monitoring or visualization tool. While much of the existing literature emphasizes real-time visualization, process control, or predictive analytics [
11,
12,
28], the present work demonstrates how a digital twin can function as an architectural mechanism that orchestrates multiple systems engineering methodologies across the system lifecycle. By adopting this perspective, the study emphasizes holistic, integrative, and lifecycle-oriented approaches to complex engineered systems.
Recent systematic reviews indicate that most industrial DT implementations lack structured mechanisms for translating business and stakeholder requirements into technical design decisions [
30,
31]. The integrated QFD-AHP-RAMST-DT framework proposed in this study directly addresses this gap by embedding requirements traceability within the DT architecture itself. Through this structured progression, stakeholder needs are systematically transformed into design parameters, reliability considerations, and operational controls, ensuring coherence between managerial objectives and technical implementation.
The theoretical contribution of this work is articulated through three primary dimensions. First, the digital twin is conceptualized as an architectural pattern enabling cross-functional integration. The hierarchical progression from QFD to AHP, RAMST, and ultimately the DT operationalizes systems thinking principles [
8,
9,
24,
25], providing explicit traceability from stakeholder needs to operational and design decisions. This transforms the DT from a passive representation into an active system-of-interest that supports informed decision-making across engineering domains.
Second, the framework explicitly addresses key System-of-Systems (SoS) characteristics [
5,
6,
7,
29]. The robotic fastening system exhibits operational independence (autonomous subsystems), managerial independence (multi-vendor integration), evolutionary development (SPC-based learning), and emergent behavior resulting from interactions among mechanical, control, and quality subsystems. Following the DT composition patterns identified by Khedr and Fitzgerald [
32], this study adopts an orchestrated (centralized) integration architecture, which is necessary for robotic fastening processes that require millisecond-level temporal coordination. This contrasts with federated approaches that are more suitable for loosely coupled or asynchronous systems [
33,
34].
Third, the proposed framework realizes what Loaiza and Cloutier [
12] describe as a Digital Twin Manufacturing System, in which the DT serves as a shared operational environment rather than an isolated model. Mechanical models (Simscape), control algorithms (SPC), quality metrics, and reliability data (FMEA) interact continuously within the same digital space, reducing organizational silos [
35] and enabling explicit requirements traceability in safety-critical manufacturing contexts [
36]. This integrative capability is particularly relevant for Industry 4.0–5.0 environments, where technical performance, human oversight, and system resilience must be addressed simultaneously.
The implementation of such a digital twin requires navigating several fundamental trade-offs. Fidelity versus computational efficiency represents a primary tension: while Simscape enables high-fidelity kinematic modeling, simplifications are required to maintain real-time SPC responsiveness, prioritizing kinematic accuracy over thermal and material effects [
37,
38]. Data granularity versus system complexity constitutes another trade-off, as the selection of five monitoring parameters balances diagnostic capability with infrastructure manageability, avoiding data saturation without proportional decision-making value [
34]. Automation versus human oversight reflects Industry 5.0 principles, where automated detection supports but does not replace human judgment in corrective actions [
16,
39]. Finally, cost–benefit considerations are quantitatively justified through the AHP analysis, which demonstrates the DTwin Robot’s superior downtime minimization and validates the economic rationale for the initial investment.
The integration of machine learning and artificial intelligence within digital twin frameworks represents a rapidly evolving research frontier with significant implications for industrial automation. Recent advances demonstrate transformative potential across diverse manufacturing contexts, including predictive maintenance optimization, quality control enhancement, and adaptive process control. The framework proposed in this study, while currently employing statistical process control and structured systems engineering tools, provides a foundation for integrating such machine learning capabilities. The mention of foundation models for adaptive DT behavior in future research directions [
40] reflects this emerging trend, where AI-driven systems can learn from operational data, adapt to changing conditions, and provide predictive insights beyond traditional rule-based approaches. Future iterations of the digital twin architecture could incorporate supervised learning models for torque prediction, unsupervised anomaly detection algorithms for early fault identification, and reinforcement learning approaches for adaptive control strategies. The explicit requirements traceability and lifecycle data infrastructure established in this work position the framework to benefit from these emerging AI capabilities while maintaining the transparency and accountability essential for safety-critical manufacturing applications aligned with Industry 5.0 principles [
16].
Several limitations warrant consideration, particularly when transitioning from controlled pilot implementations to real production environments. From a technical perspective, model-reality fidelity gaps remain, as MATLAB Simscape abstracts compliance, thermal effects, and component wear, while sensor noise may reduce fault detection sensitivity. In real production settings, these limitations are further compounded by environmental variability: temperature fluctuations, humidity changes, electromagnetic interference, and dust accumulation can degrade sensor accuracy and compromise the continuous synchronization between physical and virtual states that the framework relies upon.
Moreover, multi-vendor integration presents substantial technical and managerial challenges beyond conventional system integration. From a technical perspective, coordinating subsystems from different vendors introduces interoperability barriers including proprietary communication protocols, incompatible data formats requiring continuous translation layers [
3,
11], and temporal coordination challenges where millisecond-level synchronization demands exceed loosely coupled federated architectures [
33]. These technical complexities are compounded by managerial challenges including conflicting warranty and liability frameworks, divergent maintenance schedules that complicate coordinated interventions, and organizational boundaries that inhibit information sharing essential for root cause analysis [
36,
41]. The digital twin framework addresses these challenges through several mechanisms. First, the DT functions as a vendor-neutral integration layer that abstracts vendor-specific interfaces into standardized virtual representations, enabling heterogeneous subsystems to interact without modifying proprietary systems [
12]. Second, the orchestrated (centralized) DT architecture provides temporal coordination necessary for robotic fastening, synchronizing operations across vendor boundaries at millisecond precision [
32]. Third, the unified digital environment enables system-level visibility that transcends organizational silos [
36], supporting coordinated fault diagnosis and maintenance planning even when physical subsystems remain under separate vendor management [
8,
9]. Finally, the explicit requirements traceability established through QFD provides a shared performance baseline that clarifies accountability and facilitates vendor collaboration. However, implementing this integration layer requires substantial middleware development and organizational commitment to vendor-neutral architectural principles [
42].
Data-related limitations arise from SPC’s reliance on stable baselines, whereas real production environments exhibit non-stationarity due to wear, environmental variation, and workpiece differences, potentially violating underlying assumptions [
3,
41]. Beyond statistical assumptions, production data suffers from sensor drift, calibration degradation, intermittent connectivity failures, and measurement noise that compound over time, leading to false positives that erode operator trust or false negatives that allow faults to propagate undetected. Robust deployment therefore requires automated sensor health monitoring, redundant sensing for critical parameters, data validation pipelines embedded within the DT infrastructure, and explainable AI techniques that provide operators with transparent reasoning behind automated alerts, supporting informed human judgment aligned with Industry 5.0 principles [
15].
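One concrete form of the automated sensor-health monitoring called for above is a rolling-baseline drift check; the sketch below is an illustration under stated assumptions (window size and drift tolerance are hypothetical), not the deployed pipeline.

```python
from collections import deque

def make_drift_check(window=50, max_drift=0.05):
    """Flag sensor drift when the rolling mean deviates from the
    calibration reference by more than `max_drift` (fractional)."""
    buf = deque(maxlen=window)

    def check(reading, reference):
        buf.append(reading)
        if len(buf) < window:
            return False  # warm-up: not enough samples yet
        rolling_mean = sum(buf) / len(buf)
        return abs(rolling_mean - reference) / reference > max_drift

    return check
```

A flagged sensor would then be routed to recalibration or cross-checked against a redundant channel, consistent with the redundant-sensing recommendation above.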
Organizationally, the framework challenges traditional functional silos and may require cultural and managerial transformation [
42]. Operators accustomed to manual inspection and reactive maintenance may resist automated monitoring systems, particularly when recommendations conflict with experiential knowledge. Successful deployment necessitates change management strategies that engage stakeholders early, comprehensive cross-functional training programs, clear governance frameworks defining decision authority, and pilot programs demonstrating tangible value before large-scale rollout. Additionally, cybersecurity vulnerabilities introduced by connecting manufacturing systems to digital platforms must be addressed through end-to-end encryption, network segmentation, role-based access control, and regular security audits tailored to cyber–physical environments [
3,
11].
In terms of scalability and economic viability, the current validation is limited to a single manufacturing cell; extension to multi-station systems introduces additional challenges such as buffer management, resource contention, cascading failures, and computational bottlenecks that can exceed available resources and cause synchronization drift. Hidden costs including IT infrastructure upgrades, software licensing, specialized personnel, extended commissioning periods, and production disruptions during implementation may outweigh projected benefits, particularly for small and medium enterprises operating in low-margin contexts. Realistic deployment requires hierarchical DT architectures, edge-cloud hybrid strategies, phased implementation approaches delivering incremental value, and lifecycle cost models accounting for total ownership costs.
Addressing the scalability challenge requires elaborating on multi-DT orchestration mechanisms for production environments spanning multiple units and lines. A hierarchical DT architecture aligned with manufacturing system structure offers a practical pathway: unit-level DTs monitor individual robotic cells with high temporal fidelity, capturing real-time process parameters and fault events; line-level DTs aggregate data from multiple unit DTs to coordinate workflow scheduling, predict bottlenecks, and synchronize maintenance interventions across cells; and enterprise-level DTs integrate insights from multiple lines to support factory-wide resource allocation and strategic decision-making [
32,
35]. This hierarchical decomposition addresses the computational bottlenecks identified in single-cell validation by distributing processing across system levels while maintaining coherent system-wide coordination. Practical implementation challenges include establishing standardized data interfaces to ensure interoperability across vendor-specific subsystems, maintaining data synchronization and consistency across distributed DTs without excessive computational overhead, and developing event-driven update mechanisms that propagate critical state changes rather than continuous full-state synchronization [
12]. The edge-cloud hybrid strategies mentioned earlier support this architecture by enabling low-latency unit-level processing while leveraging centralized analytics for line-level and enterprise-level coordination [
34]. However, validating such orchestration frameworks requires pilot implementations in multi-line production environments to measure impact on overall equipment effectiveness, refine orchestration algorithms for dynamic reconfiguration, and establish scalability benchmarks that inform architectural decisions for diverse manufacturing contexts [
32].
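The hierarchical, event-driven decomposition described above can be sketched as a minimal two-level structure in which unit twins escalate only critical state changes to the line twin; class and field names are illustrative assumptions, not a reference implementation.

```python
class UnitDT:
    """Unit-level twin for one robotic cell: logs all events locally
    and escalates only critical ones (event-driven, not full-state sync)."""
    CRITICAL = 2  # hypothetical severity threshold

    def __init__(self, cell_id):
        self.cell_id = cell_id
        self.log = []

    def observe(self, event):
        self.log.append(event)
        return event["severity"] >= self.CRITICAL  # escalate?

class LineDT:
    """Line-level twin: aggregates unit twins and receives only
    escalated events, limiting synchronization overhead."""
    def __init__(self, units):
        self.units = {u.cell_id: u for u in units}
        self.escalated = []

    def ingest(self, cell_id, event):
        if self.units[cell_id].observe(event):
            self.escalated.append((cell_id, event))

# Example: two cells, one routine event and one critical fault
line = LineDT([UnitDT("cell-1"), UnitDT("cell-2")])
line.ingest("cell-1", {"type": "cycle_done", "severity": 0})
line.ingest("cell-2", {"type": "torque_fault", "severity": 3})
print(len(line.escalated))  # 1
```

An enterprise-level twin would subscribe to line twins in the same fashion, completing the three-tier hierarchy described above.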
6. Conclusions
This study proposed and validated an integrated systems engineering framework in which the digital twin (DT) operates as a system-level integrator within manufacturing Systems-of-Systems environments. By embedding Quality Function Deployment (QFD), Analytic Hierarchy Process (AHP), Reliability and Safety analysis (RAMST), and Statistical Process Control (SPC) within a unified DT architecture, the framework enables consistent alignment between requirements, design decisions, and operational performance across the system lifecycle. The results demonstrate that integrating decision-making, monitoring, and learning mechanisms into a single digital environment can deliver measurable operational improvements (a 30% reduction in system downtime and a 15% improvement in fastening quality) while supporting transparency and human oversight. The digital twin thus functions not only as a technical artifact but as a structural enabler for managing complexity in modern manufacturing systems, aligning with Industry 5.0 principles of resilience and human–machine collaboration.
This work advances digital twin conceptualization from passive simulation tools to active system-level integrators that orchestrate cross-functional engineering methodologies, addressing a critical gap in Systems-of-Systems research where emergent behavior and managerial independence require holistic coordination mechanisms rather than component-level optimization. For systems engineering practice, the structured integration of QFD, AHP, RAMST, and SPC demonstrates that requirements traceability can be operationalized through DT architectures, enabling transparent decision-making and lifecycle accountability essential for safety-critical and regulated manufacturing domains. The industrial impact is demonstrated through quantified benefits and actionable implementation guidance, while the framework’s emphasis on explainable AI and human oversight supports the Industry 4.0–5.0 transition toward human-centric collaboration.
Implementation and validation of the framework yielded critical lessons that inform future research and practice. First, integration complexity requires explicit coordination mechanisms; simply deploying multiple engineering tools in parallel does not guarantee coherent outcomes, as the hierarchical progression from QFD-derived requirements to AHP criteria proved essential. Second, real-world data deviates from ideal modeling assumptions: SPC stability assumptions were frequently violated due to environmental variability and sensor drift, necessitating adaptive monitoring strategies and explainable alerting mechanisms. Third, organizational change proved as critical as technical implementation, with operator resistance requiring early stakeholder engagement, comprehensive training, and pilot demonstrations. Fourth, scalability demands hierarchical architecture, as extending high-fidelity models to multi-station systems introduces computational bottlenecks requiring cell-level detailed models coordinated by simplified aggregate models. Fifth, cybersecurity cannot be an afterthought; integration with digital platforms introduced vulnerabilities requiring end-to-end encryption, network segmentation, and role-based access control embedded from the design phase. Finally, economic viability requires a lifecycle perspective, as hidden implementation costs emerged that demand comprehensive total ownership models, particularly for small and medium enterprises.
These lessons point toward critical research priorities for strengthening the practical applicability of digital twins as system-level enablers in complex industrial ecosystems. Future research should explore multi-DT orchestration and comparative architectural studies [
30,
33], lifecycle cost analysis integrating both capital and operational expenditures, integration of foundation models for adaptive DT behavior [
40], formal verification and validation (V&V) methodologies ensuring model correctness under diverse operating conditions, longitudinal field studies in heterogeneous production environments, and cross-domain generalization of the proposed framework. Addressing these directions will bridge the gap between laboratory validation and sustained real-world performance, establishing digital twins as mature system-level enablers for Industry 4.0–5.0 ecosystems. As artificial intelligence increasingly permeates systems engineering and production environments, such integrated digital twin frameworks provide a critical foundation for embedding AI capabilities in a transparent, traceable, and accountable manner across the system lifecycle.