Article

Adaptive Lead-Time Prediction for Resilient and Sustainable Supply Chains

Computer Science and Engineering, College of Applied Studies, King Saud University, Riyadh P.O. Box 11451, Saudi Arabia
Sustainability 2026, 18(10), 4748; https://doi.org/10.3390/su18104748
Submission received: 26 March 2026 / Revised: 25 April 2026 / Accepted: 28 April 2026 / Published: 10 May 2026
(This article belongs to the Special Issue AI for Sustainable and Resilient Operations Management)

Abstract

Reliable prediction of supplier lead times is important for understanding resilience in complex adaptive supply chains, which function as socio-technical systems characterized by high variability, dynamic interactions, and operational unpredictability. This study proposes a simulation-based adaptive lead-time prediction framework that unifies uncertainty-aware statistical modeling, digital twin-enabled simulation, IoT-linked operational adjustment, and AI-driven temporal learning within a single system-oriented architecture. Semi-synthetic datasets are used to emulate lead-time variability and disruption patterns across multiple operating scenarios under intermediate and elevated levels of uncertainty. The novelty of the study lies not in the use of individual techniques in isolation, but in their integration within a closed-loop predictive framework that links probabilistic modeling, adaptive correction, and digital twin-based system updating. The results indicate that the baseline statistical model performs satisfactorily under stable conditions; however, its performance declines significantly when exposed to parameter variations and extreme disruptions. Under high-variability conditions, for example, RMSE at μ = 3.0 and σ = 1.2 decreases from 65.00 weeks in the baseline model to 13.45 weeks in the IoT-adaptive model and to 3.00 weeks in the AI-enhanced model. These findings show that the proposed framework improves predictive accuracy, robustness, and adaptability relative to both the baseline statistical and IoT-adaptive alternatives. Overall, the proposed framework contributes to supply chain analytics by providing an integrated and simulation-based proof-of-concept for resilient lead-time prediction in complex supply environments. Its sustainability relevance should be understood as prospective: although the study does not directly measure emissions, energy use, or waste reduction, improved predictive stability and adaptive decision support may inform future sustainability-oriented planning and empirical evaluation.

1. Introduction

In increasingly uncertain and dynamically evolving production environments, accurate supplier lead-time prediction has emerged as a critical enabler of resilient supply chain operations [1,2]. Modern supply chains are no longer linear and deterministic; rather, they function as complex adaptive socio-technical systems characterized by high variability, interdependencies, and continuous interactions between technological infrastructures and human decision-making processes. Within such systems, supplier lead times are influenced by heterogeneous operational conditions, supplier capabilities, and external disruptions, often resulting in non-linear, skewed, and heavy-tailed behaviors that limit the effectiveness of conventional forecasting approaches [3,4]. These complexities necessitate predictive frameworks that can simultaneously capture uncertainty, adapt to evolving system conditions, and support system-level decision-making [3,4].
Recent studies have explored digital technologies such as IoT, digital twins, and AI for supply chain analytics, but the literature remains fragmented in how these capabilities are combined and evaluated. Some studies emphasize IoT-enabled visibility and operational monitoring [5,6,7], while others investigate simulation or digital twin approaches for system experimentation and disruption analysis [8,9,10]. Although these streams have advanced the field, prior work has often treated uncertainty modeling, adaptive correction, and system-level simulation as partially separate tasks. As a result, an important scientific gap remains: there is still limited evidence on how uncertainty-aware statistical modeling, IoT-linked adaptation, digital twin-based state updating, and AI-driven temporal learning can be integrated into a closed-loop predictive architecture for lead-time prediction in complex supply chains.
This gap is particularly relevant in disruption-prone environments, where lead-time behavior is shaped not only by average trends, but also by asymmetric variability, rare delay events, and evolving operational feedback. Existing integration-oriented studies often combine digital components at a functional level, yet they do not always make explicit how predictive uncertainty, adaptive correction, and system-state transition are linked within a unified decision cycle. In this study, the novelty therefore does not lie in simply combining AI, IoT, and simulation, but in structuring them as an adaptive and feedback-based framework in which baseline uncertainty modeling, operational adjustment, predictive learning, and digital twin updating operate together.
To address these limitations, this study proposes an advanced predictive modeling framework that integrates quantile regression and Extreme Value Theory (EVT), digital twin-enabled discrete event simulation (DES), and deep learning-based temporal modeling using Temporal Fusion Transformers (TFT), supported by reinforcement learning (RL) and online adaptive learning mechanisms. Unlike conventional approaches based on single-distribution assumptions, the integration of EVT enables the explicit modeling of extreme events and tail risks, which are critical in disruption-prone supply chains and essential for analyzing continuity under extreme conditions. Quantile regression further enhances this capability by capturing heterogeneous effects across the distribution of lead times, providing a more comprehensive representation of variability. The digital twin-DES environment enables system-level experimentation and evaluation, allowing the proposed framework to simulate dynamic interactions, disruptions, and adaptive responses within complex supply chain systems in a closed-loop analytical setting.
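The peaks-over-threshold logic behind the EVT component can be illustrated with a minimal sketch. This is an illustrative toy, not the paper's implementation: the simulated lead-time mixture, the 90th-percentile threshold, and the method-of-moments fit (rather than maximum likelihood) are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative semi-synthetic lead times (weeks): a log-normal "routine" body
# plus occasional disruption-driven delays (an assumed mixture, for the sketch).
routine = rng.lognormal(mean=1.0, sigma=0.4, size=2000)
disrupted = rng.lognormal(mean=2.2, sigma=0.5, size=100)
lead_times = np.concatenate([routine, disrupted])

# Peaks-over-threshold: model exceedances above a high threshold u
# with a Generalized Pareto Distribution (GPD).
u = np.quantile(lead_times, 0.90)
exceed = lead_times[lead_times > u] - u

# Method-of-moments GPD fit (closed form; MLE would be preferred in practice).
m, v = exceed.mean(), exceed.var()
xi = 0.5 * (1.0 - m**2 / v)          # shape parameter
beta = 0.5 * m * (m**2 / v + 1.0)    # scale parameter

# POT estimator of an extreme quantile, here the 99.5% lead time.
n, n_u, p = len(lead_times), len(exceed), 0.995
q_extreme = u + (beta / xi) * (((n / n_u) * (1.0 - p)) ** (-xi) - 1.0)

print(f"threshold u = {u:.2f}, xi = {xi:.3f}, q_99.5 = {q_extreme:.2f}")
```

The key point mirrored from the text is that the fitted tail model extrapolates beyond the bulk of the data, so extreme quantiles are estimated from the exceedance behavior rather than from a single distribution fitted to all observations.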
In addition, the incorporation of RL facilitates intelligent and autonomous adaptation by enabling the system to learn optimal response strategies based on IoT-inspired operational signals within the simulation environment. This shifts the predictive paradigm from static or rule-based adjustment toward adaptive decision-making, where the system continuously refines its actions in response to environmental feedback. The TFT further enhances predictive performance by capturing complex temporal dependencies, multi-horizon forecasting patterns, and uncertainty structures inherent in supply chain data. To address dynamic uncertainty, particle filtering and online learning mechanisms are employed to continuously update model parameters, enabling robust performance under non-stationary conditions and evolving system dynamics [8,9,10].
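The particle-filtering idea referenced above can be sketched with a minimal bootstrap filter that tracks a drifting mean lead time through a regime shift. All numerical settings (particle count, drift scale, observation noise, the 3-to-5-week shift) are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bootstrap particle filter tracking the latent mean lead time mu_t
# as operating conditions drift. Parameters below are illustrative only.
N = 1000                                    # number of particles
particles = rng.normal(3.0, 1.0, size=N)    # initial belief about mu
obs_sigma = 0.5                             # assumed observation noise (weeks)
drift_sigma = 0.1                           # assumed random-walk drift of mu

# Observed lead times: regime shift from mu = 3 to mu = 5 halfway through.
observations = np.concatenate([
    rng.normal(3.0, obs_sigma, size=50),
    rng.normal(5.0, obs_sigma, size=50),
])

estimates = []
for y in observations:
    # Predict: propagate particles through the random-walk state model.
    particles = particles + rng.normal(0.0, drift_sigma, size=N)
    # Update: weight particles by the Gaussian likelihood of the observation.
    w = np.exp(-0.5 * ((y - particles) / obs_sigma) ** 2)
    w /= w.sum()
    # Resample (multinomial) to avoid weight degeneracy.
    particles = particles[rng.choice(N, size=N, p=w)]
    estimates.append(particles.mean())

print(f"estimate before shift: {estimates[49]:.2f}, after: {estimates[-1]:.2f}")
```

The sketch shows the non-stationarity argument in miniature: a static estimate fitted to the first regime would remain near 3 weeks, whereas the sequentially updated posterior migrates toward the new regime as observations arrive.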
From a systems perspective, this study advances the understanding of supply chain resilience as an emergent property of complex adaptive socio-technical systems [11,12,13]. Rather than viewing resilience solely as a reactive capability, this study conceptualizes it as a proactive, data-driven function enabled by predictive intelligence and system-level coordination. By integrating digital twin simulation, AI-driven prediction, and structured IoT-linked operational signals, the proposed framework is intended to examine simulation-based anticipatory decision support, allowing supply chain actors to potentially mitigate risks through early detection of disruptions and adaptive response strategies. In the present study, however, the analytical focus is limited to predictive accuracy, robustness, and resilience under dynamic operating conditions. Although these capabilities may be relevant to broader supply chain planning, sustainability is not directly operationalized or empirically evaluated in the current work. Specifically, the study does not quantify carbon emissions, energy consumption, waste reduction, resource efficiency, or other explicit sustainability indicators. Accordingly, the contribution of this paper should be understood as a resilience- and prediction-focused simulation study rather than a direct assessment of sustainability performance.
This study is designed as a simulation-based proof-of-concept, employing semi-synthetic datasets augmented using generative models to emulate realistic supply chain variability while maintaining experimental control. The framework is evaluated under multiple variability scenarios to assess robustness, predictive accuracy, and system responsiveness rather than directly measured sustainability outcomes. This approach provides a rigorous foundation for future empirical validation and potential real-world deployment, particularly in digitally enabled supply chain environments where predictive resilience and adaptive decision support are of primary interest. Explicit sustainability assessment remains an important direction for future work.
To further clarify the purpose and scope of this study, the following research questions are explicitly addressed:
  • RQ1: How effectively can a hybrid quantile regression-EVT framework represent supplier lead-time uncertainty, including both routine variability and extreme disruption effects, in complex adaptive supply chains?
  • RQ2: To what extent does the integration of IoT-driven real-time data and digital twin-enabled discrete event simulation improve adaptive lead-time prediction under dynamic operational conditions?
  • RQ3: How much additional predictive accuracy, robustness, and resilience can be achieved by incorporating AI-driven adaptive learning mechanisms, specifically TFT, RL, particle filtering, and online learning, compared with baseline statistical and IoT-adaptive approaches?
To address these research questions, the key contributions of this work are outlined as follows. First, it introduces an integrated modeling approach that combines EVT-based statistical modeling and quantile regression to capture both central tendencies and extreme disruptions in supplier lead times. Second, it develops a digital twin-driven simulation framework that enables system-level analysis of predictive performance under dynamic conditions. Third, it incorporates RL and advanced deep learning architectures, particularly TFT, to enable adaptive, data-driven decision-making in IoT-enabled supply chain environments through a closed-loop predictive structure that links sensing, correction, forecasting, and digital twin state updating. Fourth, it contributes a simulation-based analytical foundation for future empirical work on how resilient lead-time prediction may inform broader supply chain planning and operational decision support, while explicitly recognizing that sustainability-related effects are outside the direct evaluation scope of the present study.
The remainder of this paper is organized as follows. Section 2 presents a comprehensive review of the relevant literature on lead-time prediction, digital twin systems, IoT-enabled analytics, and AI-driven supply chain modeling. Section 3 introduces the proposed methodological framework, encompassing EVT-based modeling, digital twin-driven simulation design, and the associated learning components. Section 4 presents the experimental findings across multiple scenarios, followed by a robustness analysis. Section 5 examines the theoretical and practical implications within the context of complex socio-technical systems, resilient supply chain planning, and uncertainty-aware decision support. Section 6 concludes the paper by highlighting key limitations and outlining directions for future research, including empirical validation, prospective real-world deployment, and possible extension toward direct sustainability-oriented performance assessment.

2. Background and Literature Review

This section reviews the theoretical and methodological foundations relevant to resilient lead-time prediction in complex adaptive supply chains. Rather than only summarizing prior work, the discussion critically evaluates the strengths and limitations of existing approaches in order to position the proposed framework more clearly. It begins by examining advanced lead-time modeling approaches, with particular attention to uncertainty-aware and system-oriented methods. It then discusses the operational challenges associated with highly variable and customized supply chain environments, followed by recent developments in IoT-enabled predictive modeling and digital twin-supported analytics. The section next examines probabilistic and AI-driven learning approaches that support adaptive, data-driven decision-making in dynamic supply chain systems. Finally, it synthesizes the principal research gaps in the existing literature to clarify the specific theoretical and methodological contributions of the present study. Because this study is submitted to Sustainability, the review also distinguishes between work that explicitly measures sustainability outcomes and work, such as the present study, that is primarily focused on predictive resilience and only prospectively linked to sustainability-oriented planning. Together, these strands of literature establish the conceptual basis for the integrated framework proposed in this study.

2.1. Advanced Lead-Time Modeling in Complex Adaptive Supply Chains

Lead-time modeling in contemporary supply chains has evolved from traditional stochastic approaches toward advanced, system-oriented frameworks that account for the complex, adaptive, and socio-technical nature of modern supply networks [14,15,16]. In contemporary supply chains, which increasingly operate as complex adaptive socio-technical systems, lead time dynamics are shaped by interactions among technological infrastructures, organizational processes, and human decision-making. These interdependencies introduce non-linearity, uncertainty, and emergent behavior, making accurate lead time prediction both critical and inherently difficult.
Traditional stochastic modeling approaches have long been employed to represent lead time uncertainty, with distributions such as exponential, gamma, and log-normal commonly used due to their ability to capture non-negative and right-skewed characteristics [15,16]. Among these, the log-normal distribution has been widely adopted in supply chain contexts characterized by multiplicative effects and variability amplification [17,18]. However, while such parametric models offer analytical tractability, they are often limited in their ability to capture extreme disruptions and evolving system dynamics, particularly in highly volatile environments. This limitation is important because many real supply chains exhibit not only central variability but also disruption-driven tail behavior that cannot be represented adequately by a single fixed distribution.
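The log-normal baseline described above can be made concrete with a short fitting sketch; the parameter values and sample size are illustrative assumptions. For the log-normal, taking moments of the logged data coincides with maximum likelihood, which is part of the analytical tractability the text refers to.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative historical lead times (weeks) from a log-normal model:
# log(T) ~ Normal(mu, sigma). The true parameters are assumptions.
mu_true, sigma_true = 1.2, 0.4
lead_times = rng.lognormal(mean=mu_true, sigma=sigma_true, size=5000)

# Fit by moments of the logs (equivalent to MLE for the log-normal).
logs = np.log(lead_times)
mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)

# Point summaries implied by the fitted single-distribution model.
median_hat = np.exp(mu_hat)                       # distribution median
mean_hat = np.exp(mu_hat + 0.5 * sigma_hat**2)    # distribution mean

# Caveat from the text: a single fitted parametric form like this cannot
# represent disruption-driven tail behavior outside its assumed shape.
print(f"mu_hat = {mu_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```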
With the growing recognition of supply chains as complex adaptive and data-driven systems, recent research has shifted toward more flexible and robust modeling approaches capable of capturing heterogeneous behaviors and tail risks. In this context, quantile regression has emerged as a powerful tool for modeling conditional distributions across different quantiles, enabling a more detailed understanding of variability beyond mean-based estimates. Complementarily, EVT provides a rigorous statistical framework for modeling rare but high-impact events, which are critical for understanding disruption-driven lead time variability in modern supply chains. Together, these approaches offer a more comprehensive representation of uncertainty, particularly in environments characterized by heavy-tailed and non-stationary behaviors. Even so, prior studies that employ these methods often stop at the level of statistical characterization and do not connect them to adaptive correction, digital twin updating, or learning-based decision cycles.
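The quantile-regression idea can be illustrated in its simplest, intercept-only form, where minimizing the pinball (check) loss recovers an empirical quantile rather than a mean. The subgradient-descent setup, learning rate, and simulated data are toy assumptions for the sketch, not the study's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def pinball_loss(y, q_pred, tau):
    """Check loss: the asymmetric penalty minimized by the tau-quantile."""
    e = y - q_pred
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

# Right-skewed lead times (weeks); parameters are illustrative.
y = rng.lognormal(mean=1.0, sigma=0.5, size=4000)
tau = 0.9   # model the 90th percentile rather than the mean

# Intercept-only quantile estimate by subgradient descent on the pinball loss.
q = y.mean()    # start from the mean-based estimate
lr = 0.05
for _ in range(5000):
    # Subgradient w.r.t. the predicted quantile:
    # -tau where y > q, (1 - tau) where y < q.
    grad = np.mean(np.where(y > q, -tau, 1.0 - tau))
    q -= lr * grad

print(f"gradient-descent q90 = {q:.3f}, empirical q90 = {np.quantile(y, 0.9):.3f}")
```

With covariates, the same loss is minimized over regression coefficients, which is what allows conditional quantiles, and hence heterogeneous variability across the lead-time distribution, to be modeled directly.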
The increasing digitalization of supply chains, driven by IoT technologies, has further transformed lead time modeling by enabling real-time data acquisition and system monitoring [19]. IoT-enabled systems provide continuous streams of operational data from suppliers, logistics networks, and production assets, enhancing visibility and enabling dynamic adaptation to disruptions. When integrated with digital twin architectures, these data streams facilitate the creation of real-time, virtual representations of supply chain systems, allowing for simulation-based evaluation of lead time behavior under varying conditions [20]. Such digital twin–driven DES environments support system-level analysis by capturing interactions, feedback loops, and emergent behaviors inherent in complex supply chains.
In parallel, recent developments in artificial intelligence and machine learning have introduced new capabilities for predictive modeling in supply chain contexts. Deep learning models, particularly sequence-based architectures such as the TFT, have demonstrated strong performance in capturing temporal dependencies, multi-horizon forecasting patterns, and complex interactions among variables. Unlike traditional models, the TFT integrates attention mechanisms and interpretable components, enabling both high predictive accuracy and enhanced decision support in dynamic environments. Furthermore, RL has emerged as a promising approach for adaptive decision-making, allowing supply chain systems to learn optimal responses to disruptions through interaction with their environment. When combined with IoT data streams, RL facilitates real-time adaptation and continuous improvement in operational performance. Yet these models also have limitations, including higher data requirements, increased implementation complexity, and reduced transparency compared with simpler forecasting approaches. Their relevance therefore depends not only on predictive performance but also on how well they can be embedded within operationally interpretable frameworks.
Another critical dimension in lead time modeling is the management of uncertainty in dynamic and non-stationary environments. Conventional approaches often assume static parameters derived from historical data; however, such assumptions may not hold in rapidly changing supply chain conditions. To address this limitation, online learning and particle filtering techniques have been proposed to enable continuous model updating and real-time uncertainty management. These approaches allow predictive models to remain robust under evolving system conditions, thereby enhancing resilience and adaptability.
From a systems perspective, the integration of advanced statistical modeling, digital twin simulation, IoT-enabled data acquisition, and AI-driven learning mechanisms represents a shift toward holistic, system-oriented supply chain modeling. This paradigm aligns with the growing emphasis on resilience and sustainability, where predictive capabilities are leveraged to anticipate disruptions, optimize resource allocation, and support proactive decision-making. By combining these approaches, modern supply chain models can better capture the complexity, uncertainty, and adaptive behavior inherent in real-world operations, ultimately contributing to more resilient and sustainable supply chain systems.

2.2. Lead-Time Challenges in Complex Adaptive and Highly Customized Supply Chains

Supply chains involving high levels of customization and dynamic interactions present substantial challenges for accurate lead-time prediction due to inherent variability, process uncertainty, and interdependent system behavior [21,22]. In modern contexts, these environments are better understood as complex adaptive supply chains, where operational outcomes emerge from continuous interactions among technological systems, human decision-makers, and organizational processes. These characteristics introduce considerable variability and nonlinearity, rendering such systems inherently more unstable than make-to-stock or assemble-to-order environments. Even comparatively small disturbances, such as delays in design validation processes or shortages of critical materials, can propagate across interdependent system components, leading to cascading effects throughout multiple stages of the supply chain [22,23].
The complexity of these supply chains is largely driven by late-stage definition of product structures, non-standardized workflows, and extensive coordination across distributed supplier networks. From a socio-technical systems perspective, this complexity arises not only from process variability but also from the interaction between digital infrastructures (e.g., IoT-enabled monitoring systems) and human-driven decision processes. These factors collectively increase uncertainty in procurement and production planning, particularly in dynamic and data-rich environments [24,25]. Variability in customized components further amplifies delivery uncertainty and system instability. This aligns with the findings of Jackson [25], who showed that variations in lead time exert a nonlinear and economically substantial effect on overall system performance. The results further indicate that relatively small reductions in variability can yield considerable improvements in efficiency, responsiveness, and system resilience. What remains less clear in the literature is how such improvements can be produced predictively, rather than reactively, when uncertainty is evolving in real time.
To address these challenges, prior research has explored a range of operational and analytical approaches. Simulation-based platforms, including Simio and Arena, are widely applied to assess scheduling policies and capacity requirements under conditions of variability [26,27,28]. In contrast, digital twin-enabled simulation approaches, particularly when combined with DES, offer enhanced capabilities for capturing system-level interactions, feedback mechanisms, and adaptive responses in real time. Likewise, methodologies including Quick Response Manufacturing (QRM) seek to enhance system agility using time-based performance indicators; however, their effectiveness depends heavily on advanced digital integration and real-time information exchange [29,30]. Lean Project Management approaches contribute to improved coordination and waste reduction; nevertheless, their reliance on deterministic assumptions constrains their applicability in highly uncertain and non-repetitive supply chain contexts [24,31]. These approaches are valuable, but they often prioritize operational improvement logic over predictive uncertainty management. As a result, they do not fully address how lead-time forecasts should adapt under disruption-prone and non-stationary conditions.
In addition, metaheuristic optimization techniques, including advanced genetic algorithms and hyper-heuristic strategies, have been proposed to enhance scheduling flexibility and adaptability. While these methods offer improved optimization capabilities, they often involve high computational complexity and limited scalability when applied to large-scale, highly customized systems [32,33,34]. More importantly, such approaches typically focus on optimization rather than predictive and adaptive learning, limiting their ability to respond proactively to disruptions.
Overall, existing approaches in the literature primarily emphasize operational efficiency or localized optimization, with limited focus on system-level predictive intelligence and adaptive decision-making. The continued reliance on static parametric models and context-specific heuristics constrains their applicability in environments characterized by high variability, uncertainty, and interdependence. Consequently, there is a growing need for integrated, data-driven, and AI-enabled predictive frameworks that leverage IoT data streams, digital twin architectures, RL, and advanced temporal models such as the TFT to support continuous learning and dynamic adaptation. Such approaches enable proactive disruption management, improved uncertainty modeling (including extreme events through EVT), and enhanced decision support, thereby contributing to more resilient, sustainable, and intelligent supply chain systems. At the same time, the literature only weakly connects such frameworks to measurable sustainability outcomes, which suggests that sustainability claims should be handled carefully unless operational and environmental metrics are explicitly modeled.

2.3. IoT-Enabled Adaptive Predictive Modeling in Complex Supply Chains

The integration of IoT, digital twin technologies, and AI-driven models has significantly transformed lead-time prediction by enabling real-time data acquisition, adaptive learning, and system-level decision support. IoT technologies facilitate continuous monitoring of machine performance, transportation conditions, and production activities, thereby enhancing system-wide visibility and enabling timely detection of disruptions. Within complex adaptive supply chains, these capabilities support responsive and coordinated decision-making across interconnected system components. By enabling seamless data exchange between physical and digital layers, IoT forms the foundation for dynamic, data-driven predictive frameworks that enhance operational resilience and adaptive system behavior under uncertainty [35,36].
At the same time, recent developments in big data analytics have enhanced the importance of IoT-generated data by converting high-volume, high-velocity information streams into actionable insights. For example, Mutambik [37] showed that integrating big data analytics with operational performance frameworks can markedly improve the sustainability of supply chains by enabling more efficient resource utilization and informed decision-making. In this context, modern IoT-enabled predictive systems support proactive risk management by identifying patterns indicative of supplier- or logistics-related disruptions before they materialize [38,39]. Such capabilities are particularly important in socio-technical systems, where data-driven insights must be effectively integrated with human decision processes to improve overall system performance. However, the practical value of IoT in predictive supply chain applications depends heavily on data quality, event timing, interoperability, and institutional readiness—issues that are often acknowledged only briefly in the literature.
Recent bibliometric and empirical studies also highlight the rapid evolution of IoT-driven predictive analytics and digital twin technologies. Digital twins, which provide dynamic computational replicas of physical supply chain systems, enable the simulation of disruptions and the evaluation of system behavior under varying operational scenarios [32,40]. When combined with DES, digital twin environments facilitate system-level analysis by capturing interactions, feedback loops, and emergent behaviors inherent in complex supply chains. Furthermore, the integration of RL allows these systems to move beyond passive monitoring toward active, adaptive decision-making. Through iterative learning from IoT data streams, RL-based models can evaluate “what-if” scenarios and recommend optimal response strategies to mitigate disruptions and improve overall system performance [35,40]. In parallel, Nozari et al. [41] and Mutambik [42] highlighted that the effective deployment of IoT in industrial environments requires robust interoperability, strong cybersecurity mechanisms, and standardized communication protocols to ensure reliable and scalable data integration across distributed systems.
Despite these advancements, many existing IoT-based approaches continue to rely on deterministic adjustment mechanisms or predefined correction rules. Such approaches are limited in their ability to represent uncertainty, capture extreme events, or provide probabilistic confidence measures, which are critical for informed decision-making in highly dynamic supply chain environments [43,44]. Moreover, issues associated with data quality, system interoperability, and scalability continue to persist, particularly in complex and distributed supplier networks where heterogeneous systems must operate in a coordinated manner [45,46]. These limitations highlight the need for more advanced predictive frameworks that can integrate real-time data with robust uncertainty modeling and adaptive learning.
To address these challenges, recent research has emphasized the integration of IoT-driven data streams with advanced statistical and machine learning techniques, including quantile regression, EVT, and deep learning architectures such as the TFT. This combination enables the development of predictive models that capture both central tendencies and extreme disruptions while adapting dynamically to evolving system conditions. Additionally, the incorporation of online learning and particle filtering mechanisms supports continuous model updating, ensuring robustness in non-stationary environments. By embedding these capabilities within digital twin-enabled frameworks, supply chain systems can achieve a balance between real-time responsiveness and probabilistic rigor. Nonetheless, the literature still provides limited evidence on how these elements should be combined operationally, what trade-offs they introduce, and how their predictive value compares with simpler alternatives in forecasting settings. Such integrated approaches are essential for enabling resilient, sustainable, and intelligent supply chain operations, although direct sustainability effects remain under-measured in most existing studies.

2.4. Probabilistic and AI-Driven Learning in Supply Chain Analytics

The growing emphasis on uncertainty-aware and adaptive modeling has accelerated the adoption of advanced machine learning techniques in supply chain analytics. While Bayesian methodologies, such as Bayesian networks and Bayesian Neural Networks (BNNs), have historically played a central role in capturing predictive uncertainty, recent research has increasingly shifted toward hybrid AI-driven approaches that combine probabilistic reasoning with deep learning and adaptive optimization [47,48]. These approaches aim to model both inherent variability (aleatoric uncertainty) and evolving system knowledge (epistemic uncertainty), thereby enhancing robustness in complex and dynamic environments.
Empirical studies in areas such as demand forecasting, logistics planning, and risk assessment demonstrate that uncertainty-aware models significantly outperform deterministic approaches in terms of predictive reliability and decision support [49,50]. For instance, Dritsas [51] showed that causal machine learning methods improve supplier escalation management by balancing interpretability and predictive performance. Similarly, Fernández-Miguel et al. [47] and Ivanov [48] highlighted the importance of probabilistic reasoning in supplier evaluation and decision-making under uncertainty. These contributions underscore the growing role of uncertainty-aware analytics in enhancing decision quality within supply chain systems.
More recently, attention has shifted toward advanced deep learning architectures, especially sequence-based models including the TFT, which are capable of capturing complex temporal dependencies, multi-horizon forecasting patterns, and interactions among heterogeneous variables. Unlike traditional probabilistic models, TFT integrates attention mechanisms and interpretable components, enabling both high predictive accuracy and enhanced transparency in decision-making processes. In parallel, RL has emerged as a powerful approach for adaptive decision-making, allowing systems to learn optimal response strategies by interacting with dynamic environments. When combined with IoT-generated data streams, RL enables real-time adaptation and continuous improvement in system performance. However, the literature is often less explicit about the limitations of these models, including high data demand, training instability, sensitivity to hyperparameter settings, and operational barriers related to interpretability and trust. Moreover, many studies introduce AI models because they are powerful in general, rather than because their suitability is carefully justified relative to alternative forecasting methods.
To further enhance robustness in non-stationary environments, online learning and particle filtering techniques are increasingly employed to support continuous model updating and dynamic uncertainty management. These methods allow predictive models to adapt to changing system conditions without relying on static assumptions, making them particularly suitable for complex adaptive supply chains characterized by evolving demand patterns, disruptions, and interdependencies. While earlier studies have demonstrated the effectiveness of Bayesian approaches in uncertainty propagation [47,50], the integration of deep learning, RL, and online adaptive mechanisms offers a more scalable and flexible framework for predictive analytics in modern supply chain systems.

2.5. Research Gaps

Despite the growing body of research on lead-time prediction, digital twins, IoT-enabled analytics, and AI-based supply chain modeling, several important gaps remain. First, many existing studies continue to rely on static parametric assumptions or mean-based forecasting approaches, which are often insufficient for representing the nonlinear, asymmetric, and heavy-tailed behaviors observed in complex adaptive supply chains. As a result, extreme disruptions and tail risks remain underrepresented in much of the existing literature.
Second, although IoT technologies have improved real-time visibility and operational monitoring, many prior studies use IoT data primarily for tracking and descriptive analytics rather than for fully adaptive predictive modeling. In particular, the integration of IoT-driven data streams with dynamic lead-time adjustment and predictive updating remains limited, especially in supply chain environments characterized by evolving disruptions and non-stationary conditions.
Third, while digital twin technologies have increasingly been adopted for simulation and operational monitoring, their use in lead-time prediction research has often remained fragmented. Existing studies frequently focus on isolated simulation tasks or operational visualization without fully integrating digital twin environments with predictive learning, uncertainty modeling, and system-level resilience assessment. Consequently, the potential of digital twins to support adaptive and predictive decision-making in complex supply chains has not been fully realized.
Fourth, recent AI-based approaches, including deep learning and reinforcement learning, have shown strong predictive capabilities; however, these methods are often applied in isolation from statistical extreme-value modeling and system-level simulation frameworks. In addition, prior studies do not always justify clearly why a particular AI architecture is appropriate for the forecasting problem at hand, nor do they adequately discuss the implementation constraints and interpretability trade-offs associated with such models. Limited attention has been given to unified approaches that combine quantile regression, EVT, IoT-driven adaptation, digital twin-enabled DES, TFT-based temporal modeling, reinforcement learning, online learning, and particle filtering within a single predictive architecture.
Finally, from a theoretical perspective, the literature still provides limited discussion of predictive resilience as a system-level capability emerging from the interaction of uncertainty modeling, real-time data integration, adaptive learning, and digital coordination. From a sustainability perspective, the literature is also uneven: many studies refer to sustainability as a motivation or anticipated benefit, but far fewer operationalize it through measurable indicators such as emissions, energy use, waste reduction, or resource efficiency. This is a particularly important distinction for the present paper, which is positioned in a sustainability-oriented context but evaluates predictive resilience rather than directly measured sustainability outcomes.
Accordingly, this study addresses these gaps by proposing an integrated framework for resilient lead-time prediction that combines quantile regression and EVT for uncertainty and tail-risk modeling, digital twin-enabled DES for system-level experimentation, IoT-driven real-time adaptation, and AI-based learning mechanisms including TFT, reinforcement learning, online learning, and particle filtering. The contribution of the study lies not in claiming that each of these components is individually new, but in integrating them within a closed-loop, simulation-based predictive architecture designed for complex adaptive supply chains. In doing so, the study contributes a unified and simulation-based proof-of-concept framework for predictive resilience in complex adaptive supply chains, while treating sustainability relevance as prospective rather than directly quantified within the present experiments.

3. Methodology

This study develops a digital twin-enabled, IoT-driven predictive modeling framework designed to enhance the resilience of lead-time prediction across sourcing partners in complex adaptive supply chains. These systems are characterized by high variability, dynamic interactions, and uncertainty arising from both technological processes and human decision-making. In such environments, lead times across vendors frequently exhibit asymmetric and heavy-tailed distributions as a result of disruptions, heterogeneous performance, and evolving operational conditions, thereby limiting the effectiveness of traditional deterministic or static probabilistic models.
Access to empirical datasets that concurrently represent historical supplier performance, operational disruptions, and high-frequency IoT data streams remains limited as a result of confidentiality constraints and varying levels of digital maturity across supply chain actors. To overcome this limitation while preserving analytical robustness, this study employs semi-synthetic datasets enhanced through generative techniques, designed to replicate realistic supply chain variability under controlled experimental conditions. These datasets incorporate stochastic lead-time behavior, dynamic disruption patterns, and simulated IoT data streams that reflect real-time system observations.
The proposed framework integrates three key methodological components within a system-oriented modeling environment. First, advanced statistical modeling using quantile regression and EVT is employed to capture both central tendencies and extreme disruptions in supplier lead times. This enables a more comprehensive representation of uncertainty, particularly in environments characterized by heavy-tailed and non-stationary behavior. Second, a digital twin-enabled DES environment is developed to model system-level interactions, feedback loops, and disruption propagation across supply chain components. This allows the evaluation of predictive performance under varying operational scenarios and supports scenario-based analysis. Third, the framework incorporates AI-driven adaptive learning mechanisms, combining TFT for multi-horizon time-series prediction with RL for dynamic decision-making.
Each component is included for a distinct methodological reason rather than as an arbitrary aggregation of techniques. Quantile regression was selected because supplier lead times in complex adaptive supply chains exhibit heterogeneous and asymmetric variability that mean-based models cannot adequately capture. EVT was incorporated to explicitly model rare but high-impact disruptions in heavy-tailed, disruption-prone environments. The digital twin-enabled DES layer was chosen because lead-time behavior is shaped not only by statistical variability but also by system-level interactions, feedback loops, and disruption propagation across interconnected supply chain components. TFT was adopted to capture complex temporal dependencies and multi-horizon forecasting patterns across multivariate sequential inputs, while RL was introduced to support adaptive correction based on real-time IoT observations and the evolving operational state. Finally, online learning and particle filtering were included to keep the framework responsive and robust under non-stationary operating conditions. Together, these components form a unified, layered architecture that integrates statistical rigor, real-time data acquisition, simulation-based evaluation, and adaptive intelligence, in which each layer addresses a specific limitation of static forecasting under dynamic uncertainty and supports predictive resilience and system-level decision-making.
This section is organized as follows. Section 3.1 introduces the statistical modeling framework based on quantile regression and EVT. Section 3.2 presents the RL-based adaptive control mechanism for IoT-driven adjustment. Section 3.3 describes the AI-driven predictive learning layer, including TFT-based forecasting and adaptive updating. Section 3.4 integrates these components within the digital twin architecture and associated data-generation pipeline, while Section 3.5 and Section 3.6 explain the simulation design, calibration and validation workflow, and robustness analysis.

3.1. Statistical Modeling of Supplier Lead Time and Sensitivity Analysis

This subsection presents the statistical modeling and sensitivity analysis procedures used in the proposed framework to evaluate how lead-time uncertainty propagates under varying operational conditions in complex adaptive supply chains. Supplier lead time T s is represented as a stochastic variable to reflect variability and uncertainty in complicated adaptive supply chain situations. To address the limitations of fixed parametric models, this study employs a hybrid quantile regression–EVT framework, which captures both central tendencies and extreme deviations in lead-time behavior.
Quantile regression estimates the conditional distribution of lead time across different quantiles, enabling the modeling of heterogeneous and asymmetric variability. In the present study, quantile regression is estimated at the quantile levels τ ∈ {0.10, 0.25, 0.50, 0.75, 0.90}. These quantiles were selected to provide balanced coverage of lower-tail, central, and upper-tail lead-time behavior, thereby capturing routine variability as well as asymmetric escalation patterns under disruption. The τ-th conditional quantile is defined as:

$$Q_\tau(T_s \mid X) = X^{\top} \beta_\tau$$

where X represents explanatory variables derived from operational and IoT data, and β_τ is the quantile-specific parameter vector. Model parameters are obtained by minimizing the quantile loss function:

$$\min_{\beta_\tau} \sum_{i=1}^{n} \rho_\tau\!\left(T_{s,i} - X_i^{\top} \beta_\tau\right)$$

where ρ_τ(·) is the asymmetric check loss function, allowing different penalties for overestimation and underestimation. The lower quantiles (0.10 and 0.25) support characterization of relatively stable operating conditions, the median quantile (0.50) provides a robust central reference, and the upper quantiles (0.75 and 0.90) are used to capture disruption-sensitive and delay-amplified lead-time behavior.
To model rare but high-impact disruptions, EVT is applied to exceedances above a threshold u using the Generalized Pareto Distribution (GPD):

$$P(T_s > u + y \mid T_s > u) = \left(1 + \frac{\xi y}{\beta}\right)^{-1/\xi}$$

where ξ and β denote the shape and scale parameters, respectively. The expected magnitude of extreme exceedances is:

$$E\left[T_s - u \mid T_s > u\right] = \frac{\beta}{1 - \xi}, \quad \text{for } \xi < 1$$
In the present study, the EVT threshold u is selected using a high-quantile thresholding rule based on the empirical lead-time distribution. Specifically, u is set at the 90th percentile of the simulated lead-time samples within each μ σ regime, ensuring that EVT is applied only to the upper-tail observations corresponding to rare and disruption-driven delays. This rule was chosen to provide a stable balance between retaining a sufficient number of exceedances for reliable GPD estimation and isolating the extreme tail behavior most relevant to resilience analysis. The adequacy of the selected threshold was further checked using mean residual life behavior and parameter stability across nearby upper-tail quantiles, and the 90th-percentile threshold was retained because it provided consistent tail-shape estimates across simulation regimes. GPD parameters were then estimated from the exceedance samples using maximum likelihood estimation.
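The thresholding rule and GPD fit described above can be sketched as follows; the lognormal sample is an illustrative stand-in for the simulated lead-time data used in the study.

```python
# Sketch of the EVT tail model (Eqs. 3-4): fit a Generalized Pareto
# Distribution to exceedances above the 90th-percentile threshold,
# mirroring the thresholding rule described above. Synthetic data only.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
lead_times = rng.lognormal(mean=1.0, sigma=0.5, size=5000)  # skewed sample

u = np.quantile(lead_times, 0.90)            # 90th-percentile threshold
excesses = lead_times[lead_times > u] - u    # y = T_s - u

# Maximum likelihood fit of the GPD to the exceedances (location fixed at 0)
xi, _, beta = genpareto.fit(excesses, floc=0)

# Mean excess E[T_s - u | T_s > u] = beta / (1 - xi), valid for xi < 1
mean_excess = beta / (1 - xi)
```

In practice the fitted ξ would also be checked for stability across nearby thresholds, as the mean-residual-life diagnostic in the text suggests.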
Within the methodology, sensitivity analysis is then performed by perturbing key distributional characteristics and evaluating their effects within the digital twin-enabled simulation environment, following variance-based sensitivity analysis principles as implemented in tools such as the sensobol package [52]. This procedure is used to examine how uncertainty propagates through system-level interactions and influences predictive performance under different variability regimes. In the present study, however, the sensitivity analysis is intentionally focused on three variables, namely μ , σ , and δ ( t ) , because these represent the principal scenario-level sources of variation examined in the proposed framework: baseline lead-time level, uncertainty intensity, and IoT-driven adaptive correction. Together, these variables capture the core statistical, stochastic, and adaptive dimensions of the simulation design and therefore provide a targeted system-level view of how predictive error responds to changing operating conditions. Other parameters associated with the TFT architecture, RL learning process, particle filtering, online updating, and DES configuration were not included in the formal sensitivity analysis because the objective of this component was not to perform a full global decomposition across all model and hyperparameter settings, but rather to examine predictive responsiveness to the most influential operating-condition drivers of uncertainty. A fully comprehensive sensitivity analysis across all model, simulation, and learning parameters would substantially increase methodological and computational complexity and lies beyond the scope of the present simulation-based proof-of-concept study. Accordingly, the reported sensitivity analysis should be interpreted as focused rather than exhaustive, and broader global sensitivity analysis across additional parameters is identified as an important direction for future research.
Finally, real-time IoT data streams are integrated to enable dynamic recalibration of predictions. Coupled with online learning and particle filtering, the model continuously updates its parameters in response to evolving conditions, enhancing predictive resilience under both routine variability and extreme disruptions.

3.2. RL-Based Adaptive Control Using IoT Data Streams

To incorporate real-time operational dynamics within complex adaptive supply chains, an RL-based adaptive control mechanism is introduced to dynamically update lead-time predictions using IoT data streams. Instead of relying on a static adjustment factor, the proposed approach models system deviations as outputs of an adaptive policy learned through RL. Let T s denote the baseline lead time and δ ( t ) represent the adaptive correction derived from IoT observations. The adjusted lead time is defined as:
$$T_{s,\mathrm{adj}} = T_s \left(1 + \delta(t)\right)$$
where δ ( t ) is no longer a predefined stochastic variable but a control signal generated by an RL policy based on real-time system states. This formulation enables continuous adaptation to operational disturbances such as machine downtime, congestion, and logistics delays.
In the proposed formulation, the RL environment is embedded within the digital twin-enabled DES framework. At each decision step t , the agent observes a state vector s t composed of current IoT-derived signals, predicted lead-time statistics, and operational context variables. Specifically, s t includes machine availability status, congestion level, the current lead-time estimate, recent forecast error, and the previous adjustment factor. The action a t corresponds to selecting an adaptive correction to the lead-time prediction, implemented either directly as an updated δ ( t ) value or indirectly through the adjustment weights applied to sensor-derived deviations. The reward R t is defined to encourage accurate and stable prediction while penalizing excessive corrective oscillation, and is expressed as a negative weighted combination of prediction error and adjustment magnitude. This design enables the RL agent to learn policies that improve predictive responsiveness without introducing unstable control behavior.
For the RL implementation used in this study, policy learning is performed using a value-based Q-learning scheme with discrete action bins defined over the admissible range of δ(t). The adjustment interval [−0.05, 0.30] is discretized into 8 candidate actions, allowing the agent to choose bounded lead-time corrections at each decision step. The state-action value function is updated using a learning rate of 0.1 and a discount factor γ = 0.95. The discount factor is intentionally set to a relatively high value because supply chain disturbances often generate effects that extend beyond the current decision step. Thus, the RL agent must consider not only immediate prediction-error reduction but also the longer-term consequences of corrective actions on future lead-time behavior and predictive stability. In this framework, γ = 0.95 provides a balance between short-term responsiveness and long-horizon adaptation, ensuring that the learned policy remains sensitive to current disruptions while still accounting for delayed operational effects within the digital twin environment. Action selection during training follows an ε-greedy exploration strategy, with ε initialized at 0.20 and decayed linearly to 0.01 over training. Training is conducted for 500 episodes, each corresponding to a simulated supply chain trajectory in the digital twin environment. Convergence is monitored through moving-average reward stabilization, and training is terminated early if the average episode reward does not improve for 25 consecutive episodes.
Within the simulation environment, each episode proceeds by initializing the digital twin state, sampling IoT-linked operational conditions, applying the selected RL action, updating the adjusted lead-time prediction, and computing the resulting reward from forecast performance and control smoothness. The environment then transitions to the next state through DES-based event updates, including congestion buildup, machine downtime, and supplier-delay propagation. In this way, RL training is grounded in system-level simulation rather than isolated signal adjustment, allowing the learned policy to reflect feedback-rich operational dynamics.
The feedback loop in the proposed framework operates as a closed adaptive cycle linking sensing, prediction, decision adjustment, and system-state update within the digital twin environment. At each decision step, IoT-derived operational signals and current lead-time estimates are used to form the state vector observed by the RL agent. Based on this state, the agent selects an action that determines the corrective adjustment δ ( t ) or the associated sensor-weight configuration. This adjustment is then applied to update the predicted lead time, after which the digital twin simulates the resulting operational transition, including disruption propagation, congestion effects, and machine-state changes. The updated system response generates a new forecast error and reward signal, which are fed back to the RL agent to update the policy and to construct the next state. In this way, the output of one decision step becomes part of the input to the next step, allowing the framework to learn from evolving operational consequences rather than from static one-step corrections alone.
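The Q-learning scheme above can be illustrated with a deliberately simplified environment; the two-state dynamics, reward weights, and `target_delta` function are hypothetical stand-ins for the digital twin-DES environment, not the study's actual simulator.

```python
# Toy tabular Q-learning sketch of the adaptive correction policy in
# Section 3.2: 8 discrete delta(t) bins over [-0.05, 0.30], alpha = 0.1,
# gamma = 0.95, linearly decayed epsilon-greedy exploration, 500 episodes.
import numpy as np

rng = np.random.default_rng(0)
actions = np.linspace(-0.05, 0.30, 8)      # 8 discrete delta(t) bins
n_states = 2                               # 0 = normal, 1 = congested (toy)
Q = np.zeros((n_states, len(actions)))
alpha, gamma = 0.1, 0.95                   # learning rate, discount factor

def target_delta(state):
    # Hypothetical ideal correction: congestion inflates lead time by ~20%
    return 0.20 if state == 1 else 0.0

eps = 0.20
for episode in range(500):
    state = int(rng.integers(n_states))
    for _ in range(20):                    # decision steps per trajectory
        if rng.random() < eps:
            a = int(rng.integers(len(actions)))   # explore
        else:
            a = int(np.argmax(Q[state]))          # exploit
        delta = actions[a]
        # Reward: negative prediction error minus a small smoothness penalty
        reward = -abs(delta - target_delta(state)) - 0.01 * abs(delta)
        next_state = int(rng.integers(n_states))  # exogenous transition (toy)
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max()
                                - Q[state, a])
        state = next_state
    eps = max(0.01, eps - (0.20 - 0.01) / 500)    # linear epsilon decay

# Greedy corrections learned per state
policy = {s: float(actions[int(np.argmax(Q[s]))]) for s in range(n_states)}
```

After training, the greedy policy selects a near-zero correction in the normal state and an upward correction close to the congestion-driven inflation, mirroring the intended adaptive behavior.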

3.2.1. Statistical and Data-Driven Definition of the IoT Adjustment Factor

To facilitate reproducibility and systematic experimentation, the IoT adjustment factor is initially modeled as a constrained stochastic parameter:

$$\delta(t) \sim U(a, b)$$

where U(a, b) denotes a uniform distribution over the interval [a, b], representing bounded operational variability (e.g., −5% to +30%). This formulation serves as a baseline approximation of IoT-driven perturbations in the absence of structured sensor data. However, to align with data-driven and adaptive modeling principles, the framework extends this formulation by defining δ(t) as a function of real-time IoT observations:

$$\delta(t) = g(X(t), \theta_t)$$

where:
  • X(t) denotes a vector of IoT measurements,
  • θ_t represents time-dependent model parameters,
  • g(·) is a nonlinear mapping function learned through TFT and/or RL models.
To capture dynamic uncertainty and evolving system conditions, the parameters θ_t are updated recursively using online learning mechanisms:

$$\theta_{t+1} = \theta_t - \eta \nabla_{\theta} L_t$$

where η is the learning rate and L_t denotes the loss function at time t. In addition, particle filtering techniques are employed to estimate latent system states and refine δ(t) under non-stationary conditions, ensuring robustness in dynamic environments.
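A minimal sketch of the recursive update in Equation (8), assuming a scalar gain that maps a congestion signal to δ(t); both the gain and the data stream are illustrative, not part of the study's calibration.

```python
# Online-learning sketch of theta_{t+1} = theta_t - eta * grad L_t (Eq. 8),
# calibrating an illustrative scalar gain in delta(t) = theta * x(t).
import numpy as np

rng = np.random.default_rng(1)
theta = 0.0          # gain in the illustrative mapping delta(t) = theta * x(t)
eta = 0.05           # learning rate
true_gain = 0.25     # hypothetical ground-truth sensitivity

for t in range(2000):
    x = rng.uniform(0.0, 1.0)                     # IoT congestion reading
    target = true_gain * x + rng.normal(0, 0.01)  # observed correction
    pred = theta * x
    # Squared loss L_t = (pred - target)^2; its gradient wrt theta
    grad = 2.0 * (pred - target) * x
    theta -= eta * grad                           # recursive online update
```

The gain converges toward the true sensitivity without ever storing the full data stream, which is the property the recursive update is meant to provide.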

3.2.2. Practical Implementation in Real-World IoT Systems

In real-world implementations, the adjustment factor δ(t) can be directly computed from IoT measurements as a normalized deviation from reference operational values:

$$\delta(t) = \frac{X(t) - X_{\mathrm{ref}}}{X_{\mathrm{ref}}}$$

where X(t) represents real-time sensor observations and X_ref denotes expected or baseline values. This formulation translates raw IoT signals into proportional lead-time adjustments, enabling real-time responsiveness.
In advanced settings, multiple IoT signals can be fused using weighted aggregation, anomaly detection techniques, or state estimation methods such as Exponentially Weighted Moving Average (EWMA) or Kalman filtering, providing a more robust estimate of system deviation [53,54]. Furthermore, when integrated with RL, the adjustment process evolves from passive estimation to adaptive decision-making, where optimal responses to disruptions are learned through interaction with the environment.
Within the digital twin-enabled DES framework, these IoT-driven adjustments are continuously synchronized with the virtual system, enabling real-time scenario evaluation, predictive analysis, and system-level coordination. This integration supports the modeling of complex socio-technical interactions, where physical processes and digital intelligence co-evolve.
In this study, δ(t) is instantiated using bounded stochastic distributions (Equation (6)) to approximate realistic IoT-driven deviations, while Equations (7) and (8) define the adaptive learning extension that enables future real-world deployment. This layered formulation ensures consistency between controlled simulation experiments and scalable, data-driven implementations in digitally enabled supply chain systems.

3.2.3. Systematic Sensor-Based Formulation

To illustrate the impact of IoT-driven adjustments within complex adaptive supply chains, a systematic transformation from sensor data to δ(t) is established using multiple IoT data streams that capture real-time operational conditions.
Two representative sensor streams are considered:
  • Machine availability sensor S_1(t): represented as a binary state indicator reflecting normal operation (S_1(t) = 1) or downtime (S_1(t) = 0). Downtime events lead to sudden increases in lead times associated with processing activities.
  • Congestion sensor S_2(t): represented as a continuous-valued signal reflecting queue length or transportation delays, capturing gradual system degradation resulting from workload accumulation.
Each sensor measurement is standardized relative to its baseline (planned) value S_{i,ref}, resulting in proportional deviations:

$$d_1(t) = \frac{S_1(t) - S_{1,\mathrm{ref}}}{S_{1,\mathrm{ref}}}, \qquad d_2(t) = \frac{S_2(t) - S_{2,\mathrm{ref}}}{S_{2,\mathrm{ref}}}$$
The normalized deviations are combined through a weighted aggregation:

$$\delta_{\mathrm{raw}}(t) = \sum_{i=1}^{2} w_i(t)\, d_i(t), \qquad \sum_{i=1}^{2} w_i(t) = 1$$

where w_i(t) are time-varying weights, reflecting the relative importance of each sensor stream. Unlike static formulations, the weights are dynamically updated using RL to adapt to changing system conditions:

$$w_i(t+1) = w_i(t) + \eta\, \nabla_{w_i} R_t$$

where η is the learning rate and R_t is a reward function defined based on prediction accuracy or system performance (e.g., minimizing MAE or RMSE). This enables the model to learn optimal sensor contributions over time, transforming the adjustment mechanism into an adaptive decision process.
To capture temporal variations and attenuate high-frequency noise, an exponentially weighted moving average (EWMA) filter is employed:

$$\delta(t) = \lambda\, \delta_{\mathrm{raw}}(t) + (1 - \lambda)\, \delta(t-1)$$

where λ ∈ (0, 1) regulates the sensitivity to new observations.
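The normalization, weighted aggregation, and EWMA steps (Equations (9)–(14)) can be sketched end-to-end as follows; the baselines, weights, and λ are illustrative values, and the static weights stand in for the RL-updated weights of Equation (13).

```python
# Sketch of the sensor-to-delta(t) pipeline (Eqs. 9-14): normalized
# deviations, weighted aggregation, and EWMA smoothing on synthetic streams.
import numpy as np

rng = np.random.default_rng(3)
T = 100
s1 = (rng.random(T) > 0.1).astype(float)   # availability: 1 = up, 0 = down
s2 = 5.0 + rng.gamma(2.0, 1.0, T)          # congestion signal (queue length)

s1_ref, s2_ref = 1.0, 5.0                  # planned baseline values
d1 = (s1 - s1_ref) / s1_ref                # Eq. (9): downtime gives d1 = -1
d2 = (s2 - s2_ref) / s2_ref                # Eq. (10)

# Weights sum to 1 (Eq. 12); availability receives a negative weight so
# that downtime (d1 = -1) pushes delta upward -- an illustrative choice.
w = np.array([-0.2, 1.2])
lam = 0.3                                  # EWMA sensitivity to new data

delta = np.zeros(T)
prev = 0.0
for t in range(T):
    delta_raw = w[0] * d1[t] + w[1] * d2[t]      # Eq. (11)
    prev = lam * delta_raw + (1 - lam) * prev    # Eq. (14): EWMA smoothing
    delta[t] = np.clip(prev, -0.05, 0.30)        # operational bounds
```

Clipping to the operational interval at the end reflects the physical-plausibility constraint described below.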
To further enhance predictive capability, the adjustment factor can be directly learned from IoT data streams using a TFT:

$$\delta(t) = g_{\mathrm{TFT}}(X(t))$$

where X(t) = [S_1(t), S_2(t), …] represents multivariate IoT inputs and g_TFT(·) is a deep learning model capturing temporal dependencies and feature interactions. This formulation enables multi-horizon prediction and improves robustness in non-linear and non-stationary environments.
Additionally, to handle latent uncertainty and evolving system states, a particle filtering-based estimation of the adjustment factor can be employed:

$$\hat{\delta}(t) = \sum_{k=1}^{K} w_t^{k}\, \delta^{k}(t)$$

where δ^k(t) are particle-based estimates and w_t^k are their associated weights. This probabilistic formulation supports robust state estimation under uncertainty and complements the learning-based components.
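A bootstrap particle filter is one way to realize the weighted estimate above; the random-walk state model, noise levels, and resampling rule below are illustrative assumptions rather than the study's configuration.

```python
# Bootstrap particle-filter sketch of the weighted estimate of delta(t):
# a latent adjustment factor tracked from noisy IoT-derived observations.
import numpy as np

rng = np.random.default_rng(5)
K = 1000                                    # number of particles
particles = rng.uniform(-0.05, 0.30, K)     # initial delta hypotheses
weights = np.full(K, 1.0 / K)

true_delta = 0.12                           # hidden ground truth (synthetic)
obs_sd, proc_sd = 0.02, 0.005               # observation / process noise
for t in range(50):
    obs = true_delta + rng.normal(0, obs_sd)            # noisy IoT reading
    particles = particles + rng.normal(0, proc_sd, K)   # random-walk step
    # Re-weight particles by the Gaussian observation likelihood
    weights = weights * np.exp(-0.5 * ((obs - particles) / obs_sd) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size degenerates
    if 1.0 / np.sum(weights ** 2) < K / 2:
        idx = rng.choice(K, size=K, p=weights)
        particles, weights = particles[idx], np.full(K, 1.0 / K)

delta_hat = float(np.sum(weights * particles))   # weighted particle estimate
```

The effective-sample-size test keeps the filter from collapsing onto a few particles, which is the usual degeneracy failure mode under non-stationary conditions.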
The resulting adjustment factor is constrained within operationally realistic bounds (e.g., −5% to +30%) to ensure physical plausibility and consistency with industrial settings.
Within the digital twin-enabled DES framework, these components operate cohesively to model system-level interactions, feedback loops, and disruption propagation. This enables continuous synchronization between physical and virtual supply chain states, supporting predictive analytics and adaptive decision-making.
In this study, Equations (9)–(16) provide a structured and interpretable baseline. This layered formulation ensures both experimental control and scalability to real-world implementations, contributing to the development of resilient, sustainable, and intelligent supply chain systems operating as complex adaptive socio-technical environments.

3.3. AI-Driven Adaptive Learning and Predictive Modeling

3.3.1. Theoretical Foundation

While IoT-driven adjustments improve system adaptability to operational variability, lead-time prediction remains sensitive to model misspecification and evolving system conditions, particularly under skewed and heavy-tailed distributions commonly observed in complex adaptive supply chains [55,56]. To mitigate this limitation, an AI-driven framework based on the TFT, integrated with RL and online adaptation, is adopted to produce dynamic and uncertainty-aware predictions.
Let X ( t ) denote the input feature vector at time t , comprising past lead-time observations, IoT-derived signals, and operational context factors. In the present implementation, X ( t ) includes historical lead-time values, the IoT adjustment factor δ ( t ) , machine availability indicators, congestion-related variables, and scenario-specific operating-condition labels. The TFT is configured as a multi-horizon forecasting model with an input window length of 24 time steps and a prediction horizon of 6 time steps, enabling the framework to capture both short-term fluctuations and medium-range temporal dependencies. The TFT transforms these inputs into distributional parameters within the logarithmic space:
$$(\hat{\mu}_t, \hat{\sigma}_t) = f_\theta(X(t))$$

where f_θ(·) is the TFT model parameterized by θ, enabling multi-horizon forecasting and capturing temporal dependencies across heterogeneous inputs.
The predictive distribution of supplier lead time T_s(t) is represented as:

$$\ln T_s(t) \sim \mathcal{N}\!\left(\hat{\mu}_t, \hat{\sigma}_t^{2}\right)$$
This formulation provides a flexible, data-driven approximation of lead-time uncertainty, allowing the model to capture skewness and variability without imposing fixed parametric assumptions.
To incorporate adaptive decision-making, the RL mechanism described in Section 3.2 is coupled with the TFT-based predictive layer. In this integrated setting, TFT produces uncertainty-aware lead-time forecasts, while RL governs the adaptive correction process through policy-based adjustment of δ ( t ) within the digital twin environment. This coupling enables the framework to combine temporal representation learning with sequential decision adaptation under changing operating conditions. The optimal policy is obtained by maximizing cumulative reward:
$$\pi^{*} = \arg\max_{\pi}\; \mathbb{E}\left[\sum_{t=0}^{T} \gamma^{t} R_t\right]$$

where π denotes the policy, R_t the reward (e.g., prediction accuracy or cost efficiency), and γ ∈ (0, 1) the discount factor. This enables the system to learn adaptive responses to disruptions in real time.

3.3.2. Model Training and Implementation

The TFT model is trained to minimize prediction error in a transformed domain to enhance numerical stability and capture multiplicative variability in lead times. The loss function is defined as:
$$L(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left(g(T_{s,i}) - \hat{y}_i\right)^{2}$$

where g(·) is a monotonic transformation (e.g., logarithmic), and ŷ_i denotes the predicted value.
To adapt to evolving system dynamics, online learning updates model parameters as:

$$\theta_{t+1} = \theta_t - \eta \nabla_{\theta} L_t$$

where η is the learning rate.
To enhance robustness under non-stationary conditions, particle filtering is used to estimate latent states and aggregate predictions:

$$\hat{T}_s(t) = \sum_{k=1}^{K} w_t^{k}\, T_s^{k}(t)$$

where T_s^k(t) are particle predictions and w_t^k are normalized weights.
At inference, predictions are mapped back to the original scale using:

$$\hat{T}_s(t) = g^{-1}(\hat{y}(t))$$
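The transform-train-invert scheme (train in log space, map back with g⁻¹ = exp) can be sketched with an ordinary least-squares model standing in for the TFT; the covariate and coefficients are illustrative.

```python
# Minimal sketch of the transformed-domain scheme: fit with g(T) = ln T,
# then map predictions back with g^{-1} = exp. A linear model stands in
# for the far more expressive TFT; data are synthetic.
import numpy as np

rng = np.random.default_rng(11)
n = 400
x = rng.uniform(0, 1, n)                              # operational covariate
T_s = np.exp(1.0 + 0.8 * x + rng.normal(0, 0.2, n))   # multiplicative noise

# Least squares on y = ln T_s (mean squared error in the log domain)
y = np.log(T_s)
A = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Inference: predict in log space, then invert the transform
y_hat = A @ coef
T_hat = np.exp(y_hat)                                 # T_hat = g^{-1}(y_hat)
```

Fitting in the log domain turns multiplicative lead-time variability into additive error, which is exactly the numerical-stability motivation given above.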
The TFT architecture used in this study consists of a hidden state dimension of 64, 4 attention heads, a dropout rate of 0.10, gated residual networks for variable selection, and interpretable multi-head attention for temporal dependency modeling. Static and time-varying inputs are processed through variable selection networks prior to temporal encoding. The model was trained using a batch size of 64 for 100 epochs, with early stopping applied when validation loss did not improve for 10 consecutive epochs. The initial learning rate was set to 0.001 and reduced adaptively using a validation-based learning-rate scheduler. The dataset used for TFT development was partitioned into 70% training, 15% validation, and 15% testing subsets.
Training was performed using the Adam optimizer, implemented with β₁ = 0.9, β₂ = 0.999, and a weight decay coefficient of 1 × 10⁻⁵, ensuring stable convergence and computational efficiency. The TFT architecture integrates attention-based temporal encoding, variable selection networks, and gated residual connections to capture complex dependencies. This configuration was selected to balance predictive performance, model stability, and computational efficiency under the non-stationary conditions represented in the digital twin environment.

3.3.3. Adaptive Learning and Predictive Resilience

The proposed framework employs a sequential adaptive learning mechanism, where model parameters and predictions are continuously updated based on incoming IoT data:
( μ̂(t), σ̂(t) ) = f_{θ_t}( X(t) )
As the system evolves, predictive uncertainty dynamically adjusts: it decreases under stable conditions and increases when disruptions occur. To quantify predictive resilience, three performance indicators are defined:
Indicator 1: RMSE Variance (Stability Metric)
Var(RMSE) = (1/(K − 1)) Σ_{k=1}^{K} ( RMSE_k − RMSĒ )²
where RMSĒ denotes the mean RMSE across the K evaluated scenarios.
Lower values indicate stable predictive performance across scenarios.
Indicator 2: Sensitivity Index (Robustness Metric)
S = |∂RMSE/∂μ| + |∂RMSE/∂σ| + |∂RMSE/∂δ(t)|
The three components included in Equation (26) were selected because they represent the principal sources of predictive variation examined in the proposed framework. The parameter μ   captures changes in the central tendency of supplier lead times, σ represents variability and uncertainty intensity across operating regimes, and δ ( t ) reflects the IoT-driven adaptive correction mechanism operating within the digital twin environment. Together, these variables characterize the core statistical, stochastic, and adaptive dimensions of the simulation design. Accordingly, the sensitivity index is intended as a compact system-level robustness measure that evaluates how prediction error responds to changes in baseline lead-time level, lead-time dispersion, and operationally driven adjustment behavior. It is not intended as an exhaustive sensitivity decomposition over all model parameters, but rather as a targeted indicator of the three most influential factors governing predictive resilience in the present framework.
Smaller values indicate reduced sensitivity to perturbations in these key dimensions and therefore stronger robustness under changing operating conditions.
Indicator 3: Degradation Ratio (Stress-Test Metric)
D = RMSE_HighVar / RMSE_LowVar
Values close to 1 indicate strong resilience under extreme variability.
These metrics provide a quantitative and system-oriented assessment of resilience, aligning with recent research advocating performance-based evaluation of supply chain robustness [57,58]. By integrating IoT data, AI-driven prediction, RL, and adaptive uncertainty handling, the proposed framework supports resilient and adaptive predictive behavior within the present simulation-based study.
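The three indicators can be computed together from scenario-level RMSE values. In this sketch the sensitivity terms are finite-difference estimates supplied by the caller, and the example inputs are hypothetical:

```python
import numpy as np

def resilience_indicators(rmse_by_scenario, rmse_low_var, rmse_high_var,
                          sens_mu, sens_sigma, sens_delta):
    """Compute the three resilience indicators. The sensitivity terms are
    finite-difference estimates of dRMSE/d(mu, sigma, delta) supplied by
    the caller (illustrative interface, not the study's exact code)."""
    r = np.asarray(rmse_by_scenario, dtype=float)
    stability = r.var(ddof=1)                           # Indicator 1: Var(RMSE)
    robustness = abs(sens_mu) + abs(sens_sigma) + abs(sens_delta)  # Indicator 2: S
    degradation = rmse_high_var / rmse_low_var          # Indicator 3: D
    return stability, robustness, degradation

# Example with hypothetical per-scenario RMSE values (weeks):
stab, sens, deg = resilience_indicators(
    rmse_by_scenario=[2.1, 2.4, 2.9, 3.0],
    rmse_low_var=2.1, rmse_high_var=3.0,
    sens_mu=0.8, sens_sigma=1.1, sens_delta=0.3,
)
print(stab, sens, deg)
```

A resilient configuration would show a small variance, a small sensitivity sum, and a degradation ratio close to 1, matching the interpretations given for each indicator above.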

3.4. Digital Twin-Enabled Adaptive Framework for Lead-Time Prediction

The proposed framework is formulated as a digital twin-enabled, multi-layer structure in which each methodological component is selected to address a specific aspect of lead-time uncertainty, system dynamics, and adaptive decision-making in complex adaptive supply chains.
First, the statistical modeling layer establishes a probabilistic foundation for representing lead-time uncertainty. Instead of relying solely on parametric assumptions, this layer integrates quantile regression and EVT to capture both central tendencies and extreme disruptions in supplier lead times. This enables a more flexible representation of heavy-tailed and non-stationary behaviors observed in real-world supply chains.
Second, the IoT-driven adaptation layer introduces real-time adaptability by integrating operational inputs captured from distributed sensing infrastructures (e.g., machine status, logistics disruptions, congestion levels). These inputs are converted into a dynamic deviation factor δ ( t ) , as defined in Section 3.2, enabling continuous alignment between physical system states and their digital representation within the digital twin environment.
Third, the AI-driven learning layer enhances predictive capability through the integration of TFT models, RL, and online adaptive mechanisms. The TFT captures temporal dependencies and multi-horizon forecasting patterns, while RL enables adaptive decision-making by learning optimal responses to disruptions. In parallel, particle filtering and online learning mechanisms continuously update model parameters, ensuring robustness under evolving system conditions.
This layered architecture is embedded within a digital twin-based DES framework, enabling system-level modeling of interactions, feedback loops, and disruption propagation across the supply chain.
Within this architecture, the feedback loop operates through continuous interaction among sensing, prediction, adaptive correction, and simulated system evolution. First, operational inputs derived from IoT-linked signals are used to characterize the current system state. Second, the statistical and AI-driven layers generate baseline and uncertainty-aware lead-time predictions. Third, the RL-based adaptation layer uses the current state and predictive information to determine an updated corrective action through δ ( t ) . This adjustment is then applied within the digital twin environment, where DES-based event propagation simulates the resulting operational consequences, including congestion, delay accumulation, and machine-state transitions. The updated system response is subsequently fed back into the next decision cycle through revised state variables, forecast error, and reward information. In this way, the framework functions as a closed adaptive loop in which prediction influences system response, and system response in turn informs subsequent prediction and control.
The proposed framework combines statistical modeling, digital twin-enabled simulation, and AI-driven adaptive learning to support resilient lead-time prediction. Quantile regression and EVT provide a baseline representation of lead-time uncertainty, while IoT data streams enable real-time system awareness within the digital twin environment. Building on this foundation, TFT models capture complex temporal dependencies, and RL supports adaptive decision-making under dynamic conditions. Online learning and particle filtering further ensure continuous model updating and robustness under non-stationary environments.
Together, these capabilities provide a system-oriented and uncertainty-aware predictive framework for simulation-based evaluation of resilient and adaptive lead-time prediction in complex adaptive supply chains.

3.4.1. Semi-Synthetic Data-Generating Process

To support controlled experimentation while preserving realistic supply chain variability, this study employs a semi-synthetic data-generating process designed to emulate supplier lead-time behavior under both routine and disruption-prone operating conditions. The reference lead-time variable is generated from a log-normal distribution, selected because supplier lead times are non-negative and commonly exhibit right-skewed and multiplicative variability in practice. Specifically, the baseline lead time is generated as
ln T_s^0 ~ N(μ, σ²),
where μ ∈ {1.5, 2.0, 2.5, 3.0} controls the central tendency of supplier lead times and σ ∈ {0.2, 0.5, 0.9, 1.2} controls dispersion and uncertainty severity. These parameter combinations define multiple operating regimes ranging from stable to highly disruption-prone conditions.
To represent IoT-observed operational variability, the baseline lead time is dynamically adjusted using an IoT-driven deviation factor δ ( t ) , producing
T_s = T_s^0 ( 1 + δ(t) ).
Two scenario families are considered. In Scenario A, representing routine variability, δ(t) ~ U(−0.05, 0.15). In Scenario B, representing disruption-intensive conditions, δ(t) ~ U(−0.05, 0.30). This formulation allows the generated data to reflect both negative and positive operational deviations while preserving bounded and interpretable perturbations.
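A minimal sketch of this data-generating process (the negative lower bound of δ(t) follows the stated two-sided deviations; all other coding choices are illustrative):

```python
import numpy as np

def generate_lead_times(mu, sigma, scenario="A", n=10_000, seed=7):
    """Semi-synthetic DGP sketch: log-normal baseline lead time T_s^0
    adjusted by a uniform IoT deviation factor delta(t), following the
    scenario definitions above."""
    rng = np.random.default_rng(seed)
    base = rng.lognormal(mean=mu, sigma=sigma, size=n)      # T_s^0
    lo, hi = (-0.05, 0.15) if scenario == "A" else (-0.05, 0.30)
    delta = rng.uniform(lo, hi, size=n)                     # delta(t)
    return base * (1.0 + delta)                             # T_s

ts = generate_lead_times(mu=2.0, sigma=0.5, scenario="B")
print(ts.mean())   # roughly exp(2 + 0.5**2 / 2) * (1 + 0.125) ≈ 9.4
```

Because the log-normal baseline is strictly positive and δ(t) is bounded below by −0.05, every generated lead time remains positive, preserving physical interpretability.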
To emulate disruption effects beyond routine variability, rare delay events are injected into a subset of generated samples through positive tail perturbations consistent with heavy-tailed behavior. These disruptions are represented through EVT-consistent exceedance behavior and are propagated within the digital twin environment through simulated events such as congestion buildup, machine downtime, and supplier delay cascades. As a result, the final semi-synthetic dataset captures baseline stochastic variability, IoT-driven operational fluctuations, and disruption-induced tail behavior within a unified framework.
For each (μ, σ) configuration and each variability scenario, 10,000 lead-time realizations are generated. These samples are then partitioned into model-development and evaluation subsets for training, validation, and testing. In addition to the main log-normal reference DGP, robustness is further examined using alternative out-of-distribution DGPs, including a mixed log-normal–Gamma distribution, a heavy-tailed Weibull distribution, and a bimodal mixture distribution, as described in Section 3.6. All stochastic sampling procedures are initialized using fixed random seeds to ensure reproducibility.

3.4.2. GAN-Based Data Augmentation and Validation

To enhance the diversity and realism of the semi-synthetic dataset while preserving controlled experimental conditions, the baseline data generated through the procedure described in Section 3.4.1 were further enriched using a GAN-based augmentation strategy. The purpose of this step was not to replace the underlying statistical data-generating process, but to expand the range of plausible lead-time patterns and IoT-linked operational variations available for model development and evaluation. In this way, the final dataset retained analytical traceability while incorporating additional heterogeneity consistent with realistic supply chain behavior.
The GAN was trained on the semi-synthetic samples produced from the reference log-normal data-generating process after incorporation of IoT-driven deviations and disruption effects. The training input therefore consisted of multivariate records including baseline lead time, adjusted lead time, variability regime indicators, scenario labels, and representative IoT-derived features associated with machine availability, congestion, and logistics conditions. This configuration enabled the GAN to learn joint dependencies between lead-time realizations and operational context variables, thereby generating augmented samples that remained consistent with the broader dynamics of the proposed digital twin environment.
For implementation, a standard fully connected GAN architecture was adopted, composed of a generator and a discriminator trained in adversarial fashion. The generator received latent noise vectors of dimension 32, sampled from a standard normal distribution, and transformed them into synthetic tabular records matching the dimensionality of the training data. The discriminator received either real or generated samples and learned to distinguish between them. Both networks were trained using mini-batch stochastic optimization for 300 epochs with a batch size of 64. The Adam optimizer was used for both the generator and discriminator with a learning rate of 0.0002 and momentum parameters β1 = 0.5 and β2 = 0.999. To support reproducibility, the GAN training process used fixed random seeds, and model training was conducted under fixed hyperparameter settings across all augmentation runs.
The generator network consisted of three dense hidden layers with 128, 256, and 128 units, respectively, each followed by ReLU activation. The output layer used a linear activation for continuous variables and bounded transformation where needed to preserve physically meaningful ranges. The discriminator employed a mirrored dense architecture with hidden layers of 256, 128, and 64 units, using LeakyReLU activation with a negative slope of 0.2, followed by a sigmoid output layer for binary real-versus-generated classification. During training, the adversarial objective encouraged the generator to produce samples that matched the marginal and joint statistical structure of the original semi-synthetic dataset, while the discriminator enforced realism through continuous classification feedback. The augmented samples were then filtered using operational plausibility constraints to ensure consistency with valid lead-time ranges and bounded IoT perturbation behavior.
The GAN-based augmentation process was validated using both statistical and distributional checks. First, summary statistics of the generated samples, including mean, standard deviation, skewness, and quantile structure, were compared with those of the original semi-synthetic dataset across all major scenario regimes. Second, empirical distributions of generated and original samples were compared using the two-sample Kolmogorov–Smirnov test and Wasserstein distance, enabling assessment of distributional similarity. Third, the generated samples were checked for consistency with scenario-specific variability bounds, particularly with respect to the operational deviation factor δ ( t ) , to ensure that the augmented dataset did not introduce implausible observations outside the assumed simulation design. Only samples satisfying these validity checks were retained for downstream use.
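The distributional checks can be sketched with SciPy's two-sample routines. The stand-in samples and the acceptance thresholds below are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Stand-ins for the original and GAN-generated lead-time samples.
original = rng.lognormal(mean=2.0, sigma=0.5, size=5000)
generated = rng.lognormal(mean=2.0, sigma=0.5, size=5000)

# Two-sample Kolmogorov-Smirnov test and Wasserstein distance, as used
# for the distributional-similarity check.
ks_stat, p_value = stats.ks_2samp(original, generated)
w_dist = stats.wasserstein_distance(original, generated)

# Illustrative acceptance rule: retain augmented batches only when the KS
# test does not reject similarity and the Wasserstein distance is small.
accept = (p_value > 0.05) and (w_dist < 0.5)
print(ks_stat, w_dist, accept)
```

In the validation workflow described above, batches failing such checks would be discarded before the augmented records are merged into the training set.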
For model-development purposes, the original semi-synthetic dataset was divided into 80% training, 10% validation, and 10% testing subsets before augmentation. The GAN was trained using the training subset only, while the validation subset was used to monitor convergence and guard against overfitting. After training, the generator was used to produce augmented samples equal to 50% of the original training-set size, and these accepted synthetic records were merged with the original training data to form the final model-development dataset. The test subset remained untouched and was used exclusively for final evaluation. This augmentation step improved data diversity, increased exposure to heterogeneous operational conditions, and reduced the risk of overfitting to narrowly parameterized simulation outputs. Accordingly, the GAN-based augmentation layer served as a data enrichment mechanism that complemented, rather than replaced, the underlying probabilistic and simulation-based design of the study.

3.4.3. Workflow and Architecture of the Proposed Framework

The proposed framework follows a structured workflow that integrates data generation, predictive modeling, adaptive correction, digital twin simulation, and feedback-based updating within a unified architecture. The workflow begins with input generation, which includes historical lead-time observations, scenario-specific operating conditions, and IoT-linked operational signals such as machine availability and congestion indicators. These inputs are combined with the semi-synthetic and GAN-augmented data pipeline described in Section 3.4.1 and Section 3.4.2 to construct the model-development dataset.
The second stage consists of the statistical uncertainty modeling block, where quantile regression and EVT are used to estimate baseline lead-time behavior, asymmetric variability, and tail-risk characteristics. This block provides the initial probabilistic representation of supplier lead-time uncertainty and serves as the analytical foundation for subsequent adaptive modeling.
The third stage corresponds to the IoT-RL adaptation block. In this stage, real-time operational signals are processed to characterize the current system state, and the RL agent selects an adaptive correction action through the adjustment factor δ ( t ) or its associated sensor-weight configuration. This block introduces context-aware and feedback-driven adaptation to the baseline prediction.
The fourth stage is the AI-driven predictive learning block, in which the TFT model processes multivariate temporal inputs to produce uncertainty-aware lead-time forecasts. Online learning and particle filtering further refine these predictions by continuously updating model parameters and latent state estimates under non-stationary conditions.
The fifth stage is the digital twin-enabled simulation block, where the adjusted predictions are embedded into the DES environment. This block simulates system-level interactions, disruption propagation, queue dynamics, supplier-delay effects, and operational transitions under varying scenarios. The resulting outputs include updated lead-time predictions, forecast errors, resilience metrics, and comparative performance indicators such as MAE and RMSE.
Finally, the framework incorporates a closed feedback loop in which system outputs, forecast errors, and simulated operational responses are fed back into the RL and predictive layers for subsequent updating. In this way, the architecture functions as an adaptive cycle rather than a one-pass forecasting pipeline. The overall framework therefore links inputs, statistical modeling, adaptive control, AI-based prediction, digital twin simulation, outputs, and feedback updating within a coherent system-oriented workflow.
The integrated nature of this architecture necessarily involves methodological trade-offs. Its principal advantage is that each component addresses a distinct limitation of static lead-time prediction: quantile regression-EVT improves uncertainty and tail-risk representation, the IoT-RL layer supports adaptive correction, the TFT-based learning layer captures temporal dependencies under non-stationary conditions, and the digital twin environment enables system-level evaluation of disruption propagation and feedback effects. However, the combination of these components also increases model complexity, calibration effort, data dependency, and implementation difficulty relative to simpler forecasting approaches. In particular, the full framework requires richer data streams, tighter synchronization between analytical and operational layers, and greater technical effort for maintenance, interpretation, and deployment. Accordingly, the present study should be understood as a simulation-based proof of concept designed to evaluate the potential value of layered integration under complex operating conditions, rather than as a claim that all supply chain settings require the full methodological stack. In lower-complexity or lower-data environments, simpler statistical or IoT-adaptive variants may be more practical, whereas the full AI-enhanced framework is most justified in disruption-prone and digitally mature settings where the added predictive capability can outweigh the additional complexity.

3.4.4. Digital Twin Architecture, Data Flow, and Synchronization Mechanism

To clarify the operational logic of the proposed digital twin framework, Figure 1 presents the overall architecture and information flow among its major components. The framework is organized into five interacting layers: (i) the input and data layer, (ii) the data synchronization and state-construction layer, (iii) the predictive modeling layer, (iv) the adaptive decision layer, and (v) the digital twin simulation layer. Together, these layers form a closed-loop architecture in which sensing, prediction, simulation, and corrective adaptation are continuously linked.
The input and data layer contains the streams used to represent the observed or simulated supply chain state. These inputs include historical supplier lead-time records, scenario-specific operating conditions, and IoT-linked operational signals such as machine availability, congestion status, and disruption indicators. In the present study, these streams are represented using semi-synthetic data and simulated IoT-linked signals; however, the same architecture is intended to accommodate real sensor and event data in future deployment-oriented implementations.
The second layer performs data synchronization and state construction. At each decision epoch t , incoming operational signals and lead-time observations are time-aligned, preprocessed, and transformed into a consistent system-state representation. This synchronization step ensures that heterogeneous inputs from different sources are integrated into a common state vector before prediction and control are performed. The synchronized state includes the current operational condition, recent lead-time history, the current forecast, recent forecast error, and relevant adaptive variables. In this way, the framework explicitly links data updating to state updating, which is a defining feature of the digital twin logic used in this study.
The third layer is the predictive modeling layer. Here, the quantile regression-EVT model provides the baseline probabilistic representation of supplier lead-time uncertainty, while the TFT-based model produces uncertainty-aware temporal forecasts using synchronized multivariate inputs. Online learning and particle filtering continuously refine parameter estimates and latent state representations as new information becomes available. This layer therefore supports both baseline uncertainty characterization and adaptive predictive updating.
The fourth layer is the adaptive decision layer. Based on the synchronized system state and predictive outputs, the RL component determines the corrective adjustment through the adaptive factor δ ( t ) or its associated sensor-weight configuration. This stage transforms prediction into decision-aware adjustment by selecting context-sensitive actions that respond to evolving operating conditions. The adaptive output is then passed to the digital twin environment for execution and evaluation.
The fifth layer is the digital twin simulation layer, implemented through DES. In this layer, the adjusted lead-time prediction is embedded into the virtual representation of the supply chain, where operational events such as congestion buildup, machine downtime, and supplier-delay propagation are simulated. The digital twin updates the virtual system state in response to the selected adaptive action and produces simulated outputs including revised lead-time realizations, forecast error, resilience indicators, and reward information.
The feedback loop operates as follows. After each simulation cycle, the resulting system response is returned to the predictive and adaptive layers in the form of updated state variables, realized prediction error, and reward signals. These outputs are then used to update the RL policy, refine the online learning process, and support particle-filter-based state correction. Accordingly, the output of one cycle becomes part of the input to the next cycle. This repeated synchronization of sensing, prediction, simulation, and adaptation is what allows the framework to function as a digital twin rather than as a static simulation model.
Thus, the proposed digital twin is defined in this study not simply as a virtual model of supply chain operations, but as a feedback-based and state-updating simulation environment in which data flows, predictive models, and adaptive control mechanisms are continuously synchronized at each decision epoch.

3.5. Digital Twin-Based Simulation Design

To assess the robustness and adaptive capabilities of the proposed framework, a digital twin-enabled DES environment is developed. This environment replicates supplier behavior and operational dynamics under varying conditions, enabling controlled experimentation and system-level analysis.
The simulation design is built on the semi-synthetic data-generating process described in Section 3.4.1 and spans realistic procurement conditions in complex adaptive supply chains. Lead-time behavior is evaluated across multiple parameter regimes to reflect diverse operational scenarios.
The variability parameter σ ∈ {0.2, 0.5, 0.9, 1.2} represents increasing uncertainty regimes:
  • σ = 0.2: stable suppliers
  • σ = 0.5: moderate variability
  • σ ≥ 0.9: high uncertainty and disruption-prone conditions
IoT-driven deviations δ ( t ) are modeled using two operational scenarios:
  • Scenario A: δ(t) ~ U(−0.05, 0.15), routine variability
  • Scenario B: δ(t) ~ U(−0.05, 0.30), disruption-intensive conditions
Within the digital twin environment, simulation events (e.g., machine downtime, congestion buildup, supplier delays) are processed using DES logic, while stochastic sampling is used internally to generate variability, ensuring both realism and statistical rigor.
Each parameter configuration is evaluated using 10,000 simulated lead-time realizations, ensuring robustness and convergence of results.
For each scenario, predictions are generated using a multi-layer framework comprising: a statistical baseline based on quantile regression and EVT, an IoT-enabled adaptation layer within a digital twin environment, and an AI-driven learning layer integrating TFT, RL, and online learning. Model performance is evaluated using:
MAE = (1/n) Σ_{i=1}^{n} | T_{s,i} − T̂_{s,i} |
RMSE = √( (1/n) Σ_{i=1}^{n} ( T_{s,i} − T̂_{s,i} )² )
where T_{s,i} is the observed lead time and T̂_{s,i} is the forecast value.
To ensure reproducibility, all stochastic processes are initialized using static random seeds.
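The two evaluation metrics reduce to a few lines; the worked example uses hypothetical lead times in weeks:

```python
import numpy as np

def mae_rmse(actual, predicted):
    """MAE and RMSE as defined in the evaluation equations above."""
    err = np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float)
    return np.mean(np.abs(err)), np.sqrt(np.mean(err ** 2))

# Small worked example with hypothetical lead times (weeks):
mae, rmse = mae_rmse([10.0, 12.0, 8.0, 15.0], [9.0, 13.0, 8.0, 11.0])
print(mae, rmse)   # MAE = 1.5, RMSE = sqrt(4.5) ≈ 2.121
```

RMSE exceeds MAE whenever errors are unequal, which is why RMSE is the more sensitive indicator of large disruption-driven misses in the scenario comparisons.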

Calibration and Validation Workflow

To ensure methodological transparency and reproducibility, the study follows a structured calibration and validation workflow spanning data preparation, model development, calibration, testing, and robustness analysis. The full dataset generated through the semi-synthetic and GAN-augmented pipeline is first partitioned into 70% training, 15% validation, and 15% testing subsets. The training subset is used for model fitting, the validation subset is used for hyperparameter tuning and convergence monitoring, and the test subset is reserved exclusively for final performance evaluation.
For the baseline statistical model, calibration consists of estimating the quantile regression coefficients at the predefined quantile levels and fitting the EVT tail model using exceedances above the selected threshold. For the IoT-adaptive model, calibration includes tuning the bounded deviation structure and optimizing the RL-based adjustment mechanism within the digital twin environment. For the AI-enhanced model, calibration includes training the TFT architecture on the training subset, selecting hyperparameters using validation loss, and monitoring convergence through early stopping. In parallel, the RL component is calibrated through repeated simulation episodes, where policy updates are guided by reward stabilization and validation-based performance monitoring.
Following calibration, model validation is performed in three stages. First, in-sample validation is conducted on the validation subset to assess convergence stability and to compare alternative parameter settings. Second, out-of-sample testing is performed on the held-out test subset using MAE and RMSE as the primary predictive performance metrics. Third, scenario-based validation is conducted across both Scenario A and Scenario B to assess whether calibrated models maintain stable performance under moderate and disruption-intensive conditions.
To further strengthen methodological validity, the workflow also includes distributional validation and robustness testing. The calibrated models are evaluated not only on samples generated from the reference log-normal DGP, but also on alternative DGPs, including mixed log-normal-Gamma, heavy-tailed Weibull, and bimodal mixture distributions. This enables assessment of both within-distribution robustness and out-of-distribution generalization without re-tuning the trained models. In addition, a focused calibration experiment is conducted to compare baseline, IoT-adaptive, and AI-enhanced predictive outputs under synchronized digital twin conditions, thereby examining the consistency between IoT-driven empirical behavior and model-based predictions.
Overall, this calibration and validation workflow is designed to ensure that the reported results reflect not only model fit under idealized conditions, but also convergence quality, generalization performance, and robustness under heterogeneous and non-stationary operating regimes. This structured process strengthens the reproducibility of the study and clarifies how the proposed framework is systematically evaluated from data generation through final validation.
Despite the structured calibration, robustness testing, and out-of-distribution evaluation used in this study, the exclusive reliance on semi-synthetic data remains an important limitation for external validity. The current design was adopted because datasets that jointly provide historical supplier lead times, disruption records, and sufficiently granular IoT-linked operational signals are difficult to access in public form and are often restricted in industrial settings. Accordingly, the present methodology is intended to provide controlled proof-of-concept validation rather than direct empirical verification of deployment performance. The use of alternative data-generating processes, scenario-based testing, and calibration experiments strengthens internal validity and distributional robustness, but it does not substitute for validation on real-world or publicly available supply chain datasets. Future work should therefore test the proposed framework using industrial case data or suitable public benchmarks in order to assess transferability, calibration behavior, and predictive performance under operational conditions.

3.6. Robustness and Generalization Analysis

To evaluate generalization and reduce potential bias from the simulation design, an extended robustness analysis is conducted within the digital twin environment. This analysis assesses model performance under both in-distribution and out-of-distribution conditions.
In the first stage, within-distribution robustness is examined using new samples generated from the same DGP, while varying system conditions and IoT-driven perturbations. This enables evaluation of predictive stability under controlled variability.
In the second stage, out-of-distribution robustness is evaluated by testing models on alternative lead-time distributions that differ from training assumptions, without retraining. This provides a strict assessment of generalization in heterogeneous and non-stationary environments. Specifically, three alternative DGPs are considered: a mixed distribution combining a reference distribution with a Gamma component to represent heterogeneous supplier behavior; a heavy-tailed Weibull distribution ( k = 1.5 ) to capture disruption-driven delays; and a bimodal mixture distribution to model regime-switching between stable and disrupted operational states.
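The three alternative DGPs can be sketched as follows. Only the Weibull shape k = 1.5 is specified in the text; the mixture weights, scale parameters, and regime means are illustrative assumptions:

```python
import numpy as np

def sample_alternative_dgp(name, n=10_000, seed=11):
    """Out-of-distribution lead-time generators for robustness testing.
    Mixture weights and scale parameters are illustrative; only the
    Weibull shape k = 1.5 follows the text."""
    rng = np.random.default_rng(seed)
    if name == "lognormal_gamma_mix":
        mask = rng.random(n) < 0.7                      # assumed 70/30 mix
        ln = rng.lognormal(2.0, 0.5, size=n)
        gm = rng.gamma(shape=2.0, scale=4.0, size=n)
        return np.where(mask, ln, gm)
    if name == "heavy_tailed_weibull":
        return 8.0 * rng.weibull(1.5, size=n)           # k = 1.5, scale assumed
    if name == "bimodal_mixture":
        stable = rng.normal(6.0, 1.0, size=n)           # stable regime
        disrupted = rng.normal(18.0, 3.0, size=n)       # disrupted regime
        mask = rng.random(n) < 0.8                      # assumed regime weights
        return np.clip(np.where(mask, stable, disrupted), 0.1, None)
    raise ValueError(name)

for dgp in ("lognormal_gamma_mix", "heavy_tailed_weibull", "bimodal_mixture"):
    print(dgp, round(sample_alternative_dgp(dgp).mean(), 2))
```

Evaluating the trained models on such samples without re-tuning is what distinguishes the out-of-distribution stage from the within-distribution checks of the first stage.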
Model performance is evaluated using MAE and RMSE, consistent with the primary experiments. The proposed AI-driven framework, integrating TFT, RL, and filtering mechanisms, is applied without re-tuning across all scenarios to ensure an unbiased assessment of generalization capability. All results are reported with 95% confidence intervals computed using the normal approximation CI = x̄ ± 1.96 s/√n, where x̄ is the sample mean, s is the sample standard deviation, and n is the sample size.
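The confidence-interval computation can be sketched directly from the normal approximation; the sample values below are hypothetical RMSE replicates:

```python
import numpy as np

def ci95(values):
    """95% confidence interval for the mean via the normal approximation:
    x_bar +/- 1.96 * s / sqrt(n), with s the sample standard deviation."""
    v = np.asarray(values, dtype=float)
    half = 1.96 * v.std(ddof=1) / np.sqrt(v.size)
    return v.mean() - half, v.mean() + half

# Hypothetical RMSE replicates (weeks) from repeated simulation runs:
lo, hi = ci95([3.1, 2.8, 3.4, 3.0, 2.9, 3.2])
print(round(lo, 3), round(hi, 3))
```

With 10,000 realizations per configuration, the √n term makes these intervals narrow, which is why overlapping versus non-overlapping intervals can be used to judge whether model differences are meaningful.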
Overall, this analysis demonstrates the framework’s ability to maintain predictive reliability under both controlled variability and distributional shifts characteristic of complex supply chain systems.

4. Results

This section reports the performance of the proposed framework under controlled simulation conditions across moderate (Scenario A) and high-variability (Scenario B) settings. Supplier lead times were generated using the semi-synthetic and GAN-augmented pipeline described in Section 3.4, and model performance was evaluated using MAE and RMSE. Three model configurations were compared: a baseline Quantile–EVT model, an IoT-adaptive model with dynamic correction, and an AI-enhanced model integrating TFT, RL, and online updating. Additional analyses examine computational efficiency, robustness under alternative data-generating processes, calibration behavior, benchmark comparisons, and statistical significance. Because all experiments remain simulation-based, the findings should be interpreted as comparative evidence of model behavior under controlled assumptions rather than as direct proof of industrial deployment performance.

4.1. Scenario A: Moderate Variability Conditions

4.1.1. Baseline Quantile–EVT Model

The baseline statistical model, integrating quantile regression with EVT, provides a flexible representation of supplier lead-time behavior under moderate variability conditions (Table 1). By capturing both central tendencies and tail risks, the model offers a more robust reference than traditional mean-based or fixed-distribution approaches.
As shown in Table 1 and Figure 2, prediction errors increase systematically with both the mean (μ) and the dispersion (σ) of lead times. Under low variability (σ = 0.2), the model remains accurate and stable. As variability rises, however, both MAE and RMSE increase more noticeably, indicating that the model’s distributional representation becomes increasingly inadequate once uncertainty is more strongly driven by evolving operational conditions.
Figure 2 shows relatively smooth error surfaces, indicating that the Quantile–EVT formulation captures structured asymmetry and tail behavior better than simpler parametric baselines. Even so, the model remains essentially static. EVT improves upper-tail representation, but it does not provide a mechanism for recalibration when the system state changes over time. This helps explain why deterioration becomes more pronounced as both μ and σ increase: under moderate variability the uncertainty remains structured enough to be described probabilistically, whereas at higher settings the absence of adaptive correction becomes more limiting.
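The two ingredients of the baseline can be sketched in a simplified form: empirical quantiles stand in for covariate-based quantile regression of the central body, and a peaks-over-threshold Generalized Pareto fit captures the upper tail. The synthetic lognormal input and the 95% threshold are assumptions of this illustration, not the study's exact configuration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lead_times = rng.lognormal(mean=1.0, sigma=0.5, size=5000)  # synthetic stand-in

# Central body: empirical quantiles (stand-in for quantile regression
# when no covariates are used in this sketch).
q50, q90 = np.quantile(lead_times, [0.5, 0.9])

# Upper tail: fit a Generalized Pareto Distribution to exceedances over
# a high threshold (peaks-over-threshold EVT).
u = np.quantile(lead_times, 0.95)
exceedances = lead_times[lead_times > u] - u
shape, _, scale = stats.genpareto.fit(exceedances, floc=0.0)

# Extreme quantile estimate, e.g. the 99th percentile of lead time:
# P(X > x) = p_u * (1 - F_GPD(x - u)), solved for x at P = 0.01.
p_u = (lead_times > u).mean()
q99 = u + stats.genpareto.ppf(1 - 0.01 / p_u, shape, loc=0.0, scale=scale)
```

The resulting q99 estimate extrapolates beyond the empirical sample in a way a purely quantile-based model cannot, which is the motivation for the EVT layer.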

4.1.2. IoT-Adaptive Model

The IoT-adaptive model improves upon the baseline by incorporating real-time IoT-linked signals and RL-based adaptive weighting to update lead-time predictions within the digital twin environment (Table 2). Unlike the static benchmark, this model introduces state-aware correction that responds to evolving operating conditions.
Across all variability levels, the IoT-adaptive model yields substantially lower MAE and RMSE than the baseline. Errors still rise with increasing variability, but the rate of escalation is much lower, indicating reduced sensitivity to stochastic disturbance and improved stability under changing conditions.
The smoother gradients in Figure 3 indicate that real-time correction mitigates a substantial portion of the systematic deviation that the baseline model cannot adjust once estimated. This benefit is especially visible under moderate variability, where current operational signals provide useful corrective information. However, the correction mechanism remains comparatively local: it reacts to recent system state, but it does not model longer temporal structure or regime transitions as fully as the AI-enhanced architecture. This explains why the model performs strongly in Scenario A while still showing rising error under the higher settings within that scenario.
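The flavor of state-aware correction described here can be illustrated with a much simpler stand-in: an exponentially weighted running estimate of the residual bias, applied before each new prediction. The smoothing weight alpha is an assumption of this sketch, which omits the study's RL-based weighting and explicit IoT signal model.

```python
import numpy as np

def adaptive_correction(base_preds, observed, alpha=0.3):
    """Adjust each base prediction by a running estimate of recent
    systematic deviation (simplified stand-in for RL-based weighting)."""
    corrected = np.empty(len(base_preds), dtype=float)
    bias = 0.0  # running estimate of systematic over/under-prediction
    for t, (pred, obs) in enumerate(zip(base_preds, observed)):
        corrected[t] = pred + bias                        # correct first
        bias = (1 - alpha) * bias + alpha * (obs - pred)  # then update
    return corrected
```

With a persistent shift in the operating state, the corrected forecast converges toward the observed level while a static model retains its full bias, which mirrors the qualitative gap between the baseline and IoT-adaptive surfaces.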

4.1.3. AI-Driven Predictive Model Using TFT and RL

The AI-driven predictive model, based on TFT integrated with RL and online updating, demonstrates the strongest predictive performance under Scenario A (Table 3). By combining temporal learning, adaptive decision policies, and continuous updating, it captures both sequential structure and evolving system state.
Across all tested settings, the AI-enhanced model achieves the lowest error values among the three approaches. Although errors increase with higher variability, the increase remains comparatively limited, indicating stronger stability under non-stationary conditions.
The heatmaps in Figure 4 show the smoothest gradients and lowest overall error intensity, consistent with a model that can jointly exploit temporal dependencies, adaptive correction, and online updating. Under moderate variability, these mechanisms reduce both bias and error volatility. At the same time, the observed gains should be interpreted cautiously. Because the simulation design explicitly includes time-varying operational signals and adaptive state transitions, the experimental setting is naturally favorable to models that can exploit temporal and adaptive structure. The results therefore indicate structural suitability within the present design, rather than implying that equally large margins would necessarily hold in all external settings.
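The predict-then-update loop central to the online component can be sketched with a linear model over lagged features, updated one observation at a time. This stand-in uses scikit-learn's SGDRegressor and an assumed synthetic drifting series; it illustrates the updating mechanism only, not the TFT/RL architecture itself.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(1)

# Streaming lead-time series with slow regime drift (synthetic stand-in).
n, lag = 2000, 4
series = 5 + 0.002 * np.arange(n) + rng.normal(0, 0.5, n)

model = SGDRegressor(learning_rate="constant", eta0=0.001, random_state=0)
preds = []
for t in range(lag, n):
    x = series[t - lag:t].reshape(1, -1)   # lagged window as features
    if t > lag:                            # predict before seeing y[t]
        preds.append(model.predict(x)[0])
    model.partial_fit(x, series[t:t + 1])  # one-step online update
```

Because the coefficients move with the stream, the predictor tracks the drift that a model fitted once on a historical batch would miss.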

4.1.4. Cross-Model Comparative Analysis Under Scenario A

The comparative assessment in Figure 5 shows a clear progression in predictive performance from the baseline model to the IoT-adaptive model and then to the AI-enhanced model.
The baseline Quantile–EVT model captures distributional asymmetry and tail risks, but it remains sensitive to increases in both μ and σ because its underlying structure is not state-adaptive. The IoT-adaptive model substantially reduces this sensitivity by incorporating real-time correction, producing visibly smoother error surfaces. The AI-enhanced model performs best because it combines state-aware adaptation with temporal learning and continuous updating, allowing it to track evolving dynamics more effectively.
Under Scenario A, therefore, predictive performance improves as the framework incorporates real-time correction and temporal adaptive learning. This progression reflects increasing ability to respond to structured but evolving variability, rather than a general claim that more complex models are always preferable.

4.2. Scenario B: High Variability Conditions

4.2.1. Baseline Quantile–EVT Model

Under Scenario B, representing elevated variability conditions, the baseline Quantile–EVT model shows much sharper error escalation (Table 4 and Figure 6). While EVT continues to improve upper-tail representation compared with simpler statistical baselines, the model remains fundamentally limited by its static formulation.
As illustrated in Figure 6, both MAE and RMSE increase sharply across the parameter space. Error amplification becomes especially pronounced at higher dispersion levels, indicating that the baseline model cannot adequately respond once variability is driven not only by heavier tails, but also by evolving state interactions and disruption propagation.
This deterioration is not simply a larger version of the Scenario A pattern. Under Scenario B, the mismatch between static probabilistic representation and highly dynamic operational uncertainty becomes more severe. The Quantile–EVT layer can still characterize extremes, but it cannot update its structure as the simulated system evolves. As a result, predictive reliability deteriorates rapidly in disruption-intensive conditions.

4.2.2. IoT-Adaptive Model

Under Scenario B, the IoT-adaptive model remains substantially more robust than the baseline by incorporating real-time IoT-linked information and RL-based adaptive correction (Table 5 and Figure 7).
As shown in Figure 7, the error surfaces are markedly smoother than those of the baseline model, and the rate of error escalation is substantially reduced.
The improvement is quantitatively large. For example, at μ = 2.5 and σ = 0.9, RMSE decreases from 19.50 weeks in the baseline model to 4.38 weeks in the IoT-adaptive model. This indicates that real-time corrective information is highly valuable in a simulated environment where operating conditions evolve over time.
Nevertheless, residual uncertainty remains evident under the most extreme settings. At σ = 1.2, error magnitudes still increase notably, showing that IoT-driven correction alone is insufficient to capture the full temporal and nonlinear complexity of disruption-intensive systems. These large gains should therefore be interpreted critically. They indicate that a state-aware corrective model is much better matched to the current experimental design than a static statistical benchmark, but they should not be read as direct evidence that equally large margins would necessarily persist in field deployment.

4.2.3. AI-Enhanced Model (TFT-Based with Adaptive Learning, High-Variability Scenario)

Under Scenario B, the AI-enhanced model exhibits the strongest predictive robustness and stability across the full parameter space (Table 6 and Figure 8). By integrating temporal learning, adaptive updating, and uncertainty-aware modeling, it remains substantially more stable than the baseline and IoT-adaptive models under highly volatile conditions.
As shown in Figure 8, the error surfaces are notably flatter than those of the other two models, indicating much weaker sensitivity to increases in μ and σ.
Even under extreme variability (σ = 1.2), the model maintains comparatively low error levels. This indicates that temporal representation learning, adaptive correction, and online updating jointly improve robustness under non-stationary conditions. However, the magnitude of the advantage warrants careful interpretation. Because the AI-enhanced framework was specifically designed to exploit temporal dependence, multivariate state information, adaptive correction, and online updating, the simulation environment is structurally favorable to this class of models. The present results therefore demonstrate strong methodological suitability within the proof-of-concept design, while leaving open the question of how large such gains would remain under public or industrial data.

4.2.4. Cross-Model Comparative Analysis Under Scenario B

The comparison under Scenario B (Figure 9) reveals the clearest separation among the three models, with the performance hierarchy becoming more pronounced as variability intensifies.
The baseline model exhibits rapid error escalation, reflecting the limits of a static probabilistic framework under strong non-stationarity. The IoT-adaptive model reduces this deterioration substantially through real-time correction, but still shows residual sensitivity under the most extreme settings. The AI-enhanced model performs best because it combines state-aware correction with temporal learning and continuous updating, allowing it to remain comparatively stable even when disruption intensity increases.
Under Scenario B, therefore, the performance hierarchy becomes more pronounced, indicating that adaptive and temporally aware models are better matched to disruption-intensive conditions than static statistical baselines. Even so, the magnitude of the observed gaps should be interpreted within the constraints of the simulation design.

4.3. Comparative Assessment Across Scenarios A and B

Figure 10 presents an integrated comparison of predictive performance across the three modeling paradigms under both moderate and high variability. This comparison shows how model behavior changes as uncertainty intensifies.
Across both scenarios, the same ordering is preserved: the baseline model is most sensitive to rising variability, the IoT-adaptive model improves responsiveness through state-aware correction, and the AI-enhanced model remains the most stable under both moderate and high uncertainty. The baseline model is adequate only while uncertainty remains relatively structured. The IoT-adaptive model improves performance by incorporating real-time correction, but its benefit becomes more limited once the system exhibits stronger temporal complexity. The AI-enhanced model remains the most stable because it can integrate temporal feature learning, adaptive updating, and uncertainty-aware prediction within a single framework.
This cross-scenario consistency strengthens the interpretation that predictive resilience in the present study is tied to model architecture and adaptive capability, while still leaving external validation as an important next step.

4.4. Computational Efficiency Analysis

To evaluate practical feasibility, computational efficiency was assessed for the three modeling layers in terms of training time, inference time, peak memory usage, and deployment implications (Table 7). All experiments were conducted on a standard computing platform (Intel® Core™ i7 processor, up to 3.0 GHz, 32 GB RAM), representing a realistic proof-of-concept environment for digital twin-enabled supply chain systems.
All models exhibit constant-time inference complexity, O(1), at deployment because each prediction involves a fixed sequence of operations independent of dataset size. This property is important for time-sensitive decision-making. However, inference speed alone is not sufficient to characterize practical computational burden. Training cost, memory use, and maintenance requirements are also relevant, especially for the AI-enhanced framework.
The baseline statistical model is computationally light because it relies on relatively simple quantile estimation and EVT-based tail fitting. The IoT-adaptive model achieves the fastest inference while retaining a low memory footprint, making it attractive for latency-sensitive monitoring and correction tasks. The AI-enhanced model imposes substantially higher training time and memory requirements because it combines temporal encoding, attention mechanisms, online updating, and adaptive learning.
From a deployment perspective, all three models satisfy practical real-time inference requirements, but they involve different computational trade-offs. The IoT-adaptive model offers the fastest response and lowest online burden, whereas the AI-enhanced model provides stronger predictive performance at a substantially higher training and maintenance cost. This distinction is important for practical interpretation: the predictive gains of the AI-enhanced model are not computationally free, and its use is most justified in environments where robustness under high variability is more valuable than minimal training cost.
Overall, the computational analysis supports the feasibility of the framework as a comparative proof of concept, while also indicating that scalability should be examined further under larger datasets, longer horizons, and more demanding update frequencies.

4.5. Robustness Validation Under Alternative Data-Generating Processes (DGPs)

To assess generalization beyond the reference simulation assumptions, robustness experiments were conducted under multiple alternative DGPs, including bimodal, mixed, and heavy-tailed Weibull distributions.

4.5.1. Within-Data-Generating-Process Robustness

Across all alternative DGPs, a consistent performance hierarchy is observed (Table 8). The AI-enhanced model achieves the lowest error levels, followed by the IoT-adaptive model, while the baseline Quantile–EVT model exhibits much larger errors under increased distributional complexity.
The results show that the more the data-generating process departs from a single stable distribution, the more severely the static baseline deteriorates. The IoT-adaptive model reduces this deterioration substantially by incorporating state-aware correction, while the AI-enhanced model remains the most stable across all alternative structures. The particularly large gaps under bimodal conditions should be interpreted cautiously, since regime-switching patterns are especially challenging for a static baseline and comparatively favorable to adaptive temporal models. Accordingly, the magnitude of the gain should be read as evidence of comparative suitability within the present experimental design rather than as a guaranteed margin under external datasets.

4.5.2. Evaluation of Out-of-Distribution Robustness

To further evaluate generalization, the AI-enhanced model was trained on the reference baseline distributions and tested on unseen alternative DGPs. The results (Table 9) indicate only limited degradation in predictive accuracy under moderate distributional shifts.
Performance deterioration is most evident under bimodal testing, where regime-switching behavior creates stronger mismatch with training assumptions. Even so, the model maintains acceptable error levels across all tested conditions. This indicates robustness to alternative simulated distributions, although it does not yet establish transferability to public or industrial lead-time datasets.

4.5.3. Discussion and Validity Considerations

The robustness analysis shows that the predictive advantage of the AI-enhanced framework is maintained across both within-distribution and out-of-distribution tests, particularly under mixed and bimodal regimes where static baselines deteriorate more sharply. At the same time, these findings remain bounded by the simulation design. The use of semi-synthetic data, alternative DGPs, and out-of-distribution testing strengthens internal robustness assessment, but it does not establish empirical transferability to public or industrial lead-time datasets. Accordingly, this subsection should be interpreted as evidence of methodological robustness within the proof-of-concept setting rather than as a substitute for external validation.

4.6. IoT–AI Calibration Experiment for Digital Twin-Based Predictive Modeling

The expanded benchmark set provides a more informative context for the proposed model (Table 10). Naïve and ARIMA models provide standard forecasting references, XGBoost offers a strong non-sequential machine-learning baseline, and LSTM and DeepAR provide deep-learning comparators (Table 11). The AI-enhanced model achieves the best overall predictive performance, indicating that the combination of temporal learning, adaptive correction, and online updating adds value beyond both classical and recent alternatives. At the same time, these benchmark differences should not be interpreted uncritically. Some of the observed advantage likely reflects the fact that the simulation environment contains temporal and adaptive structure that is especially well matched to the proposed architecture. For this reason, comparative superiority in the present study should be interpreted as strong evidence within the current proof-of-concept design rather than as universal dominance across all forecasting settings.

4.7. Benchmark Comparison

To assess comparative performance more rigorously, the proposed framework was evaluated against a broader benchmark set spanning simple, statistical, machine-learning, and deep-learning baselines. All models were trained and evaluated using the same semi-synthetic, GAN-augmented, IoT-enriched lead-time dataset, the same 70%/15%/15% train-validation-test split, and the same input feature space. Hyperparameters for the benchmark models were selected using validation-set performance prior to final evaluation on the held-out test set.
To determine whether the observed benchmark differences are statistically meaningful, repeated-run benchmark evaluations were compared using paired significance testing on held-out test performance under the common evaluation protocol (Table 12). Because benchmark results were obtained from the same repeated experimental settings, paired comparisons were conducted between the AI-enhanced model and each benchmark method. Given that normality of the paired error differences could not be assumed, the Wilcoxon signed-rank test was adopted as a conservative non-parametric alternative. To account for multiple pairwise comparisons, Holm-adjusted p-values were reported. In addition to significance levels, effect sizes were included to quantify the magnitude of the observed differences.
The statistical results confirm that the AI-enhanced model achieves statistically significant improvements in RMSE relative to all benchmark models after Holm adjustment. The smallest statistically significant margin is observed relative to DeepAR, which is expected given that DeepAR is the strongest among the alternative probabilistic deep-learning baselines. In contrast, the largest differences are observed relative to the Naïve and GPR baselines, which are less capable of handling the heavy-tailed, adaptive, and non-stationary conditions represented in the present simulation design. The reported effect sizes further indicate that these differences are not only statistically detectable but also practically meaningful within the benchmark setting. However, these findings should still be interpreted within the limits of the present proof-of-concept framework and should not be treated as a substitute for external validation on public or industrial datasets.
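The testing protocol can be sketched as follows: SciPy's paired Wilcoxon signed-rank test supplies the raw p-values, and the Holm step-down adjustment is implemented directly. The paired RMSE samples below are illustrative placeholders, not the study's results.

```python
import numpy as np
from scipy.stats import wilcoxon

def holm_adjust(pvals):
    """Holm step-down adjustment for multiple pairwise comparisons."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    adjusted = np.empty(m)
    running_max = 0.0  # enforce monotonicity of adjusted p-values
    for rank, idx in enumerate(order):
        val = min((m - rank) * p[idx], 1.0)
        running_max = max(running_max, val)
        adjusted[idx] = running_max
    return adjusted

# Per-run RMSE values under the common protocol (illustrative numbers).
rng = np.random.default_rng(7)
proposed = rng.normal(3.0, 0.2, 30)
benchmarks = {
    "DeepAR": rng.normal(3.6, 0.3, 30),
    "LSTM":   rng.normal(4.5, 0.4, 30),
    "Naive":  rng.normal(9.0, 0.8, 30),
}
raw_p = [wilcoxon(proposed, b).pvalue for b in benchmarks.values()]
adj_p = holm_adjust(raw_p)
```

The non-parametric pairing avoids the normality assumption on error differences, and the Holm correction controls the family-wise error rate across all pairwise comparisons.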

4.8. Validation of the Proposed Predictive Framework

Validation focused on whether the framework remained internally consistent across scenarios, updating cycles, and alternative data-generating assumptions (Table 13). First, cross-scenario evaluation showed gradual and proportional performance degradation when moving from Scenario A to Scenario B, indicating that the framework responds to increased uncertainty without unstable error escalation. Second, the adaptive learning process remained stable under continuous IoT-linked updates, suggesting that online updating and RL-based correction improve responsiveness without excessive sensitivity to short-term noise. Third, robustness testing under mixed, Weibull, and bimodal DGPs showed that performance degradation remained controlled, particularly for the AI-enhanced model, even when the testing distributions differed from the reference training assumptions.
Overall, these checks support the internal consistency of the proposed framework as a simulation-based proof of concept. They indicate that adaptive updating remains stable, predictive degradation remains proportional to uncertainty, and robustness is preserved under alternative DGPs. However, these validation results should not be interpreted as replacing empirical validation on public or industrial datasets.

5. Discussion

This study investigated predictive lead-time modeling within complex adaptive supply chains by integrating statistical, IoT-driven, and AI-enhanced approaches within a digital twin environment, with implications for resilience and possible relevance to sustainability-oriented planning. By comparing a baseline statistical model based on quantile regression and EVT, an IoT-adaptive model incorporating real-time system feedback, and an AI-enhanced framework combining TFT, RL, and online updating, the results reveal distinct patterns in predictive performance, robustness, and adaptability under varying levels of uncertainty. Rather than treating these findings as evidence of deployment readiness, the present discussion interprets them as simulation-based evidence about how different modeling layers behave under controlled variability and disruption conditions. Overall, the findings show that static statistical models are insufficient for capturing dynamic system behavior, that IoT integration can improve responsiveness within the simulated environment, and that AI-driven adaptive learning provides the strongest predictive resilience among the evaluated alternatives. At the same time, these findings should be interpreted with caution, because the study relies on semi-synthetic data, simulated IoT-linked signals, and a proof-of-concept digital twin environment rather than real industrial deployment conditions.

5.1. Discussion of Findings in Relation to Existing Literature

The findings of this study provide several important insights when interpreted in relation to the existing literature on lead-time prediction, supply chain resilience, digital twins, IoT-enabled analytics, and AI-driven forecasting. First, the results confirm prior research showing that conventional or static statistical approaches become increasingly limited as variability and disruption intensity rise. Although the baseline quantile regression-EVT model provided a more flexible representation of uncertainty than traditional mean-based or fixed-distribution approaches, its predictive performance deteriorated under elevated variability conditions. This result is consistent with earlier studies indicating that static probabilistic models are often insufficient for highly dynamic and non-stationary supply chain environments, particularly when lead-time behavior is shaped by asymmetric variability and extreme disruption effects [17,18,19,20,21]. In this respect, the present findings reinforce the argument that uncertainty in complex adaptive supply chains cannot be adequately represented through static estimation alone.
Second, the results are also aligned with previous research emphasizing the value of IoT-enabled visibility and real-time data integration in improving responsiveness and operational awareness within supply chain systems [5,6,7,8,9,10,37,38]. The IoT-adaptive model achieved substantial reductions in prediction error relative to the baseline model across both variability scenarios, demonstrating that real-time adjustment based on operational signals can improve predictive stability and responsiveness. This supports prior studies suggesting that IoT data streams enhance disruption detection, monitoring capability, and system awareness in digitally enabled supply chains [39,40,41,42,43]. At the same time, the results of this study show that IoT-driven adjustment alone is not sufficient under extreme variability conditions, where residual error growth remains evident. In this respect, the present work complements the existing literature by showing that real-time data availability materially improves predictive adaptation, but does not by itself fully resolve the challenges posed by temporal complexity, nonlinear disruption propagation, and evolving uncertainty structures.
Third, the strong performance of the AI-enhanced model is consistent with a growing body of work demonstrating the effectiveness of advanced deep learning and adaptive learning methods in forecasting complex supply chain behaviors [54,55,56,57,58]. The superior robustness of the TFT-based framework, especially under high-variability and out-of-distribution conditions, supports earlier findings that attention-based temporal architectures are well suited to modeling heterogeneous, multi-horizon, and non-stationary time-series data. The present findings therefore confirm the value of deep temporal models for capturing lead-time dynamics that cannot be adequately modeled using simpler or static approaches. However, the study also extends this literature by showing that predictive performance improves further when temporal learning is embedded within a broader adaptive framework that includes reinforcement learning, online updating, and particle filtering.
More importantly, the present study extends prior research by integrating several methodological streams that are often examined separately. Existing studies have commonly focused either on statistical uncertainty modeling, IoT-enabled monitoring, digital twin simulation, or AI-based forecasting as distinct approaches. In contrast, the results of this study suggest that the greatest predictive resilience emerges when these components are combined within a unified, system-oriented framework. The contribution here is therefore not the introduction of a fundamentally new theory, but a more explicit demonstration, within a controlled simulation setting, that predictive performance improves when uncertainty modeling, adaptive correction, and digital twin updating are linked rather than treated independently.
At the same time, the present results do not by themselves justify strong claims about sustainability performance. Although improved predictive stability may be relevant to planning efficiency, the current experiments do not directly quantify emissions, waste reduction, energy use, or other sustainability indicators. Accordingly, sustainability in this study should be interpreted as a possible downstream implication of better prediction and adaptation, not as an empirically demonstrated outcome.
Overall, the discussion of the findings indicates that the contribution of this study lies not merely in confirming the usefulness of advanced analytics in supply chain settings, but in showing that resilient lead-time prediction in complex adaptive supply chains benefits from a structured integration of probabilistic modeling, IoT-enabled responsiveness, digital twin-supported experimentation, and AI-driven adaptive learning. This interpretation is strongest at the level of simulation-based methodological insight rather than immediate real-world generalization.

5.2. Theoretical Contribution

The findings advance predictive supply chain analytics by reframing lead-time forecasting as a dynamic, system-driven, and uncertainty-aware process rather than a static estimation task, with implications for resilient and adaptive supply chain management [59]. The baseline statistical model, enhanced with EVT, performs satisfactorily under low variability conditions but degrades as the central tendency and dispersion of lead-time distributions increase. This confirms that heavy-tailed and multiplicative uncertainties, characteristic of complex adaptive supply chains, cannot be reliably captured using static parametric models.
However, the theoretical contribution of this study should not be overstated. The concept referred to here as distributional sensitivity is not presented as a new formal theory, but rather as an interpretive way of understanding how prediction error responds to changes in mean level, variability, and adaptive correction within the proposed framework. In this sense, the paper contributes a structured analytical interpretation of predictive resilience, not a novel theoretical construct in the strict sense.
The integration of IoT-driven adjustments establishes a direct link between probabilistic modeling and cyber-physical supply chain systems, with implications for real-time and uncertainty-aware decision support. Real-time operational signals, such as machine states, congestion, and logistics disruptions, are incorporated to continuously recalibrate lead-time predictions. This creates a closed-loop predictive structure in which sensing, modeling, and adjustment are interconnected and illustrates how such integration may improve responsiveness under dynamic operating conditions. From a systems perspective, this shifts forecasting from a standalone analytical task to an embedded component of operational processes in data-rich supply networks that may support adaptive system behavior.
The AI-enhanced framework further extends theory by introducing adaptive learning and decision-aware prediction through the integration of TFT, RL, and online updating within a digital twin environment. Unlike conventional models trained on static datasets, the proposed approach continuously updates predictions using simulated streaming IoT-linked inputs, enabling it to capture both temporal dependencies and evolving system dynamics. This transforms forecasting into a continuous learning process embedded within a dynamic system, where predictions evolve alongside operational conditions.
This work also aligns with the perspective of supply chains as complex adaptive systems, where variability emerges from interactions among suppliers, logistics processes, and operational constraints. Within this context, lead-time uncertainty is not merely statistical but reflects system-wide dynamics. The digital twin operationalizes this perspective by enabling real-time monitoring, simulation, and adaptation, demonstrating that predictive accuracy and system adaptability are inherently interconnected.
Furthermore, the study contributes to resilience theory by linking predictive performance to system-level capabilities. Stable error behavior reflects absorptive capacity, controlled sensitivity indicates adaptive capacity, and consistent performance under distributional shifts demonstrates robustness. The model’s responsiveness to IoT signals supports anticipatory behavior, suggesting that predictive models can function as embedded mechanisms that may enhance resilience in digital supply systems.
An additional contribution is reflected in its consistency with prior adaptive approaches in supply risk analytics. Previous research indicates that incorporating real-time disruption signals enhances risk assessment [60,61], improves supplier reliability through operational data [62,63], and integrates sensor-based evidence for dynamic evaluation [64,65]. While these approaches often rely on probabilistic or Bayesian updating, the present study extends this direction by enabling continuous, distributional prediction through AI-driven temporal modeling and online learning. This shifts uncertainty management from explicit probabilistic estimation to data-driven adaptation, making it more suitable for complex and evolving supply environments.
Finally, this study advances the analytical understanding of intelligent supply chains by integrating statistical modeling, IoT data, and AI-driven learning within a unified digital twin framework. Rather than treating these components independently, the proposed approach demonstrates that their integration produces a more robust, adaptive, and context-aware predictive system within the tested simulation environment. Its theoretical value lies primarily in clarifying how these components interact in a closed-loop predictive architecture, rather than in claiming a wholly new theory of supply chain resilience.

5.3. Managerial Implications

From a managerial standpoint, the findings provide analytically grounded insights based on controlled, proof-of-concept simulations of complex adaptive supply chains exhibiting high variability and interdependent processes. The proposed AI-enhanced framework, embedded within a digital twin environment, enables the simulation-based quantification of predictive uncertainty through dynamic learning mechanisms rather than static estimation. This capability allows decision-makers to differentiate between stable and high-risk lead-time scenarios, supporting more informed planning in environments where variability is structurally embedded and continuously evolving.
Within the simulation setting, improvements in predictive accuracy, reflected in reduced RMSE values, can be interpreted as a proxy for reduced effective lead-time variability in conventional planning models. Unlike traditional approaches that rely on fixed assumptions, the AI-enhanced model produces adaptive predictive distributions and time-varying uncertainty estimates based on simulated IoT-linked inputs. These outputs can be conceptually translated into an equivalent lead-time standard deviation to illustrate how uncertainty-aware predictions may influence inventory and scheduling decisions. This mapping is intended to enhance interpretability rather than substitute established industrial practices.
Nevertheless, the practical implications should be interpreted conservatively. The study does not include a formal inventory model, a cost model, an emissions model, or an implementation analysis. Therefore, claims about inventory reduction, waste minimization, or emission savings cannot be treated as demonstrated results. At most, the simulation findings suggest that improved predictive stability could create favorable conditions for such benefits if validated within downstream operational models.
For example, in a high-variability supply context such as aerospace or industrial manufacturing, where lead times are long and subject to disruption, a reduction in predictive uncertainty from approximately four weeks to around one and a half weeks, as observed in the simulation results, may translate into a reduction in required safety buffers under equivalent service-level assumptions. However, the actual magnitude of such an effect would depend on industry-specific inventory policies, cost structures, demand profiles, and managerial risk preferences, none of which are explicitly modeled here. This effect therefore illustrates interpretive managerial relevance rather than validated operational savings.
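Purely as an illustration of this interpretive mapping, and without claiming the formal inventory model that the study explicitly excludes, the safety-buffer arithmetic can be sketched as follows. The weekly demand figure and the service level are hypothetical assumptions; only the 4.0-week and 1.5-week uncertainty values come from the simulation results above.

```python
import math  # kept for extensions; the core formula needs only arithmetic

def safety_stock(z: float, demand_per_week: float, sigma_lt_weeks: float) -> float:
    """Safety stock driven by lead-time uncertainty alone:
    SS = z * d * sigma_LT, with demand treated as deterministic.
    This is a textbook simplification, not the paper's model."""
    return z * demand_per_week * sigma_lt_weeks

z = 1.645   # ~95% cycle service level (illustrative choice)
d = 100.0   # hypothetical weekly demand, in units

ss_before = safety_stock(z, d, 4.0)    # ~4-week predictive uncertainty
ss_after = safety_stock(z, d, 1.5)     # ~1.5-week predictive uncertainty
reduction = 1 - ss_after / ss_before   # buffer reduction fraction
```

Under these assumptions the buffer shrinks in proportion to the uncertainty ratio (here 1.5/4.0, i.e., a 62.5% reduction); as the text notes, actual magnitudes would depend on industry-specific policies, cost structures, and risk preferences that are not modeled here.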
In addition to point estimates, the breadth and evolution of predictive intervals provide an additional layer of decision support. Narrow predictive intervals indicate high confidence and support stable production planning and order release decisions, whereas wider intervals signal elevated uncertainty and justify proactive mitigation strategies such as contingency sourcing or dynamic rescheduling. The integration of IoT-derived signals further enhances this capability by enabling continuous updates of predictive distributions as operational conditions change, thereby supporting real-time decision-making within a digital twin framework at the simulation level.
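The decision logic described here can be sketched minimally: a predictive interval is mapped to a planning posture by its width. The threshold value and action labels below are hypothetical; in practice they would be calibrated to service-level targets and mitigation costs.

```python
def interval_action(lower: float, upper: float, width_threshold: float = 2.0) -> str:
    """Map a predictive interval (in weeks) to a planning posture.
    Narrow intervals support firm order release; wide intervals
    trigger proactive mitigation. Threshold is illustrative."""
    width = upper - lower
    if width <= width_threshold:
        return "release-orders"
    return "activate-contingency"


# Narrow interval under stable conditions vs. wide interval under disruption.
stable = interval_action(2.2, 3.4)     # width 1.2 weeks -> high confidence
disrupted = interval_action(1.5, 6.5)  # width 5.0 weeks -> elevated risk
```

As IoT-derived signals update the predictive distribution, the same rule can be re-evaluated continuously, which is the real-time behavior the digital twin layer is intended to support.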
From an implementation perspective, however, several practical barriers must be recognized more explicitly. First, deployment would depend heavily on data availability and data quality. In many real supply chains, supplier event logs, machine-status records, congestion indicators, and delay histories are fragmented across multiple organizational units, recorded at inconsistent time intervals, or unavailable at the level of granularity required for adaptive prediction. As a result, substantial effort would be needed for data cleaning, missing-value treatment, timestamp alignment, and governance over shared data access before a framework of this kind could operate reliably.
Second, practical deployment would require integration across heterogeneous enterprise systems. The proposed framework assumes interaction among IoT-linked sensing, predictive models, and a digital twin environment; in operational settings, however, these functions are often distributed across ERP, MES, WMS, transportation management systems, and supplier-facing platforms. Connecting these components in a reliable manner may require middleware, standardized data interfaces, API-level coordination, and organizational agreement on data ownership and process responsibility. Accordingly, the feasibility of implementation would depend not only on predictive accuracy, but also on the digital maturity of the organization and its partners.
Third, implementation cost is likely to be a significant consideration. Although the simulation results suggest that the predictive framework can improve robustness, real deployment would involve investment in sensor infrastructure, data engineering pipelines, digital twin development, cloud or edge computing resources, model monitoring, maintenance, and staff training. In this respect, the baseline and IoT-adaptive layers may be more practical for organizations with limited digital infrastructure, whereas the full AI-enhanced framework may be more appropriate in settings where the expected decision value justifies greater technical complexity and ongoing operational cost.
Fourth, managerial trust in AI-assisted decisions remains a non-trivial challenge. Even if predictive performance is strong in simulation, managers may be reluctant to rely on adaptive recommendations when the logic of correction is not fully transparent, particularly under disruption conditions where decisions involve service, cost, and supplier-risk trade-offs. For this reason, practical adoption would likely require explainability mechanisms, human-in-the-loop validation, override rules, and phased deployment in which the system initially supports rather than replaces expert judgment. Trust, therefore, should be regarded as an implementation condition rather than an automatic consequence of predictive accuracy alone.
From a systems perspective, these capabilities extend beyond individual decision points and suggest possible contributions to broader resilience objectives. By improving visibility into supply chain dynamics and enabling adaptive responses to disruptions, the framework may support more informed coordination across planning functions. However, claims regarding sustainability-related outcomes remain prospective. Because the study does not quantify energy use, emissions, waste, or circularity indicators, the managerial relevance of the framework is currently strongest in the domain of uncertainty-aware planning rather than demonstrated sustainability performance.

6. Conclusions

This study proposes a resilient lead-time prediction framework for complex adaptive supply chains by integrating a hybrid quantile regression-EVT model, an IoT-enabled adaptive mechanism, and an AI-enhanced predictive layer based on TFT, RL, and online updating within a digital twin environment. The framework is designed to support predictive resilience and to provide a simulation-based basis for examining lead-time behavior under dynamic and uncertain operating conditions. It was evaluated under multiple variability scenarios and alternative data-generating processes to assess robustness and generalization.
The results reveal a clear performance hierarchy across the three modeling layers. The baseline statistical model provides interpretability and improved tail-risk representation but remains sensitive to increasing variability and distributional shifts. The IoT-adaptive model enhances predictive responsiveness through real-time operational adjustment and improves model adaptability under dynamic conditions. The AI-enhanced model achieves the highest level of stability and robustness by capturing temporal dependencies and continuously adapting to evolving system states, thereby reducing uncertainty-driven inefficiencies and improving predictive resilience within the simulated environment.
The findings provide direct and explicit answers to the research questions stated in the Introduction. Regarding RQ1, the results show that the hybrid quantile regression-EVT framework is effective as a baseline method for representing supplier lead-time uncertainty because it captures both routine variability and disruption-related tail behavior. This confirms that combining quantile regression with EVT offers a more informative uncertainty representation than conventional mean-based or fixed-distribution approaches. However, the results also show that, when used alone, this framework becomes increasingly sensitive as variability intensifies and as the underlying distribution shifts, indicating that static uncertainty modeling is not sufficient for highly dynamic supply chain environments.
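The hybrid idea behind RQ1 can be conveyed in a simplified form: empirical quantiles describe routine variability, while a peaks-over-threshold rule extrapolates tail quantiles for disruption-related extremes. The sketch below assumes an exponential exceedance tail (a generalized Pareto with shape ξ = 0) for tractability, which is simpler than the EVT fit used in the study, and the lognormal lead-time sample is synthetic.

```python
import math
import random
import statistics

def tail_quantile(samples, p, u_level=0.90):
    """Peaks-over-threshold estimate of an extreme quantile.
    Below the threshold u (the empirical u_level quantile), behavior
    is routine; above u, an exponential exceedance tail is assumed:
        q_p = u + beta * ln((1 - u_level) / (1 - p)),
    where beta is the mean exceedance above u. This is a simplified
    stand-in for the study's quantile regression-EVT hybrid."""
    xs = sorted(samples)
    u = xs[int(u_level * len(xs)) - 1]      # empirical threshold
    exceedances = [x - u for x in xs if x > u]
    beta = statistics.mean(exceedances)     # exponential tail scale
    return u + beta * math.log((1 - u_level) / (1 - p))


random.seed(42)
# Synthetic lead-time samples (weeks): right-skewed lognormal body.
data = [random.lognormvariate(1.0, 0.4) for _ in range(5000)]
q99 = tail_quantile(data, 0.99)  # extrapolated 99th-percentile lead time
```

The point of the hybrid is visible even in this toy version: the tail quantile is extrapolated from exceedance behavior rather than read off a fitted mean-based distribution, so disruption-related extremes are represented explicitly.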
Regarding RQ2, the results show that integrating IoT-linked operational signals with digital twin-enabled simulation improves adaptive lead-time prediction relative to the baseline model. The main benefit of this integration lies in its ability to incorporate real-time system awareness into the prediction process, enabling corrective updates when congestion, disruption, or machine-state changes occur. This leads to more responsive and context-aware prediction than the purely statistical model. At the same time, the results also indicate that IoT-driven correction alone is not enough to fully address the temporal complexity and nonlinear disruption patterns that emerge under extreme variability.
Regarding RQ3, the results show that the addition of AI-driven adaptive learning mechanisms, specifically TFT, RL, particle filtering, and online updating, provides the strongest overall predictive performance among the three modeling layers. The AI-enhanced framework consistently delivers the greatest gains in predictive accuracy, robustness, and stability because it combines temporal representation learning with adaptive correction and continuous updating. This allows the model to remain more effective than the baseline and IoT-adaptive alternatives when the system becomes more non-stationary, disruption-prone, and distributionally complex. In this sense, the results confirm that the strongest predictive resilience is achieved when uncertainty modeling, real-time adaptation, and AI-driven temporal learning are integrated within a unified digital twin environment.
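Of the adaptive mechanisms listed, particle filtering is the most compact to illustrate. The bootstrap filter below tracks a latent mean lead time under random-walk dynamics and Gaussian observation noise; the dynamics, noise levels, and observation stream are illustrative assumptions rather than the study's configuration.

```python
import math
import random

def particle_filter_step(particles, weights, obs, process_sd=0.1, obs_sd=0.5):
    """One bootstrap particle filter update for a latent mean lead time:
    propagate particles under random-walk dynamics, reweight by the
    Gaussian likelihood of the new observation, then resample."""
    # 1. Propagate: random-walk state transition.
    particles = [p + random.gauss(0.0, process_sd) for p in particles]
    # 2. Reweight by the observation likelihood and normalize.
    weights = [w * math.exp(-0.5 * ((obs - p) / obs_sd) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Multinomial resampling to avoid weight degeneracy.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights


random.seed(7)
n = 500
particles = [random.gauss(2.0, 0.5) for _ in range(n)]  # prior: ~2 weeks
weights = [1.0 / n] * n
# Observations drift from ~2 to ~3 weeks; the filtered mean should follow.
for obs in [2.0, 2.3, 2.6, 2.9, 3.0, 3.1]:
    particles, weights = particle_filter_step(particles, weights, obs)
estimate = sum(particles) / n
```

The filtered estimate tracks the drifting observations while retaining the prior's inertia, which is the continuous state-updating behavior that, combined with temporal representation learning, underpins the AI-enhanced layer's robustness.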
Taken together, these answers indicate that predictive performance improves progressively as the framework moves from static uncertainty modeling to IoT-enabled adaptive correction and then to fully AI-enhanced temporal learning. This progression is important because it shows that each modeling layer contributes distinct value, while the full integrated framework provides the most robust response to dynamic and uncertainty-intensive supply chain conditions.
The study contributes by reframing lead-time prediction as a dynamic, system-aware, and uncertainty-aware process rather than a static estimation task. It further shows that predictive performance is closely connected to resilience-related capabilities such as stability, adaptability, and robustness. While the framework may have relevance for sustainability-oriented decision contexts, the present study does not directly measure sustainability outcomes such as emissions, energy use, waste reduction, or resource efficiency. Accordingly, any sustainability implications should be interpreted as prospective rather than demonstrated results. From a managerial perspective, the framework highlights the value of integrating real-time data, digital twins, and adaptive learning to improve planning, coordination, and risk visibility in uncertain supply environments, although these implications should likewise be regarded as prospective rather than deployment-validated.
While the findings demonstrate the promise of the proposed framework, several limitations should be acknowledged. The evaluation is based on semi-synthetic data and simulated IoT-driven deviations, which support controlled experimentation but do not fully capture the complexity of real-world supply chain environments. In addition, although the AI-enhanced model showed strong predictive performance, further validation is needed using empirical datasets and real sensor streams under operational conditions. Another limitation is that the sensitivity analysis is focused rather than comprehensive, as it examines μ, σ, and δ(t) as the principal scenario-level drivers of predictive variation but does not extend to the full set of model, simulation, and learning parameters. A further limitation is that sustainability was not operationalized as a direct outcome in the present study. No explicit indicators of emissions, energy consumption, material waste, or resource efficiency were incorporated into the simulation design or evaluation framework. Future research should therefore examine the framework in real industrial settings, extend it to multi-tier supply chain networks, and evaluate its practical value using broader decision-oriented performance criteria. In particular, future work should explicitly incorporate sustainability-related metrics, such as energy use, carbon emissions, waste reduction, and resource utilization, in order to assess whether improvements in predictive resilience also translate into measurable sustainability benefits.

Funding

This research was funded by the Ongoing Research Funding Program (ORF-2026-233), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy restrictions.

Acknowledgments

The author would like to extend sincere appreciation to the Ongoing Research Funding Program (ORF-2026-233), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Moreno-Baca, F.; Cano-Olivos, P.; Sánchez-Partida, D.; Martínez-Flores, J.-L. The Bullwhip Effect and Ripple Effect with Respect to Supply Chain Resilience: Challenges and Opportunities. Logistics 2025, 9, 62. [Google Scholar] [CrossRef]
  2. Durugbo, C.M.; Al-Balushi, Z. Supply Chain Management in Times of Crisis: A Multi-Case Study. Prod. Plan. Control 2025, 36, 1529–1557. [Google Scholar] [CrossRef]
  3. Salas-Navarro, K.; Rojano-Flores, M.; Salcedo-Villanueva, V.; Cárdenas-Barrón, L.E. An Analytical Review of Vendor-Managed Inventory Models in Sustainable Supply Chains. Supply Chain Anal. 2026, 13, 100189. [Google Scholar] [CrossRef]
  4. Han, Y.; Chong, W.K.; Li, D. A Systematic Literature Review of the Capabilities and Performance Metrics of Supply Chain Resilience. Int. J. Prod. Res. 2020, 58, 4541–4566. [Google Scholar] [CrossRef]
  5. Zaman, J.; Shoomal, A.; Jahanbakht, M.; Ozay, D. Driving Supply Chain Transformation with IoT and AI Integration: A Dual Approach Using Bibliometric Analysis and Topic Modeling. IoT 2025, 6, 21. [Google Scholar] [CrossRef]
  6. Al-Ibrahim, H.B.; Aksoy, M.S. IoT in Supply Chain Management: An Overview. J. Adv. Manag. Sci. 2024, 2, 97–102. [Google Scholar] [CrossRef]
  7. Wong, E.K.S.; Ting, H.Y.; Atanda, A.F. Enhancing Supply Chain Traceability through Blockchain and IoT Integration: A Comprehensive Review. Green Intell. Syst. Appl. 2024, 4, 11–28. [Google Scholar] [CrossRef]
  8. Faisal, S. Artificial Intelligence in Supply Chains and Finance: Driving Efficiency and Risk Management. Sci. J. King Faisal Univ. Humanit. Manag. Sci. 2025, 26, 62–67. [Google Scholar] [CrossRef]
  9. Cil, A.E.; Buyuktanir, T.; Yildiz, K. Comprehensive Explainable AI Approach for Audit Opinion Classification Using Feed-Forward Neural Networks. Knowl. Based. Syst. 2026, 339, 115606. [Google Scholar] [CrossRef]
  10. Jayalath, M.M.; Ratnayake, R.M.C.; Perera, H.N.; Thibbotuwawa, A. An Analytical Approach to Risk Assessment in Agri-Food Supply Chains Using Fuzzy Inference Systems. Supply Chain Anal. 2026, 13, 100179. [Google Scholar] [CrossRef]
  11. Kong, L.; Koh, S.C.L.; Robinson, D.; Sena, V.; Wood, M. A Model for Assessing Maturity of Hydrogen Supply Chain Resilience: A Complexity-Inspired Approach. Int. J. Prod. Res. 2026, 1–21. [Google Scholar] [CrossRef]
  12. Vert, M.; Sharpanskykh, A. The Resilience of Complex Sociotechnical Systems: A Meta-Review of Conceptualisations. Systems 2026, 14, 71. [Google Scholar] [CrossRef]
  13. Bahmanova, A.; Lace, N. Modelling Cyber Resilience in SMEs as a Socio-Technical System: A Systemic Approach to Adaptive Digital Risk Management. Systems 2026, 14, 151. [Google Scholar] [CrossRef]
  14. Chen, X.; Li, J.; Wang, Z. Equilibrium Decisions for Complex Agricultural Supply Chains with Quality as a Strategic Variable. Eng. Optim. 2026, 58, 619–652. [Google Scholar] [CrossRef]
  15. Vadaga, A.K.; Dokuburra, U.R.; Nekkanti, H.; Gudla, S.S.; Kumari, R.K. Digital Transformation in Pharmaceuticals: The Impact of AI on Supply Chain Management. Intell. Hosp. 2025, 1, 100008. [Google Scholar] [CrossRef]
  16. Qu, Z. Discrete-Event Simulation Modelling of Inventory Turnover under Supply Chain Financial Collaboration. Int. J. Simul. Process Model. 2026, 23, 11–23. [Google Scholar] [CrossRef]
  17. Alaoua, A.; Karim, M. Intelligent Early Warning System for Supplier Delays Using Dynamic IoT-Calibrated Probabilistic Modeling in Smart Engineer-to-Order Supply Chains. Appl. Syst. Innov. 2025, 8, 124. [Google Scholar] [CrossRef]
  18. Alaoua, A.; Karim, M. A Novel Approach Based on IoT and Log-Normal Distribution for Supplier Lead Time Optimization in Smart Engineer-to-Order Supply Chains. Logistics 2025, 9, 82. [Google Scholar] [CrossRef]
  19. Mutambik, I. Enhancing IoT Security Using GA-HDLAD: A Hybrid Deep Learning Approach for Anomaly Detection. Appl. Sci. 2024, 14, 9848. [Google Scholar] [CrossRef]
  20. Mutambik, I. The Role of Strategic Partnerships and Digital Transformation in Enhancing Supply Chain Agility and Performance. Systems 2024, 12, 456. [Google Scholar] [CrossRef]
  21. Surana, A.; Kumara, S.; Greaves, M.; Raghavan, U.N. Supply-Chain Networks: A Complex Adaptive Systems Perspective. Int. J. Prod. Res. 2005, 43, 4235–4265. [Google Scholar] [CrossRef]
  22. Zhao, K.; Zuo, Z.; Blackhurst, J.V. Modelling Supply Chain Adaptation for Disruptions: An Empirically Grounded Complex Adaptive Systems Approach. J. Oper. Manag. 2019, 65, 190–212. [Google Scholar] [CrossRef]
  23. Wyrembek, M.; Baryannis, G.; Brintrup, A. Causal Machine Learning for Supply Chain Risk Prediction and Intervention Planning. Int. J. Prod. Res. 2025, 63, 5629–5648. [Google Scholar] [CrossRef]
  24. Boone, T.; Fahimnia, B.; Ganeshan, R.; Herold, D.M.; Sanders, N.R. Generative AI: Opportunities, Challenges, and Research Directions for Supply Chain Resilience. Transp. Res. E Logist. Transp. Rev. 2025, 199, 104135. [Google Scholar] [CrossRef]
  25. Jackson, I.; Ivanov, D.; Dolgui, A.; Namdar, J. Generative Artificial Intelligence in Supply Chain and Operations Management: A Capability-Based Framework for Analysis and Implementation. Int. J. Prod. Res. 2024, 62, 6120–6145. [Google Scholar] [CrossRef]
  26. Soufi, Z.; Mestiri, S.; David, P.; Yahouni, Z.; Fottner, J. A Material Handling System Modeling Framework: A Data-Driven Approach for the Generation of Discrete-Event Simulation Models. Flex. Serv. Manuf. J. 2025, 37, 67–96. [Google Scholar] [CrossRef]
  27. Kusdiana, A.; Gunawan, F.E.; Zuraida, R.; Lukito, D. Enhancing Inventory Simulation Models for Retail. Teh. Glas. 2025, 19, 285–292. [Google Scholar] [CrossRef]
  28. Zaayman, G.; Innamorato, A. The Application of Simio Scheduling in Industry 4.0. In Proceedings of the 2017 Winter Simulation Conference (WSC), Las Vegas, NV, USA, 3–6 December 2017; IEEE: New York, NY, USA, 2017; pp. 4425–4434. [Google Scholar]
  29. Kessy, S.S.A.; Salema, G.L.; Simwita, Y. Lean Thinking in Medical Commodities Supply Chains: Applicability and Success Factors for Tanzanian Health Supply Chains. J. Humanit. Logist. Supply Chain Manag. 2024, 14, 105–117. [Google Scholar] [CrossRef]
  30. Strandhagen, J.W.; Buer, S.-V.; Semini, M.; Alfnes, E.; Strandhagen, J.O. Sustainability Challenges and How Industry 4.0 Technologies Can Address Them: A Case Study of a Shipbuilding Supply Chain. Prod. Plan. Control 2022, 33, 995–1010. [Google Scholar] [CrossRef]
  31. Karagoz, S. Resolving Conflicting Goals in Manufacturing Supply Chains: A Deterministic Multi-Objective Approach. Systems 2026, 14, 126. [Google Scholar] [CrossRef]
  32. Canciglieri, A.B.; Seelent, J.F.C.; Kai, D.A.; Franco, C.W.; Benitez, G.B. Adaptive Leadership in Artificial Intelligence Solutions in Operations and Supply Chain Management Projects: The Robert Bosch GmbH Case. Int. J. Phys. Distrib. Logist. Manag. 2026, 1–22. [Google Scholar] [CrossRef]
  33. Mbago, M.; Mkansi, M.; Ntayi, J.M.; Namagembe, S.; Tukamuhabwa, B.; Mwelu, N. A Complex Adaptive Systems Approach to Reverse Logistics: A Holistic Implementation Framework from Developing Country Context. Environ. Qual. Manag. 2026, 35, e70285. [Google Scholar] [CrossRef]
  34. Mutambik, I. An Entropy-Based Clustering Algorithm for Real-Time High-Dimensional IoT Data Streams. Sensors 2024, 24, 7412. [Google Scholar] [CrossRef]
  35. Han, Y.; Fang, X. Systematic Review of Adopting Blockchain in Supply Chain Management: Bibliometric Analysis and Theme Discussion. Int. J. Prod. Res. 2024, 62, 991–1016. [Google Scholar] [CrossRef]
  36. Jum’a, L.; Ikram, M.; Jose Chiappetta Jabbour, C. Towards Circular Economy: A IoT Enabled Framework for Circular Supply Chain Integration. Comput. Ind. Eng. 2024, 192, 110194. [Google Scholar] [CrossRef]
  37. Mutambik, I. An Efficient Flow-Based Anomaly Detection System for Enhanced Security in IoT Networks. Sensors 2024, 24, 7408. [Google Scholar] [CrossRef]
  38. Krishnan, R.; Phan, P.Y.; Krishnan, S.N.; Agarwal, R.; Sohal, A. Industry 4.0-driven Business Model Innovation for Supply Chain Sustainability: An Exploratory Case Study. Bus. Strategy Environ. 2025, 34, 276–295. [Google Scholar] [CrossRef]
  39. Ziari, M.; Taleizadeh, A.A. Integrated Data-Driven and Artificial Intelligence Framework to Develop Digital Twins in Distribution System of Supply Chains: A Real Industrial Case. Int. J. Prod. Econ. 2025, 289, 109743. [Google Scholar] [CrossRef]
  40. Al Kurdi, B.; Alzoubi, H.M.; Tan, C.L.; El Khatib, M.; Yanamandra, R.; Ozturk, I.; Shwedeh, F. Internet of Things-Driven Information Sharing: A Strategic Approach to Mitigating Supply Chain Risks. Int. Rev. Manag. Mark. 2025, 15, 325–332. [Google Scholar] [CrossRef]
  41. Nozari, H.; Szmelter-Jarosz, A.; Ghahremani-Nahr, J. Analysis of the Challenges of Artificial Intelligence of Things (AIoT) for the Smart Supply Chain (Case Study: FMCG Industries). Sensors 2022, 22, 2931. [Google Scholar] [CrossRef]
  42. Mutambik, I. AI-Driven Cybersecurity in IoT: Adaptive Malware Detection and Lightweight Encryption via TRIM-SEC Framework. Sensors 2025, 25, 7072. [Google Scholar] [CrossRef]
  43. Elvemo, L. Supply Chain Resilience in Military Operations: A Case Study Exploring Command and Control. Scand. J. Mil. Stud. 2025, 8, 178–199. [Google Scholar] [CrossRef]
  44. Padmaja, A.R.L.; Sree Radha Manga Mani, M.; Thangam, A.; Praveen, R.; Tikhe, K.; Sahana Sharma, M. A Hybrid GNN–Knowledge Graph Framework for Sustainable and Adaptive Supply Chain Optimization. In Proceedings of the 2025 IEEE 4th International Conference for Advancement in Technology (ICONAT), Goa, India, 19–21 September 2025; IEEE: New York, NY, USA, 2025; pp. 1–6. [Google Scholar]
  45. Mutambik, I. Foresight for Sustainable Last-Mile Delivery: A Delphi-Based Scenario Study for Smart Cities in 2030. Sustainability 2025, 17, 6660. [Google Scholar] [CrossRef]
  46. Katragadda, S.R. AI-Driven Resilient Supply Chain Architectures: Machine Learning Frameworks for Risk Anticipation, Disruption Mitigation, and Adaptive Decision-Making. J. Bus. Manag. Stud. 2026, 8, 38–50. [Google Scholar] [CrossRef]
  47. Fernández-Miguel, A.; Ortíz-Marcos, S.; Jiménez-Calzado, M.; Fernández del Hoyo, A.P.; García-Muiña, F.; Settembre-Blundo, D. A Data-Driven and Cognitive Analytics Framework for Sustainable Supply Chain Transformation in Industry 6.0. Supply Chain Anal. 2026, 13, 100197. [Google Scholar] [CrossRef]
  48. Ivanov, D. Agentic Digital Twins: Bridging Model-Based and AI-Driven Decision-Making Support for a New Era of Supply Chain and Operations Management. Int. J. Prod. Res. 2026, 1–17. [Google Scholar] [CrossRef]
  49. Ivanov, D. A Survey of System-Cybernetic Principles in Supply Chain Resilience: Viability, Artificial Intelligence, Digital Twins, and Ecosystems. Int. J. Syst. Sci. 2026, 57, 1012–1027. [Google Scholar] [CrossRef]
  50. Rachana Harish, A.; Liu, X.; Wang, X.; Pan, S.; Dai, H.-N.; Li, M.; Huang, G.Q. Blockchain for Logistics 4.0: A Systematic Review and Prospects. Transp. Res. E Logist. Transp. Rev. 2025, 201, 104269. [Google Scholar] [CrossRef]
  51. Dritsas, E.; Trigka, M. Machine Learning in E-Commerce: Trends, Applications, and Future Challenges. IEEE Access 2025, 13, 99048–99067. [Google Scholar] [CrossRef]
  52. Puy, A.; Piano, S.L.; Saltelli, A.; Levin, S.A. Sensobol: An R Package to Compute Variance-Based Sensitivity Indices. J. Stat. Softw. 2022, 102, 1–37. [Google Scholar] [CrossRef]
  53. Jawad Mirza, M.; Hashmi, S.; Thomas Mwakudisa, M.; Safariyan, A.; Naghmi Habibullah, S.; Noor-ul-Amin, M. Performance Evaluation of Extended EWMA Control Chart in the Presence of Measurement Error. Commun. Stat. Simul. Comput. 2026, 55, 488–503. [Google Scholar] [CrossRef]
  54. Imtiaz, A.; Khan, N.; Saleem, M.; Aslam, M. Development and Application of a Modified EWMA Control Chart for Early Detection of Process Shifts in Skewed Distributions. J. Chin. Inst. Eng. 2025, 48, 680–690. [Google Scholar] [CrossRef]
  55. Aguirre, F.; Sebastian, A.; Le Gallo, M.; Song, W.; Wang, T.; Yang, J.J.; Lu, W.; Chang, M.-F.; Ielmini, D.; Yang, Y.; et al. Hardware Implementation of Memristor-Based Artificial Neural Networks. Nat. Commun. 2024, 15, 1974. [Google Scholar] [CrossRef]
  56. Al-Selwi, S.M.; Hassan, M.F.; Abdulkadir, S.J.; Muneer, A.; Sumiea, E.H.; Alqushaibi, A.; Ragab, M.G. RNN-LSTM: From Applications to Modeling Techniques and beyond—Systematic Review. J. King Saud Univ. Comput. Inf. Sci. 2024, 36, 102068. [Google Scholar] [CrossRef]
  57. Belhadi, A.; Kamble, S.; Subramanian, N.; Singh, R.K.; Venkatesh, M. Digital Capabilities to Manage Agri-Food Supply Chain Uncertainties and Build Supply Chain Resilience during Compounding Geopolitical Disruptions. Int. J. Oper. Prod. Manag. 2024, 44, 1914–1950. [Google Scholar] [CrossRef]
  58. Hosseini Shekarabi, S.A.; Kiani Mavi, R.; Romero Macau, F. Supply Chain Resilience: A Critical Review of Risk Mitigation, Robust Optimisation, and Technological Solutions and Future Research Directions. Glob. J. Flex. Syst. Manag. 2025, 26, 681–735. [Google Scholar] [CrossRef]
  59. Mutambik, I. Assessing Critical Success Factors for Supply Chain 4.0 Implementation Using a Hybrid MCDM Framework. Systems 2025, 13, 489. [Google Scholar] [CrossRef]
  60. Naeem, T.; Pirani, M.; Spalazzi, L. Evidence-Based Oracles Using Bayesian Network. In Proceedings of the 2025 21st International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT), Lucca, Italy, 9–11 June 2025; IEEE: New York, NY, USA, 2025; pp. 1–6. [Google Scholar]
  61. Jovanović, I.; Karatuğ, Ç.; Perčić, M.; Vladimir, N. Combined Fault Tree Analysis and Bayesian Network for Reliability Assessment of Marine Internal Combustion Engine. J. Mar. Sci. Appl. 2026, 25, 239–258. [Google Scholar] [CrossRef]
  62. Younang, V.C.W.; Sen, A. Security Risk Assessment Using Bayesian Attack Graphs and Complex Probabilities for Large Scale IoT Applications. IEEE Trans. Dependable Secur. Comput. 2025, 22, 7360–7371. [Google Scholar] [CrossRef]
  63. Liang, Y.; Xia, Y.; Wang, Y.; Bai, G.; Wu, L.; Wu, M.; Ding, X. Robustness Analysis of Cyber–Physical Supply Chain Systems under Hybrid Cascading Failures. Chaos Solitons Fractals 2025, 199, 116876. [Google Scholar] [CrossRef]
  64. Alzahrani, A.; Asghar, M.Z. Intelligent Risk Prediction System in IoT-Based Supply Chain Management in Logistics Sector. Electronics 2023, 12, 2760. [Google Scholar] [CrossRef]
  65. Flores, M.; Heredia, D.; Andrade, R.; Ibrahim, M. Smart Home IoT Network Risk Assessment Using Bayesian Networks. Entropy 2022, 24, 668. [Google Scholar] [CrossRef]
Figure 1. Digital twin-enabled adaptive lead-time prediction framework with synchronized data flow, predictive modeling, adaptive control, DES-based simulation, and closed-loop feedback.
Figure 2. Heatmap representation of prediction errors (MAE and RMSE) for the baseline Quantile–EVT model across varying μ and σ levels.
Figure 3. Heatmap visualization of predictive error (MAE and RMSE) for the IoT-adaptive model across varying μ and σ levels.
Figure 4. Heatmap visualization of prediction errors (MAE and RMSE) for the AI-driven TFT–RL model across varying μ and σ levels.
Figure 5. Comparative heatmap analysis of prediction errors (MAE and RMSE) across baseline, IoT-adaptive, and AI-driven models under varying μ and σ.
Figure 6. Heatmap representation of prediction errors (MAE and RMSE) for the baseline Quantile–EVT model under Scenario B across varying μ and σ levels.
Figure 7. Heatmap representation of prediction errors (MAE and RMSE) for the IoT-adaptive model under Scenario B across varying μ and σ levels.
Figure 8. Heatmap representation of prediction errors (MAE and RMSE) for the AI-enhanced model under Scenario B across varying μ and σ levels.
Figure 9. Comparative heatmap analysis of prediction errors (MAE and RMSE) across baseline, IoT-adaptive, and AI-enhanced models under Scenario B.
Figure 10. Cross-scenario comparison of predictive performance (MAE) across baseline, IoT-adaptive, and AI-enhanced models under Scenarios A and B.
Table 1. Performance results of the baseline Quantile–EVT model under Scenario A (in weeks, reported with 95% confidence intervals), based on MAE and RMSE metrics.
σ | MAE (μ=1.5) | MAE (μ=2.0) | MAE (μ=2.5) | MAE (μ=3.0) | RMSE (μ=1.5) | RMSE (μ=2.0) | RMSE (μ=2.5) | RMSE (μ=3.0)
0.2 | 0.68 ± 0.00 | 1.12 ± 0.01 | 1.85 ± 0.01 | 3.05 ± 0.02 | 0.85 ± 0.00 | 1.42 ± 0.01 | 2.31 ± 0.01 | 3.88 ± 0.02
0.5 | 1.82 ± 0.01 | 3.05 ± 0.02 | 4.96 ± 0.02 | 8.15 ± 0.04 | 2.45 ± 0.02 | 4.12 ± 0.04 | 6.72 ± 0.03 | 10.95 ± 0.06
0.9 | 4.05 ± 0.05 | 6.80 ± 0.07 | 11.20 ± 0.08 | 18.10 ± 0.18 | 6.25 ± 0.12 | 10.50 ± 0.18 | 17.20 ± 0.21 | 28.10 ± 0.75
1.2 | 7.10 ± 0.10 | 11.80 ± 0.18 | 19.20 ± 0.21 | 31.50 ± 0.35 | 13.50 ± 0.90 | 22.10 ± 1.80 | 34.80 ± 2.60 | 58.40 ± 1.00
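As a point of reference for how entries like those in Table 1 can be produced, the sketch below computes MAE and RMSE with normal-approximation 95% confidence half-widths over replicated simulation runs. The replication count, error distribution, and seed are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mae_rmse_with_ci(errors_per_run, z=1.96):
    """Per-replication MAE and RMSE, summarized as mean with a
    normal-approximation 95% confidence half-width across runs."""
    e = np.asarray(errors_per_run)                # shape: (n_runs, n_samples)
    mae_runs = np.mean(np.abs(e), axis=1)         # one MAE per replication
    rmse_runs = np.sqrt(np.mean(e ** 2, axis=1))  # one RMSE per replication
    n = e.shape[0]
    mae_ci = z * mae_runs.std(ddof=1) / np.sqrt(n)
    rmse_ci = z * rmse_runs.std(ddof=1) / np.sqrt(n)
    return (mae_runs.mean(), mae_ci), (rmse_runs.mean(), rmse_ci)

# Illustrative: 30 replications of 500 prediction errors (in weeks)
rng = np.random.default_rng(42)
errors = rng.normal(0.0, 0.85, size=(30, 500))
(mae, mae_ci), (rmse, rmse_ci) = mae_rmse_with_ci(errors)
print(f"MAE  = {mae:.2f} ± {mae_ci:.2f} weeks")
print(f"RMSE = {rmse:.2f} ± {rmse_ci:.2f} weeks")
```

Taking the confidence interval over replications, rather than over individual samples, matches the per-cell ± entries in the tables under the assumption that each cell aggregates independent simulation runs.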
Table 2. Predictive performance of the IoT-adaptive model under Scenario A (MAE and RMSE, weeks, 95% confidence intervals).
σ | MAE (μ=1.5) | MAE (μ=2.0) | MAE (μ=2.5) | MAE (μ=3.0) | RMSE (μ=1.5) | RMSE (μ=2.0) | RMSE (μ=2.5) | RMSE (μ=3.0)
0.2 | 0.26 ± 0.00 | 0.42 ± 0.00 | 0.70 ± 0.00 | 1.15 ± 0.00 | 0.32 ± 0.00 | 0.52 ± 0.00 | 0.85 ± 0.00 | 1.45 ± 0.01
0.5 | 0.30 ± 0.00 | 0.48 ± 0.00 | 0.78 ± 0.00 | 1.32 ± 0.01 | 0.40 ± 0.00 | 0.65 ± 0.01 | 1.05 ± 0.01 | 1.78 ± 0.01
0.9 | 0.38 ± 0.00 | 0.62 ± 0.01 | 1.02 ± 0.01 | 1.70 ± 0.01 | 0.68 ± 0.01 | 1.12 ± 0.02 | 1.90 ± 0.03 | 3.05 ± 0.06
1.2 | 0.52 ± 0.01 | 0.85 ± 0.02 | 1.40 ± 0.02 | 2.25 ± 0.03 | 1.20 ± 0.08 | 1.95 ± 0.10 | 3.20 ± 0.10 | 5.10 ± 0.18
Table 3. Predictive performance (MAE and RMSE) of the AI-driven TFT-based model under Scenario A (weeks, 95% confidence intervals).
σ | MAE (μ=1.5) | MAE (μ=2.0) | MAE (μ=2.5) | MAE (μ=3.0) | RMSE (μ=1.5) | RMSE (μ=2.0) | RMSE (μ=2.5) | RMSE (μ=3.0)
0.2 | 0.18 ± 0.05 | 0.28 ± 0.12 | 0.35 ± 0.12 | 0.95 ± 0.30 | 0.20 ± 0.05 | 0.31 ± 0.13 | 0.37 ± 0.13 | 0.98 ± 0.32
0.5 | 0.22 ± 0.06 | 0.32 ± 0.09 | 0.58 ± 0.15 | 0.95 ± 0.32 | 0.28 ± 0.08 | 0.40 ± 0.10 | 0.72 ± 0.20 | 1.15 ± 0.40
0.9 | 0.24 ± 0.08 | 0.36 ± 0.14 | 0.72 ± 0.22 | 1.65 ± 0.50 | 0.40 ± 0.14 | 0.60 ± 0.24 | 1.10 ± 0.35 | 2.50 ± 0.80
1.2 | 0.40 ± 0.10 | 0.60 ± 0.18 | 1.05 ± 0.28 | 1.65 ± 0.55 | 0.75 ± 0.20 | 1.20 ± 0.35 | 2.10 ± 0.60 | 3.30 ± 1.20
Table 4. Predictive performance of the baseline Quantile–EVT model under Scenario B (MAE and RMSE, weeks, 95% confidence intervals).
σ | MAE (μ=1.5) | MAE (μ=2.0) | MAE (μ=2.5) | MAE (μ=3.0) | RMSE (μ=1.5) | RMSE (μ=2.0) | RMSE (μ=2.5) | RMSE (μ=3.0)
0.2 | 0.68 ± 0.00 | 1.12 ± 0.01 | 1.85 ± 0.01 | 3.02 ± 0.02 | 0.85 ± 0.00 | 1.42 ± 0.01 | 2.31 ± 0.01 | 3.85 ± 0.02
0.5 | 1.95 ± 0.02 | 3.20 ± 0.02 | 5.20 ± 0.04 | 8.60 ± 0.10 | 2.60 ± 0.03 | 4.30 ± 0.03 | 7.10 ± 0.08 | 11.50 ± 0.16
0.9 | 4.40 ± 0.05 | 7.30 ± 0.09 | 12.00 ± 0.10 | 20.00 ± 0.20 | 7.00 ± 0.17 | 12.00 ± 0.37 | 19.50 ± 0.42 | 32.50 ± 0.73
1.2 | 7.90 ± 0.14 | 13.20 ± 0.20 | 21.50 ± 0.23 | 35.50 ± 0.43 | 15.50 ± 0.99 | 25.50 ± 0.99 | 40.00 ± 1.36 | 65.00 ± 1.95
Table 5. Predictive performance of the IoT-adaptive model under Scenario B (MAE and RMSE, weeks, with 95% confidence intervals).
σ | MAE (μ=1.5) | MAE (μ=2.0) | MAE (μ=2.5) | MAE (μ=3.0) | RMSE (μ=1.5) | RMSE (μ=2.0) | RMSE (μ=2.5) | RMSE (μ=3.0)
0.2 | 0.61 ± 0.00 | 1.00 ± 0.00 | 1.64 ± 0.01 | 2.71 ± 0.01 | 0.75 ± 0.00 | 1.24 ± 0.00 | 2.04 ± 0.01 | 3.36 ± 0.01
0.5 | 0.67 ± 0.01 | 1.11 ± 0.00 | 1.83 ± 0.01 | 3.01 ± 0.01 | 0.93 ± 0.01 | 1.53 ± 0.01 | 2.52 ± 0.02 | 4.16 ± 0.02
0.9 | 0.88 ± 0.01 | 1.47 ± 0.02 | 2.41 ± 0.03 | 3.99 ± 0.04 | 1.59 ± 0.03 | 2.70 ± 0.10 | 4.38 ± 0.07 | 7.29 ± 0.14
1.2 | 1.21 ± 0.02 | 2.02 ± 0.03 | 3.27 ± 0.04 | 5.46 ± 0.06 | 2.97 ± 0.11 | 5.14 ± 0.31 | 7.67 ± 0.28 | 13.45 ± 0.47
Table 6. Predictive performance of the AI-enhanced model under Scenario B (MAE and RMSE, weeks, with 95% confidence intervals).
σ | MAE (μ=1.5) | MAE (μ=2.0) | MAE (μ=2.5) | MAE (μ=3.0) | RMSE (μ=1.5) | RMSE (μ=2.0) | RMSE (μ=2.5) | RMSE (μ=3.0)
0.2 | 0.18 ± 0.05 | 0.25 ± 0.08 | 0.60 ± 0.15 | 0.75 ± 0.25 | 0.20 ± 0.05 | 0.28 ± 0.08 | 0.63 ± 0.16 | 0.80 ± 0.27
0.5 | 0.17 ± 0.06 | 0.30 ± 0.08 | 0.50 ± 0.18 | 0.60 ± 0.20 | 0.22 ± 0.08 | 0.36 ± 0.10 | 0.65 ± 0.25 | 0.70 ± 0.25
0.9 | 0.20 ± 0.06 | 0.32 ± 0.10 | 0.65 ± 0.20 | 1.10 ± 0.35 | 0.35 ± 0.12 | 0.55 ± 0.20 | 1.00 ± 0.40 | 1.80 ± 0.60
1.2 | 0.28 ± 0.06 | 0.60 ± 0.12 | 0.80 ± 0.25 | 1.50 ± 0.50 | 0.65 ± 0.15 | 1.30 ± 0.25 | 1.30 ± 0.60 | 3.00 ± 1.10
Table 7. Computational efficiency of the proposed models.
Model | Training Time (s) | Inference Time (s) | Peak Memory (MB) | Scalability/Deployment Note
Baseline (Quantile–EVT) | 4.8 | 0.055 | 2.40 | Lightweight; suitable for low-resource settings
IoT-Adaptive (RL-based) | 18.6 | 0.010 | 2.80 | Fast inference; suitable for latency-sensitive monitoring
AI-Enhanced (TFT-based with RL) | 286.4 | 0.120 | 32.50 | Highest cost; suitable for digitally mature environments
Table 8. Predictive performance (MAE and RMSE, weeks, 95% CI) under alternative DGPs.
DGP | Scenario | Baseline MAE | Baseline RMSE | IoT MAE | IoT RMSE | AI (TFT) MAE | AI (TFT) RMSE
Bimodal | A | 16.50 ± 6.20 | 28.00 ± 13.50 | 1.10 ± 0.35 | 2.50 ± 1.00 | 0.75 ± 0.20 | 1.30 ± 0.50
Bimodal | B | 16.80 ± 6.40 | 29.20 ± 14.00 | 2.40 ± 0.80 | 5.40 ± 2.20 | 0.80 ± 0.25 | 1.50 ± 0.60
Mixed | A | 8.80 ± 3.50 | 14.50 ± 6.50 | 0.90 ± 0.28 | 1.60 ± 0.60 | 0.60 ± 0.18 | 0.90 ± 0.30
Mixed | B | 8.90 ± 3.60 | 14.70 ± 6.60 | 1.90 ± 0.60 | 3.20 ± 1.20 | 0.62 ± 0.20 | 0.95 ± 0.35
Weibull | A | 8.10 ± 2.50 | 10.20 ± 3.20 | 0.92 ± 0.28 | 1.40 ± 0.45 | 0.55 ± 0.18 | 0.70 ± 0.25
Weibull | B | 8.10 ± 2.50 | 10.20 ± 3.20 | 1.90 ± 0.60 | 2.80 ± 0.90 | 0.60 ± 0.20 | 0.80 ± 0.28
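The alternative data-generating processes in Table 8 (bimodal, mixed, Weibull) can be emulated with standard NumPy samplers. The sketch below is one plausible parameterization: the mixture weights, scales, and disruption-spike mechanism are illustrative assumptions, not the paper's actual DGP settings.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000  # illustrative sample size

# Bimodal: mixture of two normal lead-time regimes (routine vs. disrupted)
routine = rng.random(N) < 0.7
bimodal = np.where(routine, rng.normal(10, 2, N), rng.normal(30, 5, N))

# Mixed: log-normal body with occasional heavy-tailed disruption spikes
spike = rng.random(N) < 0.05
mixed = np.where(spike, rng.pareto(2.5, N) * 20 + 20, rng.lognormal(2.2, 0.4, N))

# Weibull: right-skewed lead times (shape k = 1.5, scale 12 weeks)
weibull = rng.weibull(1.5, N) * 12

for name, x in [("bimodal", bimodal), ("mixed", mixed), ("weibull", weibull)]:
    print(f"{name:8s} mean={x.mean():6.2f}  std={x.std():6.2f} weeks")
```

Drawing both branch samples and selecting with `np.where` keeps the mixtures vectorized at the cost of discarding the unselected draws, which is a reasonable trade-off at this sample size.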
Table 9. Out-of-distribution performance of the AI-enhanced model under Scenarios A and B (MAE and RMSE in weeks, reported with 95% confidence intervals).
Test Distribution (DGP) | MAE | RMSE
Scenario A (Moderate Variability)
Log-normal | 0.50 ± 0.20 | 0.70 ± 0.30
Mixed | 0.42 ± 0.22 | 0.58 ± 0.40
Weibull | 0.43 ± 0.28 | 0.58 ± 0.35
Bimodal | 0.70 ± 0.50 | 1.20 ± 1.00
Scenario B (High Variability)
Log-normal | 0.60 ± 0.30 | 0.90 ± 0.60
Mixed | 0.65 ± 0.40 | 0.95 ± 0.65
Weibull | 0.55 ± 0.35 | 0.70 ± 0.50
Bimodal | 0.85 ± 0.70 | 1.60 ± 1.40
Table 10. Distribution summary for the IoT–AI calibration experiment (weeks).
Distribution | Parameters | Mean (Weeks) | Std (Weeks)
Baseline (Quantile–EVT) | Reference parameters | 18.20 | 20.10
IoT-adapted (RL empirical) | δ(t)-adjusted | 20.50 | 23.20
AI-enhanced (TFT-based) | Adaptive latent state | 20.10 | 21.90
Table 11. Performance comparison of predictive models (weeks).
Model | MAE | RMSE | Remarks
AI-enhanced (TFT-based with RL) | 0.52 | 0.90 | Highest accuracy; adaptive and uncertainty-aware
DeepAR | 0.79 | 1.46 | Strong probabilistic deep-learning benchmark; weaker adaptive correction
XGBoost | 0.96 | 1.88 | Strong nonlinear baseline; limited temporal state modeling
LSTM | 1.25 | 3.40 | Captures temporal patterns; lacks real-time adaptability
ARIMA | 2.84 | 5.72 | Standard forecasting baseline; limited under nonlinear regime shifts
Naïve (last-value) | 3.67 | 6.95 | Simple reference baseline; no adaptation
GPR | 11.90 | 19.80 | Sensitive to heavy-tailed and non-stationary data
Table 12. Statistical significance testing of pairwise benchmark differences.
Comparison | Metric | Test | p-Value | Holm-Adjusted p-Value | Effect Size | Interpretation
AI-enhanced vs. DeepAR | RMSE | Wilcoxon signed-rank | 0.012 | 0.024 | 0.62 | Significant, moderate-to-large effect
AI-enhanced vs. XGBoost | RMSE | Wilcoxon signed-rank | 0.008 | 0.020 | 0.68 | Significant, large effect
AI-enhanced vs. LSTM | RMSE | Wilcoxon signed-rank | 0.005 | 0.015 | 0.74 | Significant, large effect
AI-enhanced vs. ARIMA | RMSE | Wilcoxon signed-rank | 0.002 | 0.008 | 0.81 | Significant, large effect
AI-enhanced vs. Naïve | RMSE | Wilcoxon signed-rank | 0.001 | 0.006 | 0.86 | Significant, large effect
AI-enhanced vs. GPR | RMSE | Wilcoxon signed-rank | <0.001 | <0.001 | 0.90 | Significant, very large effect
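The significance procedure in Table 12 (pairwise Wilcoxon signed-rank tests on paired RMSE scores, followed by a Holm step-down adjustment) can be sketched as follows. The fold count and score distributions are synthetic placeholders; only the testing-and-adjustment logic mirrors the reported procedure.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
folds = 20  # illustrative number of paired evaluation folds

# Illustrative paired RMSE scores (weeks): AI model vs. three benchmarks
rmse_ai = rng.normal(0.90, 0.10, folds)
benchmarks = {
    "DeepAR": rmse_ai + rng.normal(0.56, 0.15, folds),
    "XGBoost": rmse_ai + rng.normal(0.98, 0.20, folds),
    "ARIMA": rmse_ai + rng.normal(4.80, 0.50, folds),
}

# One Wilcoxon signed-rank test per pairwise comparison
raw = {name: wilcoxon(rmse_ai, scores).pvalue for name, scores in benchmarks.items()}

# Holm step-down adjustment: sort p-values ascending, multiply the i-th
# smallest by (m - i), and enforce monotonicity with a running maximum
names = sorted(raw, key=raw.get)
m, running = len(names), 0.0
holm = {}
for i, name in enumerate(names):
    running = max(running, min(1.0, (m - i) * raw[name]))
    holm[name] = running

for name in benchmarks:
    print(f"AI vs {name:8s} p={raw[name]:.4g}  Holm-adjusted p={holm[name]:.4g}")
```

Because Holm controls the family-wise error rate across all pairwise comparisons, each adjusted p-value is at least as large as its raw counterpart, consistent with the pattern in Table 12.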
Table 13. Predictive resilience metrics.
Metric | Value | Interpretation
RMSE Variance | 0.85 | Stable performance across scenarios
Sensitivity Index | 2.10 | Controlled responsiveness to variability
Degradation Ratio | 2.80 | Strong robustness under stress conditions
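This excerpt does not spell out the formulas behind these resilience metrics, so the sketch below encodes one plausible reading: RMSE variance as spread across scenario cells, a sensitivity index as the mean relative RMSE change under stress, and a degradation ratio as mean stressed over mean nominal RMSE. All definitions and numbers are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def resilience_metrics(rmse_nominal, rmse_stressed):
    """Illustrative resilience summaries over matched per-cell RMSE grids."""
    nominal = np.asarray(rmse_nominal, dtype=float)
    stressed = np.asarray(rmse_stressed, dtype=float)
    # Stability: variance of RMSE pooled across all scenario cells
    variance = np.var(np.concatenate([nominal, stressed]))
    # Sensitivity: mean relative change in per-cell RMSE under stress
    sensitivity = np.mean(stressed / nominal)
    # Degradation: mean stressed RMSE relative to mean nominal RMSE
    degradation = stressed.mean() / nominal.mean()
    return variance, sensitivity, degradation

# Illustrative per-cell RMSE grids (weeks): nominal vs. stressed scenarios
rmse_a = np.array([0.20, 0.40, 0.98, 1.15, 2.50, 3.30])
rmse_b = np.array([0.20, 0.63, 0.80, 1.80, 3.00, 3.00])
var_, sens, degr = resilience_metrics(rmse_a, rmse_b)
print(f"RMSE variance={var_:.2f}, sensitivity index={sens:.2f}, "
      f"degradation ratio={degr:.2f}")
```

A ratio-based degradation measure keeps the metric unitless, so models with very different absolute error scales remain comparable under stress.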

Share and Cite

MDPI and ACS Style

Mutambik, I. Adaptive Lead-Time Prediction for Resilient and Sustainable Supply Chains. Sustainability 2026, 18, 4748. https://doi.org/10.3390/su18104748


