Search Results (606)

Search Parameters:
Keywords = discrete-event simulation

40 pages, 743 KB  
Article
Design-Space Mapping of Post-Quantum Cryptographic Artifact Transport on CAN-FD: A Discrete-Event Simulation Study
by Min-Woo Lee, Minjoo Sim, Siwoo Eum, Gyeongju Song and Hwajeong Seo
Appl. Sci. 2026, 16(8), 3705; https://doi.org/10.3390/app16083705 - 10 Apr 2026
Abstract
Post-quantum cryptography (PQC) artifacts are one to three orders of magnitude larger than their classical counterparts and must be segmented via ISO-TP across a shared CAN-FD bus while coexisting with periodic safety-critical traffic. No prior work has quantitatively mapped the transport-level feasibility of these artifacts under realistic multi-electronic control unit (ECU) contention. This paper presents a validated discrete-event simulator and evaluates 29 parameter sets from nine algorithm families—spanning the KpqC final portfolio, NIST FIPS 203–205 standards, and the draft FIPS 206—across 534 scenarios classified as feasible, borderline, or infeasible. Results show that key encapsulation mechanism (KEM) feasibility is scenario-dependent: domain scale and startup coordination dominate over algorithm choice, with 4-ECU staggered deployments feasible for all Level-1 candidates, while 16-ECU simultaneous startup is universally infeasible. For digital signatures, FN-DSA achieves the best transport feasibility due to its compact signature, while HQC is uniformly infeasible and SLH-DSA is nearly uniformly infeasible, quantifying the CAN-FD bandwidth premium of algorithmic diversity. System-side traffic shaping—staggered startup and reserved bus windows—outperforms algorithm substitution as a mitigation strategy. To the best of our knowledge, these findings constitute the first design-space map of PQC artifact transport on CAN-FD and provide actionable deployment guidelines for post-quantum transition. Full article
(This article belongs to the Special Issue Information Security: Threats and Attacks)
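The transport-cost question this abstract raises can be made concrete with a back-of-the-envelope frame count. The sketch below is not taken from the paper: the PCI overheads assume ISO-TP normal addressing, the First Frame escape sequence for payloads over 4095 bytes is ignored, and the ML-DSA-44 and Falcon-512 signature sizes are approximate.

```python
import math

def isotp_framecount(payload_bytes: int, frame_size: int = 64) -> int:
    """Estimate how many CAN-FD frames ISO-TP needs for one payload.
    Assumed overheads (normal addressing): 1-byte Single Frame PCI,
    2-byte First Frame PCI, 1-byte Consecutive Frame PCI; the First
    Frame escape sequence for payloads > 4095 bytes is ignored."""
    if payload_bytes <= frame_size - 1:        # fits in one Single Frame
        return 1
    ff_capacity = frame_size - 2               # First Frame data bytes
    cf_capacity = frame_size - 1               # Consecutive Frame data bytes
    remaining = payload_bytes - ff_capacity
    return 1 + math.ceil(remaining / cf_capacity)

# assumed signature sizes: ML-DSA-44 ~2420 B, Falcon-512 (FN-DSA) ~666 B
print(isotp_framecount(2420), isotp_framecount(666))   # 39 and 11 frames
```

Since frames serialize on a shared bus, the frame count translates almost linearly into bus occupancy, which is one reason compact signatures fare better in a feasibility map of this kind.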
28 pages, 4860 KB  
Article
Robust Voltage Stability Enhancement of DFIG Systems Using Deadbeat-Controlled STATCOM and ADRC-Based Supercapacitor Support
by Ahmed Muthanna Nori, Ali Kadhim Abdulabbas, Omar Alrumayh and Tawfiq M. Aljohani
Mathematics 2026, 14(8), 1254; https://doi.org/10.3390/math14081254 - 9 Apr 2026
Abstract
The increasing penetration of Doubly Fed Induction Generator (DFIG)-based wind energy systems raises major concerns regarding voltage stability and Fault Ride-Through (FRT) capability under grid disturbances and wind speed variations. This paper proposes a coordinated control framework for a grid-connected DFIG system, where a Static Synchronous Compensator (STATCOM) based on discrete-time deadbeat current control is integrated with a Supercapacitor Energy Storage System (SCES) connected to the DC link through a bidirectional DC-DC converter governed by cascaded Active Disturbance Rejection Control (ADRC). The deadbeat-controlled STATCOM provides fast reactive current injection for voltage support during sag and swell events, while the cascaded ADRC enhances DC-link voltage regulation and suppresses rotor-speed oscillations. Comprehensive MATLAB/Simulink simulations are carried out under variable wind speed and severe grid disturbances up to 80% voltage sag and 50% voltage swell. For voltage regulation, the proposed method is compared with SVC and PI-based STATCOM. In addition, SCES control performance is evaluated by comparing PI, single ADRC, and cascaded ADRC in terms of DC-link voltage overshoot, undershoot, and ripple. The results show clear improvements in voltage response and transient performance. Under a 20% voltage sag, the proposed deadbeat-controlled STATCOM significantly improves the dynamic response, where the undershoot is reduced from 0.125 p.u. (with SVC) to 0.04 p.u., and the settling time is shortened from 0.04 s to 0.025 s. Under a severe 80% sag, the overshoot is limited to 0.02 p.u., compared with 0.13 p.u. for the SVC and 0.15 p.u. for the PI-based STATCOM. Similarly, under a 50% voltage swell, the overshoot is reduced to 0.20 p.u., compared with 0.46 p.u. for the SVC and 0.27 p.u. for the PI-based STATCOM. 
Regarding the DC-link performance under 80% sag, the proposed cascaded ADRC-based SCES limits the overshoot and undershoot to 6 V and 2 V, respectively, compared with 39 V and 32 V for the PI-based SCES. These results confirm the superior damping, disturbance rejection, and FRT enhancement achieved by the proposed strategy. Full article
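The deadbeat principle behind the STATCOM current loop can be shown on a scalar model: the control input is solved so that the tracking error is driven to zero at the very next sample. This is a generic illustration under an ideal plant model; the R, L, and Ts values are invented and are not the paper's parameters.

```python
import math

# Illustrative scalar deadbeat current loop; R, L, Ts are made-up values,
# not parameters of the paper's STATCOM.
R, L, Ts = 0.1, 5e-3, 1e-4                 # ohm, henry, seconds
a = math.exp(-R / L * Ts)                  # exact ZOH discretization of the RL branch
b = (1.0 - a) / R

def deadbeat_step(i_k, i_ref):
    """Choose v_k so that i_{k+1} = a*i_k + b*v_k equals i_ref in one step
    (ideal model: no converter saturation, delay, or parameter mismatch)."""
    return (i_ref - a * i_k) / b

i, i_ref = 0.0, 10.0                       # start at 0 A, demand 10 A
for _ in range(3):
    v = deadbeat_step(i, i_ref)
    i = a * i + b * v                      # plant response: reaches i_ref next sample
print(i)
```

In practice, converter voltage limits and parameter mismatch keep real deadbeat controllers from this one-step ideal, which is why robustness comparisons like the one in the abstract matter.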
14 pages, 709 KB  
Article
Infrastructure-Driven Performance Effects in Airport Stand Allocation: A Simulation-Based Analysis of Configuration Impact on System Capacity at International Airports
by Edina Jenčová, Peter Hanák and Marek Hanzlík
Appl. Sci. 2026, 16(8), 3656; https://doi.org/10.3390/app16083656 - 8 Apr 2026
Abstract
Airport stand allocation research has traditionally focused on optimizing assignments within fixed infrastructure configurations, while strategic decisions regarding stand category composition remain underexplored. This study investigates how different proportional distributions of stand categories affect system-level performance under high traffic demand at international airports. A discrete-event simulation model implemented in MATLAB evaluates fifteen infrastructure configurations with varying distributions of small, medium, and large stands, classified according to the ICAO Annex 14. The model employed a first-come–first-served allocation logic to isolate infrastructure-driven effects from algorithmic decision-making. System throughput was measured through acceptance and rejection rates, disaggregated by aircraft stand category. Acceptance rates ranged from 33% to 92% across tested configurations, demonstrating pronounced sensitivity to stand composition. Balanced configurations consistently outperformed asymmetric alternatives. Insufficient stand availability in any single category led to concentrated rejection patterns and non-linear performance degradation; excess capacity in unconstrained categories could not compensate for shortfalls in constrained ones. Proportionality across stand categories is identified as a critical determinant of infrastructure robustness. The proposed simulation framework provides a computationally efficient tool for early-stage (pre-operational planning phase) infrastructure screening, supporting informed strategic capacity decisions prior to detailed operational optimization. Full article
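The infrastructure-isolation idea, first-come-first-served allocation with no algorithmic optimization, can be sketched as a small discrete-event loop. The upgrade rule below (a smaller aircraft may take a larger free stand) and all numbers are assumptions for illustration, not the paper's calibrated model.

```python
import heapq, random

def simulate_fcfs(stands, arrivals):
    """First-come-first-served stand allocation. Each arrival is
    (time, category, dwell_min) with category 0=small, 1=medium, 2=large;
    it takes a free stand of its own category or, failing that, a larger
    one, else it is rejected. Returns the acceptance rate."""
    free = dict(stands)                       # free stands per category
    releases = []                             # min-heap of (release_time, category)
    accepted = 0
    for t, cat, dwell in sorted(arrivals):
        while releases and releases[0][0] <= t:   # return finished stands
            _, c = heapq.heappop(releases)
            free[c] += 1
        for c in range(cat, 3):               # own category first, then larger
            if free.get(c, 0) > 0:
                free[c] -= 1
                heapq.heappush(releases, (t + dwell, c))
                accepted += 1
                break
    return accepted / len(arrivals)

random.seed(1)
day = [(random.uniform(0, 1440), random.randint(0, 2), 90) for _ in range(300)]
print(f"acceptance rate: {simulate_fcfs({0: 4, 1: 6, 2: 4}, day):.2f}")
```

Sweeping the `stands` dictionary over different category mixes reproduces, in miniature, the kind of configuration screening the paper performs.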
17 pages, 1841 KB  
Article
Dynamic Event-Triggered Output Feedback Control for Switched Systems via Switched Lyapunov Functions
by Xinyue Wang, Yanhui Tong and Yuyuan Li
Appl. Sci. 2026, 16(7), 3585; https://doi.org/10.3390/app16073585 - 7 Apr 2026
Abstract
This study investigates event-triggered output feedback control for discrete-time switched linear systems. A dynamic event-triggered mechanism (DETM) is utilized to reduce the triggering frequency. To ensure stability and control performance, it is assumed that an event is triggered whenever the system undergoes a switch. First, the closed-loop stability of the underlying switched system with DETM is analyzed via the switched Lyapunov function method, followed by the establishment of a stability criterion for the system under arbitrary switching. Based on this criterion, a dynamic event-triggered output feedback control strategy is devised. The viability and application potential of the proposed control strategy are validated through simulation trials using a morphing aircraft model. Furthermore, compared with its static counterpart (SETM), the proposed DETM reduces the number of trigger events and prolongs the inter-event intervals while retaining nearly identical control accuracy and energy consumption, thus providing an efficient solution for resource-constrained networked control systems. Full article
(This article belongs to the Collection Advances in Automation and Robotics)
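One common discrete-time form of such a dynamic trigger can be sketched on a scalar loop. The plant, gains, and the specific eta-update rule below are illustrative assumptions, not the paper's switched output-feedback design.

```python
def simulate_detm(a=1.1, b=1.0, k=-0.6, x0=5.0, steps=100,
                  sigma=0.05, lam=0.8, theta=5.0):
    """Scalar sketch of one common dynamic event-triggered loop:
    x_{k+1} = a*x + b*k*xhat, where xhat is the last transmitted state.
    Transmit when e^2 > sigma*x^2 + eta/theta; the internal variable
    eta_{k+1} = lam*eta + sigma*x^2 - e^2 relaxes the static rule
    e^2 > sigma*x^2. All gains here are illustrative assumptions.
    Returns (|x| at the final step, number of transmissions)."""
    x, xhat, eta, triggers = x0, x0, 1.0, 0
    for _ in range(steps):
        e = xhat - x                           # measurement error since last send
        if e * e > sigma * x * x + eta / theta:
            xhat, e, triggers = x, 0.0, triggers + 1   # transmit, reset error
        eta = max(lam * eta + sigma * x * x - e * e, 0.0)
        x = a * x + b * k * xhat               # plant evolves with held input
    return abs(x), triggers

final, n = simulate_detm()
print(f"|x_final| = {final:.2e}, transmissions = {n} of 100 steps")
```

Setting `theta` very large recovers a static trigger, so the same loop can be used to reproduce the DETM-versus-SETM trigger-count comparison the abstract reports.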
22 pages, 1479 KB  
Article
Gate Management in Free Port Context: A Case Study of the Port of Trieste
by Valentina Boschian, Caterina Caramuta, Alessia Grosso and Giovanni Longo
Sustainability 2026, 18(7), 3433; https://doi.org/10.3390/su18073433 - 1 Apr 2026
Abstract
Ports play a central role in global trade and act as key hubs for both maritime and land transport. Free ports, characterized by special customs regimes and fiscal advantages, represent a distinctive segment of this landscape. Despite their relevance, the literature on port gate management and on free ports has developed disconnected research streams, leaving the operational implications of special customs regimes largely unexplored. This study addresses this gap by investigating how gate procedures in free ports can be managed more efficiently, using the Port of Trieste as a case study. The analysis combines Business Process Model and Notation (BPMN) with discrete event simulation: BPMN served as the logical foundation for capturing the procedural complexity of free port gate operations, while simulation provided the quantitative framework for scenario evaluation. The model was calibrated on real gate access data and validated against observed vehicle volumes. Nine scenarios were evaluated, covering managerial, technological, infrastructural, and disruption-related interventions. The results show that no single measure produces significant improvements across all performance indicators and the integrated approaches consistently outperform standalone measures. Infrastructure interventions, while more costly, prove particularly valuable in improving port resilience under severe disruption conditions. Full article
(This article belongs to the Section Sustainable Transportation)
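The kind of gate model the abstract describes can be reduced to a minimal queueing sketch; the arrival rate, processing time, and lane counts below are invented, not the calibrated Trieste data.

```python
import random

def gate_waits(arrival_rate, service_min, lanes, n_trucks, seed=7):
    """Mean waiting time (minutes) at a multi-lane gate with Poisson
    arrivals and deterministic processing, served first-come-first-served
    on the earliest-free lane (a Lindley-style recursion, no event list)."""
    rng = random.Random(seed)
    free_at = [0.0] * lanes                  # time each lane next becomes free
    t, total_wait = 0.0, 0.0
    for _ in range(n_trucks):
        t += rng.expovariate(arrival_rate)   # next truck arrival
        lane = min(range(lanes), key=free_at.__getitem__)
        start = max(t, free_at[lane])
        total_wait += start - t
        free_at[lane] = start + service_min
    return total_wait / n_trucks

# made-up load: 1 truck/min, 1.5 min of document checks per truck
print(f"2 lanes: {gate_waits(1.0, 1.5, 2, 5000):.2f} min, "
      f"3 lanes: {gate_waits(1.0, 1.5, 3, 5000):.2f} min")
```

Scenario evaluation in the paper's sense then amounts to varying the service time (technology), the lane count (infrastructure), or the arrival stream (disruption) and comparing the indicators.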
15 pages, 1847 KB  
Article
Exploring Artificial Intelligence as a Tool for Logistics Process Simulation
by Martin Straka and Marek Ondov
Appl. Sci. 2026, 16(7), 3301; https://doi.org/10.3390/app16073301 - 29 Mar 2026
Abstract
The growing integration of generative artificial intelligence in logistics demands efficient simulation modeling. This study evaluates generative large language models, Perplexity and ChatGPT, for discrete-event simulation in ExtendSim. It focuses on modeling a real, complex manufacturing system, yielding 9721 tons of output. The following three scenarios were assessed: autonomous model creation, output estimation from process descriptions and parameters, and copilot-guided manual building. LLMs cannot autonomously construct ExtendSim models due to the lack of APIs. Output estimation only matched benchmarks after iterative prompt refinement, achieving errors of 0.1% for Perplexity and 1.2% to 22.8% for ChatGPT. Estimation without substantial human intervention proved infeasible. Only the copilot approach appeared viable despite initial errors. It enabled a validated model with 9718 tons output after resolving 25 errors for Perplexity and 22 for ChatGPT through iterative refinement. Approximately 28% (Perplexity) or 32% (ChatGPT) of the errors were hallucinations. The copilot approach reduced development time from several days to 8–10 h. Human expertise remained essential for verifying model outputs and addressing hallucinations and logical flaws. Consequently, this approach may be less feasible for inexperienced users. The copilot paradigm offers practical acceleration for experienced users; however, its limitations underscore the need for API integration and retrieval-augmented generation enhancements. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
17 pages, 4309 KB  
Article
A Deep Reinforcement Learning Approach for Joint Resource Allocation in Time-Varying Underwater Acoustic Cooperative Networks
by Liangliang Zeng, Tongxing Zheng, Yifan Wu, Yimeng Ge and Jiahao Gao
J. Mar. Sci. Eng. 2026, 14(7), 616; https://doi.org/10.3390/jmse14070616 - 27 Mar 2026
Abstract
Underwater acoustic sensor networks (UASNs) have emerged as a pivotal technology for ocean exploration, tactical surveillance, and environmental monitoring. However, the underwater acoustic channel poses severe challenges, including high propagation delay, limited bandwidth, and rapid time-varying multipath fading, which significantly degrade communication reliability. Cooperative communication, which exploits spatial diversity via relay nodes, offers a promising solution to these impairments. In this paper, we investigate the joint optimization of relay selection and power allocation in UASNs to maximize the long-term system energy efficiency and throughput. This problem is inherently complex due to the hybrid action space, which couples the discrete selection of relay nodes with the continuous allocation of transmission power, and the absence of real-time, perfect channel state information (CSI). To address these challenges, we propose a novel deep hybrid reinforcement learning (DHRL) framework utilizing a parameterized deep Q-Network (P-DQN) architecture. Unlike traditional approaches that discretize power levels or relax discrete constraints, our approach seamlessly integrates a deterministic policy network for continuous power control and a value-based network for discrete relay evaluation. Furthermore, we incorporate a prioritized experience replay (PER) mechanism to improve sample efficiency by focusing on rare but significant channel transition events. We provide a comprehensive theoretical analysis of the algorithm’s complexity and convergence properties. Extensive simulation results demonstrate that the proposed DHRL algorithm outperforms state-of-the-art combinatorial bandit algorithms and conventional deep reinforcement learning baselines in terms of system energy efficiency, and also exhibits superior robustness against channel estimation errors. Full article
(This article belongs to the Section Coastal Engineering)
32 pages, 1111 KB  
Review
Lean Management, Discrete Event Simulation, and Virtual Reality in Hemodialysis Units: A Scoping Literature Review and Evidence Gap Analysis
by Joseph Jabbour, Jalal Possik, Adriano O. Solis, Charles Yaacoub, Sina Namaki Araghi and Gregory Zacharewicz
Modelling 2026, 7(2), 63; https://doi.org/10.3390/modelling7020063 - 25 Mar 2026
Abstract
The rising global incidence of kidney failure is increasing pressure on hemodialysis unit operations, with operational vulnerabilities further exposed by the COVID-19 pandemic. This scoping review mapped evidence on Lean management, discrete event simulation (DES), and virtual reality (VR) in hemodialysis units; compared reported outcome domains and performance indicators; identified barriers to Lean implementation; and assessed the empirical basis for a combined Lean–DES–VR framework. English-language peer-reviewed articles, conference papers, and book chapters addressing Lean, DES, VR, or their combination in dialysis settings were searched in Scopus, PubMed, SpringerLink, IEEE Xplore, ACM Digital Library, and Google Scholar to 30 June 2024; grey literature and opinion pieces were excluded. Structured data extraction and thematic narrative synthesis were applied. Twenty-seven studies were included (Lean n = 4, DES n = 9, VR n = 13, DES + VR n = 1). DES studies mainly reported operational outcomes, whereas VR studies focused predominantly on patient-centered rehabilitation and experience. Most studies examined methods in isolation, and integrated Lean–DES–VR applications were almost entirely absent. The literature suggests complementarity among these approaches but provides no robust empirical basis for a fully integrated framework. No protocol was prospectively registered. Full article
28 pages, 4270 KB  
Article
Fréchet Distance-Based Vehicle Selection and Satisfaction-Aware Vehicle Allocation for Demand-Responsive Shared Mobility: A Discrete Event Simulation Study
by Hun Kim, Ji-Hyeon Woo, Yeong-Hyun Lim and Kyung-Min Seo
Mathematics 2026, 14(7), 1099; https://doi.org/10.3390/math14071099 - 24 Mar 2026
Abstract
Demand-responsive transit (DRT) requires real-time vehicle assignment under dynamically arriving requests, where each decision may alter multi-stop routes and affect both onboard and newly arriving passengers. However, DRT simulations often face three key limitations: rapidly increasing computational complexity as fleet size and demand grow, insufficient integration of traffic congestion into routing decisions, and limited consideration of passenger-oriented service quality in final vehicle assignment. To address these issues, this study proposes an integrated DRT simulation incorporating three core algorithms: Fréchet Distance-based Candidate Vehicle Selection (FD-CVS), Congestion-Aware Path Planning (CA-PP), and Satisfaction-Aware Vehicle Assignment (SA-VA). FD-CVS reduces computational burden by filtering candidate vehicles based on route similarity. CA-PP extends conventional path planning by incorporating congestion-adjusted travel costs derived from public transportation data. SA-VA determines the final vehicle assignment by jointly evaluating passenger waiting time, in-vehicle travel time, and capacity constraints. The algorithms are implemented within a discrete-event simulation environment using real-world data. Experimental results demonstrate that FD-CVS significantly reduces execution time under high-demand conditions, while SA-VA improves passenger waiting time and acceptance rates. Overall, the proposed three-algorithm framework enables more realistic and computationally efficient DRT system evaluation. Full article
(This article belongs to the Special Issue Applied Mathematics in Supply Chain and Logistics)
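The discrete Fréchet distance used for candidate filtering has a classic dynamic-programming formulation (Eiter and Mannila); the sketch below is the generic algorithm, with toy route and request coordinates made up for illustration.

```python
from functools import lru_cache
import math

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines given as lists of
    (x, y) points, via the Eiter-Mannila coupling recursion."""
    d = lambda i, j: math.dist(P[i], Q[j])

    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:                      # can only advance along Q
            return max(c(0, j - 1), d(0, j))
        if j == 0:                      # can only advance along P
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

route = [(0, 0), (1, 0), (2, 0)]        # a vehicle's planned route (toy data)
request = [(0, 1), (1, 1), (2, 1)]      # a request's pickup-dropoff path
print(discrete_frechet(route, request)) # 1.0: parallel paths, one unit apart
```

Ranking vehicles by this distance to the requested path and keeping only the closest few is the filtering step that a scheme like FD-CVS uses to shrink the assignment problem.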
25 pages, 3972 KB  
Article
Adaptive Real-Time Speed Control for Automated Smart Manufacturing Systems: A Disturbance-Resilient Solution for Productivity
by Ahmad Attar, Shuya Zhong, Martino Luis and Voicu Ion Sucala
Systems 2026, 14(3), 335; https://doi.org/10.3390/systems14030335 - 23 Mar 2026
Abstract
Manufacturing is going through a significant shift propelled by Industry 4.0 and smart manufacturing infrastructures, requiring sophisticated production control techniques that can adaptively adjust to fluctuating operational situations. This paper presents a novel five-step hybrid simulation framework for adaptive real-time production speed control in smart manufacturing lines, integrating conceptual modelling, hybrid simulation, algorithm redefinition, design of experiments, optimisation, and real-system implementation. The framework transforms the speed management systems into online digital twins capable of optimising system performance and mitigating unforeseen fluctuations, faults, and congestion. A comprehensive case study from the beverage manufacturing sector demonstrates the framework’s effectiveness, utilising a universal simulation platform to model both continuous fluid flow and discrete event processes. The proposed stepwise, multi-threshold algorithm employs multiple distinct logical thresholds evaluated sequentially to optimise both upstream and downstream station speeds, with decision thresholds independently adjustable for each production line segment. The experimental results show significant improvements, including around an 18% increase in overall throughput and a 95.7% reduction in work-in-process inventory. A comprehensive resiliency analysis and statistical tests under various disruption scenarios further validated the approach, demonstrating its superiority. Beyond the studied case, the framework provides a transferable pathway for real-time adaptive control across a wide range of smart manufacturing environments, enabling enhancements to operational efficiency without requiring additional capital investment in new equipment or infrastructure. Full article
(This article belongs to the Special Issue Modeling of Complex Systems and Systems of Systems)
39 pages, 1642 KB  
Article
A Post-Quantum Secure Architecture for 6G-Enabled Smart Hospitals: A Multi-Layered Cryptographic Framework
by Poojitha Devaraj, Syed Abrar Chaman Basha, Nithesh Nair Panarkuzhiyil Santhosh and Niharika Panda
Future Internet 2026, 18(3), 165; https://doi.org/10.3390/fi18030165 - 20 Mar 2026
Abstract
Future 6G-enabled smart hospital infrastructures will support latency-critical medical operations such as robotic surgery, autonomous monitoring, and real-time clinical decision systems, which require communication mechanisms that ensure both ultra-low latency and long-term cryptographic security. Existing security solutions either rely on classical cryptographic protocols that are vulnerable to quantum attacks or deploy isolated post-quantum primitives without providing a unified framework for secure real-time medical command transmission. This research presents a latency-aware, multi-layered post-quantum security architecture for 6G-enabled smart hospital environments. The proposed framework establishes an end-to-end secure command transmission pipeline that integrates hardware-rooted device authentication, post-quantum key establishment, hybrid payload protection, dynamic access enforcement, and tamper-evident auditing within a coherent system design. In contrast to existing approaches that focus on individual security mechanisms, the architecture introduces a structured integration of Kyber-based key encapsulation and Dilithium digital signatures with hybrid AES-based encryption and legacy-compatible key transport, while Physical Unclonable Function authentication provides hardware-bound device identity verification. Zero Trust access control, metadata-driven anomaly detection, and blockchain-style audit logging provide continuous verification and traceability, while threshold cryptography distributes cryptographic authority to eliminate single points of compromise. The proposed architecture is evaluated using a discrete-event simulation framework representing adversarial conditions in realistic 6G medical communication scenarios, including replay attacks, payload manipulation, and key corruption attempts. 
Experimental results demonstrate improved security and operational efficiency, achieving a 48% reduction in detection latency, a 68% reduction in false-positive anomaly detection rate, and a 39% improvement in end-to-end round-trip latency compared to conventional RSA-AES-based architectures. These results demonstrate that the proposed framework provides a practical and scalable approach for achieving post-quantum secure and low-latency command transmission in next-generation 6G smart hospital systems. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)
20 pages, 1006 KB  
Article
A Data-Driven Discrete-Event Simulation for Assessing Passenger Dynamics and Bottlenecks in Mexico City Metro Line 7
by Elias Heriberto Arias Nava, Brendan Patrick Sullivan and Luis A. Moncayo-Martinez
Modelling 2026, 7(2), 58; https://doi.org/10.3390/modelling7020058 - 17 Mar 2026
Abstract
Mexico City’s Metro Line 7 is a critical north–south artery within one of the world’s largest metro systems, yet it suffers from persistent operational inefficiencies, including chronic overcrowding and extended passenger travel times. This research employed a data-driven discrete-event simulation model built in SIMIO to analyze the passenger dynamics of Line 7. The model was grounded in a comprehensive dataset of approximately 280,000 daily passengers over one year. Key innovations included modeling station-specific passenger arrivals as non-stationary Poisson processes with time-varying rates calculated at 15-min intervals and incorporating empirically derived walking times within stations. The simulation framework replicated the system’s operational logic, including train movements, passenger boarding and alighting, and complex transfer behaviors at interchange stations, while accounting for the influence of the broader metro network on Line 7’s passenger flows. The simulation results, derived from 100 replications, quantified severe systemic inefficiencies. The average total travel time for a passenger using Line 7 was 81.17 min. However, the ideal in-motion travel time was calculated to be only 53 min, revealing that passengers spend a disproportionate amount of time waiting. This yielded a travel time efficiency of just 65.3%. The model identified specific bottlenecks at key transfer stations like Tacubaya and San Pedro de Los Pinos, where platform utilization reaches full capacity, directly causing the excessive queuing times that degrade the overall passenger experience. This study demonstrated that the primary issue is not the speed of trains but the systemic inability to manage passenger flow during peak demand, leading to critical capacity shortfalls at specific stations. 
The simulation provides a quantitative tool for diagnosing these inefficiencies and offers a robust platform for prototyping and evaluating strategic interventions, such as optimized timetables and resource allocation, before costly real-world implementation. Full article
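Non-stationary Poisson arrivals with piecewise-constant rates can be sampled by thinning (Lewis-Shedler): candidates are drawn at the peak rate and accepted with probability rate(t)/peak. The 15-minute bin rates below are invented for illustration and are not Line 7 data.

```python
import random

def thinning_arrivals(rate_per_min, horizon_min, seed=42):
    """Sample a non-stationary Poisson process on [0, horizon_min) by
    thinning: draw candidate arrivals at the peak rate, then keep each
    with probability rate(t)/peak."""
    rng = random.Random(seed)
    peak = max(rate_per_min(t) for t in range(int(horizon_min)))
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(peak)          # candidate gap at the peak rate
        if t >= horizon_min:
            return arrivals
        if rng.random() < rate_per_min(t) / peak:
            arrivals.append(t)              # accepted arrival

# piecewise-constant rate in 15-min bins, as in a data-driven model
# (these rates are made up, not Line 7 measurements)
bins = [2.0, 2.0, 8.0, 8.0]                 # arrivals per minute, per bin
rate = lambda t: bins[min(int(t // 15), len(bins) - 1)]
xs = thinning_arrivals(rate, 60)
print(len(xs), "arrivals in one simulated hour")
```

Feeding one such stream per station into the station queues is what lets a model of this kind reproduce peak-hour crowding rather than an averaged-out day.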
25 pages, 6362 KB  
Article
Dust Deposition on Solar Greenhouse Films: Mechanisms, Simulations, and Tomato Physiological Responses
by Haoda Li, Gang Wu, Yuhao Wei and Yifei Liu
Agriculture 2026, 16(6), 660; https://doi.org/10.3390/agriculture16060660 - 14 Mar 2026
Abstract
In desert regions, frequent aeolian dust events lead to rapid dust accumulation on greenhouse films, critically compromising light transmittance and inhibiting crop growth. To address this challenge, this study integrated Computational Fluid Dynamics–Discrete Phase Model (CFD-DPM) simulations with field experiments to conduct a comprehensive investigation spanning from microscopic deposition mechanisms to macroscopic physiological responses. Particle characterization revealed a distinct aerodynamic sorting effect, wherein fine particles (<65 μm) preferentially adhered to film surfaces driven by airflow, contrasting sharply with the gravitational settling of coarse ground particles. Numerical simulations further confirmed that as wind speeds increased from 2 to 7 m/s, dust deposition rates exhibited a significant exponential reduction, with accumulation predominantly concentrated in the windward and wake zones. The dust layer covering the film induced a substantial reduction in the indoor daily light integral (DLI), which in turn inhibited tomato growth, stunting plant height and suppressing the net photosynthetic rate. Physiologically, antioxidant enzyme activities exhibited an initial surge followed by a decline, reflecting photosynthetic constraints and oxidative stress. Consequently, a cleaning interval of 7–14 days is recommended to significantly enhance photosynthetic capacity and stress resilience. Full article
27 pages, 3243 KB  
Article
Multiple Waste Crane Scheduling Based on Cooperative Optimization of Discrete Ivy Algorithm and Simulated Annealing
by Liang Wu, Donghao Huang, Jiaxiang Luo, Cuihong Luo, Gang Yi and Tao Liang
Mathematics 2026, 14(6), 980; https://doi.org/10.3390/math14060980 - 13 Mar 2026
Abstract
Efficient scheduling of co-rail waste cranes is critical for ensuring continuous incinerator operation and reducing energy costs in waste-to-energy plants. Existing scheduling methods fail to address the unique characteristics of waste crane operations, such as task heterogeneity and dynamic spatial interference. To address this, a mixed-integer linear programming model is established to minimize the total crane traveling distance and task delays. A two-stage Discrete Ivy-Simulated Annealing (DIVY-SA) algorithm is proposed: the Ivy algorithm (IVYA) is discretized to generate high-quality task sequences, which are then refined by Simulated Annealing (SA) via a fine-grained local search. A heuristic task assignment scheme and a discrete-event simulation module are designed to evaluate task sequences accurately. Experiments using real-world operational data from a waste incineration plant cover task scales of 25 to 200, representing scheduling horizons of 15 min to 2 h. The algorithm's runtime (15.04–652.81 s) demonstrates computational feasibility for near-real-time scheduling via a rolling horizon strategy. Results show that DIVY-SA outperforms representative metaheuristic algorithms and reduces the average total traveling distance by 22.19% compared with manual scheduling. This work provides technical support for the intelligent upgrading of waste incineration plants, effectively cutting energy consumption and improving operational efficiency.
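The SA refinement stage can be sketched on a heavily simplified stand-in for the problem: one crane on a 1-D rail, a visit order as the decision variable, swap moves as the neighborhood, and total traveling distance as the cost. The rail positions, temperature schedule, and step count below are invented assumptions, not the paper's MILP model or DIVY-SA parameters.

```python
import math
import random

# Toy stand-in for the SA stage of DIVY-SA: a single crane on a 1-D rail
# serves hypothetical tasks; the objective is the total traveling distance
# of the visit order. All numbers are illustrative.
rng = random.Random(42)
positions = [12, 3, 27, 8, 19, 1, 24, 15]  # rail coordinates of 8 tasks

def travel(order):
    """Total distance the crane travels to serve tasks in this order."""
    return sum(abs(positions[a] - positions[b]) for a, b in zip(order, order[1:]))

def anneal(order, t0=50.0, cooling=0.995, steps=5000):
    """Refine a task sequence with simulated annealing using swap moves."""
    cur, cur_cost = list(order), travel(order)
    best, best_cost = list(cur), cur_cost
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(cur)), 2)
        cur[i], cur[j] = cur[j], cur[i]          # propose a swap
        cost = travel(cur)
        if cost <= cur_cost or rng.random() < math.exp((cur_cost - cost) / t):
            cur_cost = cost                      # accept the move
            if cost < best_cost:
                best, best_cost = list(cur), cost
        else:
            cur[i], cur[j] = cur[j], cur[i]      # reject: undo the swap
        t *= cooling
    return best, best_cost

initial = list(range(len(positions)))
refined, cost = anneal(initial)
print(f"initial distance {travel(initial)}, refined distance {cost}")
```

In the full algorithm this local search would start from the IVYA-generated sequence and evaluate candidates through the discrete-event simulation module rather than a closed-form distance.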
28 pages, 4916 KB  
Article
Improving Manufacturing Line Design Efficiency Using Digital Value Stream Mapping
by P Paryanto, Muhammad Faizin and Jörg Franke
J. Manuf. Mater. Process. 2026, 10(3), 98; https://doi.org/10.3390/jmmp10030098 - 13 Mar 2026
Abstract
This study proposes a real-time data-based Digital Value Stream Mapping (Digital VSM) framework that integrates Artificial Intelligence (AI) feature selection and discrete-event simulation validation to enhance production system performance. Unlike conventional VSM approaches that rely on static, manually aggregated data, the proposed framework uses real-time operational data to dynamically quantify Value Added (VA), Non-Value Added (NVA), and Necessary Non-Value Added (NNVA) activities. To improve decision accuracy, an Artificial Neural Network (ANN) combined with Genetic Algorithm (GA) feature selection is employed to identify dominant production variables influencing lead time and line imbalance. Furthermore, Ranked Positional Weight (RPW) optimization results are validated through Tecnomatix Plant Simulation to ensure robustness before physical implementation. The proposed framework was applied to a discrete manufacturing line, resulting in a reduction of total lead time from 8755 s to 6400 s and an increase in process ratio from 33.64% to 45.91%, with line efficiency reaching 91.7%. The findings demonstrate that integrating Digital VSM with AI-driven feature selection and simulation validation transforms Lean analysis from a descriptive tool into a predictive and validated decision-support system suitable for Industry 4.0 environments.
(This article belongs to the Special Issue Emerging Methods in Digital Manufacturing)
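The Ranked Positional Weight (RPW) heuristic used for line balancing can be sketched as follows. The task times, precedence arcs, and cycle time are hypothetical; the paper validates its RPW assignments in Tecnomatix Plant Simulation, not in code like this.

```python
from collections import defaultdict

# Hypothetical tasks: processing time [s] and precedence (who must come first).
times = {"A": 5, "B": 3, "C": 4, "D": 6, "E": 2, "F": 4}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"], "F": ["E"]}
cycle_time = 10  # assumed target takt per workstation

# Invert the precedence relation to find each task's direct successors.
succs = defaultdict(set)
for task, ps in preds.items():
    for p in ps:
        succs[p].add(task)

def positional_weight(task):
    """Task time plus the times of all transitive successors."""
    seen, stack = set(), [task]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(succs[t])
    return sum(times[t] for t in seen)

# Assign tasks in descending RPW order to the earliest feasible workstation:
# it must not precede any predecessor's station and must respect cycle time.
order = sorted(times, key=positional_weight, reverse=True)
stations = []   # each station is a list of task names
assigned = {}   # task -> station index
for task in order:
    s = max((assigned[p] for p in preds[task]), default=0)
    while s < len(stations) and sum(times[t] for t in stations[s]) + times[task] > cycle_time:
        s += 1
    if s == len(stations):
        stations.append([])
    assigned[task] = s
    stations[s].append(task)

efficiency = sum(times.values()) / (len(stations) * cycle_time)
print(stations, f"line efficiency {efficiency:.1%}")
```

Because a predecessor's positional weight always exceeds its successors' (it includes their times), processing tasks in descending RPW order guarantees predecessors are placed before their successors.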