
Search Results (599)

Search Parameters:
Keywords = discrete-event simulation

32 pages, 1111 KB  
Review
Lean Management, Discrete Event Simulation, and Virtual Reality in Hemodialysis Units: A Scoping Literature Review and Evidence Gap Analysis
by Joseph Jabbour, Jalal Possik, Adriano O. Solis, Charles Yaacoub, Sina Namaki Araghi and Gregory Zacharewicz
Modelling 2026, 7(2), 63; https://doi.org/10.3390/modelling7020063 (registering DOI) - 25 Mar 2026
Abstract
The rising global incidence of kidney failure is increasing pressure on hemodialysis unit operations, with operational vulnerabilities further exposed by the COVID-19 pandemic. This scoping review mapped evidence on Lean management, discrete event simulation (DES), and virtual reality (VR) in hemodialysis units; compared reported outcome domains and performance indicators; identified barriers to Lean implementation; and assessed the empirical basis for a combined Lean–DES–VR framework. English-language peer-reviewed articles, conference papers, and book chapters addressing Lean, DES, VR, or their combination in dialysis settings were searched in Scopus, PubMed, SpringerLink, IEEE Xplore, ACM Digital Library, and Google Scholar to 30 June 2024; grey literature and opinion pieces were excluded. Structured data extraction and thematic narrative synthesis were applied. Twenty-seven studies were included (Lean n = 4, DES n = 9, VR n = 13, DES + VR n = 1). DES studies mainly reported operational outcomes, whereas VR studies focused predominantly on patient-centered rehabilitation and experience. Most studies examined methods in isolation, and integrated Lean–DES–VR applications were almost entirely absent. The literature suggests complementarity among these approaches but provides no robust empirical basis for a fully integrated framework. No protocol was prospectively registered.

28 pages, 4270 KB  
Article
Fréchet Distance-Based Vehicle Selection and Satisfaction-Aware Vehicle Allocation for Demand-Responsive Shared Mobility: A Discrete Event Simulation Study
by Hun Kim, Ji-Hyeon Woo, Yeong-Hyun Lim and Kyung-Min Seo
Mathematics 2026, 14(7), 1099; https://doi.org/10.3390/math14071099 - 24 Mar 2026
Abstract
Demand-responsive transit (DRT) requires real-time vehicle assignment under dynamically arriving requests, where each decision may alter multi-stop routes and affect both onboard and newly arriving passengers. However, DRT simulations often face three key limitations: rapidly increasing computational complexity as fleet size and demand grow, insufficient integration of traffic congestion into routing decisions, and limited consideration of passenger-oriented service quality in final vehicle assignment. To address these issues, this study proposes an integrated DRT simulation incorporating three core algorithms: Fréchet Distance-based Candidate Vehicle Selection (FD-CVS), Congestion-Aware Path Planning (CA-PP), and Satisfaction-Aware Vehicle Assignment (SA-VA). FD-CVS reduces computational burden by filtering candidate vehicles based on route similarity. CA-PP extends conventional path planning by incorporating congestion-adjusted travel costs derived from public transportation data. SA-VA determines the final vehicle assignment by jointly evaluating passenger waiting time, in-vehicle travel time, and capacity constraints. The algorithms are implemented within a discrete-event simulation environment using real-world data. Experimental results demonstrate that FD-CVS significantly reduces execution time under high-demand conditions, while SA-VA improves passenger waiting time and acceptance rates. Overall, the proposed three-algorithm framework enables more realistic and computationally efficient DRT system evaluation.
(This article belongs to the Special Issue Applied Mathematics in Supply Chain and Logistics)
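As an illustration of the route-similarity filtering idea behind FD-CVS, the following is a generic discrete Fréchet distance sketch in Python. The threshold value, route representation, and helper names are illustrative assumptions, not the paper's implementation:

```python
from math import dist

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polylines p and q
    (lists of (x, y) points), via dynamic programming."""
    n, m = len(p), len(q)
    ca = [[-1.0] * m for _ in range(n)]

    def c(i, j):
        if ca[i][j] >= 0:
            return ca[i][j]
        d = dist(p[i], q[j])
        if i == 0 and j == 0:
            ca[i][j] = d
        elif i == 0:
            ca[i][j] = max(c(0, j - 1), d)
        elif j == 0:
            ca[i][j] = max(c(i - 1, 0), d)
        else:
            ca[i][j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return ca[i][j]

    return c(n - 1, m - 1)

def select_candidates(request_route, vehicle_routes, threshold=2.0):
    """Keep only vehicles whose planned route is 'similar enough' to
    the requested route (hypothetical threshold, for illustration)."""
    return [vid for vid, route in vehicle_routes.items()
            if discrete_frechet(request_route, route) <= threshold]
```

Filtering by a cheap geometric similarity like this shrinks the candidate set before the expensive assignment evaluation, which is the stated purpose of FD-CVS.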

25 pages, 3972 KB  
Article
Adaptive Real-Time Speed Control for Automated Smart Manufacturing Systems: A Disturbance-Resilient Solution for Productivity
by Ahmad Attar, Shuya Zhong, Martino Luis and Voicu Ion Sucala
Systems 2026, 14(3), 335; https://doi.org/10.3390/systems14030335 - 23 Mar 2026
Abstract
Manufacturing is going through a significant shift propelled by Industry 4.0 and smart manufacturing infrastructures, requiring sophisticated production control techniques that can adaptively adjust to fluctuating operational situations. This paper presents a novel five-step hybrid simulation framework for adaptive real-time production speed control in smart manufacturing lines, integrating conceptual modelling, hybrid simulation, algorithm redefinition, design of experiments, optimisation, and real-system implementation. The framework transforms the speed management systems into online digital twins capable of optimising system performance and mitigating unforeseen fluctuations, faults, and congestion. A comprehensive case study from the beverage manufacturing sector demonstrates the framework’s effectiveness, utilising a universal simulation platform to model both continuous fluid flow and discrete event processes. The proposed stepwise, multi-threshold algorithm employs multiple distinct logical thresholds evaluated sequentially to optimise both upstream and downstream station speeds, with decision thresholds independently adjustable for each production line segment. The experimental results show significant improvements, including around an 18% increase in overall throughput and a 95.7% reduction in work-in-process inventory. A comprehensive resiliency analysis and statistical tests under various disruption scenarios further validated the approach, demonstrating its superiority. Beyond the studied case, the framework provides a transferable pathway for real-time adaptive control across a wide range of smart manufacturing environments, enabling enhancements to operational efficiency without requiring additional capital investment in new equipment or infrastructure.
(This article belongs to the Special Issue Modeling of Complex Systems and Systems of Systems)
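The stepwise, multi-threshold idea can be sketched generically: thresholds are checked in sequence and the first one met selects a speed factor. All names, threshold values, and factors below are illustrative assumptions, not the paper's algorithm:

```python
def station_speed(buffer_fill, thresholds, factors, base_speed=1.0):
    """Stepwise multi-threshold speed rule (illustrative sketch).
    `thresholds` are descending buffer-fill fractions evaluated in
    order; the first one met selects the matching speed factor."""
    for th, factor in zip(thresholds, factors):
        if buffer_fill >= th:
            return base_speed * factor
    return base_speed

# A nearly full downstream buffer slows the upstream station;
# a nearly empty one lets it run at full base speed.
upstream_speed = station_speed(0.92, thresholds=(0.8, 0.5), factors=(0.6, 0.8))
```

Making the threshold/factor pairs independently adjustable per line segment, as the abstract describes, then amounts to giving each station its own parameter tuple.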

39 pages, 1642 KB  
Article
A Post-Quantum Secure Architecture for 6G-Enabled Smart Hospitals: A Multi-Layered Cryptographic Framework
by Poojitha Devaraj, Syed Abrar Chaman Basha, Nithesh Nair Panarkuzhiyil Santhosh and Niharika Panda
Future Internet 2026, 18(3), 165; https://doi.org/10.3390/fi18030165 - 20 Mar 2026
Abstract
Future 6G-enabled smart hospital infrastructures will support latency-critical medical operations such as robotic surgery, autonomous monitoring, and real-time clinical decision systems, which require communication mechanisms that ensure both ultra-low latency and long-term cryptographic security. Existing security solutions either rely on classical cryptographic protocols that are vulnerable to quantum attacks or deploy isolated post-quantum primitives without providing a unified framework for secure real-time medical command transmission. This research presents a latency-aware, multi-layered post-quantum security architecture for 6G-enabled smart hospital environments. The proposed framework establishes an end-to-end secure command transmission pipeline that integrates hardware-rooted device authentication, post-quantum key establishment, hybrid payload protection, dynamic access enforcement, and tamper-evident auditing within a coherent system design. In contrast to existing approaches that focus on individual security mechanisms, the architecture introduces a structured integration of Kyber-based key encapsulation and Dilithium digital signatures with hybrid AES-based encryption and legacy-compatible key transport, while Physical Unclonable Function authentication provides hardware-bound device identity verification. Zero Trust access control, metadata-driven anomaly detection, and blockchain-style audit logging provide continuous verification and traceability, while threshold cryptography distributes cryptographic authority to eliminate single points of compromise. The proposed architecture is evaluated using a discrete-event simulation framework representing adversarial conditions in realistic 6G medical communication scenarios, including replay attacks, payload manipulation, and key corruption attempts. Experimental results demonstrate improved security and operational efficiency, achieving a 48% reduction in detection latency, a 68% reduction in false-positive anomaly detection rate, and a 39% improvement in end-to-end round-trip latency compared to conventional RSA-AES-based architectures. These results demonstrate that the proposed framework provides a practical and scalable approach for achieving post-quantum secure and low-latency command transmission in next-generation 6G smart hospital systems.
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)

20 pages, 1006 KB  
Article
A Data-Driven Discrete-Event Simulation for Assessing Passenger Dynamics and Bottlenecks in Mexico City Metro Line 7
by Elias Heriberto Arias Nava, Brendan Patrick Sullivan and Luis A. Moncayo-Martinez
Modelling 2026, 7(2), 58; https://doi.org/10.3390/modelling7020058 - 17 Mar 2026
Abstract
Mexico City’s Metro Line 7 is a critical north–south artery within one of the world’s largest metro systems, yet it suffers from persistent operational inefficiencies, including chronic overcrowding and extended passenger travel times. This research employed a data-driven discrete-event simulation model built in SIMIO to analyze the passenger dynamics of Line 7. The model was grounded in a comprehensive dataset of approximately 280,000 daily passengers over one year. Key innovations included modeling station-specific passenger arrivals as non-stationary Poisson processes with time-varying rates calculated at 15-min intervals and incorporating empirically derived walking times within stations. The simulation framework replicated the system’s operational logic, including train movements, passenger boarding and alighting, and complex transfer behaviors at interchange stations, while accounting for the influence of the broader metro network on Line 7’s passenger flows. The simulation results, derived from 100 replications, quantified severe systemic inefficiencies. The average total travel time for a passenger using Line 7 was 81.17 min. However, the ideal in-motion travel time was calculated to be only 53 min, revealing that passengers spend a disproportionate amount of time waiting. This yielded a travel time efficiency of just 65.3%. The model identified specific bottlenecks at key transfer stations like Tacubaya and San Pedro de Los Pinos, where platform utilization reaches full capacity, directly causing the excessive queuing times that degrade the overall passenger experience. This study demonstrated that the primary issue is not the speed of trains but the systemic inability to manage passenger flow during peak demand, leading to critical capacity shortfalls at specific stations. The simulation provides a quantitative tool for diagnosing these inefficiencies and offers a robust platform for prototyping and evaluating strategic interventions, such as optimized timetables and resource allocation, before costly real-world implementation.
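Non-stationary Poisson arrivals with piecewise 15-minute rates, as used for the station arrival processes, can be sampled by Lewis–Shedler thinning. A minimal Python sketch (the rate values and function names are illustrative, not the paper's data):

```python
import random

def nhpp_arrivals(rates_per_min, horizon_min, seed=0):
    """Sample arrival times (in minutes) from a non-stationary Poisson
    process via Lewis-Shedler thinning. `rates_per_min` gives the
    arrival rate for each consecutive 15-minute interval."""
    rng = random.Random(seed)
    lam_max = max(rates_per_min)
    t, arrivals = 0.0, []
    while True:
        # Candidate arrival from the bounding homogeneous process.
        t += rng.expovariate(lam_max)
        if t >= horizon_min:
            break
        lam_t = rates_per_min[int(t // 15)]
        # Accept the candidate with probability lambda(t) / lambda_max.
        if rng.random() < lam_t / lam_max:
            arrivals.append(t)
    return arrivals

# Example: a morning-peak profile over one hour (four 15-min intervals).
times = nhpp_arrivals([2.0, 6.0, 6.0, 3.0], horizon_min=60)
```

The accepted points cluster in the high-rate intervals, reproducing the time-varying demand that a fixed-rate arrival process would miss.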

25 pages, 6362 KB  
Article
Dust Deposition on Solar Greenhouse Films: Mechanisms, Simulations, and Tomato Physiological Responses
by Haoda Li, Gang Wu, Yuhao Wei and Yifei Liu
Agriculture 2026, 16(6), 660; https://doi.org/10.3390/agriculture16060660 - 14 Mar 2026
Abstract
In desert regions, frequent aeolian dust events lead to rapid dust accumulation on greenhouse films, critically compromising light transmittance and inhibiting crop growth. To address this challenge, this study integrated Computational Fluid Dynamics–Discrete Phase Model (CFD-DPM) simulations with field experiments to conduct a comprehensive investigation spanning from microscopic deposition mechanisms to macroscopic physiological responses. Particle characterization revealed a distinct aerodynamic sorting effect, wherein fine particles (<65 μm) preferentially adhered to film surfaces driven by airflow, contrasting sharply with the gravitational settling of coarse ground particles. Numerical simulations further confirmed that as wind speeds increased from 2 to 7 m/s, dust deposition rates exhibited a significant exponential reduction, with accumulation predominantly concentrated in the windward and wake zones. The dust layer covering the film induced a substantial reduction in the indoor daily light integral (DLI), which impaired tomato growth, stunting plant height and suppressing the net photosynthetic rate. Physiologically, antioxidant enzyme activities exhibited an initial surge followed by a decline, reflecting photosynthetic constraints and oxidative stress. Consequently, a high-frequency cleaning interval of 7–14 days is recommended to significantly enhance photosynthetic capacity and stress resilience.

27 pages, 3243 KB  
Article
Multiple Waste Crane Scheduling Based on Cooperative Optimization of Discrete Ivy Algorithm and Simulated Annealing
by Liang Wu, Donghao Huang, Jiaxiang Luo, Cuihong Luo, Gang Yi and Tao Liang
Mathematics 2026, 14(6), 980; https://doi.org/10.3390/math14060980 - 13 Mar 2026
Abstract
Efficient scheduling of co-rail waste cranes is critical for ensuring continuous incinerator operation and reducing energy costs in waste-to-energy plants. Existing scheduling methods fail to address the unique characteristics of waste crane operations like task heterogeneity and dynamic spatial interference. To address this, a mixed-integer linear programming model is established to minimize the total crane traveling distance and task delays. A two-stage Discrete Ivy-Simulated Annealing (DIVY-SA) algorithm is proposed: the Ivy algorithm (IVYA) is discretized to generate high-quality task sequences, which are then refined by Simulated Annealing (SA) via a fine-grained local search. A heuristic task assignment scheme and a discrete-event simulation module are designed to evaluate task sequences accurately. Experiments using real-world operational data from a waste incineration plant cover task scales of 25 to 200, representing scheduling horizons of 15 min to 2 h. The algorithm’s runtime (15.04–652.81 s) demonstrates computational feasibility for near-real-time scheduling via a rolling horizon strategy. Results show that DIVY-SA outperforms representative metaheuristic algorithms and reduces the average total traveling distance by 22.19% compared with manual scheduling. This work provides technical support for the intelligent upgrading of waste incineration plants, effectively cutting energy consumption and improving operational efficiency.
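The SA refinement stage can be illustrated generically: swap two tasks in the sequence, always accept improvements, and accept worse moves with a temperature-dependent probability. A minimal sketch with a toy travel-distance cost; all names, parameters, and the cost function are illustrative assumptions, not DIVY-SA itself:

```python
import math
import random

def simulated_annealing(seq, cost, t0=10.0, cooling=0.95, iters=500, seed=0):
    """Refine a task sequence with a swap neighbourhood (generic SA sketch)."""
    rng = random.Random(seed)
    best = cur = list(seq)
    best_c = cur_c = cost(cur)
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]
        cand_c = cost(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        t *= cooling  # geometric cooling schedule
    return best, best_c

# Toy cost: total travel distance for visiting task locations in order.
tasks = [(0, 0), (5, 5), (1, 1), (4, 4), (2, 2)]
def travel(order):
    return sum(math.dist(tasks[a], tasks[b]) for a, b in zip(order, order[1:]))

order, d = simulated_annealing(range(len(tasks)), travel)
```

In the paper's two-stage scheme the initial sequence would come from the discretized Ivy algorithm and the cost from the discrete-event simulation module, rather than from this toy geometry.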

28 pages, 4916 KB  
Article
Improving Manufacturing Line Design Efficiency Using Digital Value Stream Mapping
by P Paryanto, Muhammad Faizin and Jörg Franke
J. Manuf. Mater. Process. 2026, 10(3), 98; https://doi.org/10.3390/jmmp10030098 - 13 Mar 2026
Abstract
This study proposes a real-time data-based Digital Value Stream Mapping (Digital VSM) framework that integrates Artificial Intelligence (AI) feature selection and discrete-event simulation validation to enhance production system performance. Unlike conventional VSM approaches that rely on static, manually aggregated data, the proposed framework uses real-time operational data to dynamically quantify Value Added (VA), Non-Value Added (NVA), and Necessary Non-Value Added (NNVA) activities. To improve decision accuracy, an Artificial Neural Network (ANN) combined with Genetic Algorithm (GA) feature selection is employed to identify dominant production variables influencing lead time and line imbalance. Furthermore, Ranked Positional Weight (RPW) optimization results are validated through Tecnomatix Plant Simulation to ensure robustness before physical implementation. The proposed framework was applied to a discrete manufacturing line, resulting in a reduction of total lead time from 8755 s to 6400 s and an increase in process ratio from 33.64% to 45.91%, with line efficiency reaching 91.7%. The findings demonstrate that integrating Digital VSM with AI-driven feature selection and simulation validation transforms Lean analysis from a descriptive tool into a predictive and validated decision-support system suitable for Industry 4.0 environments.
(This article belongs to the Special Issue Emerging Methods in Digital Manufacturing)

36 pages, 16506 KB  
Article
A Scenario-Based Visual Modeling Method for the Complex Products Lifecycle
by Shuanglong Chang, Chuangye Chang, Xiyu Liu and Xinghai Gao
Electronics 2026, 15(6), 1198; https://doi.org/10.3390/electronics15061198 - 13 Mar 2026
Abstract
The development of complex products is challenged by diverse requirements, interdisciplinary coupling, intricate behaviors, and prolonged lifecycles. Traditional document-based systems engineering methods exhibit deficiencies in requirement validation, architectural verification, and cross-disciplinary integration, struggling to support early-stage verification and validation as well as interdisciplinary collaboration. To address these limitations, this paper proposes a scenario-based visual modeling method for the entire lifecycle of complex products, aiming to realize a closed-loop process epitomized by “construction as verification.” This method integrates model-based systems engineering, scenario-driven design, and multi-level visualization techniques to construct a multi-paradigm visual modeling and simulation framework driven by operational scenarios, use-case scenarios, and working-condition scenarios, each serving as the blueprint for constructing the corresponding Operational Concept, Functional/Logical, and Physical Specification Models. Concurrently, a semantic integration mechanism based on hybrid ontologies is introduced, which resolves semantic heterogeneity and facilitates model interoperability among multi-source heterogeneous models through formalized mapping. Furthermore, a simulation engine scheme based on Discrete Event System Specification is proposed to enable continuous verification from conceptual design to solution development. A case study on the braking mechanism of a high-speed train demonstrates that the proposed method can effectively support precise requirement validation, logical architectural verification, and multi-solution trade-off analysis, thereby significantly enhancing early verification capabilities and R&D efficiency.

7 pages, 963 KB  
Proceeding Paper
Analysis of Self-Checkout Operations of Taiwanese Retail Store: A Simulation Modeling Approach
by Victor James C. Escolano, Shang-Yun Lin and Wei-Jung Shiang
Eng. Proc. 2026, 128(1), 21; https://doi.org/10.3390/engproc2026128021 - 12 Mar 2026
Abstract
Checkout service is crucial in ensuring customer satisfaction and enhancing retail efficiency. In recent years, self-checkout has become increasingly popular in modern retail operations. However, despite its growing adoption, there is limited quantitative evidence on its effectiveness in reducing operational costs and improving overall efficiency. In this study, a discrete-event simulation model based on real-world scenarios of a retail store in Taoyuan City, Taiwan, was developed using ARENA (version 16) simulation software. Four checkout scenarios were modeled and compared through statistical tests to evaluate checkout performance. The results showed that the proposed self-checkout model with improved service time enhanced operational efficiency and contributed to reducing operational costs. These findings suggest that retail managers should implement strategic measures to optimize self-checkout operations to achieve efficient and cost-effective store performance. Finally, practical and managerial implications are discussed at the end of the study.

9 pages, 399 KB  
Proceeding Paper
Modelling Helmet Manufacturing System Using Discrete Event Simulation
by Khoong Tai Wai, Wan Laailatul Hanim Mat Desa, Lim Li Li, Houng Chien Tan, Chan Ling Meng and Kumara Adji Kusuma
Eng. Proc. 2026, 128(1), 10; https://doi.org/10.3390/engproc2026128010 - 9 Mar 2026
Abstract
We simulated the manufacturing production line in a micro, small, and medium enterprise (MSME) to assess the efficiency of a helmet producer, using ARENA simulation modelling software version 15.10.00000. The process and standard time for each process in the production line were estimated from data provided by the enterprise’s management and from direct observation. The production line comprised six different processes to manufacture a single product type. ARENA was used to analyse the data. The simulation results showed that restructuring worker allocations increased workers’ utilization and reduced production duration while maintaining a constant throughput rate.

21 pages, 1560 KB  
Article
QEMU-Based 1553B Bus Simulation and Precise Timing Modeling Method
by Haitian Gao, Weijun Lu, Yiwen Fu, Wentao Ye and Xiaofei Guo
Electronics 2026, 15(5), 1121; https://doi.org/10.3390/electronics15051121 - 9 Mar 2026
Abstract
Deterministic, microsecond-level timing reproduction in full-system virtualization remains a key challenge for hardware-in-the-loop simulation of timing-sensitive communication buses. This paper presents a virtual time-driven approach that models protocol timing semantics as discrete events on a deterministic virtual timeline, and validates it using MIL-STD-1553B, a representative aerospace bus with strict microsecond-level requirements, as a case study. The MIL-STD-1553B data bus is widely used in aerospace and high-reliability embedded systems, where communication correctness depends not only on message formats but also critically on microsecond-level timing semantics such as message intervals, frame periods, response timeouts, and automatic retries. However, existing Quick Emulator (QEMU)-based virtualization solutions typically rely on host scheduling for timing, making it difficult to maintain determinism under varying loads, which may lead to missed detections or false alarms in timeout/retry behaviors. This paper implements a configurable BU-64843 device model supporting bus controller (BC), remote terminal (RT), and monitor terminal (MT) multi-role switching under a unified framework and completes behavioral modeling of both legacy and enhanced bus controllers covering message scheduling, execution, and exception handling paths. We propose a virtual time-driven precise timing modeling method that explicitly models key timing semantics as discrete events on a virtual timeline. Extensive experiments across 10 timing scenarios demonstrate that our method reduces timing deviation from an average of 8 µs to 65–124 ns (99.1% improvement), achieving deterministic simulation decoupled from host execution speed while meeting the 1 µs minimum resolution requirement. While demonstrated on 1553B, the virtual time-driven method is applicable to other timing-sensitive bus protocols in QEMU-based simulation environments, offering a low-cost, reproducible, and high-precision simulation environment for protocol compliance verification, driver development, and system integration.
(This article belongs to the Section Computer Science & Engineering)
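The core virtual-time idea, firing events in timeline order by jumping the clock rather than waiting on the host scheduler, can be sketched with a minimal event queue. This is a generic illustration of deterministic virtual-time scheduling, not the paper's QEMU implementation; all class and event names are hypothetical:

```python
import heapq

class VirtualClock:
    """Minimal discrete-event scheduler on a deterministic virtual timeline."""
    def __init__(self):
        self.now = 0      # virtual time in nanoseconds
        self._q = []
        self._seq = 0     # tie-breaker keeps same-time ordering deterministic

    def schedule(self, delay_ns, fn):
        heapq.heappush(self._q, (self.now + delay_ns, self._seq, fn))
        self._seq += 1

    def run(self):
        while self._q:
            t, _, fn = heapq.heappop(self._q)
            self.now = t  # jump, never wait: decoupled from host speed
            fn()

log = []
clk = VirtualClock()
clk.schedule(12_000, lambda: log.append(("response_timeout", clk.now)))
clk.schedule(4_000, lambda: log.append(("message_sent", clk.now)))
clk.run()
# Events fire in virtual-time order regardless of scheduling order.
```

Because time only advances by jumping to the next scheduled event, timeout and retry behavior replays identically no matter how loaded the host is, which is the determinism property the paper targets.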

24 pages, 631 KB  
Article
Generative Simulation and Summarization of Neonatal Patient Data
by Jesse Levine, Gurshan Riarh and James R. Green
Information 2026, 17(3), 261; https://doi.org/10.3390/info17030261 - 5 Mar 2026
Abstract
In the Neonatal Intensive Care Unit (NICU), clinicians must balance the demands of constant patient monitoring with the need for precise documentation and clear communication with colleagues and families. To address the clinical burden of documenting patient care and health status, this paper presents two complementary AI-based systems. First, a GAN-driven NICU Patient Simulator is developed to generate realistic neonatal vital sign data and discrete clinical intervention events, typical of care in the NICU. While useful for a variety of research goals, this simulator provides a safe and controllable data source essential for the development and validation of the second system: the LLM-powered Neonatal Patient Status Summarizer (NPSS). The NPSS fuses the output of multiple machine learning systems, each extracting specific aspects of patient care and health, together with vital sign data from a patient monitor. Leveraging Retrieval-Augmented Generation (RAG) to incorporate neonatal-specific reference data, the NPSS enables several key use cases, including generating parent-friendly updates, summarizing patient status for clinician handovers, and automatically populating patient records for charting. Simulator validation demonstrates the high fidelity of the simulated data relative to available infant data in Physionet. The NPSS is evaluated using an automated LLM-as-judge framework across repeated test scenarios. To mitigate self-preference bias, evaluations were conducted using three distinct LLM judges (OpenAI o3-mini, Llama-3, and Mistral). Across judges, the NPSS achieved consistently high relevance scores (0.95–0.99) and strong groundedness scores (0.80–0.91), indicating that generated summaries remain on-topic and faithful to the underlying simulator data. Once validated, the NPSS will reduce charting workload, improve shift handover efficiency, and streamline parental updates, addressing key clinical bottlenecks in NICU data workflows.

27 pages, 2454 KB  
Article
Event-Driven Spiking Neural Networks for Private Vehicle Parking Prediction
by Wangchen Long and Jie Chen
Entropy 2026, 28(3), 253; https://doi.org/10.3390/e28030253 - 25 Feb 2026
Viewed by 243
Abstract
Predicting the future parking locations and durations of private vehicles using vehicular edge devices is critical for real-time intelligent transportation services, ranging from instant point-of-interest recommendations to dynamic route planning. Advanced deep neural networks like Transformers demonstrate exceptional performance in mobility prediction; however, [...] Read more.
Predicting the future parking locations and durations of private vehicles using vehicular edge devices is critical for real-time intelligent transportation services, ranging from instant point-of-interest recommendations to dynamic route planning. Advanced deep neural networks like Transformers demonstrate exceptional performance in mobility prediction; however, their heavy reliance on dense matrix multiplication makes them unsuitable for real-time applications on vehicular edge devices. Spiking neural networks offer a potential solution due to their asynchronous event-driven characteristics and low power consumption. However, existing spiking neural networks face three fundamental challenges: (1) handling heterogeneous inter-event intervals; (2) mitigating quantization errors in regression tasks under limited simulation steps; and (3) efficiently regulating information flow based on external contexts. To address these challenges, we propose an event-driven spiking neural network for private vehicle parking prediction called Spark. First, we design a Time-Adaptive Leaky Integrate-and-Fire neuron with a lookup table-based decay mechanism to efficiently model variable inter-event intervals. Second, an accumulate-based readout strategy is introduced to mitigate quantization errors by integrating discrete spike trains into continuous output values for high-precision regression. Third, a Spiking Contextual Gating module is proposed to selectively regulate spiking information flow across channels based on environmental context. These components are integrated into a unified architecture that maintains high prediction accuracy while remaining computationally efficient. Extensive experiments on real-world datasets demonstrate that Spark achieves an effective balance between prediction accuracy and computational efficiency compared to baselines. Full article
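The Time-Adaptive Leaky Integrate-and-Fire idea can be illustrated with a toy sketch: decay factors exp(-Δt/τ) are precomputed into a lookup table indexed by the quantized inter-event interval, so no transcendental function is evaluated per event. The class name, parameters, and hard-reset rule below are assumptions for illustration; the paper's actual neuron model may differ.

```python
import math

class TimeAdaptiveLIF:
    """Toy leaky integrate-and-fire neuron whose membrane decay depends on
    the (variable) interval since the previous event, read from a
    precomputed lookup table instead of calling exp() per event."""

    def __init__(self, tau=10.0, threshold=1.0, max_interval=64):
        self.threshold = threshold
        # Lookup table: decay factor exp(-dt / tau) for dt = 0..max_interval.
        self.decay_lut = [math.exp(-dt / tau) for dt in range(max_interval + 1)]
        self.v = 0.0  # membrane potential

    def step(self, input_current, dt):
        """Integrate one event arriving dt time units after the previous one.
        Returns 1 if the neuron spikes (and resets), else 0."""
        dt = min(int(dt), len(self.decay_lut) - 1)   # clamp to table size
        self.v = self.decay_lut[dt] * self.v + input_current
        if self.v >= self.threshold:
            self.v = 0.0  # hard reset after spiking
            return 1
        return 0
```

With `tau=10` and `threshold=1.0`, two inputs of 0.6 arriving 2 time units apart accumulate enough potential to spike, whereas the same input after a 30-unit gap finds the membrane fully decayed and does not; this is how heterogeneous inter-event intervals change the neuron's response.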
38 pages, 3241 KB  
Review
Digitalisation of Shipyard Production Planning: A Review of Simulation, Optimisation, AI, and Digital Twin Methods (2010–2025)
by Amir Bordbar, Mina Tadros, Amin Nazemian, Myo Zin Aung, Konstantinos Georgoulas, Panagiotis Louvros and Evangelos Boulougouris
J. Mar. Sci. Eng. 2026, 14(4), 396; https://doi.org/10.3390/jmse14040396 - 21 Feb 2026
Viewed by 884
Abstract
Digitalisation is reshaping shipyard production, yet its methodological foundations remain fragmented across simulation, optimisation, Artificial Intelligence (AI), and Digital Twin (DT) research streams. This paper presents a domain-specific methodological review of shipyard production modelling from 2010 to 2025, synthesising advances in Discrete-Event Simulation (DES), multi-objective optimisation, hybrid simulation–optimisation architectures, Machine Learning (ML), reinforcement learning (RL), and DT-enabled cyber-physical systems. Using an explicit evaluative framework based on integration depth, validation basis, and decision scope, the review differentiates between analytically mature but execution-decoupled DES/optimisation approaches and integration-rich yet variably validated DT and AI-driven systems. The analysis shows that hybrid DES-optimisation frameworks currently represent the most operationally credible class of methods, delivering measurable production improvements under structured conditions, whereas many DT and AI contributions prioritise architectural integration and data synchronisation over longitudinal yard-wide KPI validation. A comparative assessment of simulation platforms, optimisation engines, and manufacturing execution system/enterprise resource planning/product lifecycle management infrastructures highlights the central role of structured product–process–resource data and execution-layer connectivity, while severe confidentiality constraints and the scarcity of openly available industrial datasets continue to limit reproducibility and benchmarking. Overall, shipyard production research is progressing toward increasingly integrated and cyber-physical systems, but sustained yard-scale validation and shared benchmark development remain critical prerequisites for translating architectural sophistication into demonstrable operational impact. Full article
(This article belongs to the Special Issue Safety of Ships and Marine Design Optimization)
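For readers unfamiliar with the Discrete-Event Simulation paradigm the review surveys, the core mechanism is an event list ordered by time: the kernel repeatedly pops the earliest event and lets its handler schedule follow-on events. The toy model below (a single shipyard assembly station serving jobs FIFO) is an illustrative sketch only; function and event names are invented, not taken from any reviewed tool.

```python
import heapq

def simulate_station(arrivals, proc_time):
    """Toy discrete-event simulation of one assembly station.
    Events are (time, kind, job_id) tuples in a priority queue; the
    earliest event is always processed next, as in any DES kernel."""
    events = [(t, "arrive", i) for i, t in enumerate(arrivals)]
    heapq.heapify(events)
    waiting, busy, finish = [], False, {}
    while events:
        time, kind, job = heapq.heappop(events)
        if kind == "arrive":
            waiting.append(job)          # job joins the station's queue
        else:  # "depart": job finished, station frees up
            busy = False
            finish[job] = time
        if waiting and not busy:
            busy = True                  # start the next waiting job and
            nxt = waiting.pop(0)         # schedule its departure event
            heapq.heappush(events, (time + proc_time, "depart", nxt))
    return [finish[i] for i in range(len(arrivals))]
```

Jobs arriving at times 0, 1, and 2 at a station with a 5-unit processing time complete at 5, 10, and 15: queueing delay emerges from the event logic rather than being specified directly, which is the property that makes DES useful for capacity and bottleneck analysis in yard planning.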