Search Results (13,679)

Search Parameters:
Keywords = network reliability

20 pages, 5710 KB  
Article
Steel Slag-Enhanced Cement-Stabilized Recycled Aggregate Bases: Mechanical Performance and PINN-Based Sulfate Diffusion Prediction
by Guodong Zeng, Hao Li, Yuyuan Deng, Xuancang Wang, Yang Fang and Haoxiang Liu
Materials 2026, 19(3), 546; https://doi.org/10.3390/ma19030546 - 29 Jan 2026
Abstract
The application of cement-stabilized recycled aggregate (CSR) in pavement bases is constrained by the high porosity and low strength of recycled aggregate (RA), whereas sulfate transport and durability mechanisms are less reported. To address this issue, this study incorporated high-strength and potentially reactive steel slag aggregate (SSA) into CSR to develop steel slag-enhanced cement-stabilized recycled aggregate (CSRS). The mechanical performance of the mixtures was evaluated through unconfined compressive strength (UCS) and indirect tensile strength (ITS) tests, and their durability was assessed using thermal shrinkage and sulfate resistance tests. In addition, a sulfate prediction model based on a physics-informed neural network (PINN) was developed. The results showed that, compared with CSR, the 7-day and 28-day UCS of CSRS increased by 6.7% and 16.0%, respectively, and the ITS increased by 4.3% and 5.9%. Thermal shrinkage tests indicated that CSR and CSRS, incorporating RA and SSA, exhibited slightly higher thermal shrinkage strain than cement-stabilized natural aggregate (CSN). During sulfate attack, SSA significantly improved the sulfate resistance of CSR, with the sulfate resistance coefficient of CSRS increasing by 18.8% compared to CSR. Furthermore, the PINN model predicted that, in 3%, 5%, and 7% sodium sulfate solutions, the sulfate concentration at a 1 mm depth in CSRS was reduced by 35.6%, 21.8%, and 29.4%, respectively, compared to CSR, with an average relative error below 14%, confirming its reliability. Therefore, these findings demonstrate that the incorporation of SSA markedly enhances the mechanical properties and sulfate resistance of CSR, and that the PINN model provides an effective tool for accurate simulation and prediction of sulfate diffusion. Full article
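
For readers unfamiliar with physics-informed neural networks, the sketch below shows the general pattern behind a PINN diffusion predictor like the one described above: a small network maps (depth, time) to sulfate concentration, and the training loss combines a data-fit term with the residual of Fick's second law. The architecture, the diffusion coefficient D, and the equal loss weighting are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch: a minimal physics-informed loss for 1-D sulfate diffusion
# (Fick's second law, dc/dt = D * d2c/dx2). Network size, D, and loss weights
# are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))          # input (x, t) -> sulfate concentration
D = 1e-6                                       # assumed effective diffusion coefficient

def pde_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    c = net(torch.cat([x, t], dim=1))
    c_t = torch.autograd.grad(c, t, torch.ones_like(c), create_graph=True)[0]
    c_x = torch.autograd.grad(c, x, torch.ones_like(c), create_graph=True)[0]
    c_xx = torch.autograd.grad(c_x, x, torch.ones_like(c_x), create_graph=True)[0]
    return c_t - D * c_xx                      # should vanish wherever the PDE holds

def pinn_loss(x_data, t_data, c_data, x_col, t_col):
    data_loss = ((net(torch.cat([x_data, t_data], dim=1)) - c_data) ** 2).mean()
    phys_loss = (pde_residual(x_col, t_col) ** 2).mean()
    return data_loss + phys_loss               # equal weighting assumed
```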

20 pages, 1248 KB  
Article
Round-Trip Time Estimation Using Enhanced Regularized Extreme Learning Machine
by Hassan Rizky Putra Sailellah, Hilal Hudan Nuha and Aji Gautama Putrada
Network 2026, 6(1), 10; https://doi.org/10.3390/network6010010 - 29 Jan 2026
Abstract
Reliable Internet connectivity is essential for latency-sensitive services such as video conferencing, media streaming, and online gaming. Round-trip time (RTT) is a key indicator of network performance and is central to setting retransmission timeout (RTO); inaccurate RTT estimates may trigger unnecessary retransmissions or slow loss recovery. This paper proposes an Enhanced Regularized Extreme Learning Machine (RELM) for RTT estimation that improves generalization and efficiency by interleaving a bidirectional log-step heuristic to select the regularization constant C. Unlike manual tuning or fixed-range grid search, the proposed heuristic explores C on a logarithmic scale in both directions (×10 and /10) within a single loop and terminates using a tolerance–patience criterion, reducing redundant evaluations without requiring predefined bounds. A custom RTT dataset is generated using Mininet with a dumbbell topology under controlled delay injections (1–1000 ms), yielding 1000 supervised samples derived from 100,000 raw RTT measurements. Experiments follow a strict train/validation/test split (6:1:3) with training-only standardization/normalization and validation-only hyperparameter selection. On the controlled Mininet dataset, the best configuration (ReLU, 150 hidden neurons, C = 10²) achieves R² = 0.9999, MAPE = 0.0018, MAE = 966.04, and RMSE = 1589.64 on the test set, while maintaining millisecond-level runtime. Under the same evaluation pipeline, the proposed method demonstrates competitive performance compared to common regression baselines (SVR, GAM, Decision Tree, KNN, Random Forest, GBDT, and ELM), while maintaining lower computational overhead within the controlled simulation setting. To assess practical robustness, an additional evaluation on a public real-world WiFi RSS–RTT dataset shows near-meter accuracy in LOS and mixed LOS/NLOS scenarios, while performance degrades markedly under dominant NLOS conditions, reflecting physical-channel limitations rather than model instability. These results demonstrate the feasibility of the Enhanced RELM and motivate further validation on operational networks with packet loss, jitter, and path variability. Full article
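
The bidirectional log-step selection of the regularization constant C lends itself to a compact illustration. The sketch below is a minimal, generic version of such a heuristic (×10 and /10 steps from a starting value with a tolerance–patience stop); the function names, start value, and stopping thresholds are assumptions rather than the authors' exact procedure.

```python
# Hedged sketch of a bidirectional log-step search for the RELM regularization
# constant C with a tolerance/patience stopping rule. validate(C) is an assumed
# callback returning validation error (lower is better).
def search_C(validate, c_start=1.0, tol=1e-4, patience=3, max_steps=20):
    best_c, best_err = c_start, validate(c_start)
    stale = 0
    for step in range(1, max_steps + 1):
        for cand in (c_start * 10.0 ** step, c_start / 10.0 ** step):  # both directions
            err = validate(cand)
            if err < best_err - tol:            # meaningful improvement found
                best_c, best_err, stale = cand, err, 0
            else:
                stale += 1
        if stale >= patience:                   # no recent improvement: stop early
            break
    return best_c, best_err
```
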
28 pages, 8566 KB  
Article
Design and Experimental Validation of a 12 GHz High-Gain 4 × 4 Patch Antenna Array for S21 Phase-Based Vital Signs Monitoring
by David Vatamanu, Simona Miclaus and Ladislau Matekovits
Sensors 2026, 26(3), 887; https://doi.org/10.3390/s26030887 - 29 Jan 2026
Abstract
Non-contact monitoring of human vital signs using microwave radar has attracted increasing attention due to its capability to operate unobtrusively and through clothing or light obstacles. In vector network analyzer (VNA)-based radar systems, vital signs can be extracted from phase variations in the forward transmission coefficient S21, whose sensitivity strongly depends on the electromagnetic performance of the antenna system. This work presents the design, optimization, fabrication, and experimental validation of a high-gain 12 GHz 4 × 4 microstrip patch antenna array specifically developed for phase-based vital signs monitoring. The antenna array was progressively optimized through coaxial feeding, slot-based impedance control, stepped transmission line matching, and mitered bends, achieving a simulated gain of 17.8 dBi, a measured gain of 17.06 dBi, a reflection coefficient of −26 dB at 12 GHz, and a total efficiency close to 74%. The antenna performance was experimentally validated in an anechoic chamber and subsequently integrated into a continuous-wave VNA-based radar system. Comparative measurements were conducted against a commercial biconical antenna, a single patch radiator, and an MIMO antenna under identical conditions. Results demonstrate that while respiration can be detected with moderate-gain antennas, reliable heartbeat detection requires high-gain, narrow-beam antennas to enhance phase sensitivity and suppress environmental clutter. The proposed array significantly improves pulse detectability in the (1–1.5) Hz band without relying on advanced signal processing. These findings highlight the critical role of antenna design in S21-based biomedical radar systems and provide practical design guidelines for high-sensitivity non-contact vital signs monitoring. Full article
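
As background for phase-based sensing at 12 GHz, the relation below (a generic approximation, not taken from the paper) shows why small chest displacements are detectable in the S21 phase, assuming a quasi-monostatic reflection geometry so that a displacement Δd changes the propagation path by roughly 2Δd.

```latex
% Hedged illustration: phase-to-displacement relation under a quasi-monostatic
% reflection-path assumption (round-trip path change of about 2*Delta d).
\[
\Delta\varphi_{S_{21}} \approx \frac{2\pi}{\lambda}\,(2\,\Delta d)
\quad\Longrightarrow\quad
\Delta d \approx \frac{\lambda\,\Delta\varphi_{S_{21}}}{4\pi},
\qquad
\lambda = \frac{c}{f} = \frac{3\times 10^{8}\ \mathrm{m/s}}{12\times 10^{9}\ \mathrm{Hz}} = 25\ \mathrm{mm}.
\]
```

Under this assumption, a 1 mm chest displacement at λ = 25 mm corresponds to only about 0.5 rad (≈29°) of S21 phase, which is consistent with the abstract's point that high gain and a narrow beam are needed to keep such a small signal above environmental clutter.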

14 pages, 1426 KB  
Article
Optimization of Multi-Layer Neural Network-Based Cooling Load Prediction for Office Buildings Through Data Preprocessing and Algorithm Variations
by Namchul Seong, Daeung Danny Kim and Goopyo Hong
Buildings 2026, 16(3), 566; https://doi.org/10.3390/buildings16030566 - 29 Jan 2026
Abstract
Accurate forecasting of cooling loads is essential for the effective operation of Building Energy Management Systems (BEMSs) and the reduction of building-sector carbon emissions. Although Artificial Neural Networks (ANNs), particularly Multi-Layer Perceptrons (MLPs), have shown strong capability in modeling nonlinear thermal dynamics, their reliability in practice is often limited by inappropriate training algorithm selection and poor data quality, including missing values and numerical distortions. To address these limitations, this study conducts a comprehensive empirical investigation into the effects of training algorithms and systematic data preprocessing strategies on cooling load prediction performance using an MLP model. Through benchmarking ten distinct training algorithms under identical conditions, the Levenberg–Marquardt (LM) algorithm was identified as achieving the lowest prediction error when integrated data preprocessing was applied. In particular, the application of data preprocessing reduced the CvRMSE from 18.56% to 6.03% during the testing period. Furthermore, the proposed framework effectively mitigated zero-load prediction errors during non-cooling periods and improved prediction accuracy under high-load operating conditions. These results provide practical and quantitative guidance for developing robust data-driven forecasting models applicable to real-time building energy optimization. Full article
(This article belongs to the Special Issue Built Environment and Building Energy for Decarbonization)
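
The CvRMSE figures quoted above (18.56% reduced to 6.03%) follow the common definition of RMSE normalised by the mean of the measured load, in percent. A minimal sketch of that metric, with illustrative variable names:

```python
# Hedged sketch of the CvRMSE metric: RMSE divided by the mean of the
# measured values, expressed as a percentage.
import numpy as np

def cvrmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / np.mean(y_true)
```
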
28 pages, 2329 KB  
Article
Calculation of Buffer Zone Size for Critical Chain of Hydraulic Engineering Considering the Correlation of Construction Period Risk
by Shengjun Wang, Junqiang Ge, Jikun Zhang, Shengwei Su, Zihang Hu, Jianuo Gu and Xiangtian Nie
Buildings 2026, 16(3), 557; https://doi.org/10.3390/buildings16030557 - 29 Jan 2026
Abstract
Due to their large scale, long duration, complex geological conditions, and multiple stakeholders, water conservancy engineering projects are subject to diverse, interrelated, and uncertain risk factors that affect the construction timeline. Traditional critical chain buffer calculation methods, such as the cut-and-paste method and the root variance method, typically assume the independence of risks, which limits their effectiveness in addressing schedule delays caused by correlated risk events. To overcome this limitation, this paper proposes a novel critical chain buffer calculation approach that explicitly incorporates risk correlation analysis. A fuzzy DEMATEL-ISM-BN model is employed to systematically identify the interrelationships and influence pathways among schedule risk factors. Bayesian network inference is then used to quantify the overall occurrence probability while accounting for risk correlations. By integrating critical chain management theory, risk impact coefficients are introduced to improve the traditional root variance method, resulting in a buffer calculation model that captures interdependencies among schedule risks. The effectiveness of the proposed model is validated through a case study of the X Pumped Storage Power Station. The results indicate that, compared with conventional methods, the proposed approach significantly enhances the robustness of project schedule planning under correlated risk conditions while appropriately increasing buffer sizes. Consequently, the adaptability and reliability of schedule control are improved. This study provides novel theoretical tools and practical insights for schedule risk management in complex engineering projects. Full article
(This article belongs to the Topic Sustainable Building Materials)
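
To make the role of risk correlation concrete, the expressions below contrast a generic independent-risk root-variance buffer with one that includes covariance terms. This is an illustrative formulation only; the paper's buffer additionally uses Bayesian-network-derived probabilities and risk impact coefficients that are not reproduced here.

```latex
% Hedged illustration: with per-activity duration spreads sigma_i and pairwise
% correlations rho_ij, correlated risks enlarge the root-variance buffer.
\[
B_{\text{indep}} = \sqrt{\sum_{i}\sigma_i^{2}},
\qquad
B_{\text{corr}} = \sqrt{\sum_{i}\sigma_i^{2} + 2\sum_{i<j}\rho_{ij}\,\sigma_i\sigma_j},
\]
so \(B_{\text{corr}} \ge B_{\text{indep}}\) whenever the correlations \(\rho_{ij}\) are non-negative.
```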

25 pages, 2237 KB  
Article
A Generalized Cost Model for Techno-Economic Analysis in Optical Networks
by André Souza, Marco Quagliotti, Mohammad M. Hosseini, Andrea Marotta, Carlo Centofanti, Farhad Arpanaei, Arantxa Villavicencio Paz, José Manuel Rivas-Moscoso, Gianluca Gambari, Laia Nadal, Marc Ruiz, Stephen Parker and João Pedro
Photonics 2026, 13(2), 125; https://doi.org/10.3390/photonics13020125 - 29 Jan 2026
Abstract
Techno-economic analysis (TEA) plays a vital role in assessing the feasibility and scalability of emerging technologies, especially in the context of innovation and development. Central to any effective TEA is a reliable and detailed model of capital and operational costs. This paper reports the development of such a model for optical networks in the framework of the SEASON project, aimed at supporting a broad spectrum of techno-economic evaluations. The model is constructed using publicly available data and expert insights from project participants. Its generalizable design allows it to be used both within the SEASON project and as a reference for other studies. By harmonizing assumptions and cost parameters, the model fosters consistency across different analyses. It includes cost and power consumption data for a wide range of commercially available optical network components (including transceivers for point-to-multipoint communications), introduces a statistical framework for estimating values for emerging technologies, and provides a cost model for multiband-doped fiber amplifiers. To demonstrate its practical relevance, the paper applies the model to two case studies: an evaluation of how the cost of various multiband node architectures scales with network traffic in meshed topologies and a comparison of different transport solutions to carry fronthaul flows in the radio access network. Full article
(This article belongs to the Section Optical Communication and Network)

24 pages, 3822 KB  
Article
Optimising Calculation Logic in Emergency Management: A Framework for Strategic Decision-Making
by Yuqi Hang and Kexi Wang
Systems 2026, 14(2), 139; https://doi.org/10.3390/systems14020139 - 29 Jan 2026
Abstract
Emergency management decision-making must be both timely and reliable; even slight delays can result in substantial human and economic losses. However, current systems and recent state-of-the-art work often use inflexible rule-based logic that cannot adapt to rapidly changing emergency conditions or dynamically optimise response allocation. To address this, our study presents the Calculation Logic Optimisation Framework (CLOF), a novel data-driven approach that enhances decision-making intelligently and strategically through learning-based prediction and multi-objective optimisation, utilising the 911 Emergency Calls data set, comprising more than half a million records from Montgomery County, Pennsylvania, USA. The CLOF examines patterns over space and time and uses optimised calculation logic to reduce response latency and increase decision reliability. The suggested framework outperforms the standard Decision Tree, Random Forest, Gradient Boosting, and XGBoost baselines, achieving 94.68% accuracy, a log-loss of 0.081, and a reliability score (R²) of 0.955. The mean response time error is reported to have been reduced by 19%, illustrating robustness to real-world uncertainty. The CLOF aims to deliver results that confirm the scalability, interpretability, and efficiency of modern EM frameworks, thereby improving safety, risk awareness, and operational quality in large-scale emergency networks. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
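
The three headline figures reported for the CLOF (accuracy, log-loss, and R²) can be computed with standard scikit-learn metrics, as in the hedged sketch below; the data arrays and whatever model produces them are placeholders, not the authors' pipeline.

```python
# Hedged sketch: the evaluation metrics named in the abstract, computed with
# scikit-learn. Inputs are placeholder arrays from an assumed classifier
# (class labels + probabilities) and an assumed response-time regressor.
from sklearn.metrics import accuracy_score, log_loss, r2_score

def report(y_cls_true, y_cls_pred, y_cls_proba, y_time_true, y_time_pred):
    return {
        "accuracy": accuracy_score(y_cls_true, y_cls_pred),   # e.g. 0.9468
        "log_loss": log_loss(y_cls_true, y_cls_proba),        # e.g. 0.081
        "r2":       r2_score(y_time_true, y_time_pred),       # e.g. 0.955
    }
```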

30 pages, 3570 KB  
Article
Two-Stage Decoupled Security-Constrained Redispatching for Hybrid AC/DC Grids
by Emanuele Ciapessoni, Diego Cirio and Andrea Pitto
Energies 2026, 19(3), 706; https://doi.org/10.3390/en19030706 - 29 Jan 2026
Abstract
Hybrid AC/DC grids with High Voltage Direct Current (HVDC) systems enhance grid resilience and enable efficient long-distance power transfer, asynchronous network interconnection, and seamless integration of offshore renewable energy sources. However, ensuring secure and reliable operation of these complex hybrid systems, particularly under contingency scenarios, presents significant challenges. This paper proposes a novel and computationally efficient two-stage linearized decoupled formulation for security-constrained redispatch in hybrid AC/DC grids. The methodology explicitly addresses N-1 security criterion, incorporating constraints from both the AC and DC subsystems, as well as the DC/AC converters. Simulation results on a test power system demonstrate the effectiveness of the proposed approach in mitigating the impact of both transmission line and generator outages, validating its applicability for enhancing grid resilience. Full article
(This article belongs to the Section F1: Electrical Power System)
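
For orientation, a generic linearized security-constrained redispatch problem can be stated as below. This is a textbook-style formulation under DC power flow assumptions, not the paper's two-stage decoupled AC/DC formulation, and it omits the DC-grid and converter constraints the authors add.

```latex
% Hedged, generic security-constrained redispatch: Delta P_g are generator
% adjustments, PTDF^{(k)} the DC flow sensitivities under contingency k,
% F^{(k)}_l the post-contingency base flows, and K_{N-1} the N-1 set.
\[
\begin{aligned}
\min_{\Delta P}\quad & \sum_{g} c_g\,|\Delta P_g| \\
\text{s.t.}\quad & \sum_{g} \Delta P_g = 0, \qquad
  \underline{\Delta P}_g \le \Delta P_g \le \overline{\Delta P}_g \quad \forall g, \\
& \Bigl| F^{(k)}_l + \sum_{g} \mathrm{PTDF}^{(k)}_{l,g}\,\Delta P_g \Bigr| \le F^{\max}_l
  \quad \forall l,\ \forall k \in \{0\}\cup\mathcal{K}_{N-1}.
\end{aligned}
\]
```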

30 pages, 5327 KB  
Article
Enhancing Short-Term Load Forecasting Using Hyperparameter-Optimized Deep Learning Approaches
by Nazmun Nahar Karima, Shameem Ahmad, Amirul Islam, A.S. Nazmul Huda, Lilik Jamilatul Awalin, Syahirah Abd Halim, Hazlie Mokhlis, Mohd Syukri Ali and Syahril Mubarok
Energies 2026, 19(3), 705; https://doi.org/10.3390/en19030705 - 29 Jan 2026
Abstract
The reliability and efficiency of power system operations, especially in smart grid scenarios, depend on accurate load demand forecasting. Electrical load forecasting is crucial for power system design, fault protection and diversification as it reduces operating costs while enhancing the system’s overall reliability, stability, and efficiency from an economic and technical perspective. Previously, load forecasting analysis has frequently been limited by inadequate feature engineering and insufficient model tuning. Prediction reliability was reduced by many previous methods’ inabilities to accurately evaluate short-term variations over time and the impact of important variables. These constraints encouraged us to develop a more reliable and thorough forecasting procedure. This research proposes an enhanced short-term load forecasting framework based on a hyperparameter-tuned long short-term memory (LSTM) recurrent neural network (RNN), alongside other neural network-based models such as artificial neural networks, k-nearest neighbors, and backpropagation neural networks. Hyperparameter optimization techniques (Keras Tuner, Grid SearchCV, Scikeras + Randomized SearchCV, etc.) were used to systematically tune training parameters, learning rates, and network architectures for each forecasting model to increase model accuracy. To provide a more reliable and accurate evaluation of forecasting performance, this research uses an hourly load dataset (2003–2014) enhanced with historical and environmental variables. Significant statistical metrics, such as a mean absolute error of 0.0048, root mean squared error of 0.0091, coefficient of determination (R²) of 0.9958, and mean absolute percentage error of 1.60%, demonstrate that the hyperparameter-optimized LSTM-RNN trained on hourly data performed better than both conventional and other deep learning models, with the highest efficiency of all tested models. In accordance with the results, accurate LSTM-RNN parameter modification significantly improves prediction accuracy. Full article
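
Hyperparameter tuning of an LSTM forecaster can be illustrated with a plain random-search loop, as sketched below. This deliberately avoids the specific tuning libraries named in the abstract (Keras Tuner, GridSearchCV, Scikeras) and uses assumed layer sizes, learning rates, and data shapes; it is a minimal stand-in, not the authors' setup.

```python
# Hedged sketch: random search over LSTM units and learning rate for a
# load-forecasting regressor. Search space, epochs, and data shapes are assumptions.
import random
import numpy as np
from tensorflow import keras

def build_lstm(units, lr, n_steps, n_features):
    model = keras.Sequential([
        keras.Input(shape=(n_steps, n_features)),
        keras.layers.LSTM(units),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr), loss="mae")
    return model

def random_search(X_tr, y_tr, X_val, y_val, n_trials=10):
    best = (None, np.inf)
    for _ in range(n_trials):
        units, lr = random.choice([32, 64, 128]), random.choice([1e-2, 1e-3, 1e-4])
        model = build_lstm(units, lr, X_tr.shape[1], X_tr.shape[2])
        model.fit(X_tr, y_tr, epochs=20, batch_size=32, verbose=0,
                  validation_data=(X_val, y_val))
        val_mae = model.evaluate(X_val, y_val, verbose=0)
        if val_mae < best[1]:
            best = ((units, lr), val_mae)       # keep the best validation config
    return best
```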

15 pages, 1832 KB  
Article
Learning Structural Relations for Robust Chest X-Ray Landmark Detection
by Su-Bin Choi, Gyu-Sung Ham and Kanghan Oh
Electronics 2026, 15(3), 589; https://doi.org/10.3390/electronics15030589 - 29 Jan 2026
Abstract
Accurate anatomical landmark localization is essential to automate chest X-ray analysis and improve diagnostic reliability. While global context recognition is essential in medical imaging, the inherently high-resolution nature of these images has long made this task particularly difficult. Although U-Net-based heatmap regression methods show strong performance, they still lack explicit modeling of the global spatial relationships among landmarks. To address this limitation, we propose an integrated structural learning framework that captures anatomical correlations across landmarks. The model generates probabilistic heatmaps with U-Net and derives continuous coordinates via soft-argmax. Subsequently, these coordinates, along with their corresponding local feature vectors, are fed into a Graph Neural Network (GNN) to refine the final positions by learning inter-landmark dependencies. Anatomical priors, such as bilateral symmetry and vertical hierarchy, are incorporated into the loss function to enhance spatial consistency. The experimental results show that our method consistently outperforms state-of-the-art models across all metrics, achieving significant improvements in MRE and SDR at 3, 6, and 9 pixel thresholds. This high precision demonstrates the framework’s strong potential to enhance the accuracy and robustness of clinical diagnostic systems. Full article
(This article belongs to the Section Artificial Intelligence)
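
The soft-argmax step mentioned above, which converts a per-landmark heatmap into continuous coordinates while staying differentiable, can be written in a few lines of PyTorch. Tensor shapes and the temperature parameter below are assumptions.

```python
# Hedged sketch of soft-argmax: expected (x, y) coordinates under a softmax
# distribution over each landmark heatmap, so gradients flow through the
# coordinate extraction.
import torch

def soft_argmax(heatmaps, temperature=1.0):
    """heatmaps: (B, K, H, W) -> coordinates (B, K, 2) in pixel units."""
    B, K, H, W = heatmaps.shape
    probs = torch.softmax(heatmaps.view(B, K, -1) / temperature, dim=-1).view(B, K, H, W)
    ys = torch.arange(H, dtype=probs.dtype, device=probs.device)
    xs = torch.arange(W, dtype=probs.dtype, device=probs.device)
    exp_y = (probs.sum(dim=3) * ys).sum(dim=2)   # expectation over rows
    exp_x = (probs.sum(dim=2) * xs).sum(dim=2)   # expectation over columns
    return torch.stack([exp_x, exp_y], dim=-1)
```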

30 pages, 2039 KB  
Article
Quantifying the Trajectory Tracking Accuracy in UGVs: The Role of Traffic Scheduling in Wi-Fi-Enabled Time-Sensitive Networking
by Elena Ferrari, Alberto Morato, Federico Tramarin, Claudio Zunino and Matteo Bertocco
Sensors 2026, 26(3), 881; https://doi.org/10.3390/s26030881 - 29 Jan 2026
Abstract
Accurate trajectory tracking is a key requirement in unmanned ground vehicles (UGVs) operating in autonomous driving, mobile robotics, and industrial automation. In wireless Time-Sensitive Networking (WTSN) scenarios, trajectory accuracy strongly depends on deterministic packet delivery, precise traffic scheduling, and time synchronization among distributed devices. This paper quantifies the impact of IEEE 802.1Qbv time-aware traffic scheduling on trajectory tracking accuracy in UGVs operating over Wi-Fi-enabled TSN networks. The analysis focuses on how misconfigured real-time (RT) and best-effort (BE) transmission windows, as well as clock misalignment between devices, affect packet reception and control performance. A mathematical framework is introduced to predict the number of correctly received RT packets based on cycle time, packet periodicity, scheduling window lengths, and synchronization offsets, enabling the a priori dimensioning of RT and BE windows. The proposed model is validated through extensive simulations conducted in an ROS–Gazebo environment, utilising Linux-based traffic shaping and scheduling tools. Results show that improper traffic scheduling and synchronization offsets can significantly degrade trajectory tracking accuracy, while correctly dimensioned scheduling windows ensure reliable packet delivery and stable control, even under imperfect synchronization. The proposed approach provides practical design guidelines for configuring wireless TSN networks supporting real-time trajectory tracking in mobile robotic systems. Full article
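
The paper's framework predicts how many real-time packets are correctly received given the cycle time, packet period, window lengths, and synchronization offset. The sketch below is a deliberately simplified stand-in for that idea (it ignores transmission duration, jitter, and guard bands), with all parameters assumed, to show how a misaligned offset can push periodic RT packets outside the RT gate.

```python
# Hedged illustration (not the paper's model): count periodic RT packets whose
# arrival, shifted by a clock/synchronization offset, lands inside the RT gate
# of an 802.1Qbv-style cycle. All parameters are assumed, in milliseconds.
def rt_packets_received(cycle_ms, rt_window_ms, period_ms, sync_offset_ms, horizon_ms):
    received = 0
    n_packets = int(horizon_ms // period_ms)
    for i in range(n_packets):
        arrival = (i * period_ms + sync_offset_ms) % cycle_ms   # position inside the cycle
        if 0.0 <= arrival < rt_window_ms:                       # inside the RT window
            received += 1
    return received, n_packets

# Example: 10 ms cycle, 4 ms RT window, 10 ms packet period, 1 ms offset, 1 s horizon:
# rt_packets_received(10, 4, 10, 1, 1000) -> (100, 100); a 5 ms offset -> (0, 100).
```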

15 pages, 3669 KB  
Article
Development of Programmable Digital Twin via IEC-61850 Communication for Smart Grid
by Hyllyan Lopez, Ehsan Pashajavid, Sumedha Rajakaruna, Yanqing Liu and Yanyan Yin
Energies 2026, 19(3), 703; https://doi.org/10.3390/en19030703 - 29 Jan 2026
Abstract
This paper proposes the development of an IEC 61850-compliant platform that is readily programmable and deployable for future digital twin applications. Given the compatibility between IEC-61850 and digital twin concepts, a focused case study was conducted involving the robust development of a Raspberry Pi platform with protection relay functionality using the open-source libIEC61850 library. Leveraging IEC-61850’s object-oriented data modelling, the relay can be represented by fully consistent virtual and physical models, providing an essential foundation for accurate digital twin instantiation. The relay implementation supports high-speed Sampled Value (SV) subscription, real-time RMS calculations, IEC Standard Inverse overcurrent trip behaviour according to IEC-60255, and Generic Object-Oriented Substation Event (GOOSE) publishing. Further integration includes setting group functionality for dynamic parameter switching, report control blocks for MMS client–server monitoring, and GOOSE subscription to simulate backup relay protection behaviour with peer trip messages. A staged development methodology was used to iteratively develop features from simple to complex. At the end of each stage, the functionality of the added features was verified before proceeding to the next stage. The integration of the Raspberry Pi into Curtin’s IEC 61850 digital substation was undertaken to verify interoperability between IEDs, a key outcome relevant to large-scale digital twin systems. The experimental results confirm GOOSE transmission times below 4 ms, tight adherence to trip-time curves, and performance under higher network traffic. The measured RMS and trip-time errors fall well within industry and IEC limits, confirming the reliability of the relay logic. The takeaways from this case study establish a high-performing, standardised foundation for a digital twin system that requires fast, bidirectional communication between a virtual and a physical system. Full article
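
Two of the relay building blocks named above are easy to illustrate independently of the libIEC61850 stack: a sliding-window RMS over sampled values and the IEC 60255 "standard inverse" overcurrent characteristic t = TMS · 0.14 / ((I/Is)^0.02 − 1). The sketch below shows only that logic; the time-multiplier setting and the example current are assumptions, and it is not the paper's implementation.

```python
# Hedged sketch: RMS over a window of sampled values and the IEC 60255
# standard inverse trip-time curve. TMS and the example operating point
# are assumptions for illustration only.
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def standard_inverse_trip_time(i_rms, i_setting, tms=0.1):
    m = i_rms / i_setting
    if m <= 1.0:
        return float("inf")                     # below pickup: no trip
    return tms * 0.14 / (m ** 0.02 - 1.0)

# Example: a fault at 5x pickup with TMS = 0.1 gives roughly 0.43 s.
```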

28 pages, 15662 KB  
Article
Cable Fire Risk Prediction via Dynamic Q-Learning-Driven Ensemble of Deep Temporal Networks
by Haoxuan Li, Hao Gao, Xuehong Gao and Guozhong Huang
Fire 2026, 9(2), 61; https://doi.org/10.3390/fire9020061 - 29 Jan 2026
Abstract
Cables, which are critical for power and signal transmission in complex buildings and underground infrastructure, are exposed to elevated fire risks during operation, making reliable risk prediction essential for building fire safety. This study proposes a multivariate cable fire risk prediction model that integrates three deep temporal networks (RNN, LSTM, and GRU) through Q-learning-based ensemble learning (QBEL). The model uses current, voltage, power, temperature, humidity, oxygen concentration, and system risk values acquired from an intelligent fire alarm system as inputs. Using a real-world dataset comprising 3060 seven-dimensional time steps collected from a tobacco logistics center, QBEL achieves a test-set MSE of 1.73, RMSE of 1.31, MAE of 0.84, and MAPE of 2.66%, improving the MAE and MAPE of the best single recurrent network by approximately 10–12%. Comparative experiments against conventional ensemble approaches based on XGBoost (Python package, version 3.0.0) boosting and stacking, as well as recent time-series forecasting models including DLinear, PatchTST, MoLE, and Fredformer, demonstrate that QBEL attains the lowest MAE and MAPE among all methods, while maintaining an MSE close to that of the best linear baseline and a moderate computational cost of approximately 5.5 × 10⁻³ GFLOPs and 45 MB of memory per inference. These results indicate that QBEL provides a favorable balance between prediction accuracy and computational efficiency, supporting its potential use in edge-oriented monitoring pipelines for timely cable fire risk warnings in building environments. Full article
(This article belongs to the Special Issue Building Fire Prediction and Suppression)
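
As a rough illustration of Q-learning-driven ensembling (not the paper's QBEL algorithm), the sketch below learns, step by step, which of three already-trained forecasters to trust, using the previous step's best model as the state and the negative absolute error as the reward. The state/reward design and all hyperparameters are assumptions.

```python
# Hedged sketch: epsilon-greedy Q-learning that selects one of three forecasters
# (e.g. RNN/LSTM/GRU outputs) at each time step. State = index of the model
# that was best at the previous step; reward = -|prediction error|.
import random
import numpy as np

def q_learning_select(preds, y_true, alpha=0.1, gamma=0.9, eps=0.1):
    """preds: array (T, 3) of model forecasts; y_true: array (T,)."""
    T, n_models = preds.shape
    Q = np.zeros((n_models, n_models))            # Q[state, action]
    state, chosen = 0, np.empty(T)
    for t in range(T):
        a = random.randrange(n_models) if random.random() < eps else int(Q[state].argmax())
        chosen[t] = preds[t, a]
        reward = -abs(preds[t, a] - y_true[t])
        next_state = int(np.abs(preds[t] - y_true[t]).argmin())  # best model this step
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state
    return chosen, Q
```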

23 pages, 3738 KB  
Article
Enhancing Concrete Strength Prediction from Non-Destructive Testing Under Variable Curing Temperatures Using Artificial Neural Networks
by Ghazal Gholami Hossein Abadi, Kehinde Adewale, Muhammad Usama Salim and Carlos Moro
Infrastructures 2026, 11(2), 46; https://doi.org/10.3390/infrastructures11020046 - 29 Jan 2026
Abstract
Non-destructive testing (NDT) methods are widely used to evaluate the performance of concrete, but their accuracy can be influenced by external factors such as curing temperature. Temperature not only modifies hydration kinetics and strength development but may also change the correlation between NDT measurements and compressive strength. However, no prior research has systematically examined how different curing temperatures influence the reliability of various NDT techniques. This study evaluates three curing temperatures and their effect on the correlation between NDTs and compressive strength at various ages (1, 3, 7, 28, and 90 days). Both simple regression analysis and artificial neural networks (ANNs) were employed to predict strength from NDT measurements. Results show that NDT sensitivity to curing temperature is most pronounced at early ages, and that linear regression models cannot adequately capture the complexity of these relationships. In contrast, ANNs demonstrated superior predictive capability, though initial training with limited data led to overfitting and instability. By applying Gaussian Noise Augmentation (GNA), model accuracy and generalization improved substantially, achieving R² values above 0.95 across training, validation, and test sets. These findings highlight the potential of non-linear models, supported by data augmentation, to improve prediction reliability, lower experimental costs, and more accurately capture the role of curing temperature in NDT–strength correlations for concrete. Full article
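
Gaussian Noise Augmentation of a small NDT dataset can be as simple as replicating each sample with small input perturbations, as in the sketch below; the noise level and number of copies are assumptions, not the values used in the study.

```python
# Hedged sketch of Gaussian Noise Augmentation: replicate each (NDT features,
# strength) sample several times with small Gaussian perturbations on the
# inputs to enlarge a limited training set. Noise scale and copy count assumed.
import numpy as np

def gaussian_noise_augment(X, y, copies=5, sigma_frac=0.02, rng=None):
    rng = np.random.default_rng(rng)
    X, y = np.asarray(X, float), np.asarray(y, float)
    X_aug, y_aug = [X], [y]
    for _ in range(copies):
        noise = rng.normal(0.0, sigma_frac * X.std(axis=0), size=X.shape)
        X_aug.append(X + noise)                  # perturb inputs only
        y_aug.append(y)                          # targets unchanged
    return np.concatenate(X_aug), np.concatenate(y_aug)
```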

17 pages, 4613 KB  
Article
Sustainable Utilization of Modified Manganese Slag in Cemented Tailings Backfill: Mechanical and Microstructural Properties
by Yu Yin, Shijiao Yang, Yan He, Rong Yang and Qian Kang
Sustainability 2026, 18(3), 1336; https://doi.org/10.3390/su18031336 - 29 Jan 2026
Abstract
Cemented tailings backfill (CTB) is widely used in mining operations due to its operational simplicity, reliable performance, and environmental benefits. However, the poor consolidation of fine tailings with ordinary Portland cement (OPC) remains a critical challenge, leading to excessive backfill costs. This study addresses the utilization of modified manganese slag (MMS) as a supplementary cementitious material (SCM) for fine tailings from an iron mine in Anhui, China. Sodium silicate (Na₂SiO₃) modification coupled with melt-water quenching was implemented to activate the pozzolanic reactivity of manganese slag (MS) through glassy structure alteration. The MMS underwent comprehensive characterization via physicochemical analysis, X-ray diffraction (XRD), and Fourier-transform infrared spectroscopy (FTIR) to elucidate its physicochemical attributes, mineralogical composition, and glassy phase architecture. The unconfined compressive strength (UCS) of the CTB samples prepared with MMS, OPC, tailings, and water (T-MMS) was systematically evaluated at curing ages of 7, 28, and 60 days. The results demonstrate that MMS predominantly consists of SiO₂, Al₂O₃, CaO, and MnO, exhibiting a high specific surface area and extensive vitrification. Na₂SiO₃ modification induced depolymerization of the highly polymerized Q⁴ network into less-polymerized Q² chain structures, thereby enhancing the pozzolanic reactivity of MMS. This structural depolymerization facilitated the formation of stable gel products with low calcium–silicon ratios, conferring upon the T-MMS10 sample a 60-day strength of 3.85 MPa, representing a 94.4% enhancement over the T-OPC. Scanning electron microscopy–energy dispersive spectroscopy (SEM-EDS) analysis revealed that Na₂SiO₃ modification precipitated extensive calcium silicate hydrate (C-S-H) gel formation and pore refinement, forming a dense networked framework that superseded the porous microstructure of the control sample. Additionally, the elevated zeta potential for T-MMS10 engendered electrostatic repulsion, while the aluminosilicate gel imparted lubrication, collectively improving the flowability of the composite slurry, which exhibited a 26.40 cm slump and satisfies the requirements for pipeline transportation in backfill operations. Full article
