Search Results (6,984)

Search Parameters:
Keywords = large-scale networks

15 pages, 1365 KB  
Article
A Multi-Level Ensemble Model-Based Method for Power Quality Disturbance Identification
by Hao Bai, Ruotian Yao, Chang Liu, Tong Liu, Shiqi Jiang, Yuchen Huang and Yiyong Lei
Energies 2026, 19(3), 730; https://doi.org/10.3390/en19030730 - 29 Jan 2026
Abstract
With the large-scale integration of renewable energy and power electronic devices, power quality disturbances exhibit strong nonlinearity and complex dynamic behavior. Traditional methods are limited by insufficient feature extraction and cumbersome classification, often failing to meet practical accuracy and robustness requirements. To address this issue, this paper proposes a multi-level ensemble method for power quality disturbance identification. A time–frequency dual-branch feature extraction module was designed, combining residual networks and bidirectional temporal convolutional networks to capture both local discriminative features and long-range temporal dependencies in the time and frequency domains. A cross-attention mechanism was further employed to fuse the time–frequency features, enabling adaptive focus on the most critical information for disturbance classification. The fused features were fed into fully connected layers and a Softmax classifier for multi-class identification. Experimental results demonstrated superior accuracy, robustness, and generalization capability compared with existing methods, validating the effectiveness of the proposed model.

23 pages, 7886 KB  
Article
Building Virtual Drainage Systems Based on Open Road Data and Assessing Urban Flooding Risks
by Haowen Li, Chuanjie Yan, Chun Zhou and Li Zhou
Water 2026, 18(3), 341; https://doi.org/10.3390/w18030341 - 29 Jan 2026
Abstract
With accelerating urbanisation, extreme rainfall events have become increasingly frequent, leading to rising urban flooding risks that threaten city operation and infrastructure safety. The rapid expansion of impervious surfaces reduces infiltration capacity and accelerates runoff responses, making cities more vulnerable to short-duration, high-intensity storms. Although the SWMM is widely used for urban stormwater simulation, its application is often constrained by the lack of detailed drainage network data, such as pipe diameters, slopes, and node connectivity. To address this limitation, this study focuses on the main built-up area within the Second Ring Expressway of Chengdu, Sichuan Province, in southwestern China. As a regional core city, Chengdu frequently experiences intense short-duration rainfall during the rainy season, and the coexistence of rapid urbanisation with ageing drainage infrastructure further elevates flood risk. Accordingly, a technical framework of “open road data substitution–automated modelling–SWMM-based assessment” is proposed. Leveraging the spatial correspondence between road layouts and drainage pathways, open road data are used to construct a virtual drainage system. Combined with DEM and land-use data, Python-based automation enables sub-catchment delineation, parameter extraction, and network topology generation, achieving efficient large-scale modelling. Design storms of multiple return periods are generated based on Chengdu’s revised rainfall intensity formula, while socioeconomic indicators such as population density and infrastructure exposure are normalised and weighted using the entropy method to develop a comprehensive flood-risk assessment. Results indicate that the virtual drainage network effectively compensates for missing pipe data at the macro scale, and high-risk zones are mainly concentrated in densely populated and highly urbanised older districts. Overall, the proposed method successfully captures urban flood-risk patterns under data-scarce conditions and provides a practical approach for large-city flood-risk management.
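The entropy weight method mentioned in this abstract is a standard objective-weighting technique: indicators whose values are more dispersed across samples carry more information and receive larger weights. The sketch below is a rough illustration only, not the authors' code, and the district/indicator values are made up for the example:

```python
import math

def entropy_weights(rows):
    """Entropy weight method: rows are samples, columns are indicators.
    Returns one weight per indicator; the weights sum to 1, and indicators
    with more dispersion across samples receive larger weights."""
    n = len(rows)
    weights = []
    divergences = []
    for col in zip(*rows):
        lo, hi = min(col), max(col)
        # Min-max normalise each indicator, guarding against a constant column
        norm = [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col]
        total = sum(norm)
        p = [v / total if total > 0 else 0.0 for v in norm]
        # Shannon entropy of the column, with 0*log(0) taken as 0
        e = -sum(q * math.log(q) for q in p if q > 0) / math.log(n)
        divergences.append(1.0 - e)  # degree of divergence
    s = sum(divergences)
    return [d / s for d in divergences]

# Hypothetical toy data: 4 districts x 2 indicators
# (e.g. population density, infrastructure exposure)
w = entropy_weights([[100, 0.2], [200, 0.4], [300, 0.9], [400, 0.3]])
```

The weighted risk score for each district would then be the dot product of its normalised indicators with `w`; the paper's actual indicator set and normalisation details are not given in the abstract.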

18 pages, 52908 KB  
Article
M2UNet: A Segmentation-Guided GAN with Attention-Enhanced U2-Net for Face Unmasking
by Mohamed Mahmoud, Mostafa Farouk Senussi, Mahmoud Abdalla, Mahmoud SalahEldin Kasem and Hyun-Soo Kang
Mathematics 2026, 14(3), 477; https://doi.org/10.3390/math14030477 - 29 Jan 2026
Abstract
Face unmasking is a critical task in image restoration, as masks conceal essential facial features like the mouth, nose, and chin. Current inpainting methods often struggle with structural fidelity when handling large-area occlusions, leading to blurred or inconsistent results. To address this gap, we propose the Masked-to-Unmasked Network (M2UNet), a segmentation-guided generative framework. M2UNet leverages a segmentation-derived mask prior to accurately localize occluded regions and employs a multi-scale, attention-enhanced generator to restore fine-grained facial textures. The framework focuses on producing visually and semantically plausible reconstructions that preserve the structural logic of the face. Evaluated on a synthetic masked-face dataset derived from CelebA, M2UNet achieves state-of-the-art performance with a PSNR of 31.3375 dB and an SSIM of 0.9576. These results significantly outperform recent inpainting methods while maintaining high computational efficiency.

39 pages, 1649 KB  
Review
The Network and Information Systems 2 Directive: Toward Scalable Cyber Risk Management in the Remote Patient Monitoring Domain: A Systematic Review
by Brian Mulhern, Chitra Balakrishna and Jan Collie
IoT 2026, 7(1), 14; https://doi.org/10.3390/iot7010014 - 29 Jan 2026
Abstract
Healthcare 5.0 and the Internet of Medical Things (IoMT) are emerging as a scalable model for the delivery of customised healthcare and chronic disease management, through Remote Patient Monitoring (RPM) in patient smart home environments. Large-scale RPM initiatives are being rolled out by healthcare providers (HCPs); however, the constrained nature of IoMT devices and proximity to poorly administered smart home technologies create a cyber risk for highly personalised patient data. The recent Network and Information Systems (NIS 2) directive requires HCPs to improve their cyber risk management approaches, mandating heavy penalties for non-compliance. Current research into cyber risk management in smart home-based RPM does not address scalability. This research examines scalability through the lens of the Non-adoption, Abandonment, Scale-up, Spread and Sustainability (NASSS) framework and develops a novel Scalability Index (SI), informed by a PRISMA-guided systematic literature review. Our search strategy identified 57 studies across major databases including ACM, IEEE, MDPI, Elsevier, and Springer, authored between January 2016 and March 2025 (final search 21 March 2025), which focussed on cyber security risk management in the RPM context. Studies focussing solely on healthcare institutional settings were excluded. To mitigate bias, a sample of the papers (30/57) was assessed by two other raters; the resulting Cohen’s Kappa inter-rater agreement statistic (0.8) indicated strong agreement on study selection. The results, presented in graphical and tabular format, provide evidence that most cyber risk approaches do not consider scalability from the HCP perspective. Applying the SI to the 57 studies in our review resulted in a low to medium scalability potential for most cyber risk management proposals, indicating that they would not support the requirements of NIS 2 in the RPM context. A limitation of our work is that it was not tested in a live large-scale setting. However, future research could validate the proposed SI, providing guidance for researchers and practitioners in enhancing cyber risk management of large-scale RPM initiatives.
(This article belongs to the Topic Applications of IoT in Multidisciplinary Areas)

20 pages, 4637 KB  
Article
A Lightweight YOLOv13-G Framework for High-Precision Building Instance Segmentation in Complex UAV Scenes
by Yao Qu, Libin Tian, Jijun Miao, Sergei Leonovich, Yanchun Liu, Caiwei Liu and Panfeng Ba
Buildings 2026, 16(3), 559; https://doi.org/10.3390/buildings16030559 - 29 Jan 2026
Abstract
Accurate building instance segmentation from UAV imagery remains a challenging task due to significant scale variations, complex backgrounds, and frequent occlusions. To tackle these issues, this paper proposes an improved lightweight YOLOv13-G-based framework for building extraction in UAV imagery. The backbone network is enhanced by incorporating cross-stage lightweight connections and dilated convolutions, which improve multi-scale feature representation and expand the receptive field with minimal computational cost. Furthermore, a coordinate attention mechanism and an adaptive feature fusion module are introduced to enhance spatial awareness and dynamically balance multi-level features. Extensive experiments on a large-scale dataset, which includes both public benchmarks and real UAV images, demonstrate that the proposed method achieves superior segmentation accuracy with a mean intersection over union of 93.12% and real-time inference speed of 38.46 frames per second while maintaining a compact model size of 5.66 MB. Ablation studies and cross-dataset experiments further validate the effectiveness and generalization capability of the framework, highlighting its strong potential for practical UAV-based urban applications.
(This article belongs to the Topic Application of Smart Technologies in Buildings)

20 pages, 1953 KB  
Article
A Monocular Depth Estimation Method for Autonomous Driving Vehicles Based on Gaussian Neural Radiance Fields
by Ziqin Nie, Zhouxing Zhao, Jieying Pan, Yilong Ren, Haiyang Yu and Liang Xu
Sensors 2026, 26(3), 896; https://doi.org/10.3390/s26030896 - 29 Jan 2026
Abstract
Monocular depth estimation is one of the key tasks in autonomous driving, deriving depth information of the scene from a single image; it is a fundamental component for vehicle decision-making and perception. However, current approaches face challenges such as visual artifacts, scale ambiguity, and occlusion handling. These limitations lead to suboptimal performance in complex environments, reducing model efficiency and generalization and hindering their broader use in autonomous driving and other applications. To address these challenges, this paper introduces a Neural Radiance Field (NeRF)-based monocular depth estimation method for autonomous driving. It introduces a Gaussian probability-based ray sampling strategy to effectively solve the problem of massive sampling points in large complex scenes and reduce computational costs. To improve generalization, a lightweight spherical network incorporating a fine-grained adaptive channel attention mechanism is designed to capture detailed pixel-level features. These features are subsequently mapped to 3D spatial sampling locations, resulting in diverse and expressive point representations for improving the generalizability of the NeRF model. Our approach exhibits remarkable performance on the KITTI benchmark, surpassing traditional methods in depth estimation tasks. This work contributes significant technical advancements for practical monocular depth estimation in autonomous driving applications.
17 pages, 1881 KB  
Article
LATS: Robust Trajectory Similarity Computation via Hybrid LSTM-Attention and Adaptive Contrastive Learning
by Hui Ding, Jiteng Wang and Pei Cao
Appl. Sci. 2026, 16(3), 1383; https://doi.org/10.3390/app16031383 - 29 Jan 2026
Abstract
Trajectory similarity calculation, a cornerstone of trajectory data mining, is pivotal for diverse applications such as clustering, classification, and retrieval. While existing representation learning-based methods offer notable advantages in efficiency and accuracy, preserving the fidelity of similarity computation when processing large-scale trajectory data remains a significant challenge. To address this challenge, this paper introduces a novel hybrid network architecture integrating Long Short-Term Memory (LSTM) and attention mechanisms to learn discriminative latent representations of trajectories. Moreover, we propose an Adaptive Contrastive Trajectory Learning (ACTL) module that dynamically refines the learning process through batch-adaptive temperature scaling and strategic hard negative mining, substantially improving boundary discrimination and robustness to data perturbations. Experimental validation on two real-world datasets, Porto and Chengdu, demonstrates the superiority of our model over state-of-the-art (SOTA) baselines in both similarity trajectory search and k-Nearest Neighbor (k-NN) query evaluations. The model exhibits exceptional performance, particularly under conditions of high noise and with large trajectory volumes, underscoring its practical applicability in demanding scenarios.

22 pages, 14476 KB  
Article
HGLN: Hybrid Gated Large-Kernel Network for Lightweight Image Super-Resolution
by Man Zhao, Jinkai Niu and Xiang Li
Appl. Sci. 2026, 16(3), 1382; https://doi.org/10.3390/app16031382 - 29 Jan 2026
Abstract
Recent large-kernel-based single image super-resolution (SISR) methods often struggle to balance global structural consistency with local texture preservation while maintaining computational efficiency. To address this, we propose the Hybrid Gated Large-kernel Network (HGLN). First, the Hybrid Multi-Scale Aggregation (HMSA) decouples features into structural and detailed streams via dual-path processing, utilizing a modified Large Kernel Attention to capture long-range interactions. Second, the Local–Global Synergistic Attention (LGSA) recalibrates features by integrating local spatial context with dual global statistics (mean and standard deviation). Finally, the Structure-Gated Feed-forward Network (SGFN) leverages high-frequency residuals to modulate the gating mechanism for precise edge restoration. Extensive experiments demonstrate that HGLN outperforms state-of-the-art methods. Notably, on the challenging Urban100 dataset (×4), HGLN achieves significant PSNR gains with extremely low complexity (only 11G Multi-Adds), proving its suitability for resource-constrained applications.
17 pages, 1874 KB  
Article
A Large-Kernel and Scale-Aware 2D CNN with Boundary Refinement for Multimodal Ischemic Stroke Lesion Segmentation
by Omar Ibrahim Alirr
Eng 2026, 7(2), 59; https://doi.org/10.3390/eng7020059 - 29 Jan 2026
Abstract
Accurate segmentation of ischemic stroke lesions from multimodal magnetic resonance imaging (MRI) is fundamental for quantitative assessment, treatment planning, and outcome prediction; yet, it remains challenging due to highly heterogeneous lesion morphology, low lesion–background contrast, and substantial variability across scanners and protocols. This work introduces Tri-UNetX-2D, a large-kernel and scale-aware 2D convolutional network with explicit boundary refinement for automated ischemic stroke lesion segmentation from DWI, ADC, and FLAIR MRI. The architecture is built on a compact U-shaped encoder–decoder backbone and integrates three key components: first, a Large-Kernel Inception (LKI) module that employs factorized depthwise separable convolutions and dilation to emulate very large receptive fields, enabling efficient long-range context modeling; second, a Scale-Aware Fusion (SAF) unit that learns adaptive weights to fuse encoder and decoder features, dynamically balancing coarse semantic context and fine structural detail; and third, a Boundary Refinement Head (BRH) that provides explicit contour supervision to sharpen lesion borders and reduce boundary error. Squeeze-and-Excitation (SE) attention is embedded within LKI and decoder stages to recalibrate channel responses and emphasize modality-relevant cues, such as DWI-dominant acute core and FLAIR-dominant subacute changes. On the ISLES 2022 multi-center benchmark, Tri-UNetX-2D improves Dice Similarity Coefficient from 0.78 to 0.86, reduces the 95th-percentile Hausdorff distance from 12.4 mm to 8.3 mm, and increases the lesion-wise F1-score from 0.71 to 0.81 compared with a plain 2D U-Net trained under identical conditions. These results demonstrate that the proposed framework achieves competitive performance with substantially lower complexity than typical 3D or ensemble-based models, highlighting its potential for scalable, clinically deployable stroke lesion segmentation.

35 pages, 2226 KB  
Article
Life-Cycle Co-Optimization of User-Side Energy Storage Systems with Multi-Service Stacking and Degradation-Aware Dispatch
by Lixiang Lin, Yuanliang Zhang, Chenxi Zhang, Xin Li, Zixuan Guo, Haotian Cai and Xiangang Peng
Processes 2026, 14(3), 477; https://doi.org/10.3390/pr14030477 - 29 Jan 2026
Abstract
The integration of a user-side energy storage system (ESS) faces notable economic challenges, including high upfront investment, uncertainty in quantifying battery degradation, and fragmented ancillary service revenue streams, which hinder large-scale deployment. Conventional configuration studies often handle capacity planning and operational scheduling at different stages, complicating consistent life-cycle valuation under degradation and multi-service participation. This paper proposes a life-cycle multi-service co-optimization model (LC-MSCOM) to jointly determine ESS power–energy ratings and operating strategies. A unified revenue framework quantifies stacked revenues from time-of-use arbitrage, demand charge management, demand response, and renewable energy accommodation, while depth of discharge (DoD)-related lifetime loss is converted into an equivalent degradation cost and embedded in the optimization. The model is validated on a modified IEEE benchmark system using real generation and load data. Results show that LC-MSCOM increases net present value (NPV) by 26.8% and reduces discounted payback period (DPP) by 12.7% relative to conventional benchmarks, and sensitivity analyses confirm robustness under discount-rate, inflation-rate, and tariff uncertainties. By coordinating ESS dispatch with distribution network operating limits (nodal power balance, voltage bounds, and branch ampacity constraints), the framework provides practical, investment-oriented decision support for user-side ESS deployment.
28 pages, 2329 KB  
Article
Calculation of Buffer Zone Size for Critical Chain of Hydraulic Engineering Considering the Correlation of Construction Period Risk
by Shengjun Wang, Junqiang Ge, Jikun Zhang, Shengwei Su, Zihang Hu, Jianuo Gu and Xiangtian Nie
Buildings 2026, 16(3), 557; https://doi.org/10.3390/buildings16030557 - 29 Jan 2026
Abstract
Due to their large scale, long duration, complex geological conditions, and multiple stakeholders, water conservancy engineering projects are subject to diverse, interrelated, and uncertain risk factors that affect the construction timeline. Traditional critical chain buffer calculation methods, such as the cut-and-paste method and the root variance method, typically assume the independence of risks, which limits their effectiveness in addressing schedule delays caused by correlated risk events. To overcome this limitation, this paper proposes a novel critical chain buffer calculation approach that explicitly incorporates risk correlation analysis. A fuzzy DEMATEL-ISM-BN model is employed to systematically identify the interrelationships and influence pathways among schedule risk factors. Bayesian network inference is then used to quantify the overall occurrence probability while accounting for risk correlations. By integrating critical chain management theory, risk impact coefficients are introduced to improve the traditional root variance method, resulting in a buffer calculation model that captures interdependencies among schedule risks. The effectiveness of the proposed model is validated through a case study of the X Pumped Storage Power Station. The results indicate that, compared with conventional methods, the proposed approach significantly enhances the robustness of project schedule planning under correlated risk conditions while appropriately increasing buffer sizes. Consequently, the adaptability and reliability of schedule control are improved. This study provides novel theoretical tools and practical insights for schedule risk management in complex engineering projects.
(This article belongs to the Topic Sustainable Building Materials)

24 pages, 3822 KB  
Article
Optimising Calculation Logic in Emergency Management: A Framework for Strategic Decision-Making
by Yuqi Hang and Kexi Wang
Systems 2026, 14(2), 139; https://doi.org/10.3390/systems14020139 - 29 Jan 2026
Abstract
Emergency management decision-making must be both timely and reliable; even slight delays can result in substantial human and economic losses. However, current systems and recent state-of-the-art work often use inflexible rule-based logic that cannot adapt to rapidly changing emergency conditions or dynamically optimise response allocation. As a result, our study presents the Calculation Logic Optimisation Framework (CLOF), a novel data-driven approach that enhances decision-making intelligently and strategically through learning-based predictive and multi-objective optimisation, utilising the 911 Emergency Calls data set, comprising more than half a million records from Montgomery County, Pennsylvania, USA. The CLOF examines patterns over space and time and uses optimised calculation logic to reduce response latency and increase decision reliability. The suggested framework outperforms the standard Decision Tree, Random Forest, Gradient Boosting, and XGBoost baselines, achieving 94.68% accuracy, a log-loss of 0.081, and a reliability score (R²) of 0.955. The mean response time error is reported to have been reduced by 19%, illustrating robustness to real-world uncertainty. The CLOF aims to deliver results that confirm the scalability, interpretability, and efficiency of modern EM frameworks, thereby improving safety, risk awareness, and operational quality in large-scale emergency networks.
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)

17 pages, 2836 KB  
Article
Co-Design of Battery-Aware UAV Mobility and Extended PRoPHET Routing for Reliable DTN-Based FANETs in Disaster Areas
by Masaki Miyata and Tomofumi Matsuzawa
Electronics 2026, 15(3), 591; https://doi.org/10.3390/electronics15030591 - 29 Jan 2026
Abstract
In recent years, flying ad hoc networks (FANETs) have attracted attention as aerial communication platforms for large-scale disasters. In wide, city-scale disaster zones, survivors’ devices often form multiple isolated clusters, while battery-powered unmanned aerial vehicles (UAVs) must periodically return to a ground station (GS). Under such conditions, conventional delay/disruption-tolerant networking (DTN) routing (e.g., PRoPHET) often traps bundles in clusters or UAVs, degrading the bundle delivery ratio (BDR) to the GS. This study proposes a DTN-based FANET architecture that integrates (i) a mobility model assigning UAVs to information–exploration UAVs that randomly patrol the disaster area and GS–relay UAVs that follow spoke-like routes to periodically visit the GS, and (ii) an extended PRoPHET-based routing protocol that exploits exogenous information on GS visits to bias delivery predictabilities toward GS–relay UAVs and UAVs returning for recharging. Simulations with The ONE in a 10 km × 10 km scenario with multiple clusters show that the proposed method suppresses BDR degradation by up to 41% relative to PRoPHET, raising the BDR from 0.27 to 0.39 in the five-cluster case and increasing the proportion of bundles delivered with lower delay. These results indicate that the proposed method is well-suited for relaying critical disaster-related information.

21 pages, 6750 KB  
Article
Machine Learning-Based Energy Consumption and Carbon Footprint Forecasting in Urban Rail Transit Systems
by Sertaç Savaş and Kamber Külahcı
Appl. Sci. 2026, 16(3), 1369; https://doi.org/10.3390/app16031369 - 29 Jan 2026
Abstract
In the fight against global climate change, the transportation sector is of critical importance because it is one of the major causes of total greenhouse gas emissions worldwide. Although urban rail transit systems offer a lower carbon footprint compared to road transportation, accurately forecasting the energy consumption of these systems is vital for sustainable urban planning, energy supply management, and the development of carbon balancing strategies. In this study, forecasting models are designed using five different machine learning (ML) algorithms, and their performances in predicting the energy consumption and carbon footprint of urban rail transit systems are comprehensively compared. For five distribution-center substations, 10 years of monthly energy consumption data and the total carbon footprint data of these substations are used. Support Vector Regression (SVR), Extreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Nonlinear Autoregressive Neural Network (NAR-NN) models are developed to forecast these data. Model hyperparameters are optimized using a 20-iteration Random Search algorithm, and the stochastic models are run 10 times with the optimized parameters. Results reveal that the SVR model consistently exhibits the highest forecasting performance across all datasets. For carbon footprint forecasting, the SVR model yields the best results, with an R² of 0.942 and a MAPE of 3.51%. The ensemble method XGBoost also demonstrates the second-best performance (R² = 0.648). Accordingly, while deterministic traditional ML models exhibit superior performance, the neural network-based stochastic models, such as LSTM, ANFIS, and NAR-NN, show insufficient generalization capability under limited data conditions. These findings indicate that, in small- and medium-scale time-series forecasting problems, traditional machine learning methods are more effective than neural network-based methods that require large datasets.
(This article belongs to the Section Computing and Artificial Intelligence)

15 pages, 3669 KB  
Article
Development of Programmable Digital Twin via IEC-61850 Communication for Smart Grid
by Hyllyan Lopez, Ehsan Pashajavid, Sumedha Rajakaruna, Yanqing Liu and Yanyan Yin
Energies 2026, 19(3), 703; https://doi.org/10.3390/en19030703 - 29 Jan 2026
Abstract
This paper proposes the development of an IEC 61850-compliant platform that is readily programmable and deployable for future digital twin applications. Given the compatibility between IEC 61850 and digital twin concepts, a focused case study was conducted involving the robust development of a Raspberry Pi platform with protection relay functionality using the open-source libIEC61850 library. Leveraging IEC 61850’s object-oriented data modelling, the relay can be represented by fully consistent virtual and physical models, providing an essential foundation for accurate digital twin instantiation. The relay implementation supports high-speed Sampled Value (SV) subscription, real-time RMS calculations, IEC Standard Inverse overcurrent trip behaviour according to IEC 60255, and Generic Object-Oriented Substation Event (GOOSE) publishing. Further integration includes setting group functionality for dynamic parameter switching, report control blocks for MMS client–server monitoring, and GOOSE subscription to simulate backup relay protection behaviour with peer trip messages. A staged development methodology was used to iteratively develop features from simple to complex. At the end of each stage, the functionality of the added features was verified before proceeding to the next stage. The integration of the Raspberry Pi into Curtin’s IEC 61850 digital substation was undertaken to verify interoperability between IEDs, a key outcome relevant to large-scale digital twin systems. The experimental results confirm GOOSE transmission times below 4 ms, tight adherence to trip-time curves, and reliable performance under higher network traffic. Such measured RMS and trip-time errors fall well within industry and IEC limits, confirming the reliability of the relay logic. The takeaways from this case study establish a high-performing, standardised foundation for a digital twin system that requires fast, bidirectional communication between a virtual and a physical system.
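The IEC Standard Inverse characteristic this abstract references is defined in IEC 60255 as t = TMS × 0.14 / ((I/Is)^0.02 − 1), where I is the measured current, Is the pickup setting, and TMS the time multiplier. As a minimal sketch of that published curve (not the authors' relay implementation; the settings below are arbitrary):

```python
def si_trip_time(current, pickup, tms=0.1):
    """IEC 60255 Standard Inverse overcurrent trip time in seconds.

    current: measured RMS current (same units as pickup)
    pickup:  relay pickup (threshold) setting
    tms:     time multiplier setting
    Defined only for current above pickup; the relay does not trip otherwise.
    """
    if current <= pickup:
        raise ValueError("no trip: current is at or below pickup")
    return tms * 0.14 / ((current / pickup) ** 0.02 - 1.0)

# The curve is inverse-time: a heavier fault trips faster.
t2 = si_trip_time(2.0, 1.0)    # trip time at 2x pickup
t10 = si_trip_time(10.0, 1.0)  # trip time at 10x pickup
```

At TMS = 0.1 this gives roughly a one-second trip at twice pickup and a few hundred milliseconds at ten times pickup, which is the shape of the trip-time curves the abstract says the platform adheres to.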
