Search Results (966)

Search Parameters:
Keywords = large-scale network control

18 pages, 2162 KB  
Article
Blockchain-Enabled Decentralized End Hopping for Proactive Network Defense
by Shenghan Luo, Fangxiao Li, Leyi Shi and Dawei Zhao
Telecom 2026, 7(2), 28; https://doi.org/10.3390/telecom7020028 - 4 Mar 2026
Abstract
As network attack methods continue to evolve, flooding attacks remain a major threat that causes network paralysis and service disruption. Statically configured systems are particularly vulnerable, as attackers can exploit reconnaissance information to launch large-scale attacks, while conventional defense mechanisms often fail under high-intensity traffic. To address this problem, this paper introduces Moving Target Defense (MTD) within a decentralized framework and proposes a blockchain-based decentralized End Hopping system. The system employs the Practical Byzantine Fault Tolerance (PBFT) consensus protocol for dynamic controller election and incorporates a disaster recovery mechanism, which eliminates single points of failure while ensuring reliable controller transitions and rapid service restoration. Experimental results demonstrate that the proposed system achieves satisfactory performance in terms of availability, effectiveness, and security, providing a practical approach to constructing robust proactive defense networks. Full article
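The controller election described in this abstract rests on PBFT's quorum rule: a cluster of 3f + 1 replicas tolerates f Byzantine faults, and a decision needs 2f + 1 matching votes. A minimal sketch of that rule (function names and the vote format are illustrative, not the paper's implementation):

```python
def pbft_quorum(n_replicas):
    """Return the quorum size (2f + 1) for a PBFT cluster of n = 3f + 1 replicas."""
    f = (n_replicas - 1) // 3          # maximum Byzantine faults tolerated
    return 2 * f + 1

def elect_controller(votes, n_replicas):
    """Accept a candidate controller only if it gathered a quorum of votes.

    votes: mapping of candidate id -> number of matching votes received.
    Returns the elected candidate, or None if no quorum was reached
    (in which case the system would start another election round).
    """
    quorum = pbft_quorum(n_replicas)
    for candidate, count in votes.items():
        if count >= quorum:
            return candidate
    return None
```

For example, 4 replicas tolerate f = 1 fault, so a candidate needs 3 votes; with 7 replicas (f = 2) the quorum rises to 5.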

19 pages, 1473 KB  
Article
AI-Assisted Analysis of Future-Oriented Discourses: Institutional Narratives and Public Reactions on Social Media
by Galina V. Gradoselskaya, Inga V. Zheltikova, Maria Pilgun, Alexey N. Raskhodchikov and Andrey N. Yazykayev
Journal. Media 2026, 7(1), 49; https://doi.org/10.3390/journalmedia7010049 - 2 Mar 2026
Abstract
This study explores how digital media ecosystems shape collective visions of the future under conditions of rapid technological innovation and the growing influence of artificial intelligence (AI). Drawing on a large corpus of social media content comprising 50,036,592 tokens, the research examines institutional narratives and user-generated responses through a hybrid methodological framework. This framework combines information-wave detection, network analysis, semantic and associative modeling (TextAnalyst 2.32), and interpretation supported by a large language model (GPT-5). The methodological contribution of the study lies in the integration of network-based and semantic algorithms with AI-driven analytical tools for the examination of large-scale textual data. The findings indicate that media discourses about the future operate as key mechanisms through which societies interpret the environmental, social, and economic consequences of technological change. Institutional actors promote multiple future-oriented models that often conflict with one another at both discursive and practical levels. In contrast, user-generated content reflects widespread fear, skepticism, and distrust. Prominent themes include nostalgia for the past, anxiety about socio-economic and environmental consequences, and concerns related to expanding forms of digital control. The analysis also reveals divergent perspectives on urban development. Positive narratives emphasize ecological balance, a comfortable urban environment, thoughtfully designed mixed-use development, and solutions to transportation challenges. Negative narratives, by contrast, focus on over-densification, environmental degradation, and the erosion of privacy in technologically saturated urban spaces. Full article

18 pages, 7743 KB  
Article
Deep Learning-Based Interferogram Quality Assessment and Application to Tectonic Deformation Study
by Ziwei Liu, Wenyu Gong, Zhenjie Wang, Jun Hua and Xu Liu
Remote Sens. 2026, 18(5), 733; https://doi.org/10.3390/rs18050733 - 28 Feb 2026
Abstract
Time-series interferometric synthetic aperture radar (TS-InSAR) has become a widely used technique for monitoring surface deformation with high spatial and temporal resolution. The recent rise in cloud-based InSAR platforms has significantly accelerated the production of interferograms. However, the accuracy of deformation inversion remains limited by fundamental issues affecting interferogram quality, including temporal and spatial decorrelation and phase unwrapping errors. These degrading effects are most pronounced in vegetated, desert, and snow-covered terrains, which are common in active tectonic zones and thereby exert a major impact on the quality of the unwrapped phase. Traditional quality control methods are inefficient or inadequate for large-scale analysis, and discarding low-quality data reduces the inversion accuracy. To address these limitations, we developed a deep learning-based approach to automatically assess interferogram quality and integrate it into the time-series InSAR inversion workflow. We utilized Sentinel-1 interferograms generated by the COMET-LiCSAR system as the primary data source. Based on this dataset, we developed a multi-stage selection strategy for interferogram quality control, integrating loop phase closure analysis, statistical indicators (including coherence and phase standard deviation), and manual verification. As a result, we constructed a high-quality labeled dataset comprising approximately 20,000 samples. An improved ConvNeXt-InSAR model was designed and trained to automatically quantify the quality of each pixel in individual interferograms. The model generates pixel-wise quality maps, which are then incorporated as weight constraints in the time-series InSAR network inversion. The proposed method was applied to the interseismic deformation reconstruction in the central-southern Tibetan Plateau region. 
This study highlights the potential of deep learning-based interferogram quality assessment in facilitating large-scale, automated time-series InSAR processing. Full article
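The pixel-wise quality maps described above enter the time-series inversion as weights. A minimal sketch of how such weights could act in a small-baseline network inversion via weighted least squares (the toy design matrix and function name are illustrative, not the paper's workflow):

```python
import numpy as np

def weighted_network_inversion(A, phases, quality):
    """Weighted least-squares inversion of an interferogram network for one
    pixel, down-weighting interferograms with low predicted quality.

    A       : (n_ifg, n_incr) design matrix mapping epoch increments to phases
    phases  : (n_ifg,) unwrapped interferometric phases
    quality : (n_ifg,) pixel quality scores in (0, 1] used as weights
    """
    sw = np.sqrt(np.asarray(quality, dtype=float))     # sqrt-weights for LSQ
    x, *_ = np.linalg.lstsq(A * sw[:, None], phases * sw, rcond=None)
    return x

# Toy network: 3 epochs, increments d1, d2; interferograms (0-1), (1-2), (0-2)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.1 - 0.1]])        # third row is d1 + d2
obs = np.array([1.0, 2.0, 9.0])         # third interferogram is corrupted...
w = np.array([1.0, 1.0, 1e-6])          # ...but its quality weight is near zero
increments = weighted_network_inversion(A, obs, w)   # close to [1.0, 2.0]
```

Down-weighting the corrupted interferogram recovers the consistent increments instead of forcing the inversion to average in the error.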

19 pages, 3137 KB  
Article
Kinetic Oxidation Analysis in AISI 1045 Steel Using Infrared Thermography and Convolutional Neural Networks
by Oscar David Prieto-Sánchez, Antony Morales-Cervantes, Jorge Sergio Téllez-Martínez, Gerardo Marx Chávez-Campos, Edgar Guevara and Héctor Javier Vergara-Hernández
Materials 2026, 19(5), 920; https://doi.org/10.3390/ma19050920 - 27 Feb 2026
Abstract
This study presents a pioneering approach, integrating infrared thermography and deep learning to analyse surface oxide layers on AISI 1045 steel, addressing the critical need for advanced monitoring in steelmaking processes. Using thermography for observation and semantic segmentation for accurate identification, 50 tests between 200 and 700 °C were analysed in a Joule-controlled heating system to study the formation and thickening of oxide layers on steel surfaces. A convolutional neural network (CNN), specifically SegNet, was trained for semantic segmentation, facilitating detailed analysis. The model achieved an overall accuracy of 96.40% in identifying the presence of oxide. By quantifying pixelation changes, relationships in oxide evolution kinetics were obtained, and by quantifying the activation energy in isothermal cases, the magnitude is in the range reported by other works. The approach also highlighted the potential for non-destructive monitoring and control on a large scale without compromising personnel safety. This potential could improve industrial process control, predict surface quality or provide data relevant to sub-processes. Full article
(This article belongs to the Special Issue Modeling and Optimization of Material Properties and Characteristics)

25 pages, 25354 KB  
Article
OpenPlant: A Large-Scale Benchmark Dataset for Agricultural Plant Classification Using CNNs, ViTs, and VLMs
by Kaiqi Liu, Wei Sun, Guanping Wang, Quan Feng and Hui Li
Plants 2026, 15(5), 727; https://doi.org/10.3390/plants15050727 - 27 Feb 2026
Abstract
Accurate plant classification based on deep learning is important for precision agriculture, supporting tasks such as weed control, crop monitoring, and smart farming systems. The accuracy of deep learning models depends on the datasets used. Although many datasets have been proposed in recent decades, they share common limitations: limited scale, low environmental diversity, and difficulty of data integration. To address these problems, this paper introduces OpenPlant, a large-scale open dataset containing 635,176 RGB images across 1167 plant species. OpenPlant covers diverse plant growth stages, plant structures, and environmental conditions, and its annotations were carefully verified to ensure quality. To provide a comprehensive evaluation, we benchmarked 10 widely used convolutional neural networks (CNNs), 6 vision transformers (ViTs), and 12 vision–language models (VLMs). The OpenPlant dataset thus offers a comprehensive benchmark for deep learning in agricultural research, and the results provide insights into future directions. Full article

35 pages, 1715 KB  
Review
Optimization Strategies for Large-Scale PV Integration in Smart Distribution Networks: A Review
by Stefania Conti, Antonino Laudani, Santi A. Rizzo, Nunzio Salerno, Gian Giuseppe Soma, Giuseppe M. Tina and Cristina Ventura
Energies 2026, 19(5), 1191; https://doi.org/10.3390/en19051191 - 27 Feb 2026
Abstract
The large-scale integration of photovoltaic systems into modern distribution networks requires advanced forecasting and optimisation tools to address variability, uncertainty, and increasingly complex operational conditions. This review examines 160 peer-reviewed studies published primarily between 2018 and 2026 and provides a unified, system-level perspective that links photovoltaic power forecasting, photovoltaic optimisation, and energy storage system management within the broader context of Smart Grid operation. The analysis covers forecasting techniques across all temporal horizons, compares deterministic, stochastic, metaheuristic, and hybrid optimisation approaches, and reviews siting, sizing, and operational strategies for both PV units and Energy Storage Systems, including their effects on hosting capacity, reactive power control, and network flexibility. A key contribution of this work is the consolidation of planning- and operation-oriented methods into a coherent framework that clarifies how forecasting accuracy influences Distributed Energy Resources optimisation and system-level performance. The review also highlights emerging trends, such as reinforcement learning for real-time Energy Storage Systems control, surrogate-assisted multi-objective optimisation, data-driven hosting capacity evaluation, and explainable AI for grid transparency, as essential enablers for flexible, resilient, and sustainable distribution networks. Open challenges include uncertainty modelling, real-world validation of optimisation tools, interoperability with flexibility markets, and the development of scalable and adaptive optimisation frameworks for next-generation smart grids. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)

27 pages, 5793 KB  
Article
Understanding Tight Naturally Fractured Carbonate Reservoir Architecture for Subsurface Gas Storage
by Sadam Hussain, Bruno Ramon Batista Fernandes, Mojdeh Delshad and Kamy Sepehrnoori
Appl. Sci. 2026, 16(5), 2278; https://doi.org/10.3390/app16052278 - 26 Feb 2026
Abstract
This study develops a conceptual framework for characterizing reservoir architecture in multi-component, discrete systems using pressure transient analysis (PTA), aimed at calibrating inflow geometry prior to full-field dynamic simulation for subsurface gas storage applications such as CO2 and hydrogen. A secondary objective is to identify variations in permeability over time by analyzing flow capacity trends and evaluating the dynamic influence of faults and fractures. The analysis is based on a gas-condensate field comprising seven wells and four zones (A, B, C, D), using integrated dynamic datasets including extended well tests (EWTs), mud loss, production logs, and production data. Detailed interpretation of PX-1’s EWT indicated delayed re-pressurization and persistent under-pressure, suggesting a compartmentalized or transient system with limited gas-in-place connectivity. Four reservoir architecture concepts were developed: (1) lithology-dominated inflow, (2) structurally controlled inflow, (3) discrete, weakly connected compartments, and (4) transient-dominated systems with tight matrix GIIP. These concepts informed four reservoir models: matrix-only (M), areal heterogeneity (A), sparse bodies (B), and sparse networks (S). Application of these models across other wells revealed consistent localized KH (permeability–thickness product) behavior, with all models fitting short-duration data comparably. However, only sparse drainage models (B/S) adequately matched PX-1’s EWT response. PTA results confirm that well tests constrain KH locally but provide limited insight into large-scale reservoir architecture. EWTs may reach ~1 km, while shorter tests are confined to ~200–400 m, typically within one to two simulation grid blocks. This study demonstrates how integrating PTA with multi-scale data improves characterization of naturally fractured, tight carbonate reservoirs and supports reservoir simulation and history matching for hydrogen storage evaluation. 
Based on reservoir simulations, this study concluded that naturally fractured carbonate gas reservoirs can provide significant storage and injection capacities for underground hydrogen storage. This study exemplifies how to characterize the naturally fractured tight carbonate reservoirs by integrating multi-scale and multi-dimensional data such as PTA. Furthermore, this study assists in gridding for full-field reservoir models, for history matching and quantifying the potential of hydrogen storage in these complex reservoirs. The proposed workflow provides an uncertainty-bounded reservoir characterization framework and should not be interpreted as a complete field-design methodology for hydrogen storage. The modeling does not explicitly couple geomechanical fracture growth, hydrogen diffusion, long-term geochemical reactions, or caprock integrity degradation. Therefore, the presented storage scenarios represent technically feasible cases under defined assumptions. Comprehensive site-specific geomechanical and containment assessments are required prior to field-scale implementation. Full article
(This article belongs to the Section Energy Science and Technology)

16 pages, 3605 KB  
Article
High-Resolution Microbial Fingerprinting for Forensic Individual Identification: A Proof-of-Concept Study Integrating 2bRAD-M and Hierarchical Attention Network
by Haoran Li, Zhiyao Yu, Zhijing Wu, Yuxin Lin, Tao Liu, Yuli Liu, Juan An, Jing Zhao, Yan Liu, Xueman Ma and Haiyan Wang
Genes 2026, 17(3), 263; https://doi.org/10.3390/genes17030263 - 26 Feb 2026
Abstract
Background: Human skin and saliva microbial communities have emerged as promising forensic biomarkers due to their individual specificity. However, existing studies are limited by small sample sizes and methodological inconsistencies. This proof-of-concept study aims to develop a novel framework integrating 2bRAD-M sequencing with a hierarchical attention network (HAN) for forensic individual identification, addressing these limitations through large-scale public data integration and controlled validation. Methods: We utilized 2263 skin and saliva samples from public databases (Qiita, HMP, NCBI SRA) for model development. These public data included longitudinal samples collected over periods up to 180 days. A contemporary validation cohort of 6 volunteers, providing 26 forensic-relevant samples (including simulated touch evidence), was sequenced using 2bRAD-M for validation. Data integration involved batch effect correction (ComBat), normalization (CSS), and cross-database harmonization using GTDB for taxonomic assignment. The HAN model was optimized with triplet margin loss for metric learning. Results: The HAN model achieved 98.7% Rank-1 accuracy for pristine samples, outperforming random forest (70.2%) and CNN (75.8%). Microbial signatures showed high temporal stability (ICC = 0.86 over 180 days) and robustness in mixed samples (87.4% accuracy). Discriminatory biomarkers included Cutibacterium (skin) and Prevotella (saliva). Particulate matter exposure significantly influenced microbial composition (PERMANOVA R2 = 0.32, p < 0.001). Conclusions: This study establishes a proof-of-concept pipeline for microbial forensics, demonstrating high accuracy under controlled conditions. Future work must address antibiotic exposure, sample diversity, and cross-laboratory validation before forensic implementation. Full article
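The triplet margin loss used here for metric learning is standard: it pulls embeddings of the same individual together and pushes different individuals at least a margin apart. A minimal sketch on raw embedding vectors (the function name and margin value are illustrative, not the paper's HAN implementation):

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: zero when the negative is already at least
    `margin` farther from the anchor than the positive is."""
    d_pos = np.linalg.norm(np.asarray(anchor, dtype=float) - np.asarray(positive, dtype=float))
    d_neg = np.linalg.norm(np.asarray(anchor, dtype=float) - np.asarray(negative, dtype=float))
    return float(max(d_pos - d_neg + margin, 0.0))
```

With a well-separated negative the loss vanishes; when the negative sits inside the margin, the loss grows linearly and drives the embedding network to separate the individuals.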
(This article belongs to the Special Issue Advances in Forensic Genetics and DNA)

16 pages, 1627 KB  
Article
Thermally Reversible and Recyclable Polyethylene Networks via Furan–Maleimide Diels–Alder Dynamic Covalent Chemistry
by Zengheng Hao, Wei Zhang, Yugui Liu, Jianhui Xu, Haidong Liu, Shutong Tang and Junan Shen
Molecules 2026, 31(5), 771; https://doi.org/10.3390/molecules31050771 - 25 Feb 2026
Abstract
The formation of recyclable polyethylene materials is significantly limited by traditional crosslinking methods, which involve solvent-heavy processes and permanent chemical bonds that cannot be undone. Herein, we report an environmentally friendly and scalable approach to construct a thermo-reversible polyethylene network (PE-g-DA) via solvent-free, one-step melt processing based on furan–maleimide Diels–Alder (D–A) dynamic covalent chemistry. Furan-functionalized polyethylene was dynamically crosslinked with bismaleimide during melt mixing, fully compatible with conventional polyolefin processing techniques. FTIR spectroscopy, temperature-dependent solubility, and differential scanning calorimetry collectively confirm the reversible formation and dissociation of D–A adducts, enabling thermal switching of the network structure. Equilibrium swelling experiments based on the Flory–Rehner model indicate that the crosslink density can be precisely controlled by varying the bismaleimide content. As a result, PE-g-DA exhibits significantly enhanced tensile strength while maintaining high ductility at moderate crosslink densities. Notably, the dynamic network allows efficient thermal reprocessing, with recycled samples retaining approximately 93% and 80% of their original tensile strength after the first and second reprocessing cycles, respectively. Moreover, intrinsic thermal self-healing behavior is directly visualized by scanning electron microscopy at 120 °C. This work demonstrates that combining dynamic Diels–Alder chemistry with solvent-free melt processing offers a practical and sustainable route to recyclable, reprocessable, and self-healable polyethylene materials with clear potential for large-scale industrial production. Full article
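The Flory–Rehner analysis mentioned above relates equilibrium swelling to crosslink density. In its standard textbook form (quoted here for orientation, not reproduced from the paper):

```latex
% Flory–Rehner relation between the equilibrium polymer volume fraction \nu_2
% and the effective crosslink density n (moles of network chains per unit volume):
-\left[\ln(1-\nu_2) + \nu_2 + \chi\,\nu_2^{2}\right]
  = V_1\, n \left(\nu_2^{1/3} - \frac{\nu_2}{2}\right)
```

where ν₂ is the polymer volume fraction at equilibrium swelling, χ the polymer–solvent interaction parameter, and V₁ the molar volume of the solvent; a lower ν₂ (more swelling) thus implies a lower crosslink density n.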
(This article belongs to the Special Issue Photoelectrochemical Properties of Nanostructured Thin Films)

26 pages, 13164 KB  
Article
Tri-Stage Selective Reasoning for Rumor Source Detection via Graph Neural Networks and Large Language Models
by Tao Xue, Wenzhuo Liu, Long Xi and Wen Lv
Electronics 2026, 15(5), 914; https://doi.org/10.3390/electronics15050914 - 24 Feb 2026
Abstract
Rumor source detection aims to identify the initial origin of misinformation diffusion in social networks. Accurate source localization is essential for effective rumor intervention and early mitigation in large-scale social media platforms. Existing rumor source detection methods often struggle to model complex propagation structures. However, applying mathematical models uniformly to all samples introduces unnecessary computational overhead and limits scalability. By leveraging GNN-based candidate ranking, our approach effectively narrows the source search space and provides a reliable structural foundation for subsequent reasoning. Prior studies typically perform end-to-end inference without considering prediction confidence, leading to inefficient processing of low-uncertainty samples. To address this issue, we introduce an entropy-based uncertainty filtering mechanism that selectively identifies high-uncertainty cases requiring further reasoning, significantly reducing redundant computation. Meanwhile, existing methods lack semantic interpretability when handling ambiguous propagation patterns, motivating the incorporation of large language model (LLM) reasoning. We employ LLM-based reasoning only on filtered samples to enhance semantic understanding while controlling inference cost. Based on these designs, we propose TSR-RSD, a tri-stage selective reasoning framework that integrates GNN-based structural modeling, uncertainty-driven sample selection, and LLM-based semantic reasoning. Experimental results on GossipCop, PolitiFact, and PHEME demonstrate that TSR-RSD consistently outperforms GNN-based baselines in terms of Hit@1, Hit@3, Hit@5, and Mean Reciprocal Rank (MRR), reflecting improved accuracy and stability in rumor source ranking. Furthermore, the entropy-based uncertainty filtering mechanism significantly reduces the LLM invocation ratio by approximately 40–60%, while maintaining comparable or improved ranking performance. 
As a result, TSR-RSD achieves an overall inference time reduction of 35–50%, effectively balancing localization accuracy, computational efficiency, and interpretability. Full article
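The entropy-based filtering stage described above admits a very compact formulation: compute the Shannon entropy of the GNN's candidate-source distribution and route only high-entropy (uncertain) samples to the LLM. A minimal sketch (function names and the threshold value are illustrative, not the TSR-RSD implementation):

```python
import math

def prediction_entropy(probs):
    """Shannon entropy (in nats) of a candidate-source probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def route_to_llm(probs, threshold=0.5):
    """Send only high-uncertainty predictions to the expensive LLM stage;
    confident GNN rankings are accepted as-is, saving inference cost."""
    return prediction_entropy(probs) > threshold
```

A sharply peaked distribution such as (0.97, 0.01, 0.01, 0.01) falls well below the threshold and skips the LLM, while a near-uniform distribution over four candidates (entropy ln 4 ≈ 1.39) is routed for semantic reasoning.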
(This article belongs to the Section Artificial Intelligence)

20 pages, 24468 KB  
Article
Reduced-Switch Active Power Filter with Modified One-Cycle Control for Non-Ideal Voltage Conditions
by Honglan Pei, Wenna Zhang, Wenqiang Zhang, Lidong Wang and Lei Wang
Processes 2026, 14(5), 733; https://doi.org/10.3390/pr14050733 - 24 Feb 2026
Abstract
With the evolution of new power systems, harmonic sources in distribution networks have become increasingly dispersed, thus requiring lower-cost harmonic mitigation devices suitable for large-scale deployment. With its simple control architecture, the one-cycle controlled active power filter (APF) is better adapted to meet the aforementioned requirements. That said, under non-ideal voltage conditions like voltage distortion or unbalance, the compensating target current of the APF that relies on traditional one-cycle control (OCC) will undergo distortion as well, resulting in a substantial reduction in the compensation effect. This paper introduces a modified OCC method based on a positive-sequence filter, which allows for the control of a reduced-switch three-phase APF. This control method eliminates the negative sequence and harmonic components in the target current of the APF, and makes the compensated current maintain a good sinusoidal waveform. A one-cycle control equation applied to the reduced-switch APF was derived. The modified one-cycle control method allows the active filter to retain a favorable compensation effect when operating under non-ideal voltage conditions. Meanwhile, it preserves the inherent advantages of traditional one-cycle control, including the elimination of a phase-locked loop (PLL), a fixed switching frequency, and a straightforward control structure. Finally, an APF simulation model and a dSPACE-based APF experimental circuit were built to verify the proposed control method. In simulation, with the adoption of the modified OCC, the THD of the current was reduced from 8.25% before improvement to 3.79% after improvement. In experiments, according to the spectrum analysis function of the oscilloscope, the third-order current harmonic caused by voltage distortion was decreased from 500 mA to 100 mA, representing a reduction of 80%. 
Both simulation and experimental results verify that the proposed modified one-cycle control method can effectively solve the problem that control performance is susceptible to voltage quality. Full article
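The THD figures quoted above follow the usual definition: the RMS of the harmonic components relative to the fundamental. A minimal sketch (the example current values below are hypothetical, chosen only to mirror the 500 mA → 100 mA third-harmonic reduction reported in the abstract):

```python
import math

def thd(harmonic_rms, fundamental_rms):
    """Total harmonic distortion: ratio of the combined RMS of the harmonic
    components (orders 2..N) to the RMS of the fundamental."""
    return math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

# Hypothetical 10 A fundamental: a lone 500 mA third harmonic gives 5% THD,
# and cutting it to 100 mA (as in the reported experiment) gives 1%.
before = thd([0.5], 10.0)
after = thd([0.1], 10.0)
```

The simulation result in the abstract (8.25% → 3.79%) is the same quantity computed over the full harmonic spectrum rather than a single order.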
(This article belongs to the Special Issue Design, Control, Modeling and Simulation of Energy Converters)

23 pages, 3524 KB  
Article
A Diffusion Weighted Ensemble Framework for Robust Short-Horizon Global SST Forecasting from Multivariate GODAS Data
by Gwangun Yu, GilHan Choi, Moonseung Choi, Sun-hong Min and Yonggang Kim
Mathematics 2026, 14(4), 740; https://doi.org/10.3390/math14040740 - 22 Feb 2026
Abstract
Accurate time series forecasting of sea surface temperature (SST) is essential for understanding the ocean climate system and large-scale ocean circulation, yet it remains challenging due to regime-dependent variability and correlated errors across heterogeneous prediction models. This study addresses these challenges by formulating SST ensemble time series forecasting aggregation as a stochastic, sample-adaptive weighting problem. We propose a diffusion-conditioned ensemble framework in which heterogeneous base forecasters generate out-of-sample SST predictions that are combined through a noise-conditioned weighting network. The proposed framework produces convex, sample-specific mixture weights without requiring iterative reverse-time sampling. The approach is evaluated on short-horizon global SST forecasting using the Global Ocean Data Assimilation System (GODAS) reanalysis as a representative multivariate dataset. Under a controlled experimental protocol with fixed input windows and one-step-ahead prediction, the proposed method is compared against individual deep learning forecasters and conventional global pooling strategies, including uniform averaging and validation-optimized convex weighting. The results show that adaptive, diffusion-weighted aggregation yields consistent improvements in error metrics over the best single-model baseline and static pooling rules, with more pronounced gains in several mid- to high-latitude regimes. These findings indicate that stochastic, condition-dependent weighting provides an effective and computationally practical framework for enhancing the robustness of multivariate time series forecasting, with direct applicability to global SST prediction from large-scale geophysical reanalysis data. Full article
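The "convex, sample-specific mixture weights" described above can be obtained by passing conditioning-network logits through a softmax, which guarantees nonnegative weights summing to one. A minimal sketch (function names are illustrative; the paper's noise-conditioned weighting network is not reproduced here):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: nonnegative weights that sum to 1."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def combine_forecasts(base_forecasts, logits):
    """Convex mixture of base forecasters' predictions for one sample:
    the combined forecast always lies within the range of its inputs."""
    w = softmax(logits)
    return float(w @ np.asarray(base_forecasts, dtype=float))
```

Equal logits reduce the scheme to uniform averaging, while strongly skewed logits approach selecting the single best forecaster, so static pooling rules are special cases of this adaptive weighting.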

20 pages, 1420 KB  
Article
High-Level Synthesis (HLS)-Enabled Field-Programmable Gate Array (FPGA) Algorithms for Latency-Critical Plasma Diagnostics and Neural Trigger Prototyping in Next-Generation Energy Projects
by Radosław Cieszewski, Krzysztof Poźniak, Ryszard Romaniuk and Maciej Linczuk
Energies 2026, 19(4), 1091; https://doi.org/10.3390/en19041091 - 21 Feb 2026
Abstract
Large-scale advanced energy systems, including fusion devices, high-power plasma sources, and accelerator-driven energy platforms, increasingly depend on real-time, hardware-level data processing for diagnostics, control, and protection. In such installations, ultra-low latency, deterministic throughput, and multi-decade operational lifetimes are not optional design goals but strict system-level requirements. While similar timing constraints exist in high-energy physics infrastructures, energy applications place a stronger emphasis on long-term stability, maintainability, and reproducibility of digital signal processing pipelines. This work investigates whether high-level synthesis (HLS) provides a practical and sustainable design methodology for implementing both classical pattern-based and compact neural network (NN) trigger logic on Field-Programmable Gate Arrays (FPGAs) under realistic energy-system constraints. Using representative commercial toolchains (Intel HLS and hls4ml) as reference workflows, we demonstrate the capabilities of fixed-point, fully pipelined streaming architectures, while also identifying critical shortcomings of pragma-driven HLS approaches in terms of architecture transparency, long-term portability, and systematic multi-objective design-space exploration, all of which are crucial for long-lived energy projects and plasma diagnostic systems. These limitations directly motivate the development of a custom, vendor-agnostic, extensible HLS framework (PyHLS), specifically oriented toward deterministic latency, reproducibility, and physics-grade verification demands of advanced energy infrastructures. Gas Electron Multipliers (GEMs) are modern gaseous detectors increasingly employed in plasma diagnostics, radiation monitoring, and high-power energy experiments, where high rate capability, fine spatial resolution, and radiation tolerance are required. Their massively parallel signal structure and continuous data streams make GEMs a representative and demanding benchmark for FPGA-based real-time trigger and preprocessing systems in energy-related environments. The primary objective of this study is to establish a pragmatic technological baseline, demonstrating that contemporary HLS workflows can reliably support both template-based and neural inference-based trigger architectures within strict timing, resource, and power constraints typical for advanced energy installations. Furthermore, we outline a scalable development path toward multi-channel and two-dimensional (pixelated) GEM readout architectures, directly applicable to fusion diagnostics, plasma accelerators, beam–plasma interaction studies, and radiation-hard energy monitoring platforms. Although the proposed methodology remains fully transferable to large-scale physics trigger systems, its principal relevance is directed toward real-time diagnostics and protection layers in next-generation energy systems. Full article
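As a rough illustration of the fixed-point arithmetic underlying the fully pipelined streaming architectures the abstract describes, the sketch below emulates ap_fixed-style quantization with saturation in plain Python. The 16-bit/8-fraction-bit format and the multiply-accumulate example are illustrative assumptions, not taken from the paper's FPGA designs.

```python
def to_fixed(x, total_bits=16, frac_bits=8):
    """Quantize a float to a signed fixed-point <total_bits, frac_bits> value
    with saturation, mimicking the ap_fixed-style types used in HLS pipelines."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))          # most negative representable code
    hi = (1 << (total_bits - 1)) - 1       # most positive representable code
    q = int(round(x * scale))
    q = max(lo, min(hi, q))                # saturate instead of wrapping
    return q / scale

def fixed_mac(weights, inputs, total_bits=16, frac_bits=8):
    """A neuron's multiply-accumulate performed entirely on quantized values,
    as a compact NN trigger layer would in hardware."""
    acc = 0.0
    for w, xin in zip(weights, inputs):
        acc += to_fixed(w, total_bits, frac_bits) * to_fixed(xin, total_bits, frac_bits)
    return to_fixed(acc, total_bits, frac_bits)
```

Evaluating such a model in software with the same bit widths as the synthesized design is one common way to verify that quantization error stays within the physics requirements before committing to hardware.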
35 pages, 4968 KB  
Article
Research on Protection of a Three-Level Converter-Based Flexible DC Traction Substation System
by Peng Chen, Qiang Fu, Chunjie Wang and Yaning Zhu
Sensors 2026, 26(4), 1350; https://doi.org/10.3390/s26041350 - 20 Feb 2026
Abstract
With the expansion of urban rail transit, increased train operation density, and the large-scale grid integration of renewable energy such as offshore photovoltaic power, traction power supply systems face stricter requirements for operational safety, power supply reliability, and energy utilization efficiency. Offshore photovoltaic power, integrated into the traction power supply network via flexible DC transmission technology, promotes renewable energy consumption, but its random and volatile output overlaps with time-varying traction loads, increasing the complexity of DC-side fault characteristics and protection control. Flexible DC technology is a core direction for next-generation traction substations, and three-level converters (key energy conversion units) have advantages over traditional two-level topologies. However, their P-O-N three-terminal DC-side topology introduces new faults (e.g., PO/ON bipolar short circuits, O-point-to-ground faults), making traditional protection strategies ineffective. In addition, wide system current fluctuations (0.5–3 kA) and offshore photovoltaic power fluctuations easily cause fixed-threshold protection maloperation, and the coupling mechanism among modulation strategies, DC bus capacitor voltage dynamics, and fault current paths is unclear. To solve these bottlenecks, this paper establishes a simulation model of the system on the PSCAD/EMTDC platform (v4.5.3, a professional electromagnetic transient simulation software for power systems), analyzes the transient electrical characteristics of three-level converters under traction and braking conditions for typical faults, clarifies the coupling mechanism, proposes a condition-adaptive fault identification strategy, and designs a reconfigurable fault energy handling system with bypass thyristors and adaptive crowbar circuits. Simulation and hardware-in-the-loop (HIL) experiments show that the proposed scheme completes fault identification and protection within 2–3 ms, suppresses fault peak current by more than 70%, limits DC bus overvoltage within ±10% of the rated voltage, and has good post-fault recovery performance. It provides a reliable and engineering-feasible protection solution for related systems and technical references for similar flexible DC system protection design. Full article
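The motivation for condition-adaptive fault identification (a fixed current threshold misfires when the operating current itself swings between 0.5 and 3 kA) can be sketched as follows. The moving-average baseline, window length, and margin factor here are hypothetical simplifications for illustration, not the paper's actual identification strategy.

```python
from collections import deque

class AdaptiveFaultDetector:
    """Illustrative condition-adaptive overcurrent check: the trip threshold
    tracks a moving baseline of the operating current instead of being fixed,
    so normal load swings do not trigger maloperation."""

    def __init__(self, margin=2.0, window=64):
        self.margin = margin                 # trip when current exceeds margin x baseline
        self.samples = deque(maxlen=window)  # recent operating-current history

    def update(self, i_dc):
        """Feed one DC current sample (kA); return True if a fault is flagged."""
        if self.samples:
            baseline = sum(self.samples) / len(self.samples)
        else:
            baseline = i_dc                  # bootstrap on the first sample
        fault = abs(i_dc) > self.margin * max(abs(baseline), 1e-9)
        self.samples.append(i_dc)
        return fault
```

With a fixed threshold sized for light-load operation, a legitimate 3 kA traction current would trip the protection; here the threshold scales with the recent baseline, so only a step well above the prevailing operating point is flagged.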
(This article belongs to the Section Electronic Sensors)
16 pages, 818 KB  
Article
Probabilistic Assume–Guarantee Contracts for Islanded Mission-Critical Power System Operations
by Venkatraman Renganathan and Soham Ghosh
Electronics 2026, 15(4), 855; https://doi.org/10.3390/electronics15040855 - 18 Feb 2026
Abstract
The design of large-scale power systems is becoming increasingly complex from an operational and reliability standpoint due to the uncertainties associated with renewable injection and load consumption. These uncertainties pose a great challenge in deriving reliable system-level assurances from subsystem-level guarantees, particularly in mission-critical systems such as data centers. We propose a formal and modular framework of probabilistic assume–guarantee contracts (PAGCs) for compositional reasoning and control of uncertain power systems, motivated by the need for resilient and verifiable operation in data center power networks. In contrast to classical contracts, which require absolute satisfaction of assumptions and guarantees, PAGCs allow for high-probability satisfaction under system uncertainty and variability. We formalize the syntax and semantics of PAGCs, develop soundness and compositionality theorems, and demonstrate their applicability to power grid components such as generators, transformers, circuit breakers, and loads. Given the current approval bottlenecks in interconnection requests, a growing number of data center operators are opting for an islanded generation configuration. A case study on such a modular islanded data center power system is presented to validate the proposed theory. The application of PAGCs in power networks is a promising route to several existing open problems in distributed systems, particularly in future large-scale smart power networks. Full article
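A minimal numerical illustration of how high-probability guarantees compose: assuming a generic union-bound argument (not the paper's specific compositionality theorem), if subsystem i violates its contract with probability at most eps_i, then a composition of those subsystems satisfies all contracts with probability at least 1 - sum(eps_i). The component list and violation budgets below are hypothetical.

```python
def composed_guarantee_lower_bound(component_violation_probs):
    """Union-bound style lower bound on the probability that a composition
    of probabilistic assume-guarantee contracts holds: the composed system
    violates some contract with probability at most the sum of the
    per-component violation probabilities."""
    total = sum(component_violation_probs)
    return max(0.0, 1.0 - total)  # clamp: the bound is vacuous below zero

# Hypothetical violation budgets for generator, transformer, breaker, load
bound = composed_guarantee_lower_bound([0.01, 0.005, 0.002, 0.01])
```

The bound illustrates why probabilistic contracts must be budgeted at design time: the tolerable system-level failure probability is divided among subsystems, and each subsystem contract is then verified against its share.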
(This article belongs to the Section Systems & Control Engineering)
