Search Results (9,750)

Search Parameters:
Keywords = distribution network system

24 pages, 1451 KB  
Review
AI-Driven Network Optimization for the 5G-to-6G Transition: A Taxonomy-Based Survey and Reference Framework
by Rexhep Mustafovski, Galia Marinova, Besnik Qehaja, Edmond Hajrizi, Shejnaze Gagica and Vassil Guliashki
Future Internet 2026, 18(3), 155; https://doi.org/10.3390/fi18030155 - 17 Mar 2026
Abstract
This paper presents a taxonomy-based survey of AI-driven network optimization mechanisms relevant to the transition from fifth generation (5G) to sixth generation (6G) mobile communication systems. In contrast to earlier generational shifts that are often described as technology replacement cycles, the 5G-to-6G evolution is increasingly characterized in the literature as a prolonged period of coexistence, hybrid operation, and progressive integration of new capabilities across radio, edge, core, and service layers. To structure this transition, the paper organizes prior work into a transition-oriented taxonomy covering migration strategies, AI-enabled closed-loop control, RAN disaggregation and edge intelligence, core virtualization and slice orchestration, spectrum-aware coexistence, service-driven requirements, and security-aware governance. Rather than introducing a new optimization algorithm or an experimentally validated architecture, the contribution of this survey is analytical and integrative. Specifically, it consolidates fragmented research directions into a reference view of how AI-driven control mechanisms are distributed across spectrum, RAN, edge, and core domains during hybrid 5G–6G operation. In addition, the paper includes a structured evidence synthesis of performance trends, deployment maturity signals, and recurring methodological limitations reported across the literature. The review indicates that meeting anticipated 6G objectives, including ultra-low latency, high reliability, scalability, and improved energy efficiency, depends less on isolated enhancements at individual protocol layers and more on coordinated cross-layer optimization supported by AI-native control loops. At the same time, the surveyed literature reveals persistent gaps in service-to-control mapping, security-aware orchestration, interoperability across heterogeneous domains, and reproducible evaluation methodologies for hybrid 5G–6G environments. 
The survey is intended to provide researchers, network operators, and standardization stakeholders with a structured analytical basis for assessing how AI-driven optimization can support the staged evolution from 5G systems toward 6G-ready infrastructures.
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)

23 pages, 9128 KB  
Article
Mineral-Scale Mechanical Properties of Carbonate Rocks Based on Nanoindentation
by Zechen Guo, Dongjin Xu, Haijun Mao, Bao Li and Baoan Zhang
Appl. Sci. 2026, 16(6), 2874; https://doi.org/10.3390/app16062874 - 17 Mar 2026
Abstract
Carbonate reservoirs in the Shunbei area develop pronounced fracture networks after acidized hydraulic fracturing and thus have the potential to be repurposed as underground gas storage (UGS) after hydrocarbon depletion. Characterizing their mechanical behavior is essential for safe UGS operation; however, deep to ultra-deep natural cores are difficult to obtain, and conventional macroscopic tests often cannot provide parameters that meet engineering requirements. To address this issue, nanoindentation combined with QEMSCAN (Quantitative Evaluation of Minerals by Scanning Electron Microscopy) was employed to quantify microscale mineral distributions and the mechanical properties of the major constituents. The investigated rock is calcite-dominated (89.62%), with minor quartz (9.89%) and trace feldspar-group minerals (1.89%). Minerals are randomly embedded, and soft–hard phase boundaries are widely distributed. A finite–discrete element method (FDEM) model was then constructed and calibrated in ABAQUS. The discrepancies in uniaxial compressive strength and elastic modulus relative to laboratory results were 6.51% and 9.91%, respectively, indicating good agreement in both mechanical response and failure mode. Parametric analyses using three additional models with different mineral proportions show that damage preferentially initiates at mineral phase boundaries and stress concentration zones induced by end constraints. Microcracks then propagate and coalesce into a dominant compressive–shear band, and final failure is mainly governed by slip along the shear band with localized tensile cracking. With increasing quartz and feldspar contents, enhanced heterogeneity and a higher density of phase boundaries lead to a higher density of crack nucleation sites and increased crack branching, and the failure pattern transitions from a single shear-band–controlled mode to a more network-like fracture system. 
Moreover, macroscopic strength is not determined solely by the intrinsic strength of individual minerals; heterogeneity and phase-boundary characteristics strongly govern microcrack behavior, such that higher hard-phase contents may result in a lower peak strength.

32 pages, 1219 KB  
Article
Optimized Operational Characteristics and Carbon Reduction Decision Pathways of School Milk Cold-Chain Distribution Network Under an Internal Carbon Pricing Mechanism
by Ching-Kuei Kao, Sheng Fei, Guang-Ze Chen and Zheng Zhuang
Future Transp. 2026, 6(2), 65; https://doi.org/10.3390/futuretransp6020065 - 17 Mar 2026
Abstract
Urban short-haul cold-chain distribution operates under strict service constraints while facing increasing pressure to reduce carbon emissions under the dual-carbon goals. Existing emission-aware routing studies often treat carbon emissions as external constraints or ex post evaluation indicators, limiting their influence on operational decision making. This study addresses this gap by developing a cold-chain distribution network optimization model that integrates internal carbon pricing (ICP), enabling carbon emissions to be internalized as economic costs within routing and scheduling decisions. Using the student milk cold-chain distribution system serving 54 primary and secondary schools in Fuzhou as an empirical case, the model incorporates multiple cost components, including energy consumption, warehouse operation, carbon emissions, and low-load penalties, while embedding operational constraints such as vehicle capacity, delivery time windows, and minimum economic loading requirements. An improved genetic algorithm is applied to solve the model. Scenario analyses are conducted across carbon price variation and demand fluctuation. Results show that when the internal carbon price increases from 97.49 RMB/t to 2000 RMB/t, the total distribution cost rises from 3531.2 RMB to 4082.842 RMB, indicating that carbon costs become an increasingly important factor in operational decision making. The distribution network exhibits a core-route-dominated structure, with key routes remaining stable across carbon price scenarios, suggesting that the influence of ICP is primarily reflected through cost internalization rather than route substitution. Demand analysis further shows that a 10% demand reduction reduces costs through route consolidation, while a 20% reduction weakens load efficiency and reduces vehicle utilization without triggering low-load penalty costs. 
These findings demonstrate that integrating ICP into routing optimization provides an effective pathway for aligning operational decisions with low-carbon transition objectives in rigid-demand cold-chain distribution systems.
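The cost internalization at the heart of the model can be illustrated with a minimal sketch. The function and all numeric values below are hypothetical; only the structure, a carbon charge of price × emissions added to the operating cost components the abstract names, follows the paper.

```python
def delivery_cost(energy_cost_rmb, warehouse_cost_rmb, emissions_t,
                  carbon_price_rmb_per_t, low_load_penalty_rmb=0.0):
    """Total route cost with the carbon charge internalized as an economic cost.
    Cost components mirror those named in the abstract; values are illustrative."""
    return (energy_cost_rmb + warehouse_cost_rmb
            + carbon_price_rmb_per_t * emissions_t
            + low_load_penalty_rmb)

# Raising the internal carbon price from 97.49 to 2000 RMB/t (the abstract's
# scenario range) increases the total cost of the same route.
low = delivery_cost(900.0, 60.0, 0.5, 97.49)
high = delivery_cost(900.0, 60.0, 0.5, 2000.0)
```

Because the carbon term enters the objective directly, a routing optimizer trades emissions against the other cost components rather than treating them as an external constraint.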

29 pages, 5152 KB  
Article
Impact of Neural Network Initialisation Seed and Architecture on Accuracy, Generalisation and Generative Consistency in Data-Driven Internal Combustion Engine Modelling
by Arturas Gulevskis, Redha Benhadj-Djilali and Konstantin Volkov
Computers 2026, 15(3), 194; https://doi.org/10.3390/computers15030194 - 17 Mar 2026
Abstract
Artificial neural networks (ANNs) are widely used to approximate nonlinear mappings, yet their ability to capture thermodynamic behaviour in dynamic physical systems remains insufficiently characterised. This study investigates how representational capacity influences surrogate modelling accuracy for a crank-angle-resolved internal combustion engine (ICE) simulation with a maximum dynamic state dimension of six. Two feedforward ANN configurations are evaluated: a low-capacity 5–5 architecture containing 84 trainable parameters and a high-capacity 25–25–25 architecture containing 1554 parameters (18.5× larger). Both networks approximate the nonlinear mapping from five embedded operating parameters to four peak thermodynamic outputs (maximum pressure, pressure phasing, maximum temperature, and temperature phasing). Evaluation across 53,178 operating points demonstrates that the high-capacity configuration reduces root mean squared error by factors of 30–50× relative to the low-capacity network, decreasing peak temperature error from 17.68 K to 0.36 K and peak pressure error from 0.116 MPa to 0.0025 MPa. Although both models achieve coefficients of determination exceeding 0.99, the low-capacity network exhibits heavy-tailed residual distributions and regime-dependent error amplification, whereas the high-capacity model reduces both central dispersion and extreme-case error. These results demonstrate that high correlation alone does not guarantee engineering reliability in nonlinear thermodynamic systems. Distribution-level analysis, including percentile and extreme-case characterisation, is required to evaluate engineering robustness. The findings provide a quantitative framework linking ANN capacity, nonlinear dynamic system representation, and predictive robustness.
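The two reported parameter counts can be checked directly. For a fully connected feedforward network, each layer contributes in × out weights plus out biases; the sketch below reproduces the abstract's figures of 84 and 1554 trainable parameters, assuming five inputs and four outputs as stated.

```python
def mlp_param_count(layer_sizes):
    """Trainable parameters (weights + biases) of a fully connected feedforward net."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

small = mlp_param_count([5, 5, 5, 4])        # 5-5 hidden architecture
large = mlp_param_count([5, 25, 25, 25, 4])  # 25-25-25 hidden architecture
# small == 84, large == 1554, and large/small == 18.5, matching the abstract.
```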
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence (2nd Edition))

29 pages, 4007 KB  
Article
CCBA: Dynamic Scheduling Algorithm for Jammer Resources in Strong Electromagnetic Interference Environment
by Zhenhua Wei, Wenpeng Wu, Haiyang You, Zhaoguang Zhang, Chenxi Li, Jianwei Zhan and Shan Zhao
Future Internet 2026, 18(3), 153; https://doi.org/10.3390/fi18030153 - 16 Mar 2026
Abstract
The strong electromagnetic interference environment on the battlefield has brought new challenges to the networking collaboration of jammers and the estimation of jamming effects. Traditional successful-jamming indicators can hardly meet the needs of continuous, low-power, and flexible jamming, causing difficulties in the emergency scheduling of jamming resources. To achieve overall degradation of the communicating party’s signal reception quality, this paper proposes the restrictive conditions of “overall limited jamming” and the analysis and evaluation index of “multistage jamming-to-signal ratio (J/S)”, which meets the scheduling requirements of distributed jamming resources in harsh environments. Based on a jammer layout that can achieve overall high-intensity jamming, the electromagnetic environment estimation, power scheduling, and collaboration strategies of jammers are designed, a communication countermeasure game algorithm under blocked networking collaboration is established, and independent dynamic scheduling of jamming resources is realized. The experimental results show that the Concentric Circle Broadcasting Algorithm (CCBA) not only maintains effective communication jamming (the proportion of high-intensity jamming is no less than 50%, and the proportion of normal signal reception of communication nodes is no more than 6%), but also extends the system operation duration by 66.8–269.6% compared with the baseline algorithms for the 600 MHz fixed-frequency and 1 MHz bandwidth communication system. This work is limited to the line-of-sight (LOS) scenario, and future research will extend it to non-line-of-sight (NLOS) scenarios.
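The multistage J/S idea can be sketched as a dB-scale ratio with staged thresholds. The abstract does not give the paper's exact index definition, so the thresholds and stage names below are placeholders for illustration only.

```python
import math

def js_ratio_db(jam_power_w, signal_power_w):
    """Jamming-to-signal ratio at the receiver, in dB."""
    return 10.0 * math.log10(jam_power_w / signal_power_w)

def jamming_stage(js_db, high_db=10.0, effective_db=0.0):
    """Hypothetical multistage classification; threshold values are illustrative."""
    if js_db >= high_db:
        return "high-intensity"
    if js_db >= effective_db:
        return "effective"
    return "ineffective"
```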
(This article belongs to the Section Internet of Things)

28 pages, 1600 KB  
Article
A Data-Driven Deep Reinforcement Learning Framework for Real-Time Economic Dispatch of Microgrids Under Renewable Uncertainty
by Biao Dong, Shijie Cui and Xiaohui Wang
Energies 2026, 19(6), 1481; https://doi.org/10.3390/en19061481 - 16 Mar 2026
Abstract
The real-time economic dispatch of microgrids (MGs) is challenged by the high penetration of renewable energy and the resulting source–load uncertainties. Conventional optimization-based scheduling methods rely heavily on accurate probabilistic models and often suffer from high computational burdens, which limits their real-time applicability. To address these challenges, a data-driven deep reinforcement learning (DRL) framework is proposed for real-time microgrid energy management. The MG dispatch problem is formulated as a Markov decision process (MDP), and a Deep Deterministic Policy Gradient (DDPG) algorithm is adopted to efficiently handle the high-dimensional continuous action space of distributed generators and energy storage systems (ESS). The system state incorporates renewable generation, load demand, electricity price, and ESS operational conditions, while the reward function is designed as the negative of the operational cost with penalty terms for constraint violations. A continuous-action policy network is developed to directly generate control commands without action discretization, enabling smooth and flexible scheduling. Simulation studies are conducted on an extended European low-voltage microgrid test system under both deterministic and stochastic operating scenarios. The proposed approach is compared with model-based methods (MPC and MINLP) and representative DRL algorithms (SAC and PPO). The results show that the proposed DDPG-based strategy achieves competitive economic performance, fast convergence, and good adaptability to different initial ESS conditions. In stochastic environments, the proposed method maintains operating costs close to the optimal MINLP reference while significantly reducing the online computational time. These findings demonstrate that the proposed framework provides an efficient and practical solution for the real-time economic dispatch of microgrids with high renewable penetration.
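The reward design described, the negative of the operational cost with penalty terms for constraint violations, can be sketched as follows. The penalty weight and SOC feasibility band are illustrative values, not the paper's.

```python
def soc_violation(soc, soc_min=0.1, soc_max=0.9):
    """Magnitude by which an ESS state of charge leaves its feasible band."""
    return max(0.0, soc_min - soc) + max(0.0, soc - soc_max)

def dispatch_reward(operating_cost, ess_socs, penalty_weight=100.0):
    """Reward = -(operational cost) - penalties, mirroring the abstract's design."""
    return -operating_cost - penalty_weight * sum(soc_violation(s) for s in ess_socs)
```

With this shape, the DRL agent maximizing reward is equivalently minimizing cost while being pushed away from infeasible storage states.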

19 pages, 1546 KB  
Article
Deep Learning-Enhanced Proactive Strategy: LSTM and VRP/ACO for Autonomous Replenishment and Demand Forecasting in Shared Logistics
by Martin Straka and Kristína Kleinová
Appl. Sci. 2026, 16(6), 2838; https://doi.org/10.3390/app16062838 - 16 Mar 2026
Abstract
At present, the global logistics sector faces critical challenges, including rising energy costs and pressure to reduce CO2 emissions. Traditional linear supply chains are becoming inefficient, necessitating a transition toward shared logistics based on the principles of the sharing economy. This paper presents a progressive three-layer architecture that transforms conventional reactive data collection into an autonomous, proactive management system for the distribution of consumable materials. While previous research established foundations in IoT connectivity for smart vending machines, this study advances the process by integrating an intelligent layer of artificial intelligence (AI) algorithms. The framework utilizes Long Short-Term Memory (LSTM) neural networks for demand forecasting, dynamic route optimization (VRP/ACO) for replenishment, and Isolation Forest/DBSCAN algorithms for real-time anomaly detection. To evaluate the framework, a numerical simulation was conducted using representative pilot scenarios. The results indicate that within the simulated environment, the system achieves over 95% accuracy in inventory depletion prediction (MAPE = 4.02%). In these analyzed instances, this leads to a 25–30% reduction in stock-out risks and a 25% reduction in replenishment distance. These findings demonstrate the significant potential for reducing operational costs and carbon footprints in green logistics. The study confirms that the synergy between IoT infrastructure and AI-driven analysis provides a robust foundation for transitioning from static methodologies to resilient, collaborative logistics ecosystems.
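The reported prediction accuracy (MAPE = 4.02%) uses a standard metric; a minimal sketch of how it is computed, with illustrative sample values:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Illustrative: forecasts off by 4% and 4% give a MAPE of 4%.
err = mape([100.0, 200.0], [96.0, 208.0])
```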
(This article belongs to the Special Issue Application of Artificial Intelligence in the Internet of Things)

23 pages, 316 KB  
Article
Sustainability and Agricultural Investments in Bulgaria: Balancing Profitability and Environmental Protection
by Mariya Peneva
Sustainability 2026, 18(6), 2898; https://doi.org/10.3390/su18062898 - 16 Mar 2026
Abstract
Agriculture in Bulgaria faces increasing pressure to balance profitability with environmental sustainability under the evolving framework of the Common Agricultural Policy (CAP) and the European Green Deal. This study analyses the relationship between sustainability-oriented investment support, production cost structure, and farm profitability using farm-level data from the Farm Accountancy Data Network (FADN). The analysis integrates investment-related subsidies, input intensity, productivity indicators, and structural characteristics into an econometric framework to examine their associations with economic performance. Results show that environmental payments, when aligned with efficient management, enhance profitability, whereas conventional investment and rural development support display limited or delayed effects. Higher crop protection expenditure is associated with lower profitability, suggesting cost inefficiencies in chemically intensive production systems. In contrast, fertiliser expenditure shows no significant association, while energy-related spending exhibits a positive but statistically insignificant relationship, likely reflecting mechanisation and technological modernisation effects. Structural factors, particularly farm size and land productivity, remain key determinants of profitability for balancing economic and environmental goals. Overall, the findings suggest that sustainable profitability in Bulgarian agriculture is achievable but unevenly distributed, shaped by structural conditions, managerial capacity, and the design of support instruments. The study offers empirical evidence for aligning sustainable investment incentives with farm-level competitiveness and supports the transition toward integrated economic-environmental monitoring within the forthcoming Farm Sustainability Data Network (FSDN).
21 pages, 1611 KB  
Article
Mobility-Aware Cooperative Optimization for Task Offloading and Resource Allocation in Multi-Edge Computing
by Dong Chen, Ximing Zhang, Kequan Lin, Chunhua Mei and Ru Huo
Algorithms 2026, 19(3), 221; https://doi.org/10.3390/a19030221 - 16 Mar 2026
Abstract
The rapid proliferation of mobile Internet of Things (IoT) devices has introduced significant resource scheduling challenges in multi-edge computing networks, where device mobility leads to dynamic network connectivity and load imbalance, complicating task offloading and resource management. To address these issues, this paper presents a mobility-driven hierarchical optimization framework for task offloading and computation resource allocation in multi-region edge computing environments: a functionally coupled design that integrates mobility-aware heuristic offloading with multi-agent deep deterministic policy gradient (MADDPG)-based resource allocation. Devices are first clustered according to their mobility patterns, and offloading decisions are dynamically made based on trajectory and dwell-time characteristics. Each edge server is modeled as an autonomous agent, and an MADDPG framework is adopted to collaboratively optimize resource allocation, with the joint objective of minimizing task processing delay and system energy consumption. Experimental evaluations under diverse mobility and workload conditions show that the proposed approach achieves a 19.0% reduction in task delay compared to the Multi-Objective Gray Wolf Optimization (MOGWO) method at the largest device scale (60 devices) and maintains comparable energy efficiency. Furthermore, it exhibits stronger adaptability and scheduling performance across varying mobility group distributions. These results confirm the effectiveness of the proposed method in enhancing system performance within dynamic mobile edge computing scenarios.

26 pages, 4676 KB  
Article
Energy-Efficient Access Point Switch On/Off in Cell-Free Massive MIMO Using Proximal Policy Optimization
by Guillermo García-Barrios, Alberto Alonso and Manuel Fuentes
Electronics 2026, 15(6), 1219; https://doi.org/10.3390/electronics15061219 - 14 Mar 2026
Abstract
The increasing densification of cell-free massive multiple-input multiple-output (MIMO) networks makes access point switch on/off (ASO) a key mechanism for improving energy efficiency in future wireless systems. While reinforcement learning (RL) has been explored for ASO, differences in modeling assumptions and evaluation scope leave open questions regarding robustness and scalability. In this work, ASO is investigated from an explicit energy-efficiency perspective using an RL framework based on Proximal Policy Optimization (PPO). The policy learns state-dependent AP activation under partial observability using compact per-access point (AP) large-scale fading statistics and power parameters, without requiring instantaneous small-scale channel state information or combinatorial search, enabling practical online implementation. A comprehensive evaluation is conducted under a unified and reproducible simulation framework across three cell-free deployment scenarios of increasing size that preserve AP density while incorporating realistic channel and power consumption models. Performance is assessed through both average and distribution-based metrics. Numerical results show that the PPO-based policy consistently outperforms random activation and the all-on baseline, achieving energy-efficiency improvements of up to 66% and nearly 50%, respectively, while activating a comparable number of APs. Moreover, the learned policy maintains robust performance as the network scales, reducing the likelihood of highly energy-inefficient operating regimes.
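Energy efficiency in ASO studies of this kind is typically defined as sum throughput over total consumed power. The sketch below uses a common simplified on/off AP power model (a fixed active power and a lower sleep power per AP); it is not the paper's exact consumption model, and all numbers are illustrative.

```python
def energy_efficiency(sum_rate_bps, ap_active, p_active_w=10.0, p_sleep_w=1.0):
    """Network energy efficiency in bit/J under a simple on/off AP power model."""
    total_power_w = sum(p_active_w if on else p_sleep_w for on in ap_active)
    return sum_rate_bps / total_power_w

# Switching half the APs off can raise EE even if the sum rate drops slightly.
ee_all_on = energy_efficiency(1.0e8, [True, True, True, True])
ee_half = energy_efficiency(0.9e8, [True, True, False, False])
```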
17 pages, 2631 KB  
Article
Monitoring of Liquid Metal Reactor Heater Zones with Recurrent Neural Network Learning of Temperature Time Series
by Maria Pantopoulou, Derek Kultgen, Lefteri Tsoukalas and Alexander Heifetz
Energies 2026, 19(6), 1462; https://doi.org/10.3390/en19061462 - 14 Mar 2026
Abstract
Advanced high-temperature fluid reactors (ARs), such as sodium fast reactors (SFRs) and molten salt cooled reactors (MSCRs), utilize high-temperature fluids at ambient pressure. To melt the fluid during reactor startup and prevent fluid freezing during cooldown, the thermal–hydraulic systems of such ARs include heater zones consisting of specific heaters with controllers, temperature sensors, and thermal insulation. The failure of heater zones due to insulation material degradation or improper installation, resulting in parasitic heat losses, can lead to fluid freezing. The detection of faults using a heat-transfer model is difficult because of a lack of knowledge of the experimental details. Data-driven machine learning of heater zone temperature time series offers a viable alternative. In this study, we benchmarked the performance of recurrent neural networks (RNNs) in an analysis of heat-up transient temperature time series of heater zones installed on a liquid sodium vessel. The RNN models include long short-term memory (LSTM) and gated recurrent unit (GRU) networks, as well as their bi-directional variants, BiLSTM and BiGRU. Anomalous temperature points were designated using a percentile-based threshold applied to residual fluctuations in the detrended temperature time series. Additionally, the impact of the exponentially weighted moving average (EWMA) method on detection accuracy was examined. The RNN models’ performance was assessed using precision, recall, and F1 score metrics. Results demonstrated that RNN models effectively detect anomalies in temperature time series, with the best models for each heater zone achieving F1 scores of over 93%. To explain the variations in RNN model performance across different heater zones, we used Kullback–Leibler (KL) divergence to quantify the relative entropy between training and testing data, and Detrended Fluctuation Analysis (DFA) to assess long-range temporal correlations.
For datasets with strong long-range correlations and minimal relative entropy between training and testing data, GRU is the best-performing model. When the data exhibits weaker long-term correlations and a significant relative entropy between training and testing distributions, BiGRU shows the best performance. For the data sets with intermediate values of both KL divergence and DFA, the best performance is obtained with LSTM and BiLSTM, respectively.
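The detection pipeline described, optionally smoothing with EWMA and then flagging residuals beyond a percentile threshold, can be sketched as follows. The smoothing factor and the percentile value are illustrative, not the paper's settings.

```python
def ewma(series, alpha=0.2):
    """Exponentially weighted moving average smoothing of a time series."""
    out, s = [], series[0]
    for x in series:
        s = alpha * x + (1.0 - alpha) * s
        out.append(s)
    return out

def flag_anomalies(residuals, pct=90.0):
    """Flag points whose |residual| exceeds the pct-th percentile of |residuals|."""
    mags = sorted(abs(r) for r in residuals)
    k = min(len(mags) - 1, int(round(pct / 100.0 * (len(mags) - 1))))
    return [abs(r) > mags[k] for r in residuals]

# A single large residual among small ones is flagged as anomalous.
flags = flag_anomalies([0.1] * 9 + [5.0])
```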

17 pages, 1326 KB  
Article
A Hybrid Quantum–Classical Neural Network Framework for the Detection of Quantum Hacking Attacks in CVQKD
by Xinglin He, Jiaxun Xiao and Xuanli Lyu
Appl. Sci. 2026, 16(6), 2793; https://doi.org/10.3390/app16062793 - 14 Mar 2026
Abstract
The security of continuous-variable quantum key distribution (CVQKD) systems faces severe challenges from quantum hacking attacks in practical deployments. This paper proposes a novel hybrid quantum-classical neural network (HQCNN) architecture for the detection of quantum hacking attacks. This architecture employs a convolutional neural network (CNN) to extract features from raw pulse signals at the receiver and to reduce spatial dimensionality. Subsequently, the extracted features are mapped into a high-dimensional Hilbert space via angle encoding, and a variational quantum circuit (VQC) is utilized as the core classifier for discrimination. In five-class classification experiments involving local oscillator intensity attacks (LOIA), calibration attacks, saturation attacks, hybrid attacks, and the no-attack state, the HQCNN achieves an overall accuracy of 93%, representing a 6% improvement over the classical residual network (ResNet). In addition, the proposed HQCNN architecture exhibits a significant advantage in parameter efficiency compared with classical deep neural networks. This study provides an efficient intelligent detection scheme for enhancing the practical security of CVQKD systems.
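Angle encoding, the classical-to-quantum interface mentioned above, maps each feature x to a single-qubit rotation: RY(x) applied to |0⟩ gives the state cos(x/2)|0⟩ + sin(x/2)|1⟩. A minimal per-qubit sketch (product state only; the paper's VQC and any entangling layers are not shown):

```python
import math

def angle_encode(features):
    """Per-qubit amplitudes (|0> and |1> components) after RY(x)|0> for each feature x."""
    return [(math.cos(x / 2.0), math.sin(x / 2.0)) for x in features]

qubits = angle_encode([0.0, math.pi / 2.0, math.pi])
```

Each encoded qubit remains a normalized state, so the encoding embeds features on the Bloch sphere without any trainable parameters.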
(This article belongs to the Special Issue Quantum Communication and Applications)
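A minimal way to see what the angle-encoding step does is to note that an RY(θ) rotation applied to |0⟩ yields a Pauli-Z expectation of cos θ, so each classical feature becomes a bounded expectation value. The sketch below simulates only this single-qubit, no-entanglement case in plain Python; the scale factor and the one-feature-per-qubit layout are assumptions, and the article's full VQC (entangling layers, trained weights) is not reproduced:

```python
import math

def angle_encode_expectations(features, scale=math.pi):
    """Map each feature x to an RY(scale * x) rotation on its own qubit and
    return the Pauli-Z expectations: RY(theta)|0> gives <Z> = cos(theta)."""
    return [math.cos(scale * x) for x in features]
```

Features normalized to [0, 1] thus land in [-1, 1] after encoding and measurement, which is the range a downstream classifier layer would consume.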

38 pages, 1285 KB  
Review
From Static Welfare Optimization to Dynamic Efficiency in Energy Policy: A Governance Framework for Complex and Uncertain Energy Systems
by Martin García-Vaquero, Antonio Sánchez-Bayón and Frank Daumann
Energies 2026, 19(6), 1460; https://doi.org/10.3390/en19061460 - 13 Mar 2026
Abstract
The energy transition represents a complex, multi-level system subject to profound uncertainty and recurrent shocks. Current policy design approaches predominantly rely on static optimization frameworks (centralized, calculative models that presume stable conditions and predictable technological trajectories). Yet evidence from the 2021–2023 energy crisis in Europe, coupled with structural challenges in market liberalization and renewable integration, reveals persistent shortcomings in policy implementation. Price interventions affect competitive dynamics; subsidies influence technology selection; capacity mechanisms create coordination tensions; and rigid tariff structures become misaligned with evolving grid needs. This paper argues that these recurrent policy tensions stem not from implementation gaps but from an inadequate theoretical foundation: the treatment of energy systems as optimizable rather than as complex, adaptive systems operating under Knight–Mises uncertainty and Huerta de Soto dynamic efficiency. This work explores an alternative framework grounded in dynamic efficiency, complex–uncertain systems, decentralized incentives, and adaptive governance (international–domestic, public–private, etc.). The review adopts the theoretical and methodological framework of the Heterodox Synthesis, an alternative to the Neoclassical Synthesis. Insights from Knight and Mises (uncertainty), Hayek (distributed knowledge), Huerta de Soto (dynamic efficiency), and contemporary complexity economics are reinterpreted into operational criteria applicable to energy policy design: (1) robustness to deep uncertainty; (2) preservation of price signals and risk-bearing mechanisms; (3) alignment of incentives across distributed actors; (4) institutional adaptability; and (5) minimization of ex post policy corrections.
Through illustrative application to four critical policy instruments (price caps, renewable subsidies, capacity mechanisms, and network tariff design), it is shown how this framework identifies systematic tensions and consequences that conventional analysis overlooks. The contribution is exploratory and threefold: theoretical, by integrating classical and contemporary economics into energy governance; methodological, by operationalizing dynamic efficiency into evaluable criteria distinct from existing adaptive governance frameworks; and sectoral, by providing policymakers and regulators with diagnostic tools for assessing design robustness under deep uncertainty and rapid transition. According to this review, improved energy policy design under uncertainty is achieved not through more sophisticated (calculative) optimization, but through institutional architectures that preserve creative and adaptive learning, maintain distributed decision-making capacity, and remain functional when assumptions prove incorrect or poorly understood. Full article

23 pages, 17791 KB  
Article
Open vs. Commercial 5G SA Deployments: Performance Assessment
by Teodora-Cristina Stoian, Razvan-Marius Mihai, Ekaterina Svertoka, Alexandru Martian and Cristian Patachia-Sultanoiu
Technologies 2026, 14(3), 177; https://doi.org/10.3390/technologies14030177 - 13 Mar 2026
Abstract
Open-source and commercial fifth-generation (5G) deployments are difficult to compare because they are built for different goals and reported under different conditions, which slows down validation and technology transfer from research to practice. This study explores the deployment and evaluation of two 5G Standalone (SA) disaggregated Radio Access Network (RAN) systems, using open-source research RAN, commercial RAN, and Software-Defined Radio (SDR) hardware. The first testbed is an SDR-based prototype built around a Universal Software Radio Peripheral (USRP) B210, with Software Radio System RAN (srsRAN) as the RAN. The commercial testbed contains a Benetel RAN550 Radio Unit (RU) connected via optical fiber to a Commercial Off-the-Shelf (COTS) server that acts as the Distributed Unit (DU) and Centralized Unit (CU), using the Accelleran virtualized Baseband Unit (vBBU) platform. The Core Network (CN) is implemented in both testbeds using the open-source Open5GS. To evaluate the network's functionality, throughput and latency are tracked using a Motorola Edge 50 Pro mobile terminal. The experimental results are analyzed and compared with representative performance metrics reported in the literature to place the measurements in a broader research context. This study further assesses trade-offs related to cost, portability, and scalability by comparing SDR-based research prototypes with commercial deployments. Full article
(This article belongs to the Section Information and Communication Technologies)

31 pages, 2256 KB  
Article
Trust Assessment of Distributed Power Grid Terminals via Dual-Domain Graph Neural Networks
by Cen Chen, Jinghong Lan, Yi Wang, Zhuo Lv, Junchen Li, Ying Zhang, Xinlei Ming and Yubo Song
Electronics 2026, 15(6), 1211; https://doi.org/10.3390/electronics15061211 - 13 Mar 2026
Abstract
As distributed terminals are increasingly integrated into modern power systems with high penetration of renewable energy and decentralized resources, access control mechanisms must support continuous and fine-grained trust assessment. Existing approaches based on machine learning primarily rely on network traffic features from a single source and analyze terminals in isolation, which limits their ability to capture complex device states and correlated attack behaviors. This paper presents a trust assessment framework for distributed power grid terminals that combines multidimensional behavioral modeling with dual-domain graph neural networks. Behavioral features are collected from network traffic, the runtime environment, and hardware or kernel events, and are fused into compact representations through a variational autoencoder to mitigate redundancy and reduce computational overhead. Based on the fused features and observed communication relationships, two graphs are constructed in parallel: a feature-domain graph reflecting behavioral similarity and a topological-domain graph capturing the communication structure between terminals. Graph convolution is performed in both domains to jointly model individual behavioral risk and cross-terminal correlation. An attention-based fusion mechanism is further introduced to adaptively integrate the domain-specific embeddings, together with a loss function that enforces both shared and complementary representations across domains. Experiments on the CIC EV Charger Attack Dataset 2024 show that the proposed framework achieves a classification accuracy of 96.84% while maintaining a recall above 95% for the low-trust category. These results indicate that incorporating multidimensional behavior perception and dual-domain relational modeling improves trust assessment for distributed power grid terminals under complex attack scenarios. Full article
(This article belongs to the Special Issue Advances in Data Security: Challenges, Technologies, and Applications)
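The attention-based fusion step described in the abstract, which weights the feature-domain and topology-domain embeddings before combining them, can be sketched as follows. The scoring function used here (mean activation fed through a softmax) is a placeholder assumption; in the paper the attention weights are learned from data:

```python
import math

def attention_fuse(z_feat, z_topo):
    """Fuse a feature-domain embedding and a topology-domain embedding of the
    same length with softmax attention weights over the two domains."""
    # Placeholder scoring: mean activation of each domain embedding.
    s_f = sum(z_feat) / len(z_feat)
    s_t = sum(z_topo) / len(z_topo)
    # Numerically stable softmax over the two scores.
    m = max(s_f, s_t)
    e_f, e_t = math.exp(s_f - m), math.exp(s_t - m)
    w_f, w_t = e_f / (e_f + e_t), e_t / (e_f + e_t)
    # Convex combination of the two domain embeddings.
    return [w_f * a + w_t * b for a, b in zip(z_feat, z_topo)]
```

Because the weights always sum to one, the fused embedding stays within the convex hull of the two domain embeddings, so neither domain can be entirely discarded by the fusion step.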
