 
 

Electronics, Volume 15, Issue 2 (January-2 2026) – 244 articles

Cover Story (view full-size image): A reconfigurable analog beamformer for multiband Global Navigation Satellite System (GNSS) multiantenna receiver systems is designed and tested. The beamformer board operates in all existing GNSS frequency bands; in this paper, two commonly used bands, E1 and E5a at 1.575 GHz and 1.176 GHz, respectively, are studied. Analog weighting of the complex excitation of up to 14 individual channels is realized using attenuators and phase shifters, digitally controlled by proprietary PC software. We present an analysis of the relative errors between the channels, together with a simple calibration of constant errors that is applied and validated. The beamformer is then demonstrated in an exemplary test case, generating an ad hoc pattern from an array of antennas. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click the "PDF Full-text" link and open it with the free Adobe Reader.
25 pages, 31548 KB  
Article
Large-Signal Stability Analysis of VSC-HVDC System Based on T-S Fuzzy Model and Model-Free Predictive Control
by Zhaozun Sun, Yalan He, Zhe Cao, Jingrui Jiang, Tongkun Li, Pizheng Tan, Kaixuan Mei, Shujie Gu, Tao Yu, Jiashuo Zhang and Linyun Xiong
Electronics 2026, 15(2), 492; https://doi.org/10.3390/electronics15020492 - 22 Jan 2026
Viewed by 225
Abstract
Voltage source converter-based high-voltage direct current (VSC-HVDC) systems exhibit strong nonlinear characteristics that dominate their dynamic behavior under large disturbances, making large-signal stability assessment essential for secure operation. This paper proposes a large-signal stability analysis framework for VSC-HVDC systems. The framework combines a unified Takagi–Sugeno (T–S) fuzzy model with a model-free predictive control (MFPC) scheme to enlarge the estimated domain of attraction (DOA) and bring it closer to the true stability region. The global nonlinear dynamics are captured by integrating local linear sub-models corresponding to different operating regions into a single T–S fuzzy representation. A Lyapunov function is then constructed, and associated linear matrix inequality (LMI) conditions are derived to certify large-signal stability and estimate the DOA. To further reduce the conservatism of the LMI-based iterative search, we embed a genetic-algorithm-based optimizer into the model-free predictive controller. The optimizer guides improved LMI iteration paths and enhances the DOA estimation. Simulation studies in MATLAB 2023b/Simulink on a benchmark VSC-HVDC system confirm the feasibility of the proposed approach and show a less conservative DOA estimate compared with conventional methods. Full article
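As background for the approach described in this abstract, here is a minimal sketch of the standard T–S fuzzy stability certificate (not the paper's exact formulation, which adds the MFPC and genetic-algorithm layers): with normalized membership functions $h_i$ and local sub-models $A_i$,

```latex
\dot{x}(t) = \sum_{i=1}^{r} h_i(z(t))\, A_i\, x(t), \qquad
V(x) = x^{\top} P x, \quad P \succ 0 .
```

If the LMIs $A_i^{\top} P + P A_i \prec 0$ hold for all $i = 1, \dots, r$, then $V$ certifies asymptotic stability of the fuzzy blend, and the largest level set $\{x : x^{\top} P x \le c\}$ contained in the modeling region serves as an estimate of the DOA; the search over $P$ (and $c$) is what the paper's optimizer makes less conservative.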

25 pages, 5757 KB  
Article
A Device-Free Human Detection System Using 2.4 GHz Wireless Networks and an RSSI Distribution-Based Method with Autonomous Threshold
by Charernkiat Pochaiya, Apidet Booranawong, Dujdow Buranapanichkit, Kriangkrai Tassanavipas and Hiroshi Saito
Electronics 2026, 15(2), 491; https://doi.org/10.3390/electronics15020491 - 22 Jan 2026
Viewed by 300
Abstract
A device-free human detection system based on a received signal strength indicator (RSSI) monitors and analyzes changes in RSSI signals to detect human movements in a wireless network. This study proposes and implements a real-time, device-free human detection system based on an RSSI distribution-based detection method with an autonomous threshold. The novelty and contribution of our solution lie in using the RSSI distribution concept to calculate the optimal threshold setting for human detection, with thresholds determined automatically from RSSI data streams gathered in the test environments. The proposed system works efficiently without the offline phase required by many existing approaches in the research literature. Experiments using 2.4 GHz IEEE 802.15.4 technology have been carried out in indoor environments in two laboratory rooms with different numbers of wireless links, human movement patterns, and movement speeds. Experimental results show that, in all test scenarios, the proposed method can monitor and detect human movement in a wireless network in real time. It outperforms a comparative method and achieves high accuracy (i.e., 100% detection accuracy) with a low computational complexity requirement. Full article
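To make the idea concrete, here is a hedged sketch of distribution-based thresholding for RSSI motion detection. The rule below (a no-motion band of mean ± k standard deviations fitted to an empty-room stream) is an illustrative assumption, not the paper's exact formula:

```python
from statistics import mean, stdev

def autonomous_threshold(calibration_rssi, k=3.0):
    """Derive a detection band from an RSSI stream recorded with no one
    present (hypothetical rule: mean +/- k standard deviations; the
    paper's distribution-based formula may differ)."""
    mu, sigma = mean(calibration_rssi), stdev(calibration_rssi)
    return mu - k * sigma, mu + k * sigma

def detect_motion(window, lo, hi):
    """Flag human movement when any sample leaves the no-motion band."""
    return any(r < lo or r > hi for r in window)

# RSSI values in dBm from a quiet room, then a window with large swings.
lo, hi = autonomous_threshold([-62, -61, -62, -63, -62, -61, -62, -63])
print(detect_motion([-62, -55, -70], lo, hi))  # large swings -> True
```

Because the band is computed from the live stream itself, no offline training phase is needed, matching the abstract's claim.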

24 pages, 4676 KB  
Article
Resonance-Suppression Strategy for High-Penetration Renewable Energy Power Systems Based on Active Amplitude and Phase Corrector
by Tan Li, Zhichuang Li, Zijun Bin, Bingxin He, Zhan Shi, Zheren Zhang and Zheng Xu
Electronics 2026, 15(2), 490; https://doi.org/10.3390/electronics15020490 - 22 Jan 2026
Viewed by 209
Abstract
Due to the negative resistance effect of power electronic devices, power systems with a high proportion of renewable energy face a significant resonance risk. To address this, this paper proposes a resonance-suppression strategy for high-penetration renewable energy systems based on an active amplitude and phase corrector (APC). Firstly, by considering its internal dynamics and complete control loops, the impedance model of the APC is derived. Next, the similarities and differences between resonance stability and harmonic resonance are analyzed using the s-domain and frequency-domain admittance matrices, concluding that resonance suppression for low-damping s-domain modes can be handled in the frequency domain. Then, a supplementary APC control strategy in the abc-frame is proposed, which improves impedance magnitude at specific frequencies while keeping the phase almost unchanged. Finally, the proposed strategy is validated through case studies on an offshore wind power system in Zhejiang Province. Full article
(This article belongs to the Special Issue Advances in High-Penetration Renewable Energy Power Systems Research)

26 pages, 3967 KB  
Article
A General-Purpose AXI Plug-and-Play Hyperdimensional Computing Accelerator
by Rocco Martino, Marco Pisani, Marco Angioli, Marcello Barbirotta, Antonio Mastrandrea, Antonello Rosato and Mauro Olivieri
Electronics 2026, 15(2), 489; https://doi.org/10.3390/electronics15020489 - 22 Jan 2026
Viewed by 221
Abstract
Hyperdimensional Computing (HDC) offers a robust and energy-efficient paradigm for edge intelligence; however, current hardware accelerators are often proprietary, tailored to the target learning task and tightly coupled to specific CPU microarchitectures, limiting portability and adoption. To address this, and democratize the deployment of HDC hardware, we present a general-purpose, plug-and-play accelerator IP that implements the Binary Spatter Code framework as a standalone, host-agnostic module. The design is compliant with the AMBA AXI4 standard and provides an AXI4-Lite control plane and DMA-driven AXI4-Stream datapaths coupled to a banked scratchpad memory. The architecture supports synthesis-time scalability, enabling high-throughput transfers independently of the host processor, while employing microarchitectural optimizations to minimize silicon area. A multi-layer C++ software stack (GitHub repository commit 3ae3b46) running in Linux userspace provides a unified programming model, abstracting low-level hardware interactions and enabling the composition of complex HDC pipelines. Implemented on a Xilinx Zynq XC7Z020 SoC, the accelerator achieves substantial gains over an ARM Cortex-A9 baseline, with primitive-level speedups of up to 431×. On end-to-end classification benchmarks, the system delivers average speedups of 68.45× for training and 93.34× for inference. The complete RTL and software stack are released as open-source hardware to support reproducible research and rapid adoption on heterogeneous SoCs. Full article
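For readers unfamiliar with the Binary Spatter Code framework the accelerator implements, its core primitives are simple enough to sketch in software (dimensionality and naming here are illustrative choices, not the IP's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative choice)

def random_hv():
    """Random binary hypervector, the atomic symbol of Binary Spatter Codes."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Binding is element-wise XOR; it is its own inverse."""
    return a ^ b

def bundle(*hvs):
    """Bundling is a per-component majority vote over the inputs."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def hamming(a, b):
    """Normalized Hamming distance; ~0.5 for unrelated hypervectors."""
    return np.count_nonzero(a != b) / D

key, value = random_hv(), random_hv()
pair = bind(key, value)
recovered = bind(pair, key)       # XOR with the key unbinds the value
print(hamming(recovered, value))  # 0.0: exact recovery
```

These XOR, majority, and distance kernels are exactly the kind of bit-parallel operations that map well onto the banked-scratchpad datapaths the abstract describes.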
(This article belongs to the Special Issue Hardware Acceleration for Machine Learning)

27 pages, 11804 KB  
Article
FRAM-ViT: Frequency-Aware and Relation-Enhanced Vision Transformer with Adaptive Margin Contrastive Center Loss for Fine-Grained Classification of Ancient Murals
by Lu Wei, Zhengchao Chang, Jianing Li, Jiehao Cai and Xianlin Peng
Electronics 2026, 15(2), 488; https://doi.org/10.3390/electronics15020488 - 22 Jan 2026
Viewed by 187
Abstract
Fine-grained visual classification requires recognizing subtle inter-class differences under substantial intra-class variation. Ancient mural recognition poses additional challenges: severe degradation and complex backgrounds introduce noise that obscures discriminative features, limited annotated data restricts model training, and dynasty-specific artistic styles manifest as periodic brushwork patterns and compositional structures that are difficult to capture. Existing spatial-domain methods fail to model the frequency characteristics of textures and the cross-region semantic relationships inherent in mural imagery. To address these limitations, we propose a Vision Transformer (ViT) framework which integrates frequency-domain enhancement, explicit token-relation modeling, adaptive multi-focus inference, and discriminative metric supervision. A Frequency Channel Attention (FreqCA) module applies 2D FFT-based channel gating to emphasize discriminative periodic patterns and textures. A Cross-Token Relation Attention (CTRA) module employs joint global and local gates to strengthen semantically related token interactions across distant regions. An Adaptive Omni-Focus (AOF) block partitions tokens into importance groups for multi-head classification, while Complementary Tokens Integration (CTI) fuses class tokens from multiple transformer layers. Finally, Adaptive Margin Contrastive Center Loss (AMCCL) improves intra-class compactness and inter-class separability with margins adapted to class-center similarities. Experiments on CUB-200-2011, Stanford Dogs, and a Dunhuang mural dataset show accuracies of 91.15%, 94.57%, and 94.27%, outperforming the ACC-ViT baseline by 1.35%, 1.63%, and 2.20%, respectively. Full article
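As a rough illustration of the FFT-based channel gating idea behind the FreqCA module (this is the reviewer's own minimal reading, not the paper's architecture — the gating function, pooling, and learnable parameters are assumptions):

```python
import numpy as np

def freq_channel_gate(x):
    """Hedged sketch of 2D FFT-based channel gating: score each channel
    by the mean energy of its 2D spectrum, squash the centered score to
    (0, 1) with a sigmoid, and rescale the channels accordingly."""
    # x: (C, H, W) feature map
    spec = np.abs(np.fft.fft2(x, axes=(-2, -1)))   # per-channel 2D spectrum
    energy = spec.mean(axis=(-2, -1))              # one descriptor per channel
    gate = 1.0 / (1.0 + np.exp(-(energy - energy.mean())))
    return x * gate[:, None, None]

x = np.random.default_rng(1).standard_normal((4, 8, 8))
y = freq_channel_gate(x)
print(y.shape)  # (4, 8, 8)
```

The point of operating in the frequency domain is that periodic brushwork and texture patterns concentrate into a few spectral bins, so a channel's spectral energy is a cheap proxy for how much discriminative periodicity it carries.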

28 pages, 6584 KB  
Article
Short-Term Wind Power Prediction with Improved Spatio-Temporal Modeling Accuracy: A Dynamic Graph Convolutional Network Based on Spatio-Temporal Information and KAN Enhancement
by Bo Wang, Zhao Wang, Xu Cao, Jiajun Niu, Zheng Wang and Miao Guo
Electronics 2026, 15(2), 487; https://doi.org/10.3390/electronics15020487 - 22 Jan 2026
Viewed by 233
Abstract
Aiming at the challenges of complex spatial-temporal correlation and strong nonlinearity in the power prediction of large-scale wind farm clusters, this study proposes a short-term wind power prediction method that combines a dynamic graph structure and a Kolmogorov–Arnold Network (KAN) enhanced neural network. Firstly, a spectral embedding fuzzy C-means (FCM) cluster partition method combining geographic location and numerical weather prediction (NWP) is proposed to solve the problem of insufficient spatio-temporal representation ability of traditional methods. Secondly, a dynamic directed graph construction mechanism based on a stacked wind direction matrix and wind speed mutual information is designed to describe the directional correlation between stations with the evolution of meteorological conditions. Finally, a prediction model of dynamic graph convolution and Transformer based on KAN enhancement (DGTK-Net) is constructed to improve the fitting ability of complex nonlinear relationships. Based on the cluster data of 31 wind farms in Gansu Province of China and the cluster data of 70 wind farms in Inner Mongolia, a case study is carried out. The results show that the proposed model is significantly better than the comparison methods in terms of key evaluation indicators, and the root mean square error is reduced by about 1.16% on average. This method provides a prediction tool that can adapt to time and space changes for engineering practice, which is helpful to improve the wind power consumption capacity and operation economy of the power grid. Full article

23 pages, 2542 KB  
Article
Class-Balanced Convolutional Neural Networks for Digital Mammography Image Classification in Breast Cancer Diagnosis
by Evangelos Mavropoulos, Paraskevi Zacharia, Nikolaos Laskaris and Evangelos Pallis
Electronics 2026, 15(2), 486; https://doi.org/10.3390/electronics15020486 - 22 Jan 2026
Viewed by 154
Abstract
This study introduces a class-balanced Convolutional Neural Network (CNN) framework specifically designed for the binary classification of breast tumors in digital mammography. The proposed method systematically addresses the pervasive issue of class imbalance in medical imaging datasets by implementing advanced dataset balancing strategies, which resulted in a significant reduction in false negatives that is critical in early breast cancer detection. The proposed architecture is designed for high-resolution mammograms and employs regularization techniques, such as dropout and L2 weight decay, which are intended to enhance generalization and reduce the risk of overfitting. Comprehensive data augmentation and normalization further enhance the model’s robustness and adaptability to real-world clinical variability. Evaluated on the MIAS dataset, our balanced CNN achieved an accuracy of 98.84%, exhibiting both sensitivity and overall reliability. This work demonstrates that a class-balanced CNN can deliver both high diagnostic accuracy and computational efficiency, indicating potential for future use in clinical screening workflows. The system’s ability to minimize diagnostic errors and support radiologists with reliable, data-driven predictions represents an exploratory step toward improving automated breast cancer detection. Full article
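One standard balancing strategy of the kind the abstract alludes to is inverse-frequency class weighting; the sketch below shows the common formula (the paper may instead, or additionally, use resampling and augmentation):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights: weight_c = n / (k * count_c),
    so rare classes (e.g. malignant cases) contribute more to the loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# A mammography-style imbalance: many benign, few malignant cases.
weights = balanced_class_weights(["benign"] * 90 + ["malignant"] * 10)
print(weights)  # benign ~0.56, malignant 5.0
```

Up-weighting the minority class in this way directly targets the false negatives the abstract highlights, since misclassified malignant samples incur a much larger loss.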

24 pages, 8351 KB  
Article
Resolving Knowledge Gaps in Liquid Crystal Delay Line Phase Shifters for 5G/6G mmW Front-Ends
by Jinfeng Li and Haorong Li
Electronics 2026, 15(2), 485; https://doi.org/10.3390/electronics15020485 - 22 Jan 2026
Viewed by 1128
Abstract
In the context of fifth-generation (5G) communications and the dawn of sixth-generation (6G) networks, surging societal demand for bandwidth and data rate, together with more stringent commercial requirements on transmission efficiency, cost, and reliability, is driving the maturity of reconfigurable millimeter-wave (mmW) and terahertz (THz) devices and systems, in particular liquid crystal (LC)-based tunable solutions for delay line phase shifters (DLPSs). However, the field of LC-combined electronics has witnessed only incremental developments in the past decade. First, the tuning principle has remained largely unchanged (leveraging the shape anisotropy of LC molecules at the microscale and continuum mechanics at the macroscale for variable polarizability). Second, the performance of LC-enabled devices has yet to be standardized (remaining suboptimal case by case across frequency domains). In this context, this work points out three underestimated knowledge gaps drawn from our theoretical designs, computational simulations, and experimental prototypes, respectively. The first gap concerns previously overlooked physical constraints in the analytical model of an LC-embedded coaxial DLPS; a new geometry-dielectric bound is identified. The second gap addresses the neglected suboptimal dispersion behavior in differential delay time (DDT) and differential delay length (DDL) for LC phase-shifting devices. A new figure of merit (FoM) is proposed and defined at the V-band (60 GHz) to comprehensively evaluate the ratios of the DDT and DDL over their standard deviations across the 54 to 66 GHz spectrum. The third gap provides an in-depth explanation of our recent experimental results and an outlook for partial-leakage attack analysis of LC phase shifters in modern eavesdropping. Full article
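To illustrate the dispersion-oriented figure of merit described above, here is a hedged numerical reading of the DDT part (the exact definition in the paper, and whether DDL enters the same ratio, are not given here; the phase data are invented for illustration):

```python
import math

def ddt(delta_phase_deg, freq_hz):
    """Differential delay time implied by a differential phase shift at f."""
    return delta_phase_deg / (360.0 * freq_hz)

def fom(freqs_hz, delta_phases_deg):
    """Hedged reading of the proposed FoM: mean DDT divided by its
    standard deviation across the band; a perfectly non-dispersive
    (true-time-delay) line would push this ratio toward infinity."""
    ddts = [ddt(p, f) for p, f in zip(delta_phases_deg, freqs_hz)]
    mu = sum(ddts) / len(ddts)
    sigma = math.sqrt(sum((t - mu) ** 2 for t in ddts) / len(ddts))
    return mu / sigma

freqs = [54e9, 57e9, 60e9, 63e9, 66e9]          # the 54-66 GHz V-band sweep
phases = [320.0, 341.0, 360.0, 377.0, 392.0]    # slightly dispersive (made up)
print(fom(freqs, phases))
```

The ratio rewards flat delay across the band rather than raw phase shift at a single frequency, which is why the abstract frames it as closing a standardization gap.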

15 pages, 5659 KB  
Article
Compact S- and C-Band Single-/Dual-Band Bandpass Filters with Multiple Transmission Zeros Using Spoof Surface Plasmon Polaritons and Half-Mode Substrate Integrated Waveguide
by Baoping Ren, Pingping Zhang and Kaida Xu
Electronics 2026, 15(2), 484; https://doi.org/10.3390/electronics15020484 - 22 Jan 2026
Viewed by 157
Abstract
In this paper, a flower-shaped spoof surface plasmon polaritons (SSPPs) unit with a strong slow-wave effect is proposed to construct bandpass filters (BPFs). Benefiting from the extended current path induced by the addition of rotated stubs around the rectangular unit, the proposed SSPPs unit exhibits a reduced asymptotic frequency. Following this, a single-band filter with multiple transmission zeros (TZs) in its upper stopband is developed by embedding the unit into a half-mode substrate integrated waveguide (HMSIW). To improve suppression in the lower stopband, a pair of open-circuited stubs is loaded to produce TZs and enhance the frequency selectivity. Consequently, the single-band BPF realizes an impressive roll-off rate of 0.116 dB/MHz. Subsequently, the geometric dimensions of the open-circuited stubs are modified to place the TZs within the passband and acquire dual-band operation. In addition, defected ground structures (DGSs) are loaded to broaden the bandwidth of the notch between the two passbands. Finally, a dual-band filter with a wide suppression band of 0.50 GHz is developed. With roll-off rates of 0.096 and 0.119 dB/MHz, the filter demonstrates good selectivity as well. Full article

15 pages, 3879 KB  
Article
Bluetooth Low Energy-Based Docking Solution for Mobile Robots
by Kyuman Lee
Electronics 2026, 15(2), 483; https://doi.org/10.3390/electronics15020483 - 22 Jan 2026
Viewed by 122
Abstract
Existing docking methods for mobile robots rely on a LiDAR sensor or image processing using a camera. Although both demonstrate excellent performance in terms of sensing distance and spatial resolution, they are sensitive to environmental effects, such as illumination and occlusion, and are expensive. Some environments or conditions require novel low-power, low-cost docking solutions that are less sensitive to the environment. In this study, we propose a guidance and navigation solution for a mobile robot to dock into a docking station using the angle-of-arrival and received-signal-strength-indicator values measured between the mobile robot and the docking station via wireless communication based on Bluetooth low energy (BLE). The proposed algorithm is a LiDAR- and camera-free docking solution. The algorithm is deployed on an actual mobile robot with BLE transceiver hardware, and the obtained docking result closely matches the ground truth. Full article
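The geometry behind AoA-plus-RSSI guidance can be sketched briefly. The log-distance path-loss model and the calibration constants below are illustrative assumptions (the paper's actual estimator and parameters are not given here):

```python
import math

def range_from_rssi(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model (illustrative constants): distance in
    metres implied by an RSSI reading, given the 1 m reference power."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def station_position(rssi_dbm, aoa_deg):
    """Combine the ranged distance with the BLE angle of arrival to place
    the docking station in the robot's body frame (x forward, y left)."""
    d = range_from_rssi(rssi_dbm)
    theta = math.radians(aoa_deg)
    return d * math.cos(theta), d * math.sin(theta)

x, y = station_position(rssi_dbm=-60.0, aoa_deg=30.0)
print(x, y)  # station ~10 m away, 30 degrees off the heading
```

In practice the RSSI range estimate is noisy, so a filter over successive fixes would be needed; the sketch only shows how the two BLE measurements jointly fix a target point to steer toward.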

15 pages, 13678 KB  
Article
A New Low-Noise Power Stage for the GAIA LNA-Biasing Board in Next-Generation Cryogenic Receivers
by Pierluigi Ortu, Andrea Saba, Giuseppe Valente, Alessandro Navarrini, Alessandro Cabras, Roberto Caocci and Giorgio Montisci
Electronics 2026, 15(2), 482; https://doi.org/10.3390/electronics15020482 - 22 Jan 2026
Viewed by 117
Abstract
This paper presents the design and implementation of the Power Stage GAIA (PSG), a high-current digital bias board developed by the Italian National Institute for Astrophysics (INAF) to extend the capabilities of the GAIA bias system. The PSG was developed within the Advanced European THz Receiver Array (AETHRA) project to support next-generation cryogenic receivers for millimeter-wave astronomy. Specifically, the AETHRA Work Package 1 (WP1) W-band downconverter integrates Monolithic Microwave Integrated Circuits (MMICs) requiring currents significantly exceeding the 50 mA limit of standard bias boards. To address these requirements, the PSG introduces a modular extension providing ten independent channels, each capable of delivering up to 500 mA with a programmable output range of 0–5 V. A key feature of the design is the adoption of a fully linear architecture based on LT1970 power amplifiers and INA225 precision sensors managed via an I2C digital interface. This approach ensures the high current capability required by modern power amplifiers while strictly avoiding the spectral noise and Radio Frequency Interference (RFI) typical of switching power supplies. Experimental validation confirms the system’s robustness and precision: the board demonstrated linear operation up to 460 mA and exceptional long-term stability, with a measured RMS voltage deviation below 50 µV. These results establish the PSG as a scalable, low-noise solution suitable for biasing high-power MMICs in future cryogenic receiver arrays. Full article
(This article belongs to the Section Power Electronics)

15 pages, 2027 KB  
Article
Weight Standardization Fractional Binary Neural Network for Image Recognition in Edge Computing
by Chih-Lung Lin, Zi-Qing Liang, Jui-Han Lin, Chun-Chieh Lee and Kuo-Chin Fan
Electronics 2026, 15(2), 481; https://doi.org/10.3390/electronics15020481 - 22 Jan 2026
Viewed by 122
Abstract
In order to achieve better accuracy, modern models have become increasingly large, leading to an exponential increase in computational load, making it challenging to apply them to edge computing. Binary neural networks (BNNs) are models that quantize the filter weights and activations to 1-bit. These models are highly suitable for small chips like advanced RISC machines (ARMs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), system-on-chips (SoCs) and other edge computing devices. To design a model that is more friendly to edge computing devices, it is crucial to reduce the floating-point operations (FLOPs). Batch normalization (BN) is an essential tool for binary neural networks; however, when convolution layers are quantized to 1-bit, the floating-point computation cost of BN layers becomes significantly high. This paper aims to reduce the floating-point operations by removing the BN layers from the model and introducing the scaled weight standardization convolution (WS-Conv) method to avoid the significant accuracy drop caused by the absence of BN layers, and to enhance the model performance through a series of optimizations, adaptive gradient clipping (AGC) and knowledge distillation (KD). Specifically, our model maintains a competitive computational cost and accuracy, even without BN layers. Furthermore, by incorporating a series of training methods, the model’s accuracy on CIFAR-100 is 0.6% higher than the baseline model, fractional activation BNN (FracBNN), while the total computational load is only 46% of the baseline model. With unchanged binary operations (BOPs), the FLOPs are reduced to nearly zero, making it more suitable for embedded platforms like FPGAs or other edge computers. Full article
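The scaled weight standardization idea the abstract builds on can be sketched compactly. This is the generic formulation from normalizer-free networks, not the paper's exact WS-Conv (the gain handling and epsilon are assumptions):

```python
import numpy as np

def ws_conv_weights(w, gain=None, eps=1e-5):
    """Scaled weight standardization sketch: each output filter is shifted
    to zero mean and divided by sqrt(fan_in) * std, then optionally
    rescaled by a learnable gain, replacing the role of BN layers."""
    # w: (out_channels, in_channels, kh, kw)
    mean = w.mean(axis=(1, 2, 3), keepdims=True)
    var = w.var(axis=(1, 2, 3), keepdims=True)
    fan_in = np.prod(w.shape[1:])
    w_hat = (w - mean) / np.sqrt(var * fan_in + eps)
    return w_hat if gain is None else w_hat * gain

w = np.random.default_rng(2).standard_normal((8, 3, 3, 3))
w_hat = ws_conv_weights(w)
print(w_hat.mean(axis=(1, 2, 3)).round(6))  # ~0 for every filter
```

Because the normalization is applied to the weights rather than to activations, no per-batch floating-point statistics are computed at inference time, which is exactly how removing BN cuts the FLOPs the abstract counts.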
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)

30 pages, 7552 KB  
Review
Physics-Informed Neural Networks for Underwater Acoustic Propagation Modeling: A Review
by Yuxiang Gao, Peng Xiao, Shiwei Xie and Zhenglin Li
Electronics 2026, 15(2), 480; https://doi.org/10.3390/electronics15020480 - 22 Jan 2026
Viewed by 317
Abstract
Physics-informed neural networks (PINNs) have recently attracted considerable attention as a framework for solving partial differential equations. Underwater sound-field prediction fundamentally relies on solving acoustic wave equations, making PINNs a natural candidate for this application. This paper reviews recent developments in PINN-based modeling of underwater acoustic propagation, which we group into two main lines of research. The first introduces mathematically motivated simplifications of the governing equations and then employs PINNs as efficient solvers; examples include ray-based PINNs and PINN estimators of modal wavenumbers. The second focuses on improving computational performance by tailoring network architectures and hyperparameters, such as spatial domain-decomposition strategies. While PINNs demonstrate significant potential, challenges persist regarding computational efficiency and convergence in high-frequency regimes. Future research directions are identified, emphasizing a multi-faceted strategy that systematically addresses limitations at both the physical formulation level and the neural network architecture level. By integrating advanced hybrid physics-data modeling and scalable training algorithms, this review highlights the pathway toward bridging the gap between theoretical frameworks and realistic ocean applications. Full article
(This article belongs to the Section Circuit and Signal Processing)

28 pages, 2192 KB  
Article
AptEVS: Adaptive Edge-and-Vehicle Scheduling for Hierarchical Federated Learning over Vehicular Networks
by Yu Tian, Nina Wang, Zongshuai Zhang, Wenhao Zou, Liangjie Zhao, Shiyao Liu and Lin Tian
Electronics 2026, 15(2), 479; https://doi.org/10.3390/electronics15020479 - 22 Jan 2026
Viewed by 140
Abstract
Hierarchical federated learning (HFL) has emerged as a promising paradigm for distributed machine learning over vehicular networks. Despite recent advances in vehicle selection and resource allocation, most still adopt a fixed Edge-and-Vehicle Scheduling (EVS) configuration that keeps the number of participating edge nodes and vehicles per node constant across training rounds. However, given the diverse training tasks and dynamic vehicular environments, our experiments confirm that such static configurations struggle to efficiently meet the task-specific requirements across model accuracy, time delay, and energy consumption. To address this, we first formulate a unified, long-term training cost metric that balances these conflicting objectives. We then propose AptEVS, an adaptive scheduling framework based on deep reinforcement learning (DRL), designed to minimize this cost. The core of AptEVS is its phase-aware design, which adapts the scheduling strategy by first identifying the current training phase and then switching to specialized strategies accordingly. Extensive simulations demonstrate that AptEVS learns an effective scheduling policy online from scratch, consistently outperforming baselines and reducing the long-term training cost by up to 66.0%. Our findings demonstrate that phase-aware DRL is both feasible and highly effective for resource scheduling over complex vehicular networks. Full article
(This article belongs to the Special Issue Technology of Mobile Ad Hoc Networks)

17 pages, 8593 KB  
Article
Adaptive Solving Method for Power System Operation Based on Knowledge-Driven LLM Agents
by Baoliang Li, Hengxu Zhang and Yongji Cao
Electronics 2026, 15(2), 478; https://doi.org/10.3390/electronics15020478 - 22 Jan 2026
Viewed by 166
Abstract
Large language models (LLMs) have achieved remarkable advances in natural-language understanding and content generation, and LLM-based agents demonstrate strong adaptability, flexibility, and robustness in handling complex tasks and enabling automated decision-making. Determining the operating mode of a power system requires repeated adjustments of boundary conditions to address violations. Conventional approaches include expert-driven power flow calculations and optimal power flow methods, the latter of which often lack clear physical interpretability during the iterative optimization process. This study proposes a novel paradigm for automated computation and adjustment of power system operating modes based on LLM-driven multi-agent systems. The approach leverages the reasoning capabilities of LLMs to enhance the adaptability of power flow adjustment strategies, while multi-agent coordination with power flow calculation modules ensures computational accuracy, enabling a natural-language-guided adaptive operational computation and adjustment process. The framework also incorporates retrieval-augmented generation techniques to access external knowledge bases and databases, further improving the agents’ understanding of system operational patterns and the accuracy of decision-making. This method constitutes an exploratory application of LLMs and multi-agent technologies in power system computational analysis, highlighting the considerable potential of LLMs to extend and enhance traditional power system analysis methodologies. Full article
(This article belongs to the Special Issue AI-Enhanced Stability and Resilience in Modern Power Systems)

26 pages, 3088 KB  
Article
A Human-Centered Visual Cognitive Framework for Traffic Pair Crossing Identification in Human–Machine Teaming
by Bufan Liu, Sun Woh Lye, Terry Liang Khin Teo and Hong Jie Wee
Electronics 2026, 15(2), 477; https://doi.org/10.3390/electronics15020477 - 22 Jan 2026
Viewed by 145
Abstract
Human–machine teaming (HMT) in air traffic management (ATM) promises safer, more efficient operations by combining human expertise in decision-making with machine efficiency in data processing, where traffic pair crossing identification is crucial for effective conflict detection and resolution by recognizing aircraft pairs that may lead to conflict. To facilitate this goal, this paper presents a four-phase cognitive framework to enhance HMT for monitoring traffic pairs at crossing points through a human-centered, visual-based approach. The visual cognitive framework integrates three data streams—eye-tracking metrics, mouse-over actions, and issued radar commands—to capture the traffic context from the controller’s perspective. A target pair identification method is designed to generate potential conflict pairs. Controller behavior is then modeled using a sighting timeline, yielding insights to develop the cognitive mechanism. Using air traffic crossing-conflict monitoring in en route airspace as a case study, the framework successfully captures the state of controllers’ monitoring and awareness behavior through tests on five target flight pairs under various crossing conditions. Specifically, aware monitoring activities are characterized by a higher fixation count on either flight across a 10 min window, with 53% to 100% of visual input activities occurring within the 8–7 min and 3–2 min windows before crossing, ensuring timely conflict management. Furthermore, the study quantifies the effect of crossing geometry, whereby narrow-angle crossings (21 degrees) require significantly higher monitoring intensity (15 paired sightings) compared to wide- or moderate-angle crossings. These results indicate that controllers exhibit distinct monitoring and awareness behaviors when identifying and managing conflicts across the different test pairs, demonstrating the effectiveness and applicability of the proposed visual cognitive framework. Full article

52 pages, 3528 KB  
Review
Advanced Fault Detection and Diagnosis Exploiting Machine Learning and Artificial Intelligence for Engineering Applications
by Davide Paolini, Pierpaolo Dini, Abdussalam Elhanashi and Sergio Saponara
Electronics 2026, 15(2), 476; https://doi.org/10.3390/electronics15020476 - 22 Jan 2026
Viewed by 516
Abstract
Modern engineering systems require reliable and timely Fault Detection and Diagnosis (FDD) to ensure operational safety and resilience. Traditional model-based and rule-based approaches, although interpretable, exhibit limited scalability and adaptability in complex, data-intensive environments. This survey provides a systematic overview of recent studies exploring Machine Learning (ML) and Artificial Intelligence (AI) techniques for FDD across industrial, energy, Cyber-Physical Systems (CPS)/Internet of Things (IoT), and cybersecurity domains. Deep architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Transformers, and Graph Neural Networks (GNNs) are compared with unsupervised, hybrid, and physics-informed frameworks, emphasizing their respective strengths in adaptability, robustness, and interpretability. Quantitative synthesis and radar-based assessments suggest that AI-driven FDD approaches offer increased adaptability, scalability, and early fault detection capabilities compared to classical methods, while also introducing new challenges related to interpretability, robustness, and deployment. Emerging research directions include the development of foundation and multimodal models, federated learning (FL), and privacy-preserving learning, as well as physics-guided trustworthy AI. These trends indicate a paradigm shift toward self-adaptive, interpretable, and collaborative FDD systems capable of sustaining reliability, transparency, and autonomy across critical infrastructures. Full article

37 pages, 483 KB  
Review
Lattice-Based Cryptographic Accelerators for the Post-Quantum Era: Architectures, Optimizations, and Implementation Challenges
by Hua Yan, Lei Wu, Qiming Sun and Pengzhou He
Electronics 2026, 15(2), 475; https://doi.org/10.3390/electronics15020475 - 22 Jan 2026
Viewed by 441
Abstract
The imminent threat of large-scale quantum computers to modern public-key cryptographic devices has led to extensive research into post-quantum cryptography (PQC). Lattice-based schemes have proven to be the top candidate among existing PQC schemes due to their strong security guarantees, versatility, and relatively efficient operations. However, the computational cost of lattice-based algorithms—including various arithmetic operations such as the Number Theoretic Transform (NTT), polynomial multiplication, and sampling—poses considerable performance challenges in practice. This survey offers a comprehensive review of hardware acceleration for lattice-based cryptographic schemes, covering both the architectural and implementation details of the standardized algorithms CRYSTALS-Kyber, CRYSTALS-Dilithium, and FALCON (Fast Fourier Lattice-Based Compact Signatures over NTRU). It examines optimization measures at various levels, such as algorithmic optimization, arithmetic unit design, memory hierarchy management, and system integration. The paper compares the various performance measures (throughput, latency, area, and power) of Field-Programmable Gate Array (FPGA) and Application-Specific Integrated Circuit (ASIC) implementations. We also address major issues related to implementation, side-channel resistance, resource constraints within IoT (Internet of Things) devices, and the trade-offs between performance and security. Finally, we point out new research opportunities and existing challenges, with implications for hardware accelerator design in the post-quantum cryptographic environment. Full article
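For readers unfamiliar with the NTT that dominates these workloads, a toy sketch follows. It uses a deliberately tiny modulus P = 17 and length N = 4 rather than real scheme parameters (e.g., Kyber's q = 3329), and a naive O(N²) transform instead of the O(N log N) butterfly datapaths the surveyed accelerators implement:

```python
P, N, W = 17, 4, 4          # modulus, length, primitive N-th root of unity mod P

def ntt(a, w=W):
    """Naive O(N^2) number-theoretic transform mod P (illustrative only;
    hardware accelerators use pipelined O(N log N) butterfly networks)."""
    return [sum(a[j] * pow(w, i * j, P) for j in range(N)) % P for i in range(N)]

def intt(A):
    inv_n = pow(N, P - 2, P)             # N^{-1} mod P via Fermat's little theorem
    a = ntt(A, pow(W, P - 2, P))         # inverse transform uses w^{-1}
    return [(x * inv_n) % P for x in a]

def poly_mul(a, b):
    """Cyclic (mod x^N - 1) polynomial product via pointwise NTT multiply."""
    A, B = ntt(a), ntt(b)
    return intt([(x * y) % P for x, y in zip(A, B)])
```

The pointwise multiply in the transform domain is what turns quadratic-cost polynomial multiplication into the NTT-dominated pipeline that these accelerators optimize.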
28 pages, 5825 KB  
Article
Deep Learning Computer Vision-Based Automated Localization and Positioning of the ATHENA Parallel Surgical Robot
by Florin Covaciu, Bogdan Gherman, Nadim Al Hajjar, Ionut Zima, Calin Popa, Alexandru Pusca, Andra Ciocan, Calin Vaida, Anca-Elena Iordan, Paul Tucan, Damien Chablat and Doina Pisla
Electronics 2026, 15(2), 474; https://doi.org/10.3390/electronics15020474 - 22 Jan 2026
Viewed by 172
Abstract
Manual alignment between the trocar, surgical instrument, and robot during minimally invasive surgery (MIS) can be time-consuming and error-prone, and many existing systems do not provide autonomous localization and pose estimation. This paper presents an artificial intelligence (AI)-assisted, vision-guided framework for automated localization and positioning of the ATHENA parallel surgical robot. The proposed approach combines an Intel RealSense RGB–depth (RGB-D) camera with a You Only Look Once version 11 (YOLO11) object detection model to estimate the 3D spatial coordinates of key surgical components in real time. The estimated coordinates are streamed over Transmission Control Protocol/Internet Protocol (TCP/IP) to a programmable logic controller (PLC) using Modbus/TCP, enabling closed-loop robot positioning for automated docking. Experimental validation in a controlled setup designed to replicate key intraoperative constraints demonstrated submillimeter positioning accuracy (≤0.8 mm), an average end-to-end latency of 67 ms, and a 42% reduction in setup time compared with manual alignment, while remaining robust under variable lighting. These results indicate that the proposed perception-to-control pipeline is a practical step toward reliable autonomous robotic docking in MIS workflows. Full article
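The core perception step, turning a YOLO detection plus a RealSense depth reading into camera-frame 3D coordinates, is standard pinhole back-projection. A minimal sketch, where the intrinsics fx, fy, cx, cy are placeholders rather than the camera's calibrated values:

```python
def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a detected pixel (u, v) with its measured depth into
    camera-frame 3D coordinates via the pinhole model (intrinsics are
    placeholders; a real system uses the RGB-D camera's calibration)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

In a full pipeline these camera-frame coordinates would still be transformed into the robot base frame via an extrinsic calibration before being streamed to the PLC.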

42 pages, 6277 KB  
Article
Process-Aware Selective Disclosure and Identity Unlinkability: A Tag-Based Interoperability-Enhancing Digital Identity Framework and Its Application to Logistics Transportation Workflows
by Junliang Liu, Zhiyao Liang and Qiuyun Lyu
Electronics 2026, 15(2), 473; https://doi.org/10.3390/electronics15020473 - 22 Jan 2026
Viewed by 177
Abstract
This paper proposes a process-aware, tag-based digital identity framework that enhances interoperability while enabling identity unlinkability and selective disclosure across multi-party workflows involving sensitive data. We realize this framework within the self-sovereign identity (SSI) paradigm, employing zk-SNARK–based zero-knowledge proofs to enable verifiable identity authentication without plaintext disclosure. The framework introduces a protocol-tagging mechanism to support multiple proof systems within a unified architecture, thereby enhancing SSI scalability and interoperability. Its core innovation lies in combining identity unlinkability and process-driven data disclosure: derived sub-identities mitigate identity-linkage attacks, while layered encryption enables selective, stepwise decryption of sensitive information (e.g., delivery addresses), ensuring participants access only the minimal information necessary for their tasks. In addition, zero-knowledge proof-based verification guarantees that the validation of derived sub-identities can be performed without sharing any plaintext attributes or identifying factors. We applied the framework to logistics, where sub-identities anonymize participants and layered encryption allows for delivery addresses to be decrypted progressively along the logistics chain, with only the final courier authorized to access complete information. During the parcel receipt process, users can complete verification using derived sub-identities and zero-knowledge proofs alone, without disclosing any real personal information or attributes that could be linked back to their identity. Trusted Execution Environments (TEEs) ensure the authenticity of decryption requests, while blockchain provides immutable audit trails. A demonstration system was implemented, formally verified using Scyther, and performance-tested across multiple platforms, including resource-constrained environments, showing high efficiency and strong practical potential. The core paradigms of identity unlinkability and process-driven data disclosure are generalizable and applicable to multi-party scenarios involving sensitive data flows. Full article

19 pages, 9300 KB  
Article
Performance Analysis and Predictive Modeling of Microinverters Under Varying Environmental Conditions
by Sahin Gullu, Mehmet Onur Kok and Khalil Alluhaybi
Electronics 2026, 15(2), 472; https://doi.org/10.3390/electronics15020472 - 22 Jan 2026
Viewed by 100
Abstract
This study conducts both experimental and statistical analyses of microinverter performance within a compact AC-PV module that integrates a PV panel and a microinverter without battery integration. Using measurement data in combination with correlation analysis, derived thermal indicators, and quadratic regression modeling, the research provides a comprehensive quantitative assessment of microinverter behavior under practical operating conditions. A central finding is that the PV module’s temperature rise above ambient, ΔTmodule, serves as the most reliable single predictor of output power, with a coefficient of determination of R2 = 0.85. The coefficient of determination for ΔTmodule surpasses even those of solar irradiance and the microinverter temperature rise, ΔTmicro, with R2 = 0.80 and R2 = 0.75, respectively. This underscores the importance of the module’s excess thermal loading, rather than its absolute temperature alone. In contrast, ambient temperature (R2 = 0.04) proves to be a negligible variable for output power prediction. Also, comparing experimental temperatures with semi-empirical models showed that the PV temperature formula captures key thermal behavior, and the difference between theoretical and measured values is around 12%. From a design standpoint, these results highlight that enhancing thermal management at the module–inverter interface can directly improve output stability and support battery integration for the long-term reliability of AC-PV modules in future studies. Full article
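The quadratic regression and R² comparison used to rank predictors can be reproduced generically. The function below is an assumed reconstruction of the modeling step, shown on synthetic data, not the authors' code:

```python
import numpy as np

def quadratic_r2(x, y):
    """Fit y ~ a*x^2 + b*x + c and return (coefficients, R^2), mirroring
    the quadratic regression used to compare candidate predictors."""
    coeffs = np.polyfit(x, y, 2)          # highest-degree coefficient first
    y_hat = np.polyval(coeffs, x)
    ss_res = np.sum((y - y_hat) ** 2)     # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot
```

Running this once per candidate predictor (ΔTmodule, irradiance, ΔTmicro, ambient temperature) against measured output power yields the R² ranking reported in the abstract.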

25 pages, 911 KB  
Article
Performance-Driven End-to-End Optimization for UAV-Assisted Satellite Downlink with Hybrid NOMA/OMA Transmission
by Tie Liu, Chenhua Sun, Yasheng Zhang and Wenyu Sun
Electronics 2026, 15(2), 471; https://doi.org/10.3390/electronics15020471 - 22 Jan 2026
Viewed by 95
Abstract
Unmanned aerial vehicle (UAV)-assisted satellite downlink transmission is a promising solution for improving coverage and throughput under challenging propagation conditions. However, the achievable performance gains are fundamentally constrained by the coupling between access transmission and the satellite–UAV backhaul, especially when decode-and-forward (DF) relaying and hybrid multiple access are employed. In this paper, we investigate the problem of end-to-end downlink sum-rate maximization in a UAV-assisted satellite network with hybrid non-orthogonal multiple access (NOMA)/orthogonal multiple access (OMA) transmission. We propose a performance-driven end-to-end optimization framework, in which UAV placement is optimized as an outer-layer control variable through an iterative procedure. For each candidate UAV position, a greedy transmission mode selection mechanism and a KKT-based satellite-to-UAV backhaul bandwidth allocation scheme are jointly executed in the inner layer to evaluate the resulting end-to-end downlink performance, whose feedback is then used to update the UAV position until convergence. Simulation results show that the proposed framework consistently outperforms benchmark schemes without requiring additional spectrum or transmit power. Under low satellite elevation angles, the proposed design improves system sum rate and spectral efficiency by approximately 25–35% compared with satellite-only NOMA transmission. In addition, the average user rate is increased by up to 37% under moderate network sizes, while maintaining stable relative gains as the number of users increases, confirming the effectiveness and scalability of the proposed approach. Full article
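As intuition for KKT-based resource allocation of this kind, the classic water-filling rule, the textbook KKT solution for allocating power across parallel channels, can be sketched. This is an illustrative analogue only, not the paper's exact backhaul bandwidth scheme:

```python
def water_filling(gains, total_power, iters=60):
    """Classic KKT-derived water-filling: maximize sum of log(1 + p_i * g_i)
    subject to sum(p_i) = total_power. The water level mu is found by
    bisection; channels with 1/g_i above mu get nothing."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        lo, hi = (mu, hi) if used < total_power else (lo, mu)
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

The same stationarity-plus-complementary-slackness structure underlies KKT-based bandwidth splits: better links receive more resource until a common "water level" equalizes marginal rates.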

17 pages, 2713 KB  
Article
Enhanced Command Filter-Based Adaptive Asymptotic Backstepping Tracking Control and Its Application
by Dexu Wang and Jiapeng Liu
Electronics 2026, 15(2), 470; https://doi.org/10.3390/electronics15020470 - 22 Jan 2026
Viewed by 100
Abstract
We propose an adaptive backstepping-based asymptotic control scheme for a class of nonlinear systems with uncertain dynamics. By employing the enhanced command filter, the virtual stabilizing functions are reconstructed to solve the zero-error regulation problem. Then, a novel adaptive control strategy is introduced to improve the system performance in the presence of uncertain functions. Compared with existing command-filtered backstepping controllers, the standard error compensation mechanism is not required in our controller. System performance analysis shows that the asymptotic convergence of the tracking error is guaranteed under our control method. Finally, the effectiveness of the proposed asymptotic control strategy is validated by using simulation results. Full article
(This article belongs to the Section Systems & Control Engineering)

13 pages, 2210 KB  
Article
High-Throughput Control-Data Acquisition for Multicore MCU-Based Real-Time Control Systems Using Double Buffering over Ethernet
by Seung-Hun Lee, Duc M. Tran and Joon-Young Choi
Electronics 2026, 15(2), 469; https://doi.org/10.3390/electronics15020469 - 22 Jan 2026
Viewed by 219
Abstract
For the design, implementation, performance optimization, and predictive maintenance of high-speed real-time control systems with sub-millisecond control periods, the capability to acquire large volumes of high-rate control data in real time is required without interfering with normal control operation that is repeatedly executed in each extremely short control cycle. In this study, we propose a control-data acquisition method for high-speed real-time control systems with sub-millisecond control periods, in which control data are transferred to an external host device via Ethernet in real time. To enable the transmission of high-rate control data without disturbing the real-time control operation, a multicore microcontroller unit (MCU) is adopted, where the control task and the data transmission task are executed on separately assigned central processing unit (CPU) cores. Furthermore, by applying a double-buffering algorithm, continuous Ethernet communication without intermediate waiting time is achieved, resulting in a substantial improvement in transmission throughput. Using a control card based on TI’s multicore MCU TMS320F28388D, which consists of dual digital signal processor cores and one connectivity manager (CM) core, the proposed control-data acquisition method is implemented and an actual experimental environment is constructed. Experimental results show that the double-buffering transmission achieves a maximum throughput of 94.2 Mbps on a 100 Mbps Fast Ethernet link, providing a 38.5% improvement over the single-buffering case and verifying the high performance and efficiency of the proposed data acquisition method. Full article
(This article belongs to the Section Industrial Electronics)
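The double-buffering idea, one buffer filled by the control task while the previously filled one is drained by the Ethernet task, can be sketched language-neutrally (Python here for brevity; the paper's implementation runs the two tasks on separate cores of the TMS320F28388D, and the buffer size is illustrative):

```python
class DoubleBuffer:
    """Two buffers swap roles: the control task fills the active one each
    cycle while the previously filled one is handed to the transmit task,
    so Ethernet I/O never blocks the control loop."""
    def __init__(self, size):
        self.bufs = [[], []]
        self.active = 0
        self.size = size

    def log(self, sample):
        """Called once per control cycle from the control core."""
        buf = self.bufs[self.active]
        buf.append(sample)
        if len(buf) == self.size:        # active buffer full: swap roles
            self.active ^= 1
            self.bufs[self.active] = []  # assume its transmission finished
            return buf                   # hand the full buffer to the TX task
        return None                      # control loop continues undisturbed
```

Because a fresh buffer is always ready when the other fills, transmissions can follow one another without intermediate waiting, which is the source of the throughput gain over single buffering.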

14 pages, 2215 KB  
Article
Preoperative Surgical Planning for Lumbar Spine Pedicle Screw Placement Using PointNet
by Seokbin Hwang, Suk-Joong Lee and Sungmin Kim
Electronics 2026, 15(2), 468; https://doi.org/10.3390/electronics15020468 - 21 Jan 2026
Viewed by 170
Abstract
This study introduces a novel framework for defining screw trajectory that utilizes PointNet—a deep neural network trained on lumbar vertebrae point clouds—to improve the manual surgical planning procedures. The conventional architecture of PointNet was modified to accommodate various vertebral orientations and predict six values, which were reconstructed into two control points that define a linear trajectory. A custom loss function was designed to align the predicted trajectory with the ground-truth trajectory. The neural networks were trained on 4284 point clouds of vertebrae, and 28 unseen point clouds were used to evaluate the model’s performance based on translational error, angular error, and clinical accuracy. For the left pedicle, the mean translational errors were 1.5 ± 0.8 mm at the entry point and 2.3 ± 1.2 mm at the target point. For the right pedicle, the mean translational errors were 1.5 ± 0.7 mm at the entry point and 2.3 ± 1.0 mm at the target point. The mean angular error was 3.5 ± 2.3° for the left pedicle and 3.9 ± 1.7° for the right pedicle. Clinically, the network generated 52 out of 56 trajectories without medial-cortical violations of the spinal canal. The trained neural network demonstrated promising technical and clinical accuracy, generating feasible screw trajectories across various vertebral orientations. Integrating a spinal segmentation network with the proposed framework could enable fully automated surgical planning in the future. Full article
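The reported translational and angular errors between a predicted and a ground-truth screw trajectory, each defined by an entry point and a target point, can be computed as follows (an assumed reconstruction of the evaluation metrics, not the authors' code):

```python
import math

def trajectory_errors(pred_entry, pred_target, gt_entry, gt_target):
    """Translational errors at the entry and target points, plus the angle
    between the two trajectory directions (degrees)."""
    def direction(a, b):
        v = [bi - ai for ai, bi in zip(a, b)]
        norm = math.sqrt(sum(c * c for c in v))
        return [c / norm for c in v]
    d_pred = direction(pred_entry, pred_target)
    d_gt = direction(gt_entry, gt_target)
    cos_t = max(-1.0, min(1.0, sum(p * q for p, q in zip(d_pred, d_gt))))
    return (math.dist(pred_entry, gt_entry),
            math.dist(pred_target, gt_target),
            math.degrees(math.acos(cos_t)))
```

Clamping the cosine before `acos` guards against floating-point values slightly outside [-1, 1] when the two directions are nearly parallel.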

15 pages, 2333 KB  
Article
Transient Synchronization Stability Analysis of DFIG-Based Wind Turbines with Virtual Resistance Demagnetization Control
by Xiaohe Wang, Xiaofei Chang, Ming Yan, Zhanqi Huang and Chao Wu
Electronics 2026, 15(2), 467; https://doi.org/10.3390/electronics15020467 - 21 Jan 2026
Viewed by 179
Abstract
With the increasing penetration of wind power, the transient synchronization stability of doubly fed induction generator (DFIG)-based wind turbines during grid faults has become a critical issue. While conventional fault ride-through methods like Crowbar protection can ensure safety, they compromise system controllability and worsen grid voltage conditions. Virtual resistance demagnetization control has emerged as a promising alternative due to its simple structure and effective flux damping. However, its impact on transient synchronization stability has not been revealed in existing studies. To fill this gap, this paper presents a comprehensive analysis of the transient synchronization stability of DFIG systems under virtual resistance control, introducing a novel fourth-order transient synchronization model that explicitly captures the coupling between the virtual resistance demagnetization control and phase-locked loop (PLL) dynamics. The model reveals the emergence of transient power and positive damping terms induced by the virtual resistance, which play a pivotal role in stabilizing the system. Furthermore, this work theoretically investigates how the virtual resistance and current loop’s proportional-integral (PI) parameters jointly influence transient stability, demonstrating that increasing the virtual resistance while reducing the integral gain of the current loop significantly enhances synchronization stability. Simulation results validate the accuracy of the model and the effectiveness of the proposed analysis. The findings provide a theoretical foundation for optimizing control parameters and improving the stability of DFIG-based wind turbines during grid faults. Full article

33 pages, 2850 KB  
Article
Automated Vulnerability Scanning and Prioritisation for Domestic IoT Devices/Smart Homes: A Theoretical Framework
by Diego Fernando Rivas Bustos, Jairo A. Gutierrez and Sandra J. Rueda
Electronics 2026, 15(2), 466; https://doi.org/10.3390/electronics15020466 - 21 Jan 2026
Viewed by 307
Abstract
The expansion of Internet of Things (IoT) devices in domestic smart homes has created new conveniences but also significant security risks. Insecure firmware, weak authentication and weak encryption leave households exposed to privacy breaches, data leakage and systemic attacks. Although research has addressed several challenges, contributions remain fragmented and difficult for non-technical users to apply. This work addresses the following research question: How can a theoretical framework be developed to enable automated vulnerability scanning and prioritisation for non-technical users in domestic IoT environments? A Systematic Literature Review of 40 peer-reviewed studies, conducted under PRISMA 2020 guidelines, identified four structural gaps: dispersed vulnerability knowledge, fragmented scanning approaches, over-reliance on technical severity in prioritisation and weak protocol standardisation. The paper introduces a four-module framework: a Vulnerability Knowledge Base, an Automated Scanning Engine, a Context-Aware Prioritisation Module and a Standardisation and Interoperability Layer. The framework advances knowledge by integrating previously siloed approaches into a layered and iterative artefact tailored to households. While limited to conceptual evaluation, the framework establishes a foundation for future work in prototype development, household usability studies and empirical validation. By addressing fragmented evidence with a coherent and adaptive design, the study contributes to both academic understanding and practical resilience, offering a pathway toward more secure and trustworthy domestic IoT ecosystems. Full article

31 pages, 3765 KB  
Article
Rain Detection in Solar Insecticidal Lamp IoTs Systems Based on Multivariate Wireless Signal Feature Learning
by Lingxun Liu, Lei Shu, Yiling Xu, Kailiang Li, Ru Han, Qin Su and Jiarui Fang
Electronics 2026, 15(2), 465; https://doi.org/10.3390/electronics15020465 - 21 Jan 2026
Viewed by 171
Abstract
Solar insecticidal lamp Internet of Things (SIL-IoTs) systems are widely deployed in agricultural environments, where accurate and timely rain detection is crucial for system stability and energy-efficient operation. However, existing rain-sensing solutions rely on additional hardware, leading to increased cost and maintenance complexity. This study proposes a hardware-free rain detection method based on multivariate wireless signal feature learning, using LTE communication data. A large-scale primary dataset containing 11.84 million valid samples was collected from a real farmland SIL-IoTs deployment in Nanjing, recording RSRP, RSRQ, and RSSI at 1 Hz. To address signal heterogeneity, a signal-strength stratification strategy and a dual-rate EWMA-based adaptive signal-leveling mechanism were introduced. Four machine-learning models—Logistic Regression, Random Forest, XGBoost, and LightGBM—were trained and evaluated using both the primary dataset and an external test dataset collected in Changsha and Dongguan. Experimental results show that XGBoost achieves the highest detection accuracy, whereas LightGBM provides a favorable trade-off between performance and computational cost. Evaluation using accuracy, precision, recall, F1-score, and ROC-AUC indicates that all metrics exceed 0.975. The proposed method demonstrates strong accuracy, robustness, and cross-regional generalization, providing a practical and scalable solution for rain detection in agricultural IoT systems without additional sensing hardware. Full article
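The dual-rate EWMA leveling idea, a fast average tracking short-term signal behavior against a slowly adapting baseline, can be sketched as follows (the smoothing factors are illustrative assumptions, not the paper's tuned values):

```python
class DualRateEWMA:
    """Adaptive signal leveling with two smoothing rates: a slow EWMA
    tracks the baseline signal level (e.g., RSRP), a fast EWMA tracks
    short-term behavior; their gap is a leveled feature in which
    rain-induced attenuation stands out."""
    def __init__(self, alpha_fast=0.3, alpha_slow=0.01):
        self.af, self.aslow = alpha_fast, alpha_slow
        self.fast = self.slow = None

    def update(self, x):
        if self.fast is None:
            self.fast = self.slow = x          # initialize both on first sample
        else:
            self.fast += self.af * (x - self.fast)
            self.slow += self.aslow * (x - self.slow)
        return self.fast - self.slow           # leveled residual feature
```

Feeding such residuals, rather than raw signal strength, to the classifiers removes site-to-site baseline differences and is one plausible realization of the described leveling mechanism.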
24 pages, 1420 KB  
Article
Distributed Photovoltaic–Storage Hierarchical Aggregation Method Based on Multi-Source Multi-Scale Data Fusion
by Shaobo Yang, Xuekai Hu, Lei Wang, Guanghui Sun, Min Shi, Zhengji Meng, Zifan Li, Zengze Tu and Jiapeng Li
Electronics 2026, 15(2), 464; https://doi.org/10.3390/electronics15020464 - 21 Jan 2026
Abstract
Accurate model aggregation is pivotal for the efficient dispatch and control of massive distributed photovoltaic (PV) and energy storage (ES) resources. However, the lack of unified standards across equipment manufacturers results in inconsistent data formats and resolutions. Furthermore, external disturbances such as noise and packet loss exacerbate the problem. The resulting data are massive, multi-source, and heterogeneous, which poses severe challenges to building effective aggregation models. To address these issues, this paper proposes a hierarchical aggregation method based on multi-source multi-scale data fusion. First, a Multi-source Multi-scale Decision Table (Ms-MsDT) model is constructed to establish a unified framework for the flexible storage and representation of heterogeneous PV-ES data. Subsequently, a two-stage fusion framework is developed, combining Information Gain (IG) for global coarse screening and Scale-based Trees (SbT) for local fine-grained selection. This approach achieves adaptive scale optimization, effectively balancing data volume reduction with high-fidelity feature preservation. Finally, a hierarchical aggregation mechanism is introduced, employing the Analytic Hierarchy Process (AHP) and a weight-guided improved K-Means algorithm to perform targeted clustering tailored to the specific control requirements of different voltage levels. Validation on an IEEE 33-node system demonstrates that the proposed method significantly improves data approximation precision and clustering compactness compared to conventional approaches. Full article
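One common way to realize weight-guided clustering of the kind described above is to scale each feature column by the square root of its (e.g. AHP-derived) weight before running standard Lloyd-style K-Means, so that ordinary squared Euclidean distance becomes the weighted distance on the original features. The sketch below illustrates this idea; the function name, the simple iteration scheme, and all parameters are assumptions for the example, not the paper's exact algorithm.

```python
import numpy as np

def weighted_kmeans(X, weights, k=2, iters=50, seed=0):
    """Lloyd-style K-Means on weight-scaled features (illustrative sketch).

    Scaling column j by sqrt(weights[j]) makes squared Euclidean distance
    in the scaled space equal the weights-weighted distance on X.
    """
    rng = np.random.default_rng(seed)
    Xw = X * np.sqrt(np.asarray(weights, dtype=float))
    centers = Xw[rng.choice(len(Xw), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((Xw[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster becomes empty.
        new_centers = np.array([
            Xw[labels == j].mean(axis=0) if (labels == j).any() else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

Raising a feature's weight shrinks or stretches that axis, so clusters form preferentially along the directions the weighting scheme deems important for the given voltage level.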
(This article belongs to the Section Industrial Electronics)
19 pages, 4754 KB  
Article
Enhancing Adversarial Policy Learning via Value-Based Reward Shaping
by Bo Hou, Guangyu Pan and Yao Chen
Electronics 2026, 15(2), 463; https://doi.org/10.3390/electronics15020463 - 21 Jan 2026
Abstract
In adversarial reinforcement learning, designing dense reward functions is a traditional approach to address the sparsity of adversarial objectives. However, conventional reward design often relies on high-quality domain knowledge and may fail in practice, thereby inducing objective misalignment—a discrepancy between optimizing the designed reward and achieving the true adversarial utility. To reduce this discrepancy, a Value-Based Reward Shaping (VBRS) framework is proposed. VBRS integrates an intrinsic state-value estimate, which is a dynamic predictor of long-term utility, into the immediate reward function. As a result, exploration can be encouraged toward states predicted to be strategically advantageous, potentially avoiding some local optima in practice. Experiments demonstrate that VBRS outperforms a baseline that relies solely on the original reward function. The results confirm that the proposed method enhances adversarial performance and helps bridge the gap between designed reward guidance and the adversarial objective. Full article
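The shaping scheme described above can be illustrated schematically: the immediate reward is augmented with a learned state-value estimate of the successor state. The two functions below are assumptions for the example (the names, the mixing coefficient `beta`, and the exact combination rule are not taken from the paper); the second shows the classic potential-based variant for comparison.

```python
# Illustrative reward-shaping sketches (not the paper's exact formulation).
def shaped_reward(env_reward, v_next, beta=0.1):
    """r_shaped = r_env + beta * V(s'): steer exploration toward states
    the critic predicts to be strategically advantageous."""
    return env_reward + beta * v_next

def potential_shaped_reward(env_reward, v_s, v_next, gamma=0.99, beta=1.0):
    """Potential-based variant, r_env + beta * (gamma * V(s') - V(s)),
    which is known to leave the optimal policy unchanged when the
    potential is a fixed function of state."""
    return env_reward + beta * (gamma * v_next - v_s)
```

Because `V` is itself learned during training, the first form gives a dynamic dense signal even when `env_reward` is sparse, at the cost of weaker theoretical guarantees than the potential-based form.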