
Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

21 pages, 4260 KB  
Article
CMCLTrack: Reliability-Modulated Cross-Modal Adapter and Cross-Layer Mamba Fusion for RGB-T Tracking
by Pengfei Li, Xiaohe Li and Zide Fan
Electronics 2026, 15(5), 989; https://doi.org/10.3390/electronics15050989 - 27 Feb 2026
Viewed by 304
Abstract
Single-object tracking has progressed rapidly, yet it remains fragile under low illumination, occlusion, and background clutter. RGB-Thermal (RGB-T) tracking improves robustness via modality complementarity, yet many existing trackers do not dynamically switch the dominant modality as sensing quality changes and often rely on simple late fusion at a single stage, underutilizing multi-level features across the backbone. To address these challenges, we propose CMCLTrack, a unified framework that integrates the Reliability-Modulated Cross-Modal Adapter (RMCA) and the Cross-Layer Mamba Fusion (CLMF). Specifically, RMCA performs reliability-aware bidirectional cross-modal interaction by dynamically weighting modality contributions, while CLMF efficiently aggregates complementary cues from multiple encoder layers to exploit multi-level representations. To stabilize the learning of layer-wise modality reliability, we additionally incorporate a cross-layer reliability smoothness regularization. Extensive experiments on multiple RGB-T tracking benchmarks demonstrate that CMCLTrack achieves competitive performance compared to existing state-of-the-art methods. Full article
(This article belongs to the Special Issue Advances in Multitarget Tracking and Applications)
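The reliability-weighted fusion idea in the abstract above can be sketched in a few lines. This is a toy illustration of weighting two modality feature vectors by estimated reliability, not the paper's RMCA module; the scores and vectors are invented.

```python
# Toy sketch: reliability-weighted fusion of RGB and thermal feature
# vectors. Illustrative only -- not the paper's RMCA adapter.

def reliability_weights(rgb_score, thermal_score):
    """Normalize per-modality reliability scores into fusion weights."""
    total = rgb_score + thermal_score
    if total == 0:
        return 0.5, 0.5  # no information: weight modalities equally
    return rgb_score / total, thermal_score / total

def fuse(rgb_feat, thermal_feat, rgb_score, thermal_score):
    """Convex combination of the two feature vectors."""
    w_rgb, w_t = reliability_weights(rgb_score, thermal_score)
    return [w_rgb * r + w_t * t for r, t in zip(rgb_feat, thermal_feat)]

# In low light the thermal modality dominates the fused feature.
print(fuse([1.0, 0.0], [0.0, 1.0], rgb_score=0.2, thermal_score=0.8))
```

The point of the normalization is that the dominant modality can switch frame by frame as the reliability scores change, which is the behavior the abstract says many trackers lack.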

35 pages, 1070 KB  
Article
Adaptive Deep Learning Framework for Emotion Recognition in Social Robots: Toward Inclusive Human–Robot Interaction for Users with Special Needs
by Eryka Probierz and Adam Gałuszka
Electronics 2026, 15(5), 924; https://doi.org/10.3390/electronics15050924 - 25 Feb 2026
Viewed by 492
Abstract
Emotion recognition is a key capability of social robots operating in real-world human-centered environments, especially when interacting with users with special needs. Such users may express emotions in atypical, subtle, or strongly context-dependent ways. These characteristics pose significant challenges for conventional emotion recognition systems. This paper proposes an adaptive deep learning framework for emotion recognition in social robots. The framework is designed to support inclusive and accessible human–robot interaction. It combines region-based convolutional neural networks with adaptive learning mechanisms. These mechanisms explicitly model individual variability, contextual information, and interaction dynamics. Multiple deep architectures are evaluated to assess robustness across diverse emotional expressions, including those influenced by cognitive, sensory, or developmental differences. Rather than relying on fixed emotion models, the proposed approach emphasizes adaptability. The system dynamically adjusts its perception strategies to user-specific expressive patterns. Experimental validation is conducted using context-aware emotion datasets. Performance is evaluated in terms of detection accuracy, robustness to variability, and generalization across emotion categories. The results show that adaptive mechanisms improve recognition performance in scenarios characterized by non-standard or low-intensity expressions, compared to static baseline models. This study highlights the importance of flexible, context-sensitive perception for inclusive social robotics. It also discusses design implications for deploying emotion-aware robots in assistive, educational, and therapeutic settings. Overall, the proposed framework represents a step toward socially intelligent robots capable of engaging more effectively with users with special needs. Full article
(This article belongs to the Special Issue Research on Deep Learning and Human-Robot Collaboration)

14 pages, 2793 KB  
Article
A Cross-Domain Authentication Key Agreement Protocol for Edge Computing
by Zhaobo Wang, Wen Feng, Yifeng Yin and Zhiyong Jing
Electronics 2026, 15(5), 946; https://doi.org/10.3390/electronics15050946 - 25 Feb 2026
Viewed by 360
Abstract
With the rapid development of edge computing in the Industrial Internet, data sharing schemes among edge users require reliable cross-domain authentication and key agreement mechanisms to guarantee the security and reliability of inter-device communication. To tackle the deficiencies of existing group key agreement schemes, including dependence on trusted third parties, high computational overhead, and the difficulty of achieving both privacy preservation and attack resistance, this paper presents a cross-domain authenticated key agreement protocol designed for edge computing environments. This protocol supports anonymous identity authentication between cross-domain users, and innovatively constructs a multi-dimensional virtual iterative cyberspace model to generate massive secure keys via the collaborative iteration of multi-user key sequences. The proposed protocol is decentralized, lightweight, and resistant to replay attacks and man-in-the-middle attacks, while satisfying forward and backward secrecy. Security analysis and performance comparison experiments illustrate that the protocol significantly reduces computational and communication overhead, matches the resource-constrained characteristics of edge devices, and can be widely deployed in large-scale data encryption and sharing scenarios under edge computing environments. Full article
(This article belongs to the Section Networks)

20 pages, 3629 KB  
Article
HS-FP and SS-FP: Fine-Pruning-Based Backdoor Elimination for Spiking Neural Networks on Neuromorphic Event Data
by Ki-Ho Kim and Eun-Kyu Lee
Electronics 2026, 15(5), 937; https://doi.org/10.3390/electronics15050937 - 25 Feb 2026
Viewed by 373
Abstract
Spiking Neural Networks (SNNs) have attracted increasing attention due to their energy efficiency and suitability for neuromorphic data processing. Despite these advantages, the security of SNNs—particularly their robustness against backdoor attacks—remains underexplored. This study revisits fine-pruning, a widely adopted backdoor defense technique in deep neural networks, and adapts it to the unique spatio-temporal characteristics of SNNs. We propose two SNN-specific fine-pruning methods: Hook–Surrogate Gradient-based fine-pruning (HS-FP) and Spike–STDP-based fine-pruning (SS-FP). HS-FP leverages hook-based activation analysis with surrogate gradient learning, while SS-FP integrates total spike activity with hybrid STDP and surrogate gradient fine-tuning. We evaluate both methods against static, moving, and smart backdoor attacks on two neuromorphic benchmarks, N-MNIST and DVS128-Gesture. Experimental results show that both approaches reduce the attack success rate to approximately 10% while preserving model accuracy above 99% on N-MNIST and achieving substantial recovery on DVS128-Gesture. Moreover, our analysis reveals that several phenomena observed in fine-pruning-based defenses for deep neural networks—such as mixed-function neurons and backdoor reactivation during fine-tuning—also manifest in SNNs. These findings highlight both the effectiveness and limitations of fine-pruning in the SNN domain and suggest promising directions for extending existing DNN security methodologies to neuromorphic systems. Full article
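The pruning step common to fine-pruning defenses can be illustrated compactly: neurons that stay dormant on clean inputs are pruned first, since backdoor behavior often hides in such neurons. This is a generic outline under that assumption, not the paper's HS-FP/SS-FP procedure, and `mean_activation` is a stand-in for the hook-based spike statistics the paper collects.

```python
# Generic sketch of activation-guided pruning (the first half of
# fine-pruning); the fine-tuning half is omitted here.

def prune_candidates(mean_activation, prune_ratio):
    """Return indices of the least-active neurons to mask out."""
    n_prune = int(len(mean_activation) * prune_ratio)
    order = sorted(range(len(mean_activation)),
                   key=lambda i: mean_activation[i])  # least active first
    return sorted(order[:n_prune])

# Four neurons; the two with the lowest clean-data activity are selected.
print(prune_candidates([0.9, 0.01, 0.5, 0.02], prune_ratio=0.5))  # [1, 3]
```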

22 pages, 1271 KB  
Article
Leveraging MCP and Corrective RAG for Scalable and Interoperable Multi-Agent Healthcare Systems
by Dimitrios Kalathas, Andreas Menychtas, Panayiotis Tsanakas and Ilias Maglogiannis
Electronics 2026, 15(4), 888; https://doi.org/10.3390/electronics15040888 - 21 Feb 2026
Viewed by 519
Abstract
The rapid evolution of Generative AI (GenAI) has created the conditions for developing innovative solutions that disrupt all fields of human-related activities. Within the healthcare sector, numerous AI-driven applications have emerged, offering comprehensive health-related insights and addressing user questions in real time. Nevertheless, most of them use general-purpose Large Language Models (LLMs); consequently, the responses may not be as accurate as required in clinical settings. Therefore, the research community is adopting efficient architectures, such as Multi-Agent Systems (MAS) to optimize task allocation, reasoning processes, and system scalability. Most recently, the Model Context Protocol (MCP) has been introduced; however, very few applications apply this protocol within a healthcare MAS. Furthermore, Retrieval-Augmented Generation (RAG) has proven essential for grounding AI responses in verified clinical literature. This paper proposes a novel architecture that integrates these technologies to create an advanced Agentic Corrective RAG (CRAG) system. Unlike standard approaches, this method incorporates an active evaluation layer that autonomously detects retrieval failures and triggers corrective fallback mechanisms to ensure safety and accuracy. A comparative analysis was conducted for this architecture against Typical RAG and Cache-Augmented Generation (CAG), demonstrating that the proposed solution improves workflow efficiency and enables more accurate, context-aware interventions in healthcare. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)
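The corrective-RAG control flow described in the abstract can be sketched as follows: an evaluator grades retrieved passages and, below a threshold, a corrective fallback (e.g. query rewriting or a secondary search) is triggered instead of generating from poor context. The scoring function here is a deliberately trivial placeholder, not the paper's evaluation layer.

```python
# Minimal sketch of a corrective-RAG routing step (assumed structure,
# not the paper's implementation).

def corrective_answer(query, passages, score, threshold=0.5):
    """Route to generation or to a corrective fallback based on retrieval quality."""
    graded = [(p, score(query, p)) for p in passages]
    good = [p for p, s in graded if s >= threshold]
    if not good:
        return ("fallback", query)   # trigger corrective retrieval
    return ("generate", good)        # ground the answer in the good passages

def toy_score(q, p):
    """Placeholder relevance score: exact substring match."""
    return 1.0 if q in p else 0.0

print(corrective_answer("insulin", ["insulin dosing guide", "unrelated text"], toy_score))
```

In a real system the router would sit between the retriever and the LLM call, so that low-quality retrievals never reach generation unexamined.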

16 pages, 2588 KB  
Article
Smart Home IoT Forensics in Matter Ecosystems: A Data Extraction Method Using Multi-Admin
by Sungbum Kim, Sungmoon Kwon and Taeshik Shon
Electronics 2026, 15(4), 884; https://doi.org/10.3390/electronics15040884 - 20 Feb 2026
Viewed by 417
Abstract
As the smart home ecosystem expands with the adoption of Matter, a wide variety of Internet of Things (IoT) devices are entering the market, and these devices are becoming more complex, as they support diverse functionalities. Consequently, smart home forensics often requires data extraction procedures that are specific to each device and platform, which increases the technical burden and time costs for investigators. To address these challenges, this study proposes a method that leverages Matter Multi-Admin support for multiple fabrics to enable efficient data acquisition from Matter-enabled IoT devices, regardless of the underlying smart home platform. This method configures a forensic Matter controller using chip-tool and commissions IoT devices that have already been commissioned to a smart home platform into a secondary fabric via Multi-Admin. The forensic controller then performs data extraction using standardized Matter interfaces. The proposed approach was validated on our smart home testbed by targeting a Matter smart bulb commissioned to the SmartThings platform and successfully extracting data generated by the platform, thereby demonstrating the utility of the method. The results indicate that the method enables nondestructive and efficient evidence acquisition from smart home IoT devices and can support future research and real-world investigations. Full article
(This article belongs to the Special Issue New Challenges in IoT Security)

16 pages, 1038 KB  
Article
The Agency-First Framework: Operationalizing Human-Centric Interaction and Evaluation Heuristics for Generative AI
by Christos Troussas, Christos Papakostas, Akrivi Krouska and Cleo Sgouropoulou
Electronics 2026, 15(4), 877; https://doi.org/10.3390/electronics15040877 - 20 Feb 2026
Viewed by 867
Abstract
Current generative AI systems primarily utilize a prompt–response interaction model that restricts user intervention during the creative process. This lack of granular control creates a significant disconnect between user intent and machine output, which we define as the “Agency Gap”. This paper introduces the Agency-First Framework (AFF), which combines cognitive engineering and co-active design approaches to formally define human-AI collaboration. This is operationalized through the development of ten Generative AI Agency (GAIA) Heuristics, a systematic method for evaluating agency-centric interactions within stochastic generative settings. By translating the theoretical layers of the AFF into measurable criteria, the GAIA heuristics provide the necessary instrument for the empirical auditing of existing systems and the guidance of agency-centric redesigns. Unlike existing assistive AI guidelines that focus on output-level usability, the AFF establishes agency as a first-class design construct, enabling mid-process intervention and the steering of the model’s latent reasoning trajectory. Validation of the AFF was conducted through a two-tiered empirical evaluation: (1) an expert heuristic audit of state-of-the-art platforms, such as ChatGPT-o1 and Midjourney v6, which achieved high inter-rater reliability, and (2) a controlled redesign study. The latter demonstrated that agency-centric interfaces significantly enhance the Sense of Agency and Intent Alignment Accuracy compared to baseline prompt-response models, even when introducing a deliberate increase in task completion time—a phenomenon we describe as “productive friction” or an intentional interaction slowdown designed to prioritize cognitive engagement and user control over raw speed. 
Overall, the findings suggest that the restoration of meaningful user agency requires a shift from “seamless” system efficiency towards “productive friction”, where controllability and transparency within the generative process are prioritized. The major contribution of this work is the provision of a scalable, empirically validated framework and set of heuristics that equip designers to move beyond prompt-centric interaction, establishing a methodological foundation for agency-preserving generative AI systems. Full article

42 pages, 1277 KB  
Article
A Hybrid Time Series Forecasting Model Combining ARIMA and Decision Trees to Detect Attacks in MITRE ATT&CK Labeled Zeek Log Data
by Raymond Freeman, Sikha S. Bagui, Subhash C. Bagui, Dustin Mink, Sarah Cameron and Germano Correa Silva De Carvalho
Electronics 2026, 15(4), 871; https://doi.org/10.3390/electronics15040871 - 19 Feb 2026
Viewed by 399
Abstract
Intrusion detection systems face challenges in processing high-volume network traffic while maintaining accuracy across diverse low-volume attack types. This study presents a hybrid approach combining ARIMA time series forecasting with Decision Tree classification to detect attacks in Zeek network flow data labeled with MITRE ATT&CK tactics, leveraging PySpark for scalability. ARIMA identifies temporal anomalies, which Decision Trees then classify by attack type. The ARIMA model was evaluated across 13 MITRE ATT&CK tactics, though only 7 maintained sufficient class balance for valid assessment. Results are reported at three evaluation levels: Baseline (Decision Tree only), ARIMA-DT (Decision Tree tested on ARIMA-filtered anomalies), and End-to-End (pipeline performance measured against the original test population). The hybrid model demonstrated two distinct benefits: performance improvement for detectable attacks and detection enablement for previously undetectable attacks. For high-volume attacks with existing baseline detection, ARIMA preprocessing substantially improved performance; for example, Reconnaissance achieved an ARIMA-DT F1 score of 99.71% (from a baseline of 80.88%), with End-to-End metrics confirming this improvement at 97.59% F1-score. Credential Access reached a perfect 100% precision and recall on the ARIMA-filtered subset (from a baseline recall of 7.48%); however, End-to-End evaluation revealed that ARIMA filtering removed the vast majority of Credential Access attacks, resulting in a 1.28% End-to-End F1-score—worse than the baseline F1-score of 7.41%—demonstrating that the hybrid pipeline is counterproductive for attack types whose flow characteristics closely resemble legitimate traffic. 
More significantly, ARIMA preprocessing enabled detection where traditional Decision Trees completely failed (0% recall) for four stealthy attack types: Defense Evasion (ARIMA-DT recall of 93.22%, End-to-End 67.83%), Discovery (ARIMA-DT recall of 100%, End-to-End 63.43%), Persistence (ARIMA-DT recall of 86.92%, End-to-End 73.38%), and Privilege Escalation (ARIMA-DT recall of 89.93%, End-to-End 64.68%). These results demonstrate that ARIMA-based statistical anomaly detection is particularly effective for attacks involving subtle, low-volume activities that blend with legitimate operations, while also improving classification accuracy for high-volume reconnaissance activities. Full article
(This article belongs to the Special Issue Recent Advances in Intrusion Detection Systems Using Machine Learning)
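The two-stage pipeline described above (a forecaster flags anomalous time steps, and only those are passed to the classifier) can be illustrated with a toy example. A trailing-median forecast stands in for the ARIMA model, the threshold is arbitrary, and the classifier stage is omitted; none of this is the paper's actual configuration.

```python
# Toy first stage of a forecast-then-classify IDS pipeline: flag time
# steps whose traffic volume deviates strongly from a robust forecast.
from statistics import median

def flag_anomalies(series, window=3, threshold=10.0):
    """Indices whose value deviates from a trailing-median forecast by > threshold."""
    flagged = []
    for t in range(window, len(series)):
        forecast = median(series[t - window:t])
        if abs(series[t] - forecast) > threshold:
            flagged.append(t)
    return flagged

traffic = [100, 102, 98, 101, 250, 99, 100]  # one volumetric burst at t=4
print(flag_anomalies(traffic))  # [4]
```

Only the flagged indices would then be handed to the second-stage classifier, which is what makes the filter counterproductive when an attack's volume profile matches legitimate traffic, as the abstract notes for Credential Access.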

30 pages, 8046 KB  
Article
A Progressive Evaluation of MIMO Techniques in LoRa-Type Wireless Sensor Networks Under Imperfect Channel State Information
by Nikolaos Mouziouras, Andreas Tsormpatzoglou and Constantinos T. Angelis
Electronics 2026, 15(4), 867; https://doi.org/10.3390/electronics15040867 - 19 Feb 2026
Viewed by 266
Abstract
Low-Power Wide-Area Network (LPWAN) technologies play a central role in large-scale wireless sensor network (WSN) deployments, where energy efficiency, coverage and reliability dominate over throughput. Among them, Long Range (LoRa) technology has emerged as a widely adopted physical-layer solution due to its ability to operate at extremely low signal-to-noise ratios (SNRs). While multi-antenna techniques can potentially enhance link performance, their applicability in LoRa-type systems is constrained by low-SNR operation, strict energy budgets and the quality of channel state information (CSI). This paper presents a systematic and progressively structured evaluation of multiple-input multiple-output (MIMO) techniques in LoRa-type systems under representative operating conditions. A multi-stage simulation framework, implemented using the Vienna SLS v2.0 (Q3) simulator and adapted to LoRa-like waveforms, is employed to isolate the impact of large-scale propagation, small-scale fading, antenna configuration and CSI quality. The analysis starts from a system-level coverage baseline and advances to link-level evaluations of diversity-oriented MIMO schemes and spatial multiplexing configurations under both ideal and imperfect CSI. The results demonstrate that spatial diversity techniques are well aligned with the operational characteristics of LoRa links, offering robust performance in low-SNR regimes and under limited CSI accuracy. In contrast, spatial multiplexing exhibits higher sensitivity to channel estimation errors, with its practical benefits becoming apparent primarily when evaluated using throughput-oriented metrics such as packet error rate and normalized goodput. Overall, the study highlights the fundamental trade-off between reliability and capacity in LoRa MIMO systems and provides design-oriented insights for wireless sensor network deployments. Full article
(This article belongs to the Special Issue Wireless Sensor Network: Latest Advances and Prospects)

25 pages, 1932 KB  
Article
Blockchain-Enabled Governance for Health IoT Data Access via Interpretable Multi-Objective Optimization and Bargaining Under Privacy–Latency–Robustness Trade-Offs
by Farshid Keivanian, Yining Hu and Saman Shojae Chaeikar
Electronics 2026, 15(4), 864; https://doi.org/10.3390/electronics15040864 - 18 Feb 2026
Viewed by 424
Abstract
Health Internet of Things (Health IoT) systems continuously stream sensitive physiological data, making data access governance safety-critical under conflicting objectives such as privacy risk, latency, energy/resource cost, and robustness, especially when conditions change during emergencies. This paper proposes FiB-MOBA-EAFG, a hybrid blockchain–AI framework that separates on-chain accountability from off-chain decision intelligence. Off-chain, fuzzy context inference parameterizes scenario priorities, Pareto-based multi-objective search generates candidate governance policies, an emergency-aware feasibility guard filters unsafe trade-offs, and a bargaining-based selector chooses a single deployable policy. On chain, the blockchain layer records consent commitments, access events, and hashes of the selected policy and decision trace, serving as an immutable audit and accountability substrate rather than an online decision or optimization engine, while raw health data remain off-chain. Using simulation studies of home remote monitoring, clinic telehealth, and emergency triage under stochastic network variation and adversarial device behavior, FiB-MOBA-EAFG improves robustness and yields more repeatable policy selection than rule-based control and scalarized baselines within the evaluated simulation scenarios, while maintaining latency within ranges compatible with modeled edge deployment constraints through explicit emergency-aware feasibility constraints. A budget-matched random-search ablation further indicates that structured Pareto exploration is needed to reliably obtain robust, low-risk governance policies. Full article
(This article belongs to the Special Issue Blockchain-Enabled Management Systems in Health IoT)
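The Pareto-based search step mentioned in the abstract rests on a standard dominance test: a candidate policy survives only if no other candidate is at least as good on every objective and strictly better on at least one. A minimal sketch (all objectives minimized; the objective vectors, e.g. privacy risk and latency, are invented):

```python
# Standard Pareto-dominance filter; a generic building block, not the
# paper's FiB-MOBA-EAFG search.

def dominates(a, b):
    """True if a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated candidates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# (privacy risk, latency ms): two policies are dominated and dropped.
policies = [(0.1, 50), (0.2, 40), (0.3, 60), (0.1, 45)]
print(pareto_front(policies))  # [(0.2, 40), (0.1, 45)]
```

The framework's bargaining-based selector would then pick a single deployable policy from this front rather than collapsing the objectives into one scalar up front.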

22 pages, 2506 KB  
Article
CycleGAN-Based Data Augmentation for Scanning Electron Microscope Images to Enhance Integrated Circuit Manufacturing Defect Classification
by Andrew Yen, Nemo Chang, Jean Chien, Lily Chuang and Eric Lee
Electronics 2026, 15(4), 803; https://doi.org/10.3390/electronics15040803 - 13 Feb 2026
Viewed by 377
Abstract
Semiconductor defect inspection is frequently hindered by data scarcity and the resulting class imbalance in supervised learning. This study proposes a CycleGAN-based data augmentation pipeline designed to synthesize realistic defective CD-SEM images from abundant normal patterns, incorporating a quantitative quality control mechanism. Using an ADI CD-SEM dataset, we conducted a sensitivity analysis by cropping original 1024 × 1024 micrographs into 512 × 512 and 256 × 256 inputs. Our results indicate that increasing the effective defect-area ratio is critical for improving generative stability and defect visibility. To ensure data integrity, we applied a screening protocol based on the Structural Similarity Index (SSIM) and a median absolute deviation noise metric to exclude low-fidelity outputs. When integrated into the training of XceptionNet classifiers, this filtered augmentation strategy yielded substantial performance gains on a held-out test set, specifically improving the Recall and F1 score while maintaining a near-ceiling AUC. These results demonstrate that controlled CycleGAN augmentation, coupled with objective quality filtering, effectively mitigates class imbalance constraints and significantly enhances the robustness of automated defect detection. Full article
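The median-absolute-deviation half of the screening protocol can be illustrated in miniature: a synthetic image is rejected when its pixel-noise statistic falls outside a tolerance of the real-data statistic. Flat pixel lists stand in for images, the tolerance is arbitrary, and the SSIM check the paper uses alongside it is omitted here.

```python
# Toy MAD-based quality screen for generated samples (illustrative only).
from statistics import median

def mad(values):
    """Median absolute deviation, a robust spread estimate."""
    m = median(values)
    return median(abs(v - m) for v in values)

def passes_noise_screen(synthetic_pixels, reference_mad, tolerance=0.5):
    """Accept a synthetic sample only if its MAD is close to the real-data MAD."""
    return abs(mad(synthetic_pixels) - reference_mad) <= tolerance

clean = [10, 11, 10, 12, 11, 10]   # noise level similar to real data
noisy = [10, 30, 0, 25, 2, 40]     # low-fidelity generator output
ref = mad(clean)
print(passes_noise_screen(clean, ref), passes_noise_screen(noisy, ref))
```

Filtering on such objective statistics before augmentation is what keeps low-fidelity generator outputs from contaminating the classifier's training set.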

28 pages, 2899 KB  
Article
Design of Secure Communication Networks for UAV Platform Empowered by Lightweight Authentication Protocols
by Muhammet A. Sen, Saba Al-Rubaye and Antonios Tsourdos
Electronics 2026, 15(4), 785; https://doi.org/10.3390/electronics15040785 - 12 Feb 2026
Viewed by 460
Abstract
Flying Ad Hoc Networks (FANETs) formed by cooperative Unmanned Aerial Vehicles (UAVs) require formally proven secure and resource-efficient authentication because open wireless channels allow active adversaries to inject commands, replay traffic, and impersonate nodes. Conventional certificate-based mechanisms impose key management overhead and remain vulnerable under device capture, while existing lightweight and Physical Unclonable Function (PUF)-assisted proposals commonly assume stable connectivity, lack formal adversarial verification, or are evaluated only through simulation. This paper presents a lightweight PUF-assisted authentication protocol designed for dynamic multi-hop FANET operation. The scheme provides mutual UAV–Ground Station (GS) authentication and session key establishment and further enables secure UAV–UAV communication using an off-path ticket mechanism that eliminates continuous infrastructure dependence. The protocol is constructed through verification-driven refinement and formally analysed under the Dolev–Yao model, establishing authentication and session key secrecy and resistance to replay and impersonation attacks. Implementation-oriented latency measurements on Raspberry-Pi-class embedded platforms demonstrate that cryptographic processing time can be further reduced with hardware improvements, while the overall end-to-end delay is still largely determined by channel conditions and connection behaviour. Comparative evaluation shows reduced communication cost and broader security coverage relative to existing UAV authentication schemes, indicating practical deployability in large-scale FANET environments. Full article
(This article belongs to the Special Issue Wireless Sensor Network: Latest Advances and Prospects)

16 pages, 1372 KB  
Article
Spatio-Temporal Deep Learning-Assisted Multi-Period AC Optimal Power Flow
by Jihun Kim, Sojin Park, Dongwoo Kang and Hunyoung Shin
Electronics 2026, 15(4), 761; https://doi.org/10.3390/electronics15040761 - 11 Feb 2026
Viewed by 336
Abstract
The increasing penetration of renewable energy resources has amplified variability and uncertainty in power systems, reducing the effectiveness of conventional single-period Optimal Power Flow (OPF) strategies. Multi-period AC-OPF offers a more comprehensive framework by incorporating inter-temporal constraints and resource flexibility, but its high computational complexity and strong temporal coupling make large-scale applications challenging, often causing scalability issues and convergence difficulties in conventional solvers. We address these issues with a spatio-temporal deep learning model that combines a Graph Attention Network (GAT) for topology-aware feature learning with a Temporal Convolutional Network (TCN) for multi-period temporal modeling. The proposed model is trained on large-scale 500-bus and 1354-bus systems under both 8-period and 24-period settings, and it achieves robust scalability with consistently high prediction accuracy. Using the model’s predictions, we construct an initial solution and provide it to a conventional OPF solver, which improves convergence performance and demonstrates the model’s effectiveness as an auxiliary tool for complex MP-ACOPF problems. Full article
(This article belongs to the Special Issue Edge-Intelligent Sustainable Cyber-Physical Systems)

19 pages, 3302 KB  
Article
Empirical Analysis of Heterogeneous Multi-Orbit Satellite Networks for Communication Resilience in Island Regions
by Yi-Cheng Lin, Tuck Wai Choong, Zheng Cheng Pang, Ping-Hsiang Chuang, Yao-Ching Huang, Ming-Te Chen and Jenq-Shiou Leu
Electronics 2026, 15(4), 773; https://doi.org/10.3390/electronics15040773 - 11 Feb 2026
Viewed by 412
Abstract
Integrating Geostationary (GEO), Medium Earth Orbit (MEO), and Low Earth Orbit (LEO) satellite systems offers a promising solution for enhancing communication resilience in disaster-prone island regions. However, effective integration via Software-Defined Wide Area Networks (SD-WANs) faces challenges due to the heterogeneous stochastic characteristics of these links. This study presents a comprehensive performance benchmark of GEO, MEO, and LEO satellite links based on long-duration empirical campaigns conducted in Taiwan. Our findings quantify critical integration hurdles, specifically the “long-tail” latency distribution in LEO links induced by frequent handovers and significant TCP throughput degradation modeled by the Mathis equation. Furthermore, empirical tests demonstrate that simplistic link aggregation across these heterogeneous orbits results in severe packet reordering and goodput collapse. Based on these results, we propose a conceptual resilience-oriented SD-WAN architecture incorporating intelligent failover thresholds and application-aware routing policies. This work provides foundational data and a design framework to guide the future development of robust multi-layered satellite communication systems for disaster management. Full article
(This article belongs to the Section Networks)
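The Mathis relation cited in the abstract bounds steady-state TCP throughput by segment size, round-trip time, and loss probability. A minimal sketch of that bound (the RTT and loss values below are illustrative, not the paper's measurements):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=math.sqrt(1.5)):
    """Mathis model: steady-state TCP throughput (bytes/s) is bounded by
    (MSS / RTT) * C / sqrt(p), with C ~ 1.22 for periodic loss."""
    return (mss_bytes / rtt_s) * c / math.sqrt(loss_rate)

# Illustrative link profiles (not measured values from the study):
geo_bps = mathis_throughput(1460, 0.600, 1e-3)  # long GEO round trip
leo_bps = mathis_throughput(1460, 0.050, 1e-3)  # short LEO round trip
```

The bound scales inversely with RTT, which is why a long GEO path throttles TCP even when the loss rate is identical to a LEO link's.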

28 pages, 635 KB  
Article
Harmonizing Supervised Fine-Tuning and Reinforcement Learning with Reward-Based Sampling for Continual Machine Unlearning
by Jiaqi Lang, Jiahao Zhao, Linjing Li and Daniel Dajun Zeng
Electronics 2026, 15(4), 771; https://doi.org/10.3390/electronics15040771 - 11 Feb 2026
Viewed by 412
Abstract
Large language models (LLMs) are pretrained on massive internet data and inevitably memorize sensitive or copyrighted content. This continually raises privacy, legal, and security concerns. Machine unlearning has been proposed as an approach to remove the influence of undesired data while maintaining model utility. However, in real-world scenarios, unlearning requests continuously emerge, and existing approaches often struggle to handle these sequential requests, leading to utility degradation. To address this challenge, we propose the harmonization of Supervised fine-tuning and Reinforcement learning with Reward-based Sampling (SRRS) framework, which dynamically harmonizes supervised fine-tuning (SFT) and reinforcement learning (RL) via reward signals: SFT ensures forgetting efficacy, while RL preserves utility under continual adaptation. By harmonizing these paradigms, SRRS achieves reliable forgetting and sustained utility across sequential unlearning tasks, demonstrating competitive performance compared to baseline methods on TOFU and R-TOFU datasets. Full article
(This article belongs to the Special Issue Artificial Intelligence Safety and Security)
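The abstract describes sampling between an SFT forgetting step and an RL utility step from reward signals. One hypothetical reading of such a rule (the scoring and gating below are an illustration, not the authors' SRRS algorithm):

```python
import random

def choose_objective(forget_score, utility_score, rng=random):
    """Sample the next training objective from reward signals in [0, 1].
    Whichever objective currently lags (forgetting incomplete, or utility
    degraded) is proportionally more likely to be stepped next."""
    deficit_forget = 1.0 - forget_score    # forgetting not yet achieved
    deficit_utility = 1.0 - utility_score  # utility lost under adaptation
    total = deficit_forget + deficit_utility
    if total == 0.0:
        return "done"  # both rewards saturated
    return "sft" if rng.random() < deficit_forget / total else "rl"
```

Under this toy rule, a sequence of unlearning requests keeps re-balancing the two objectives instead of committing to either one.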

20 pages, 554 KB  
Article
Balancing Long–Short-Term User Preferences via Multilevel Sequential Patterns for Review-Aware Recommendation
by Li Jin, Xinzhe Li, Suji Kim and Jaekyeong Kim
Electronics 2026, 15(4), 753; https://doi.org/10.3390/electronics15040753 - 10 Feb 2026
Viewed by 328
Abstract
Personalized recommender systems play an essential role in enhancing user experience by accurately predicting user preferences. Previous approaches mainly focus on modeling long-term preferences or capturing short-term dynamics through sequential patterns, while few achieve an effective balance between the two. This study proposes Rec-SSP, a novel review-aware recommendation model that integrates long-term and short-term preferences through a gated fusion mechanism. Long-term preferences are extracted from aggregated user reviews, whereas short-term preferences are modeled by identifying sequential patterns from recent interactions at both the review and category levels. This multilevel design captures fine-grained opinions across items, ensuring a more accurate understanding of the evolving user intent. This study conducted various experiments on real-world datasets, showing that Rec-SSP outperforms baseline models. These findings demonstrate that balancing long-term and short-term preferences with multilevel sequence modeling can significantly improve recommendation accuracy across diverse domains. Full article
(This article belongs to the Special Issue Machine/Deep Learning Applications and Intelligent Systems)
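A gated fusion of long- and short-term preference vectors typically computes a sigmoid gate and interpolates between the two representations. A minimal scalar-gate sketch (Rec-SSP's exact parameterization is not given in the abstract; the weight and bias arguments here are placeholders):

```python
import math

def gated_fusion(long_vec, short_vec, w_long, w_short, bias):
    """g = sigmoid(w_long . long + w_short . short + b);
    output = g * long + (1 - g) * short."""
    z = sum(w * x for w, x in zip(w_long, long_vec))
    z += sum(w * x for w, x in zip(w_short, short_vec)) + bias
    g = 1.0 / (1.0 + math.exp(-z))
    return [g * l + (1.0 - g) * s for l, s in zip(long_vec, short_vec)]
```

A strongly positive gate input recovers the long-term profile; a strongly negative one defers to recent behavior.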

36 pages, 24812 KB  
Review
Artificial Intelligence-Enhanced Droop Control for Renewable Energy-Based Microgrids: A Comprehensive Review
by Michael Addai and Petr Musilek
Electronics 2026, 15(3), 707; https://doi.org/10.3390/electronics15030707 - 6 Feb 2026
Viewed by 803
Abstract
The integration of renewable energy sources into modern power systems requires advanced control strategies to maintain stability, reliability, and efficiency. This paper presents a comprehensive review of the application of artificial intelligence techniques, including machine learning, deep learning, and reinforcement learning, in improving droop control for renewable energy integration. These artificial intelligence-based methods address key challenges such as frequency and voltage regulation, power sharing, and grid compliance under conditions of high renewable penetration. Machine learning approaches, such as support vector machines, are used to optimize droop parameters for dynamic grid conditions, while deep learning models, including recurrent neural networks, capture complex system dynamics to enhance the stability of distributed energy systems. Reinforcement learning algorithms enable adaptive, autonomous control, improving multi-objective optimization within microgrids. In addition, emerging directions such as transfer learning and real-time data analytics are explored for their potential to enhance scalability and resilience. Overall, this review synthesizes recent advances to demonstrate the growing impact of artificial intelligence in droop control and outlines future pathways toward more intelligent and sustainable power systems. Full article
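Conventional P-f droop, the baseline the reviewed AI methods adapt, sets frequency as a linear function of output power so that parallel inverters share load without communication. A sketch with illustrative slope values:

```python
def droop_frequency(p_out, p_ref, f_nom=50.0, m_p=0.01):
    """P-f droop law: f = f_nom - m_p * (P - P_ref).
    m_p (Hz per unit power) is an illustrative value, not from the review."""
    return f_nom - m_p * (p_out - p_ref)

def shared_powers(delta_f, slopes):
    """At a common steady-state frequency sag delta_f, each unit picks up
    delta_f / m_i, i.e. load shares inversely with its droop slope."""
    return [delta_f / m for m in slopes]
```

The AI-enhanced variants surveyed here effectively tune m_p (and its reactive-power counterpart) online rather than fixing it at design time.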

22 pages, 1612 KB  
Article
Lightweight 1D-CNN-Based Battery State-of-Charge Estimation and Hardware Development
by Seungbum Kang, Yoonjae Lee, Gahyeon Jang and Seongsoo Lee
Electronics 2026, 15(3), 704; https://doi.org/10.3390/electronics15030704 - 6 Feb 2026
Viewed by 414
Abstract
This paper presents the FPGA implementation and verification of a lightweight one-dimensional convolutional neural network (1D-CNN) pipeline for real-time battery state-of-charge (SoC) estimation in automotive battery management systems. The proposed model employs separable 1D convolution and global average pooling, and applies aggressive structured pruning to reduce the number of parameters from 3121 to 358, representing an 88.5% reduction, without significant accuracy loss. Using quantization-aware training (QAT), the network is trained and executed in INT8, which reduces weight storage to one-quarter of the 32-bit baseline while maintaining high estimation accuracy with a Mean Absolute Error (MAE) of 0.0172. The hardware adopts a time-multiplexed single MAC architecture with FSM control, occupying 98,410 gates under a 28 nm process. Evaluations on an FPGA testbed with representative drive-cycle inputs show that the proposed INT8 pipeline achieves performance comparable to the floating-point reference with negligible precision drop, demonstrating its suitability for in-vehicle BMS deployment. Full article
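The headline compression numbers can be reproduced directly from the abstract: the pruning ratio from the stated parameter counts, and the four-fold weight-storage saving from moving FP32 weights to INT8.

```python
def pruning_reduction_pct(n_before, n_after):
    """Percentage of parameters removed by structured pruning."""
    return 100.0 * (n_before - n_after) / n_before

params_before, params_after = 3121, 358
reduction = pruning_reduction_pct(params_before, params_after)  # ~88.5%

# INT8 stores one byte per weight vs. four for FP32: a quarter of the storage
fp32_bytes = params_after * 4
int8_bytes = params_after * 1
```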

25 pages, 7527 KB  
Article
Heterogeneous Multi-Domain Dataset Synthesis to Facilitate Privacy and Risk Assessments in Smart City IoT
by Matthew Boeding, Michael Hempel, Hamid Sharif and Juan Lopez, Jr.
Electronics 2026, 15(3), 692; https://doi.org/10.3390/electronics15030692 - 5 Feb 2026
Viewed by 486
Abstract
The emergence of the Smart Cities paradigm and the rapid expansion and integration of Internet of Things (IoT) technologies within this context have created unprecedented opportunities for high-resolution behavioral analytics, urban optimization, and context-aware services. However, this same proliferation intensifies privacy risks, particularly those arising from cross-modal data linkage across heterogeneous sensing platforms. To address these challenges, this paper introduces a comprehensive, statistically grounded framework for generating synthetic, multimodal IoT datasets tailored to Smart City research. The framework produces behaviorally plausible synthetic data suitable for preliminary privacy risk assessment and as a benchmark for future re-identification studies, as well as for evaluating algorithms in mobility modeling, urban informatics, and privacy-enhancing technologies. As part of our approach, we formalize probabilistic methods for synthesizing three heterogeneous and operationally relevant data streams—cellular mobility traces, payment terminal transaction logs, and Smart Retail nutrition records—capturing the behaviors of a large number of synthetically generated urban residents over a 12-week period. The framework integrates spatially explicit merchant selection using K-Dimensional (KD)-tree nearest-neighbor algorithms, temporally correlated anchor-based mobility simulation reflective of daily urban rhythms, and dietary-constraint filtering to preserve ecological validity in consumption patterns. In total, the system generates approximately 116 million mobility pings, 5.4 million transactions, and 1.9 million itemized purchases, yielding a reproducible benchmark for evaluating multimodal analytics, privacy-preserving computation, and secure IoT data-sharing protocols. To show the validity of this dataset, the underlying distributions of these residents were successfully validated against reported distributions in published research. 
We present preliminary uniqueness and cross-modal linkage indicators; comprehensive re-identification benchmarking against specific attack algorithms is planned as future work. This framework can be easily adapted to various scenarios of interest in Smart Cities and other IoT applications. By aligning methodological rigor with the operational needs of Smart City ecosystems, this work fills critical gaps in synthetic data generation for privacy-sensitive domains, including intelligent transportation systems, urban health informatics, and next-generation digital commerce infrastructures. Full article
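Spatially explicit merchant selection reduces to a nearest-neighbor query over merchant coordinates. The paper accelerates this with a KD-tree (e.g. `scipy.spatial.KDTree`); a dependency-free linear scan returns the same answer and suffices to illustrate the query (names and coordinates below are made up):

```python
def nearest_merchant(resident_xy, merchants):
    """Return the name of the merchant closest to a resident position.
    merchants: list of (name, (x, y)) tuples. A KD-tree gives the same
    result in O(log n) per query instead of this O(n) scan."""
    rx, ry = resident_xy
    def dist2(entry):
        mx, my = entry[1]
        return (mx - rx) ** 2 + (my - ry) ** 2
    return min(merchants, key=dist2)[0]

merchants = [("cafe", (0.0, 0.0)), ("grocer", (3.0, 4.0)), ("kiosk", (1.0, 1.0))]
```

At the scale reported (millions of transactions over 12 weeks), the KD-tree variant is what makes per-resident merchant assignment tractable.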

37 pages, 501 KB  
Article
Comparative Analysis of Attribute-Based Encryption Schemes for Special Internet of Things Applications
by Łukasz Pióro, Krzysztof Kanciak and Zbigniew Zieliński
Electronics 2026, 15(3), 697; https://doi.org/10.3390/electronics15030697 - 5 Feb 2026
Viewed by 523
Abstract
Attribute-based encryption (ABE) is an advanced public key encryption mechanism that enables the precise control of access to encrypted data based on attributes assigned to users and data. Attribute-based access control (ABAC), which is built on ABE, is crucial in providing dynamic, fine-grained, and context-aware security management in modern Internet of Things (IoT) applications. ABAC controls access based on attributes associated with users, devices, resources, and environmental conditions rather than fixed roles, making it highly adaptable to the complex and heterogeneous nature of IoT ecosystems. ABE can significantly improve the security and manageability of modern military IoT systems. Nevertheless, its practical implementation requires obtaining a range of performance data and assessing the additional overhead, particularly regarding data transmission efficiency. This paper provides a comparative analysis of the performance of two cryptographic schemes for attribute-based encryption in the context of special Internet of Things (IoT) applications. This applies to special environments, both military and civilian, where infrastructure is unreliable and dynamic and decisions must be made locally and in near-real time. From a security perspective, there is a need for strong authentication, precise access control, and a zero-trust approach at the network edge as well. The CIRCL scheme, based on traditional pairing-based ABE (CP-ABE), is compared with the newer Covercrypt scheme, a hybrid key encapsulation mechanism with access control (KEMAC) that provides quantum resistance. The main goal is to determine which scheme scales better and meets the performance requirements for two different scenarios: large corporate networks (where scalability is key) and tactical edge networks (where minimal bandwidth and post-quantum security are paramount). 
The benchmark results are used to compare the operating costs in detail, such as the key generation time, message encryption and decryption times, public key size, and cipher overhead, showing that Covercrypt provides a reduction in ciphertext overhead in tactical scenarios, while CIRCL offers faster decryption throughput in large-scale enterprise environments. It is concluded that the optimal choice depends on the specific constraints of the operating environment. Full article
(This article belongs to the Special Issue Computer Networking Security and Privacy)

18 pages, 538 KB  
Article
Enhancing Vehicle IoT Security with PQC: A Lightweight Approach for Encrypted Sensor Data Transmission
by Jackson Diaz-Gorrin and Candido Caballero-Gil
Electronics 2026, 15(3), 684; https://doi.org/10.3390/electronics15030684 - 4 Feb 2026
Viewed by 468
Abstract
Cybersecurity threats are evolving constantly, and the arrival of quantum computing raises serious doubts about whether today’s cryptographic methods will hold up over time. This concern has motivated interest in algorithms designed to resist future attacks, with CRYSTALS-Kyber emerging as a practical candidate and forming the basis of an NIST post-quantum standard. This study focuses on protecting data exchanged between a vehicle sensor suite and cloud services over the Message Queuing Telemetry Transport protocol. Performance must remain acceptable; therefore, attention centers on lightweight and efficient execution while leveraging the board’s hardware capabilities to keep latency and resource usage low. Adding this layer of post-quantum encryption helps limit the exposure of critical telemetry and control data to sophisticated adversaries. It also aims to preserve integrity and confidentiality in vehicular communications as the Internet of Things becomes increasingly connected. This approach maintains a practical balance between forward-looking security and real-world deployability. Full article
(This article belongs to the Special Issue New Technologies in Applied Cryptography and Network Security)

25 pages, 5185 KB  
Review
A Review of Routing and Resource Optimization in Quantum Networks
by Md. Shazzad Hossain Shaon and Mst Shapna Akter
Electronics 2026, 15(3), 557; https://doi.org/10.3390/electronics15030557 - 28 Jan 2026
Viewed by 770
Abstract
Quantum computing is a new discipline that uses the ideas of quantum physics to perform calculations that are not possible with conventional computers. Quantum bits, called qubits, can exist in superposition states, making them suitable for parallel processing in contrast to traditional bits. When it comes to addressing complex challenges like proof simulation, optimization, and cryptography, quantum entanglement and quantum interference provide exponential improvements. This survey focuses on recent advances in entanglement routing, quantum key distribution (QKD), and qubit management for short- and long-distance quantum communication. It studies optimization approaches such as integer programming, reinforcement learning, and collaborative methods, evaluating their efficacy in terms of throughput, scalability, and fairness. Despite improvements, challenges remain in dynamic network adaptation, resource limits, and error correction. Addressing these difficulties necessitates the creation of hybrid quantum–classical algorithms for efficient resource allocation, hardware-aware designs to improve real-world deployment, and fault-tolerant architectures. Therefore, this survey suggests that future research focus on integrating quantum networks with existing classical infrastructure to improve security, dependability, and mainstream acceptance. This connection has significance for applications that require secure communication, financial transactions, and critical infrastructure protection. Full article

26 pages, 48080 KB  
Article
Teleoperation of Dual-Arm Manipulators via VR Interfaces: A Framework Integrating Simulation and Real-World Control
by Alejandro Torrejón, Sergio Eslava, Jorge Calderón, Pedro Núñez and Pablo Bustos
Electronics 2026, 15(3), 572; https://doi.org/10.3390/electronics15030572 - 28 Jan 2026
Viewed by 745
Abstract
We present a virtual reality (VR) framework for controlling dual-arm robotic manipulators through immersive interfaces, integrating both simulated and real-world platforms. The system combines the Webots robotics simulator with Unreal Engine 5.6.1 to provide real-time visualization and interaction, enabling users to manipulate each arm’s tool point via VR controllers with natural depth perception and motion tracking. The same control interface is seamlessly extended to a physical dual-arm robot, enabling teleoperation within the same VR environment. Our architecture supports real-time bidirectional communication between the VR layer and both the simulator and hardware, enabling responsive control and feedback. We describe the system design and performance evaluation in both domains, demonstrating the viability of immersive VR as a unified interface for simulation and physical robot control. Full article

24 pages, 29852 KB  
Article
Dual-Axis Transformer-GNN Framework for Touchless Finger Location Sensing by Using Wi-Fi Channel State Information
by Minseok Koo and Jaesung Park
Electronics 2026, 15(3), 565; https://doi.org/10.3390/electronics15030565 - 28 Jan 2026
Viewed by 350
Abstract
Camera, lidar, and wearable-based gesture recognition technologies face practical limitations such as lighting sensitivity, occlusion, hardware cost, and user inconvenience. Wi-Fi channel state information (CSI) can be used as a contactless alternative to capture subtle signal variations caused by human motion. However, existing CSI-based methods are highly sensitive to domain shifts and often suffer notable performance degradation when applied to environments different from the training conditions. To address this issue, we propose a domain-robust touchless finger location sensing framework that operates reliably even in a single-link environment composed of commercial Wi-Fi devices. The proposed system applies preprocessing procedures to reduce noise and variability introduced by environmental factors and introduces a multi-domain segment combination strategy to increase the domain diversity during training. In addition, the dual-axis transformer learns temporal and spatial features independently, and the GNN-based integration module incorporates relationships among segments originating from different domains to produce more generalized representations. The proposed model is evaluated using CSI data collected from various users and days; experimental results show that the proposed method achieves an in-domain accuracy of 99.31% and outperforms the best baseline by approximately 4% and 3% in cross-user and cross-day evaluation settings, respectively, even in a single-link setting. Our work demonstrates a viable path for robust, calibration-free finger-level interaction using ubiquitous single-link Wi-Fi in real-world and constrained environments, providing a foundation for more reliable contactless interaction systems. Full article

29 pages, 2945 KB  
Article
Physics-Informed Neural Network for Denoising Images Using Nonlinear PDE
by Carlos Osorio Quero and Maria Liz Crespo
Electronics 2026, 15(3), 560; https://doi.org/10.3390/electronics15030560 - 28 Jan 2026
Viewed by 1135
Abstract
Noise remains a persistent limitation in coherent imaging systems, degrading image quality and hindering accurate interpretation in critical applications such as remote sensing, medical imaging, and non-destructive testing. This paper presents a physics-informed deep learning framework for effective image denoising under complex noise conditions. The proposed approach integrates nonlinear partial differential equations (PDEs), including the heat equation, diffusion models, MPMC, and the Zhichang Guo (ZG) method, into advanced neural network architectures such as ResUNet, UNet, U2Net, and Res2UNet. By embedding physical constraints directly into the training process, the framework couples data-driven learning with physics-based priors to enhance noise suppression and preserve structural details. Experimental evaluations across multiple datasets demonstrate that the proposed method consistently outperforms conventional denoising techniques, achieving higher PSNR, SSIM, ENL, and CNR values. These results confirm the effectiveness of combining physics-informed neural networks with deep architectures and highlight their potential for advanced image restoration in real-world, high-noise imaging scenarios. Full article
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network: 2nd Edition)
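The linear-diffusion prior underlying these PDE models is the heat equation; one explicit forward-Euler step on a 1-D signal shows its smoothing action. A minimal sketch (the paper's nonlinear variants instead modulate diffusion strength by local image content; the step size and boundary handling here are illustrative choices):

```python
def heat_step(u, alpha=0.25):
    """One explicit step of u_t = u_xx on a 1-D signal (endpoints held fixed).
    The explicit scheme is stable for alpha <= 0.5."""
    v = list(u)
    for i in range(1, len(u) - 1):
        v[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    return v

# Repeated steps progressively flatten an isolated noise spike
signal = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(3):
    signal = heat_step(signal)
```

Embedding such a PDE step as a training constraint is what couples the networks' data-driven denoising with a physics-based smoothness prior.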

18 pages, 42966 KB  
Article
A Model-Based Design and Verification Framework for Virtual ECUs in Automotive Seat Control Systems
by Anna Yang, Woo Jin Han, Hyun Suk Cho, Dong-Woo Koh and Jae-Gon Kim
Electronics 2026, 15(3), 569; https://doi.org/10.3390/electronics15030569 - 28 Jan 2026
Viewed by 706
Abstract
As automotive software continues to grow in scale and timing sensitivity, hardware-independent verification in the early design phase has become increasingly important—especially for safety-critical, body-domain controllers. This study proposes a framework that integrates MBD (Model-Based Design), AUTOSAR (Automotive Open System Architecture) Classic Platform configuration, and vECU (Virtual Electronic Control Unit) execution into a single, repeatable development workflow. Control logic validated in Simulink is translated into AUTOSAR-compliant software, built into a QEMU (Quick EMUlator)-based vECU, and exercised in DRIM-SimHub using both virtual stimuli and a real sensor–actuator signal delivered through a dedicated I/O interface board. Using a seat–slide virtual limit controller as a representative case, the proposed workflow enables consistent reuse of the test scenarios across model-in-the-loop (MiL), software-in-the-loop (SiL), and virtual ECU stages, while preserving production-level timing behavior and the semantics of the AUTOSAR runtime. The experimental results show that the vECU accurately reproduces the PWM outputs, Hall sensor pulse timing, and limit–stop decisions of the physical ECU, and that integration issues previously discovered only in HiL tests can be exposed much earlier. Overall, the workflow shortens verification cycles, improves the observability of timing-dependent behavior, and provides a practical basis for early validation in software-defined vehicle development. Full article

24 pages, 1253 KB  
Article
Re-Evaluating Android Malware Detection: Tabular Features, Vision Models, and Ensembles
by Prajwal Hosahalli Dayananda and Zesheng Chen
Electronics 2026, 15(3), 544; https://doi.org/10.3390/electronics15030544 - 27 Jan 2026
Viewed by 699
Abstract
Static, machine learning-based malware detection is widely used in Android security products, where even small increases in false-positive rates can impose significant burdens on analysts and cause unacceptable disruptions for end users. Both tabular features and image-based representations have been explored for Android malware detection. However, existing public benchmark datasets do not provide paired tabular and image representations for the same samples, limiting direct comparisons between tabular models and vision-based models. This work investigates whether carefully engineered, domain-specific tabular features can match or surpass the performance of state-of-the-art deep vision models under strict false-positive-rate constraints, and whether ensemble approaches justify their additional complexity. To enable this analysis, we construct a large corpus of Android applications with paired static representations and evaluate six popular machine learning models on the exact same samples: two tabular models using EMBER features, two tabular models using extended EMBER features, and two vision-based models using malware images. Our results show that a LightGBM model trained on extended EMBER features outperforms all other evaluated models, as well as a state-of-the-art approach trained on a much larger dataset. Furthermore, we develop an ensemble model combining both tabular and vision-based detectors, which yields a modest performance improvement but at the cost of substantial additional computational and engineering overhead. Full article
(This article belongs to the Special Issue Feature Papers in Networks: 2025–2026 Edition)

18 pages, 1408 KB  
Article
Joint Effect of Signal Strength, Bitrate, and Topology on Video Playback Delays of 802.11ax Gigabit Wi-Fi
by Nurul I. Sarkar and Sonia Gul
Electronics 2026, 15(3), 531; https://doi.org/10.3390/electronics15030531 - 26 Jan 2026
Viewed by 420
Abstract
This paper presents a performance evaluation of IEEE 802.11ax (Wi-Fi 6) networks using a combination of real-world testbed measurements and simulation-based analysis. The paper investigates the combined effect of received signal strength (RSSI), application bitrate, and network topology on video playback delays of 802.11ax. The effect of frequency band and client density on system performance is also investigated. Testbed measurements and field experiments were conducted in indoor environments using dual-band (2.4 GHz and 5 GHz) ad hoc and infrastructure network configurations. OMNeT++ based simulations are conducted to explore scalability by increasing the number of wireless clients. The results obtained show that the infrastructure-based deployments provide more stable video playback than the ad hoc network, particularly under varying RSSI conditions. While the 5 GHz band delivers higher throughput at a short range, the 2.4 GHz band offers improved coverage at reduced system performance. The simulation results further demonstrate significant degradation in throughput and latency as client density increases. To contextualize the observed performance, a baseline comparison with 802.11ac is incorporated, highlighting the relative improvements and remaining limitations of 802.11ax within the evaluated signal and load conditions. The findings provide practical deployment insights for video-centric wireless networks and inform the optimization of next-generation Wi-Fi deployments. Full article

11 pages, 4271 KB  
Article
A Low-Power High-Precision Discrete-Time Delta–Sigma Modulator for Battery Management System
by Ying Li and Wenyuan Li
Electronics 2026, 15(3), 535; https://doi.org/10.3390/electronics15030535 - 26 Jan 2026
Abstract
This paper presents a low-power high-precision Discrete-Time Delta–Sigma (DT-DS) analog-to-digital converter (ADC) for a Battery Management System (BMS), which is critical for monitoring key battery parameters such as voltage, current, and temperature. This design employs a second-order Cascade of Integrators FeedForward (CIFF) architecture using a hybrid chopping technique to effectively suppress 1/f noise and offset. Fabricated in a 180 nm Bipolar-CMOS-DMOS (BCD) process, the ADC achieves a peak signal-to-noise ratio (SNR) of 91.2 dB and a peak signal-to-noise-and-distortion ratio (SNDR) of 90.6 dB within a 600 Hz bandwidth, while consuming only 35 µA from a 1.8 V supply. This corresponds to a figure-of-merit (FoM) of 160.4 dB, calculated based on the SNDR, bandwidth, and power dissipation. Full article
(This article belongs to the Special Issue Feature Papers in Electrical and Autonomous Vehicles, Volume 2)
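The 160.4 dB figure-of-merit quoted above is consistent with the Schreier FoM, which combines SNDR, bandwidth, and power as FoM = SNDR + 10·log10(BW/P). A minimal check, assuming the total power is the reported supply current times the supply voltage (35 µA × 1.8 V = 63 µW):

```python
import math

def schreier_fom(sndr_db: float, bandwidth_hz: float, power_w: float) -> float:
    """Schreier figure-of-merit: FoM = SNDR + 10*log10(BW / P)."""
    return sndr_db + 10.0 * math.log10(bandwidth_hz / power_w)

# Reported values: SNDR = 90.6 dB, BW = 600 Hz, P = 1.8 V * 35 uA = 63 uW
fom = schreier_fom(90.6, 600.0, 1.8 * 35e-6)
print(round(fom, 1))  # → 160.4
```

The result matches the paper's stated FoM, which suggests this is indeed the Schreier variant rather than the Walden energy-per-conversion form.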

20 pages, 1385 KB  
Article
Development of an IoT System for Acquisition of Data and Control Based on External Battery State of Charge
by Aleksandar Valentinov Hristov, Daniela Gotseva, Roumen Ivanov Trifonov and Jelena Petrovic
Electronics 2026, 15(3), 502; https://doi.org/10.3390/electronics15030502 - 23 Jan 2026
Abstract
In the context of small, battery-powered systems, a lightweight, reusable architecture is needed for integrated measurement, visualization, and cloud telemetry that minimizes hardware complexity and energy footprint. Existing solutions require substantial resources, which limits their applicability in low-power Internet of Things (IoT) devices. The present work demonstrates the design, implementation, and experimental evaluation of a single-cell lithium-ion battery monitoring prototype, intended for standalone operation or integration into other systems. The architecture is compact and energy-efficient, with reduced complexity and memory usage: a modular architecture with clearly distinguished responsibilities, avoidance of unnecessary dynamic memory allocations, centralized error handling, and a low-power policy through the use of deep sleep mode. The data is stored in a cloud platform, while minimal storage is used locally. The developed system combines the functional requirements for an embedded external battery monitoring system: local voltage and current measurement, approximate estimation of the State of Charge (SoC) using a look-up table (LUT) based on the discharge characteristic, and visualization on a monochrome OLED display. The conducted experiments demonstrate the typical U(t) curve, the triggering of the indicator at low charge levels (LOW: SoC ≤ 20%; CRITICAL: SoC ≤ 5%) in real-world conditions, and the absence of unwanted state switching near the voltage thresholds. Full article
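The LUT-based SoC estimation and low-charge indicator described above can be sketched as follows. The voltage-to-SoC table, hysteresis margin, and state names are illustrative assumptions, not the prototype's actual calibration:

```python
# Hypothetical sketch: SoC estimation from a voltage->SoC look-up table
# (linear interpolation on the discharge curve) plus a hysteresis band so
# the LOW/CRITICAL indicator does not chatter near the thresholds.
VOLT_SOC_LUT = [(3.0, 0), (3.3, 5), (3.5, 20), (3.7, 50), (4.0, 80), (4.2, 100)]

def soc_from_voltage(v: float) -> float:
    pts = VOLT_SOC_LUT
    if v <= pts[0][0]:
        return pts[0][1]
    if v >= pts[-1][0]:
        return pts[-1][1]
    for (v0, s0), (v1, s1) in zip(pts, pts[1:]):
        if v0 <= v <= v1:
            return s0 + (s1 - s0) * (v - v0) / (v1 - v0)

def update_state(prev: str, soc: float, hyst: float = 2.0) -> str:
    """Threshold indicator with hysteresis: a state is left only after
    SoC rises past its threshold plus the hysteresis margin."""
    if soc <= 5:
        return "CRITICAL"
    if soc <= 20:
        return "CRITICAL" if prev == "CRITICAL" and soc <= 5 + hyst else "LOW"
    return "LOW" if prev in ("LOW", "CRITICAL") and soc <= 20 + hyst else "OK"

state = "OK"
for v in (3.8, 3.52, 3.49, 3.51, 3.2):  # discharge with a small rebound at 3.51 V
    state = update_state(state, soc_from_voltage(v))
print(state)  # → CRITICAL
```

The small rebound to 3.51 V (SoC ≈ 21.5%) does not flip the state back to OK, which is the behavior the experiments verify near the thresholds.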

28 pages, 2192 KB  
Article
AptEVS: Adaptive Edge-and-Vehicle Scheduling for Hierarchical Federated Learning over Vehicular Networks
by Yu Tian, Nina Wang, Zongshuai Zhang, Wenhao Zou, Liangjie Zhao, Shiyao Liu and Lin Tian
Electronics 2026, 15(2), 479; https://doi.org/10.3390/electronics15020479 - 22 Jan 2026
Abstract
Hierarchical federated learning (HFL) has emerged as a promising paradigm for distributed machine learning over vehicular networks. Despite recent advances in vehicle selection and resource allocation, most still adopt a fixed Edge-and-Vehicle Scheduling (EVS) configuration that keeps the number of participating edge nodes and vehicles per node constant across training rounds. However, given the diverse training tasks and dynamic vehicular environments, our experiments confirm that such static configurations struggle to efficiently meet the task-specific requirements across model accuracy, time delay, and energy consumption. To address this, we first formulate a unified, long-term training cost metric that balances these conflicting objectives. We then propose AptEVS, an adaptive scheduling framework based on deep reinforcement learning (DRL), designed to minimize this cost. The core of AptEVS is its phase-aware design, which adapts the scheduling strategy by first identifying the current training phase and then switching to specialized strategies accordingly. Extensive simulations demonstrate that AptEVS learns an effective scheduling policy online from scratch, consistently outperforming baselines and reducing the long-term training cost by up to 66.0%. Our findings demonstrate that phase-aware DRL is both feasible and highly effective for resource scheduling over complex vehicular networks. Full article
(This article belongs to the Special Issue Technology of Mobile Ad Hoc Networks)
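The abstract does not spell out the unified long-term training cost metric. One plausible form, shown purely as an illustration and not the paper's actual definition, is a weighted sum of normalized accuracy-gap, delay, and energy terms per round:

```python
def training_cost(acc_gap, delay_s, energy_j,
                  w=(0.5, 0.25, 0.25), norms=(1.0, 100.0, 1000.0)):
    """Hypothetical per-round cost: weighted sum of a normalized accuracy
    gap (1 - accuracy), round delay, and energy. Weights and normalization
    constants are illustrative placeholders."""
    terms = (acc_gap, delay_s, energy_j)
    return sum(wi * t / n for wi, t, n in zip(w, terms, norms))

# Example round: 5% accuracy gap, 40 s delay, 300 J energy
print(round(training_cost(0.05, 40.0, 300.0), 4))  # → 0.2
```

A long-term cost would then sum or average this quantity over training rounds; the DRL scheduler's job is to pick the EVS configuration minimizing it.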

26 pages, 3088 KB  
Article
A Human-Centered Visual Cognitive Framework for Traffic Pair Crossing Identification in Human–Machine Teaming
by Bufan Liu, Sun Woh Lye, Terry Liang Khin Teo and Hong Jie Wee
Electronics 2026, 15(2), 477; https://doi.org/10.3390/electronics15020477 - 22 Jan 2026
Abstract
Human–machine teaming (HMT) in air traffic management (ATM) promises safer, more efficient operations by combining human expertise in decision-making with machine efficiency in data processing, where traffic pair crossing identification is crucial for effective conflict detection and resolution by recognizing aircraft pairs that may lead to conflict. To facilitate this goal, this paper presents a four-phase cognitive framework to enhance HMT for monitoring traffic pairs at crossing points through a human-centered, visual-based approach. The visual cognitive framework integrates three data streams—eye-tracking metrics, mouse-over actions, and issued radar commands—to capture the traffic context from the controller’s perspective. A target pair identification method is designed to generate potential conflict pairs. Controller behavior is then modeled using a sighting timeline, yielding insights to develop the cognitive mechanism. Using air traffic crossing-conflict monitoring in en route airspace as a case study, the framework successfully captures the state of controllers’ monitoring and awareness behavior through tests on five target flight pairs under various crossing conditions. Specifically, aware monitoring activities are characterized by higher fixation counts on either flight across a 10 min window, with 53% to 100% of visual input activities occurring in the 8–7 min and 3–2 min windows before crossing, ensuring timely conflict management. Furthermore, the study quantifies the effect of crossing geometry, whereby narrow-angle crossings (21 degrees) require significantly higher monitoring intensity (15 paired sightings) than wide- or moderate-angle crossings. These results indicate that controllers exhibit distinct monitoring and awareness behaviors when identifying and managing conflicts across the different test pairs, demonstrating the effectiveness and applicability of the proposed visual cognitive framework. Full article

13 pages, 2210 KB  
Article
High-Throughput Control-Data Acquisition for Multicore MCU-Based Real-Time Control Systems Using Double Buffering over Ethernet
by Seung-Hun Lee, Duc M. Tran and Joon-Young Choi
Electronics 2026, 15(2), 469; https://doi.org/10.3390/electronics15020469 - 22 Jan 2026
Abstract
For the design, implementation, performance optimization, and predictive maintenance of high-speed real-time control systems with sub-millisecond control periods, the capability to acquire large volumes of high-rate control data in real time is required without interfering with normal control operation that is repeatedly executed in each extremely short control cycle. In this study, we propose a control-data acquisition method for high-speed real-time control systems with sub-millisecond control periods, in which control data are transferred to an external host device via Ethernet in real time. To enable the transmission of high-rate control data without disturbing the real-time control operation, a multicore microcontroller unit (MCU) is adopted, where the control task and the data transmission task are executed on separately assigned central processing unit (CPU) cores. Furthermore, by applying a double-buffering algorithm, continuous Ethernet communication without intermediate waiting time is achieved, resulting in a substantial improvement in transmission throughput. Using a control card based on TI’s multicore MCU TMS320F28388D, which consists of dual digital signal processor cores and one connectivity manager (CM) core, the proposed control-data acquisition method is implemented and an actual experimental environment is constructed. Experimental results show that the double-buffering transmission achieves a maximum throughput of 94.2 Mbps on a 100 Mbps Fast Ethernet link, providing a 38.5% improvement over the single-buffering case and verifying the high performance and efficiency of the proposed data acquisition method. Full article
(This article belongs to the Section Industrial Electronics)
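The double-buffering (ping-pong) idea described above can be sketched in a few lines: while one buffer is being filled by the control task, the other is handed whole to the transmission task, so acquisition and Ethernet transmission never wait on each other. This is a simplified single-threaded model of the concept, not the dual-core TMS320F28388D implementation:

```python
# Minimal sketch of ping-pong double buffering: the control task appends a
# sample each cycle to the "fill" buffer; when it is full, the buffers swap
# roles and the filled one is returned for the transmit task to send.
class DoubleBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffers = [[], []]
        self.fill_idx = 0  # index of the buffer currently receiving samples

    def push(self, sample):
        """Called once per control cycle; returns a full buffer ready
        for transmission, or None while the fill buffer is still filling."""
        buf = self.buffers[self.fill_idx]
        buf.append(sample)
        if len(buf) < self.capacity:
            return None
        self.fill_idx ^= 1                 # swap roles
        self.buffers[self.fill_idx] = []   # fresh fill buffer
        return buf                         # hand the full one to the Tx task

db = DoubleBuffer(capacity=4)
sent = [b for s in range(10) if (b := db.push(s)) is not None]
print(sent)  # → [[0, 1, 2, 3], [4, 5, 6, 7]]
```

On the real MCU the two roles run on separate CPU cores, which is what removes the intermediate waiting time and yields the reported throughput gain.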

28 pages, 1241 KB  
Article
Joint Learning for Metaphor Detection and Interpretation Based on Gloss Interpretation
by Yanan Liu, Hai Wan and Jinxia Lin
Electronics 2026, 15(2), 456; https://doi.org/10.3390/electronics15020456 - 21 Jan 2026
Abstract
Metaphor is ubiquitous in daily communication and makes language expression more vivid. Identifying metaphorical words, known as metaphor detection, is crucial for capturing the real meaning of a sentence. As an important step of metaphorical understanding, the correct interpretation of metaphorical words directly affects metaphor detection. This article investigates how to use metaphor interpretation to enhance metaphor detection. Since previous approaches for metaphor interpretation are coarse-grained or constrained by ambiguous meanings of substitute words, we propose a different interpretation mechanism that explains metaphorical words by means of gloss-based interpretations. To comprehensively explore the optimal joint strategy, we go beyond previous work by designing diverse model architectures. We investigate both classification and sequence labeling paradigms, incorporating distinct component designs based on MIP and SPV theories. Furthermore, we integrate Part-of-Speech tags and external knowledge to further refine the feature representation. All methods utilize pre-trained language models to encode text and capture semantic information of the text. Since this mechanism involves both metaphor detection and metaphor interpretation but there is a lack of datasets annotated for both tasks, we have enhanced three datasets with glosses for metaphor detection: one Chinese dataset (PSUCMC) and two English datasets (TroFi and VUA). Experimental results demonstrate that the proposed joint methods are superior to or at least comparable to state-of-the-art methods on the three enhanced datasets. Results confirm that joint learning of metaphor detection and gloss-based interpretation makes metaphor detection more accurate. Full article
(This article belongs to the Section Artificial Intelligence)

31 pages, 3765 KB  
Article
Rain Detection in Solar Insecticidal Lamp IoTs Systems Based on Multivariate Wireless Signal Feature Learning
by Lingxun Liu, Lei Shu, Yiling Xu, Kailiang Li, Ru Han, Qin Su and Jiarui Fang
Electronics 2026, 15(2), 465; https://doi.org/10.3390/electronics15020465 - 21 Jan 2026
Abstract
Solar insecticidal lamp Internet of Things (SIL-IoTs) systems are widely deployed in agricultural environments, where accurate and timely rain detection is crucial for system stability and energy-efficient operation. However, existing rain-sensing solutions rely on additional hardware, leading to increased cost and maintenance complexity. This study proposes a hardware-free rain detection method based on multivariate wireless signal feature learning, using LTE communication data. A large-scale primary dataset containing 11.84 million valid samples was collected from a real farmland SIL-IoTs deployment in Nanjing, recording RSRP, RSRQ, and RSSI at 1 Hz. To address signal heterogeneity, a signal-strength stratification strategy and a dual-rate EWMA-based adaptive signal-leveling mechanism were introduced. Four machine-learning models—Logistic Regression, Random Forest, XGBoost, and LightGBM—were trained and evaluated using both the primary dataset and an external test dataset collected in Changsha and Dongguan. Experimental results show that XGBoost achieves the highest detection accuracy, whereas LightGBM provides a favorable trade-off between performance and computational cost. Evaluation using accuracy, precision, recall, F1-score, and ROC-AUC indicates that all metrics exceed 0.975. The proposed method demonstrates strong accuracy, robustness, and cross-regional generalization, providing a practical and scalable solution for rain detection in agricultural IoT systems without additional sensing hardware. Full article
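The dual-rate EWMA signal-leveling idea can be sketched as follows: a slow exponentially weighted moving average tracks the long-term signal level while a fast one tracks short-term behavior, and the gap between them acts as a rain-attenuation feature. The smoothing rates, threshold, and trace below are illustrative, not the paper's calibrated values:

```python
# Hedged sketch of a dual-rate EWMA baseline over an RSRP-like signal.
def dual_rate_ewma(samples, alpha_fast=0.3, alpha_slow=0.01):
    fast = slow = samples[0]
    gaps = []
    for x in samples[1:]:
        fast += alpha_fast * (x - fast)   # reacts quickly to level shifts
        slow += alpha_slow * (x - slow)   # drifts slowly: long-term baseline
        gaps.append(slow - fast)          # positive gap: signal below baseline
    return gaps

# Illustrative RSRP trace (dBm): stable, then a sustained rain-like drop
trace = [-95.0] * 20 + [-99.0] * 20
gaps = dual_rate_ewma(trace)
print(any(g > 2.0 for g in gaps))  # → True: fast average falls below slow baseline
```

Features of this kind, computed per signal-strength stratum, would then feed the tree-based classifiers mentioned above.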

18 pages, 3705 KB  
Article
Cross-Platform Multi-Modal Transfer Learning Framework for Cyberbullying Detection
by Weiqi Zhang, Chengzu Dong, Aiting Yao, Asef Nazari and Anuroop Gaddam
Electronics 2026, 15(2), 442; https://doi.org/10.3390/electronics15020442 - 20 Jan 2026
Abstract
Cyberbullying and hate speech increasingly appear in multi-modal social media posts, where images and text are combined in diverse and fast-changing ways across platforms. These posts differ in style, vocabulary, and layout, and labeled data are sparse and noisy, which makes it difficult to train detectors that are both reliable and deployable under tight computational budgets. Many high-performing systems rely on large vision-language backbones, full-parameter fine-tuning, online retrieval, or model ensembles, which raises training and inference costs. We present a parameter-efficient cross-platform multi-modal transfer learning framework for cyberbullying and hateful content detection. Our framework has three components. First, we perform domain-adaptive pretraining of a compact ViLT backbone on in-domain image-text corpora. Second, we apply parameter-efficient fine-tuning that updates only bias terms, a small subset of LayerNorm parameters, and the classification head, leaving the inference computation graph unchanged. Third, we use noise-aware knowledge distillation from a stronger teacher built from pretrained text and CLIP-based image-text encoders, where only high-confidence, temperature-scaled predictions are used as soft labels during training, and teacher models and any retrieval components are used only offline. We evaluate primarily on Hateful Memes and use IMDB as an auxiliary text-only benchmark to show that the deployment-aware PEFT + offline-KD recipe can still be applied when other modalities are unavailable. On Hateful Memes, our student updates only 0.11% of parameters and retains about 96% of the AUROC of full fine-tuning. Full article
(This article belongs to the Special Issue Data Privacy and Protection in IoT Systems)
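The parameter-efficient recipe above boils down to a selection rule: mark only bias terms, LayerNorm parameters, and the classifier head as trainable, and freeze everything else. A minimal sketch of such a rule; the parameter names below are hypothetical placeholders, not ViLT's actual module names:

```python
# Illustrative trainability filter for a BitFit-style PEFT setup: biases,
# LayerNorm parameters, and the classification head stay trainable.
def trainable(name: str) -> bool:
    return (
        name.endswith(".bias")
        or ".layernorm." in name.lower()
        or name.startswith("classifier.")
    )

params = [
    "encoder.layer.0.attention.query.weight",
    "encoder.layer.0.attention.query.bias",
    "encoder.layer.0.LayerNorm.weight",
    "classifier.weight",
]
print([p for p in params if trainable(p)])
# → ['encoder.layer.0.attention.query.bias',
#    'encoder.layer.0.LayerNorm.weight', 'classifier.weight']
```

In a framework like PyTorch this filter would set `requires_grad` per named parameter; since no modules are added or removed, the inference graph is unchanged, as the abstract notes.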

28 pages, 2865 KB  
Article
Reliability Assessment of Power System Microgrid Using Fault Tree Analysis: Qualitative and Quantitative Analysis
by Shravan Kumar Akula and Hossein Salehfar
Electronics 2026, 15(2), 433; https://doi.org/10.3390/electronics15020433 - 19 Jan 2026
Cited by 2
Abstract
Renewable energy sources account for approximately one-quarter of the total electric power generating capacity in the United States. These sources increase system complexity, with potential negative impacts caused by their inherent variability. A microgrid, a decentralized local grid, offers an excellent solution for integrating these sources into the system’s generation mix in a cost-effective and efficient manner. This paper presents a comprehensive fault tree analysis for the reliability assessment of microgrids, ensuring their safe operation. In this work, fault tree analysis of a microgrid in grid-tied mode with solar, wind, and battery energy storage systems is performed, and the results are reported. The analyses and calculations are performed using the Relyence software suite. The fault tree analysis was performed using various calculation methods, including exact (conventional fault tree analysis), simulation (Monte Carlo simulation), cut-set summation, Esary–Proschan, and cross-product. Once these analyses were completed, the results were compared with the ‘exact’ method as the base case. Critical risk measures, such as unavailability, conditional failure intensity, failure frequency, mean unavailability, number of failures, and minimal cut-sets, were documented and compared. Importance measures, such as marginal or Birnbaum, criticality, diagnostic, risk achievement, and risk reduction worth, were also computed and tabulated. Details of all cut-sets and the probability of failure are presented. The calculated importance measures would help microgrid operators focus on events that yield the greatest system improvements and maintain an acceptable range of risk levels to ensure safe operation and improved system reliability. Full article
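Two of the calculation methods named above, cut-set summation and the Esary–Proschan bound, can be illustrated for a top event whose minimal cut-sets are built from independent basic events. The failure probabilities and cut-sets below are hypothetical examples, not the paper's microgrid data:

```python
# Sketch of two fault-tree evaluation methods over minimal cut-sets.
def cutset_prob(p, cutset):
    """Probability that every basic event in one cut-set occurs (independence)."""
    prob = 1.0
    for event in cutset:
        prob *= p[event]
    return prob

def cutset_summation(p, cutsets):
    """Rare-event upper bound: sum of individual cut-set probabilities."""
    return sum(cutset_prob(p, c) for c in cutsets)

def esary_proschan(p, cutsets):
    """Esary-Proschan bound: 1 - product over cut-sets of (1 - P(cut-set))."""
    prod = 1.0
    for c in cutsets:
        prod *= 1.0 - cutset_prob(p, c)
    return 1.0 - prod

p = {"PV": 0.02, "WT": 0.03, "BESS": 0.01}   # illustrative failure probabilities
cutsets = [("PV", "WT"), ("BESS",)]          # hypothetical minimal cut-sets
print(round(cutset_summation(p, cutsets), 6))  # → 0.0106
print(round(esary_proschan(p, cutsets), 6))    # → 0.010594
```

As the numbers show, the Esary–Proschan value sits slightly below the cut-set summation, which is why the paper compares both against the exact method as the base case.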

29 pages, 7700 KB  
Article
Secure and Decentralised Swarm Authentication Using Hardware Security Primitives
by Sagir Muhammad Ahmad and Barmak Honarvar Shakibaei Asli
Electronics 2026, 15(2), 423; https://doi.org/10.3390/electronics15020423 - 18 Jan 2026
Abstract
Autonomous drone swarms are increasingly deployed in critical domains such as infrastructure inspection, environmental monitoring, and emergency response. While their distributed operation enables scalability and resilience, it also introduces new vulnerabilities, particularly in authentication and trust establishment. Conventional cryptographic solutions, including public key infrastructures (PKI) and symmetric key protocols, impose computational and connectivity requirements unsuited to resource-constrained and external infrastructure-free swarm deployments. In this paper, we present a decentralized authentication scheme rooted in hardware security primitives (HSPs); specifically, Physical Unclonable Functions (PUFs) and True Random Number Generators (TRNGs). The protocol leverages master-initiated token broadcasting, iterative HSP seed evolution, randomized response delays, and statistical trust evaluation to detect cloning, replay, and impersonation attacks without reliance on centralized authorities or pre-distributed keys. Simulation studies demonstrate that the scheme achieves lightweight operation, rapid anomaly detection, and robustness against wireless interference, making it well-suited for real-time swarm systems. Full article
(This article belongs to the Special Issue Unmanned Aircraft Systems with Autonomous Navigation, 2nd Edition)
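The PUF-based authentication principle can be illustrated with a toy model: each drone's PUF is simulated here as a device-unique secret mixed into a hash of the challenge (a real PUF derives its response from silicon manufacturing variation and stores no secret at all). The verifier enrolls challenge–response pairs and later checks a fresh response, so a clone without the original silicon fails:

```python
import hashlib

# Toy PUF model for illustration only; not the paper's protocol, which adds
# token broadcasting, seed evolution, response delays, and trust statistics.
def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    return hashlib.sha256(device_secret + challenge).digest()

genuine = b"silicon-fingerprint-A"   # stands in for device-unique variation
clone = b"silicon-fingerprint-B"     # an attacker's device differs physically
challenge = b"nonce-0001"

enrolled = puf_response(genuine, challenge)
print(puf_response(genuine, challenge) == enrolled)  # → True: genuine device
print(puf_response(clone, challenge) == enrolled)    # → False: clone detected
```

Because each challenge is used once and responses are unpredictable without the physical device, replayed or cloned responses fail verification without any centralized key distribution.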

44 pages, 996 KB  
Article
Adaptive Hybrid Consensus Engine for V2X Blockchain: Real-Time Entropy-Driven Control for High Energy Efficiency and Sub-100 ms Latency
by Rubén Juárez and Fernando Rodríguez-Sela
Electronics 2026, 15(2), 417; https://doi.org/10.3390/electronics15020417 - 17 Jan 2026
Abstract
We present an adaptive governance engine for blockchain-enabled Vehicular Ad Hoc Networks (VANETs) that regulates the latency–energy–coherence trade-off under rapid topology changes. The core contribution is an Ideal Information Cycle (an operational abstraction of information injection/validation) and a modular VANET Engine implemented as a real-time control loop in NS-3.35. At runtime, the Engine monitors normalized Shannon entropies—informational entropy S over active transactions and spatial entropy H_spatial over occupancy bins (both on [0,1])—and adapts the consensus mode (latency-feasible PoW versus signature/quorum-based modes such as PoS/FBA) together with rigor parameters via calibrated policy maps. Governance is formulated as a constrained operational objective that trades per-block resource expenditure (radio + cryptography) against a Quality-of-Information (QoI) proxy derived from delay/error tiers, while maintaining timeliness and ledger-coherence pressure. Cryptographic cost is traced through counted operations, E_crypto = e_h·n_hash + e_sign·n_sig, and coherence is tracked using the LCP-normalized definition D_ledger(t) computed from the longest common prefix (LCP) length across nodes. We evaluate the framework under urban/highway mobility, scheduled partitions, and bounded adversarial stressors (Sybil identities and Byzantine proposers), using 600 s runs with 30 matched random seeds per configuration and 95% bias-corrected and accelerated (BCa) bootstrap confidence intervals. In high-disorder regimes (S ≥ 0.8), the Engine reduces total per-block energy (radio + cryptography) by more than 90% relative to a fixed-parameter PoW baseline tuned to the same agreement latency target. A consensus-first triggering policy further lowers agreement latency and improves throughput compared with broadcast-first baselines. In the emphasized urban setting under high mobility (v = 30 m/s), the Engine keeps agreement/commit latency in the sub-100 ms range while maintaining finality typically within sub-150 ms ranges, bounds orphaning (≤10%), and reduces average ledger divergence below 0.07 at high spatial disorder. The main evaluation is limited to N ≤ 100 vehicles under full PHY/MAC fidelity. PoW targets are intentionally latency-feasible and are not intended to provide cryptocurrency-grade majority-hash security; operational security assumptions and mode transition safeguards are discussed in the manuscript. Full article
(This article belongs to the Special Issue Intelligent Technologies for Vehicular Networks, 2nd Edition)
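The normalized entropies S and H_spatial monitored by the Engine follow the standard construction: Shannon entropy over bin probabilities divided by log K (K bins), so a uniform distribution maps to 1 and a fully concentrated one to 0. A sketch of just that normalization; how transactions and occupancy are binned is the paper's design:

```python
import math

# Normalized Shannon entropy on [0, 1]: H = -sum(p * ln p) / ln K.
def normalized_entropy(counts):
    total = sum(counts)
    k = len(counts)
    h = -sum((c / total) * math.log(c / total) for c in counts if c > 0)
    return h / math.log(k) if k > 1 else 0.0

print(round(normalized_entropy([10, 10, 10, 10]), 3))  # → 1.0 (maximum disorder)
print(normalized_entropy([40, 0, 0, 0]) == 0.0)        # → True (fully concentrated)
```

A controller reading S ≥ 0.8 from this quantity would then be in the "high-disorder regime" where the abstract reports the largest energy savings.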

21 pages, 321 KB  
Review
Privacy-Preserving Protocols in Smart Cities and Industrial IoT: Challenges, Trends, and Future Directions
by Manuel José Cabral dos Santos Reis
Electronics 2026, 15(2), 399; https://doi.org/10.3390/electronics15020399 - 16 Jan 2026
Cited by 1
Abstract
The increasing deployment of interconnected devices in Smart Cities and Industrial Internet of Things (IIoT) environments has significantly enhanced operational efficiency, automation, and real-time data analytics. However, this rapid digitization also introduces complex security and privacy challenges, particularly in the handling of sensitive data across heterogeneous and resource-constrained networks. This review explores the current landscape of privacy-preserving protocols designed for Smart City and IIoT infrastructures. We examine state-of-the-art approaches including lightweight cryptographic schemes, secure data aggregation, anonymous communication protocols, and blockchain-based frameworks. The paper also analyzes practical trade-offs between security, latency, and computational overhead in real-world deployments. Open research challenges such as secure interoperability, privacy in federated learning, and resilience against AI-driven cyberattacks are discussed. Finally, the paper outlines promising research directions and technologies that can enable scalable, secure, and privacy-aware network infrastructures for future urban and industrial ecosystems. Full article
(This article belongs to the Special Issue Computer Networking Security and Privacy)

28 pages, 22992 KB  
Article
Domain Knowledge-Infused Synthetic Data Generation for LLM-Based ICS Intrusion Detection: Mitigating Data Scarcity and Imbalance
by Seokhyun Ann, Hongeun Kim, Suhyeon Park, Seong-je Cho, Joonmo Kim and Harksu Cho
Electronics 2026, 15(2), 371; https://doi.org/10.3390/electronics15020371 - 14 Jan 2026
Abstract
Industrial control systems (ICSs) are increasingly interconnected with enterprise IT networks and remote services, which expands the attack surface of operational technology (OT) environments. However, collecting sufficient attack traffic from real OT/ICS networks is difficult, and the resulting scarcity and class imbalance of malicious data hinder the development of intrusion detection systems (IDSs). At the same time, large language models (LLMs) have shown promise for security analytics when system events are expressed in natural language. This study investigates an LLM-based network IDS for a smart-factory OT/ICS environment and proposes a synthetic data generation method that injects domain knowledge into attack samples. Using the ICSSIM simulator, we construct a bottle-filling smart factory, implement six MITRE ATT&CK for ICS-based attack scenarios, capture Modbus/TCP traffic, and convert each request–response pair into a natural-language description of network behavior. We then generate synthetic attack descriptions with GPT by combining (1) statistical properties of normal traffic, (2) MITRE ATT&CK for ICS tactics and techniques, and (3) expert knowledge obtained from executing the attacks in ICSSIM. The Llama 3.1 8B Instruct model is fine-tuned with QLoRA on a seven-class classification task (Benign vs. six attack types) and evaluated on a test set composed exclusively of real ICSSIM traffic. Experimental results show that synthetic data generated only from statistical information, or from statistics plus MITRE descriptions, yield limited performance, whereas incorporating environment-specific expert knowledge is associated with substantially higher performance on our ICSSIM-based expanded test set (100% accuracy in binary detection and 96.49% accuracy with a macro F1-score of 0.958 in attack-type classification). 
Overall, these findings suggest that domain-knowledge-infused synthetic data and natural-language traffic representations can support LLM-based IDSs in OT/ICS smart-factory settings; however, further validation on larger and more diverse datasets is needed to confirm generality. Full article
(This article belongs to the Special Issue AI-Enhanced Security: Advancing Threat Detection and Defense)

31 pages, 643 KB  
Systematic Review
The Use of Business Intelligence and Analytics in Electric Vehicle Technology: A Comprehensive Survey
by Alexandra Bousia
Electronics 2026, 15(2), 366; https://doi.org/10.3390/electronics15020366 - 14 Jan 2026
Abstract
The emerging urbanization and the extensive growth of the transportation sector are responsible for the significant increase in carbon dioxide emissions. Therefore, replacing traditional cars with Electric Vehicles (EVs) is a promising solution, offering a cleaner alternative. EVs are becoming increasingly popular and are being rapidly adopted worldwide. However, the exponential rise in EV sales has also raised a number of issues, which are becoming important and demanding. These challenges include the need for driving safety, battery degradation, inadequate infrastructure for charging EVs, and uneven energy distribution. In order for EVs to reach their full potential, intelligent systems and innovative technologies need to be introduced in the field of EVs. This is where business intelligence (BI) can be employed, along with artificial intelligence (AI), data analytics, and machine learning. In this paper, we provide a comprehensive survey on the use of BI strategies in the EV transportation sector. We first introduce EV and charging station technologies. Then, research works on the application of BI and data analysis techniques in EV technology are reviewed to further understand the challenges and open issues for the research and industry community. Moreover, related works on accident analysis, battery health prediction, charging station analysis, intelligent infrastructure, charging station siting analysis, and autonomous driving are investigated. This survey systematically reviews 75 peer-reviewed studies published between 2020 and 2025. Finally, we discuss the fundamental limitations and the future open challenges in the aforementioned topics. Full article
(This article belongs to the Special Issue Electronic Architecture for Autonomous Vehicles)

34 pages, 12645 KB  
Article
Multimodal Intelligent Perception at an Intersection: Pedestrian and Vehicle Flow Dynamics Using a Pipeline-Based Traffic Analysis System
by Bao Rong Chang, Hsiu-Fen Tsai and Chen-Chia Chen
Electronics 2026, 15(2), 353; https://doi.org/10.3390/electronics15020353 - 13 Jan 2026
Viewed by 523
Abstract
Traditional automated monitoring systems adopted for Intersection Traffic Control still face challenges, including high costs, maintenance difficulties, insufficient coverage, poor multimodal data integration, and limited traffic information analysis. To address these issues, this study proposes a sovereign AI-driven Smart Transportation governance approach, developing a mobile AI solution equipped with multimodal perception, task decomposition, memory, reasoning, and multi-agent collaboration capabilities. The proposed system integrates computer vision, multi-object tracking, natural language processing, Retrieval-Augmented Generation (RAG), and Large Language Models (LLMs) to construct a Pipeline-based Traffic Analysis System (PTAS). The PTAS produces real-time statistics on pedestrian and vehicle flows at intersections, incorporating potential risk factors such as traffic accidents, construction activities, and weather conditions into multimodal data fusion analysis, thereby providing forward-looking traffic insights. Experimental results demonstrate that the enhanced DuCRG-YOLOv11n pre-trained model, equipped with our proposed βsilu activation function, accurately identifies various vehicle types in object detection, achieving a frame rate of 68.25 FPS and a precision of 91.4%. Combined with ByteTrack, it tracks over 90% of vehicles in medium- to low-density traffic scenarios, achieving a MOTA of 0.719 and a MOTP of 0.08735. In traffic flow analysis, the RAG of Vertex AI, combined with Claude Sonnet 4 LLMs, provides a more comprehensive view, precisely interpreting the causes of peak-hour congestion and effectively compensating for missing data through contextual explanations. The proposed method enhances the efficiency of urban traffic regulation and optimizes decision support in intelligent transportation systems. Full article
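The abstract does not define the proposed βsilu activation. Assuming it belongs to the standard β-parameterized SiLU (Swish) family, f(x) = x·sigmoid(βx), a minimal sketch looks like this; the exact form and parameter value are assumptions, not the authors' definition:

```python
import numpy as np

def beta_silu(x, beta=1.5):
    """Beta-parameterized SiLU (Swish): f(x) = x * sigmoid(beta * x).

    Assumed form only: the paper's betasilu is not specified in the
    abstract. beta = 1 recovers the ordinary SiLU; larger beta
    sharpens the self-gating around zero.
    """
    return x * (1.0 / (1.0 + np.exp(-beta * x)))
```

With this form the function stays smooth and non-monotonic near zero, the property usually credited for Swish-style activations outperforming ReLU in detection backbones.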
(This article belongs to the Special Issue Interactive Design for Autonomous Driving Vehicles)

29 pages, 2829 KB  
Article
Real-Time Deterministic Lane Detection on CPU-Only Embedded Systems via Binary Line Segment Filtering
by Shang-En Tsai, Shih-Ming Yang and Chia-Han Hsieh
Electronics 2026, 15(2), 351; https://doi.org/10.3390/electronics15020351 - 13 Jan 2026
Cited by 1 | Viewed by 664
Abstract
The deployment of Advanced Driver-Assistance Systems (ADAS) in economically constrained markets frequently relies on hardware architectures that lack dedicated graphics processing units. Within such environments, the integration of deep neural networks faces significant hurdles, primarily stemming from strict limitations on energy consumption, the absolute necessity for deterministic real-time response, and the rigorous demands of safety certification protocols. Meanwhile, traditional geometry-based lane detection pipelines continue to exhibit limited robustness under adverse illumination conditions, including intense backlighting, low-contrast nighttime scenes, and heavy rainfall. Motivated by these constraints, this work re-examines geometry-based lane perception from a sensor-level viewpoint and introduces a Binary Line Segment Filter (BLSF) that leverages the inherent structural regularity of lane markings in bird’s-eye-view (BEV) imagery within a computationally lightweight framework. The proposed BLSF is integrated into a complete pipeline consisting of inverse perspective mapping, median local thresholding, line-segment detection, and a simplified Hough-style sliding-window fitting scheme combined with RANSAC. Experiments on a self-collected dataset of 297 challenging frames show that the inclusion of BLSF significantly improves robustness over an ablated baseline while sustaining real-time performance on a 2 GHz ARM CPU-only platform. Additional evaluations on the Dazzling Light and Night subsets of the CULane and LLAMAS benchmarks further confirm consistent gains of approximately 6–7% in F1-score, together with corresponding improvements in IoU. These results demonstrate that interpretable, geometry-driven lane feature extraction remains a practical and complementary alternative to lightweight learning-based approaches for cost- and safety-critical ADAS applications. Full article
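The RANSAC stage named in the fitting pipeline can be illustrated with a generic 2D line fit over lane-pixel candidates; the sampling budget, inlier tolerance, and least-squares refit below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=2.0, rng=None):
    """Fit a 2D line y = m*x + c to candidate lane pixels with RANSAC.

    Generic sketch of the RANSAC step in the pipeline; thresholds are
    illustrative. `points` is an (N, 2) array of (x, y) pixel coordinates.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:  # skip degenerate vertical sample
            continue
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # vertical residual of every point to the candidate line
        resid = np.abs(points[:, 1] - (m * points[:, 0] + c))
        inliers = resid < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit with least squares on the best consensus set
    m, c = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return m, c, best_inliers
```

Because only two points are sampled per hypothesis, the loop stays cheap enough for a CPU-only real-time budget, which is the design constraint the paper emphasizes.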
(This article belongs to the Special Issue Feature Papers in Electrical and Autonomous Vehicles, Volume 2)

23 pages, 3086 KB  
Article
MARL-Driven Decentralized Crowdsourcing Logistics for Time-Critical Multi-UAV Networks
by Juhyeong Han and Hyunbum Kim
Electronics 2026, 15(2), 331; https://doi.org/10.3390/electronics15020331 - 12 Jan 2026
Viewed by 375
Abstract
Centralized UAV logistics controllers can achieve strong navigation performance in controlled settings, but they do not capture key deployment factors in crowdsourcing-enabled emergency logistics, where heterogeneous UAV owners participate with unreliability and dropout, and incentive expenditure and fairness must be accounted for. This paper presents a decentralized crowdsourcing multi-UAV emergency logistics framework on an edge-orchestrated architecture that (i) performs urgency-aware dispatch under distance/energy/payload constraints, (ii) tracks reliability and participation dynamics under stress (unreliable agents and dropout), and (iii) quantifies incentive feasibility via total payment and payment inequality (Gini). We adopt a hybrid decision design in which PPO/DQN policies provide real-time navigation/control, while GA/ACO act as planning-level route refinement modules (not reinforcement learning) to improve global candidate quality under safety constraints. We evaluate the framework in a controlled grid-world simulator and, where applicable, explicitly report re-evaluation results under matched stress settings. In the nominal comparison, centralized DQN attains high navigation-centric success (e.g., 0.970 ± 0.095) with short reach steps, but it omits incentives by construction, whereas the proposed crowdsourcing method reports measurable payment and fairness outcomes and remains evaluable under unreliability and dropout sweeps. We further provide a utility decomposition that attributes negative-utility regimes primarily to collision-related costs and secondarily to incentive expenditure, clarifying the operational trade-off between mission value, safety risk, and incentive cost. Overall, the results indicate that navigation-only baselines can appear strong when participation economics are ignored, while a deployable crowdsourcing system must explicitly expose incentive/fairness and robustness characteristics under stress. Full article
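The payment-inequality metric used above is the standard Gini coefficient. A minimal sketch of its mean-absolute-difference form (the paper's exact estimator is not specified in the abstract):

```python
import numpy as np

def gini(payments):
    """Gini coefficient of a non-negative payment vector.

    Mean-absolute-difference form:
        G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x)).
    0 means perfectly equal payments; values toward 1 mean one
    participant captures almost all of the incentive budget.
    """
    x = np.asarray(payments, dtype=float)
    if x.sum() == 0:
        return 0.0
    diffs = np.abs(x[:, None] - x[None, :]).sum()
    return diffs / (2.0 * len(x) ** 2 * x.mean())
```

For example, four UAV owners paid [0, 0, 0, 1] yield G = 0.75, the maximum achievable with n = 4, whereas equal payments yield G = 0.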
(This article belongs to the Special Issue Parallel and Distributed Computing for Emerging Applications)

33 pages, 4488 KB  
Article
New Fuzzy Aggregators for Ordered Fuzzy Numbers for Trend and Uncertainty Analysis
by Miroslaw Kozielski, Piotr Prokopowicz and Dariusz Mikolajewski
Electronics 2026, 15(2), 309; https://doi.org/10.3390/electronics15020309 - 10 Jan 2026
Viewed by 270
Abstract
Decision-making under uncertainty, especially when dealing with incomplete or linguistically described data, remains a significant challenge in various fields of science and industry. The increasing complexity of real-world problems necessitates the development of mathematical models and data processing techniques that effectively address uncertainty and incompleteness. Aggregators play a key role in solving these problems, particularly in fuzzy systems, where they constitute fundamental tools for decision-making, data analysis, and information fusion. Aggregation functions have been extensively studied and applied in many fields of science and engineering. Recent research has explored their usefulness in fuzzy control systems, highlighting both their advantages and limitations. One promising approach is the use of ordered fuzzy numbers (OFNs), which can represent directional tendencies in data. Previous studies have introduced the property of direction sensitivity and the corresponding determinant parameter, which enables the analysis of correspondence between OFNs and facilitates inference operations. The aim of this paper is to examine existing aggregation functions for fuzzy numbers and assess their suitability for OFNs. By analyzing the properties, theoretical foundations, and practical applications of these functions, we aim to identify a suitable aggregation operator that complies with the principles of OFNs while ensuring consistency and efficiency in decision-making based on fuzzy structures. This paper introduces a novel aggregation approach that preserves the expected mathematical properties while incorporating the directional components inherent to OFNs. The proposed method aims to improve the robustness and interpretability of fuzzy reasoning systems under uncertainty. Full article
(This article belongs to the Special Issue Advances in Intelligent Systems and Networks, 2nd Edition)

31 pages, 13729 KB  
Article
Stage-Wise SOH Prediction Using an Improved Random Forest Regression Algorithm
by Wei Xiao, Jun Jia, Wensheng Gao, Haibo Li, Hong Xu, Weidong Zhong and Ke He
Electronics 2026, 15(2), 287; https://doi.org/10.3390/electronics15020287 - 8 Jan 2026
Viewed by 496
Abstract
In complex energy storage operating scenarios, batteries seldom undergo the complete charge–discharge cycles required for periodic capacity calibration. Methods based on accelerated aging experiments can indicate possible aging paths; however, due to uncertainties such as changing operating conditions, environmental variations, and manufacturing inconsistencies, the degradation information obtained from such experiments may not be applicable to the entire lifecycle. To address this, we developed a stage-wise state-of-health (SOH) prediction approach that combines offline training with online updating. During the offline training phase, multiple single-cell experiments were conducted under various combinations of depth of discharge (DOD) and C-rate. Multi-dimensional health features (HFs) were extracted, and an accelerated aging probability pAA was defined. Based on the correlation statistics kHF between the HFs, the SOH, and pAA, all cells in the dataset were divided into general early, middle, and late aging stages. For each stage, cells were further classified by their longevity (long, medium, and short), and multiple models were trained offline for each category. The results show that models trained on cells following similar aging paths achieve significantly better performance than a model trained on all data combined. Meanwhile, HF optimization was performed via a three-step process: an initial screening based on expert knowledge, a second screening using Spearman correlation coefficients, and an automatic feature importance ranking using a random forest regression (RFR) model. The proposed method is innovative in the following ways: (1) The stage-wise multi-model strategy significantly improves SOH prediction accuracy across the entire lifecycle, maintaining the mean absolute percentage error (MAPE) within 1%. (2) The improved model provides uncertainty quantification, issuing a warning signal at least 50 cycles before the onset of accelerated aging. (3) The analysis of feature importance from the model outputs allows the indirect identification of the primary aging mechanisms at different stages. (4) The model is robust against missing or low-quality HFs: if certain features cannot be obtained or are of poor quality, the prediction process does not fail. Full article
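The second and third screening steps (Spearman filtering followed by random-forest importance ranking) can be sketched as follows; the threshold, forest size, and feature names are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor

def screen_features(X, y, names, rho_min=0.5, top_k=5, seed=0):
    """Sketch of the two automated HF screening steps (the expert-
    knowledge pre-screening is assumed to have produced X already):

    1) keep features whose |Spearman rho| with SOH exceeds rho_min;
    2) rank the survivors by random-forest importance, keep the top_k.
    """
    keep = []
    for i in range(X.shape[1]):
        rho, _ = spearmanr(X[:, i], y)
        if abs(rho) >= rho_min:
            keep.append(i)
    rf = RandomForestRegressor(n_estimators=100, random_state=seed)
    rf.fit(X[:, keep], y)
    order = np.argsort(rf.feature_importances_)[::-1][:top_k]
    return [names[keep[i]] for i in order]
```

Spearman correlation is rank-based, so it tolerates the nonlinear but monotone HF–SOH relationships typical of battery aging, which is presumably why it precedes the importance ranking here.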

24 pages, 3204 KB  
Article
AMUSE++: A Mamba-Enhanced Speech Enhancement Framework with Bi-Directional and Advanced Front-End Modeling
by Tsung-Jung Li, Berlin Chen and Jeih-Weih Hung
Electronics 2026, 15(2), 282; https://doi.org/10.3390/electronics15020282 - 8 Jan 2026
Viewed by 787
Abstract
This study presents AMUSE++, an advanced speech enhancement framework that extends the MUSE++ model by redesigning its core Mamba module with two major improvements. First, the originally unidirectional one-dimensional (1D) Mamba is transformed into a bi-directional architecture to capture temporal dependencies more effectively. Second, this module is extended to a two-dimensional (2D) structure that jointly models both time and frequency dimensions, capturing richer speech features essential for enhancement tasks. In addition to these structural changes, we propose a Preliminary Denoising Module (PDM) as an advanced front-end, which is composed of multiple cascaded 2D bi-directional Mamba Blocks designed to preprocess and denoise input speech features before the main enhancement stage. Extensive experiments on the VoiceBank+DEMAND dataset demonstrate that AMUSE++ significantly outperforms the backbone MUSE++ across a variety of objective speech enhancement metrics, including improvements in perceptual quality and intelligibility. These results confirm that the combination of bi-directionality, two-dimensional modeling, and an enhanced denoising front-end provides a powerful approach for tackling challenging noisy speech scenarios. AMUSE++ thus represents a notable advancement in neural speech enhancement architectures, paving the way for more effective and robust speech enhancement systems in real-world applications. Full article
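The bi-directional extension of a causal sequence module can be illustrated generically: run the module forward and on the time-reversed sequence, then merge the two passes. The stand-in operator and the additive merge below are assumptions (concatenation is also common); Mamba's selective-state-space internals are omitted:

```python
import numpy as np

def causal_ema(x, alpha=0.9):
    """Stand-in for a unidirectional (causal) sequence module such as
    1D Mamba: an exponential moving average over time, (T, C) -> (T, C).
    Each output step depends only on past and present inputs."""
    y = np.zeros_like(x)
    state = np.zeros(x.shape[1])
    for t in range(x.shape[0]):
        state = alpha * state + (1 - alpha) * x[t]
        y[t] = state
    return y

def bidirectional(module, x):
    """Bi-directional wrapper in the spirit of AMUSE++: apply the module
    forward and on the time-reversed input, undo the reversal, and sum.
    The resulting output at time t can depend on the whole sequence."""
    fwd = module(x)
    bwd = module(x[::-1])[::-1]
    return fwd + bwd
```

A useful sanity property of this construction is time-reversal equivariance: reversing the input simply reverses the output, which a purely causal module does not satisfy.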

15 pages, 3153 KB  
Article
Decentralized Q-Learning for Multi-UAV Post-Disaster Communication: A Robotarium-Based Evaluation Across Urban Environments
by Udhaya Mugil Damodarin, Cristian Valenti, Sergio Spanò, Riccardo La Cesa, Luca Di Nunzio and Gian Carlo Cardarilli
Electronics 2026, 15(1), 242; https://doi.org/10.3390/electronics15010242 - 5 Jan 2026
Viewed by 368
Abstract
Large-scale disasters such as earthquakes and floods often cause the collapse of terrestrial communication networks, isolating affected communities and disrupting rescue coordination. Unmanned aerial vehicles (UAVs) can serve as rapid-deployment aerial relays to restore connectivity in such emergencies. This work presents a decentralized Q-learning framework in which each UAV operates as an independent agent that learns to maintain reliable two-hop links between mobile ground users. The framework integrates user mobility, UAV–user assignment, multi-UAV coordination, and failure tracking to enhance adaptability under dynamic conditions. The system is implemented and evaluated on the Robotarium platform, with propagation modeled using the Al-Hourani air-to-ground path loss formulation. Experiments conducted across Suburban, Dense Urban, and Highrise Urban environments show throughput gains of up to 20% compared with random placement baselines while maintaining failure rates below 5%. These results demonstrate that decentralized learning offers a scalable and resilient foundation for UAV-assisted emergency communication in environments where conventional infrastructure is unavailable. Full article
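The per-agent learning rule in such a decentralized scheme is the standard tabular Q-learning update, which each UAV applies independently to its own table from local observations; the state/action sizes, reward, and hyperparameters below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

class QAgent:
    """Independent tabular Q-learner, one instance per UAV, as in
    decentralized Q-learning: no shared table, no central critic."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95,
                 eps=0.1, seed=0):
        self.q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = np.random.default_rng(seed)

    def act(self, s):
        if self.rng.random() < self.eps:       # epsilon-greedy explore
            return int(self.rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[s]))       # exploit

    def update(self, s, a, r, s_next):
        # standard Q-learning target: r + gamma * max_a' Q(s', a')
        td_target = r + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.alpha * (td_target - self.q[s, a])
```

In the relay-placement setting, the state would encode a discretized UAV position and link status, the action a movement command, and the reward a throughput or link-quality signal; those mappings are the framework-specific part the abstract describes.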

34 pages, 3066 KB  
Article
Underwater Antenna Technologies with Emphasis on Submarine and Autonomous Underwater Vehicles (AUVs)
by Dimitrios G. Arnaoutoglou, Tzichat M. Empliouk, Dimitrios-Naoum Papamoschou, Yiannis Kyriacou, Andreas Papanastasiou, Theodoros N. F. Kaifas and George A. Kyriacou
Electronics 2026, 15(1), 219; https://doi.org/10.3390/electronics15010219 - 2 Jan 2026
Viewed by 1100
Abstract
Following the persistent evolution of terrestrial 5G wireless systems, a new field of underwater communication has emerged for applications such as environmental monitoring, underwater mining, and marine research. However, establishing reliable high-speed underwater networks remains notoriously difficult due to the severe RF attenuation in conductive seawater, which strictly limits communication range. In this article, we focus on a comprehensive review of different antenna types for future underwater communication and sensing systems, evaluating their performance and suitability for Autonomous Underwater Vehicles (AUVs). We critically examine and compare distinct antenna technologies, including Magnetic Induction (MI) coils, electrically short dipoles, wideband traveling wave antennas, printed planar antennas, and novel magnetoelectric (ME) resonators. Specifically, these antennas are compared in terms of physical footprint, operating frequency, bandwidth, and realized gain, revealing the trade-offs between miniaturization and radiation efficiency. Our analysis aims to identify the benefits and weaknesses of the different antenna types while emphasizing the necessity of innovative antenna designs to overcome the fundamental propagation limits of the underwater channel. Full article
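The severity of RF attenuation in seawater can be quantified with the good-conductor skin-effect approximation; the conductivity value below (about 4 S/m, typical of open-ocean seawater) is an assumption for illustration:

```python
import math

def seawater_attenuation_db_per_m(freq_hz, sigma=4.0):
    """Plane-wave attenuation in seawater under the good-conductor
    approximation (sigma >> omega*epsilon, valid well below ~1 GHz):

        alpha = sqrt(pi * f * mu0 * sigma)   [Np/m]

    sigma ~ 4 S/m is a typical open-ocean value (assumed here).
    """
    mu0 = 4e-7 * math.pi                       # vacuum permeability
    alpha_np = math.sqrt(math.pi * freq_hz * mu0 * sigma)
    return 8.686 * alpha_np                    # 1 Np = 8.686 dB
```

At 10 kHz this gives roughly 3.5 dB per metre, and the loss grows with the square root of frequency, which is why practical underwater RF links are confined to very low frequencies and short ranges, motivating the alternative antenna technologies surveyed above.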
