
Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

17 pages, 25231 KB  
Article
Low-Cost and Fully Metallic Reconfigurable Leaky-Wave Antenna Based on 3D-Printing Technology for Multi-Beam Operation
by Miguel Díaz-Martín, Carlos Molero, Ginés Martínez-García and Marcos Baena-Molina
Electronics 2025, 14(23), 4723; https://doi.org/10.3390/electronics14234723 - 30 Nov 2025
Viewed by 248
Abstract
Global data consumption is experiencing exponential growth, driving the demand for wireless links with higher transmission speeds, lower latency, and support for emerging applications such as 6G. A promising approach to address these requirements is the use of higher-frequency bands, which in turn necessitates the development of advanced antenna systems. This work presents the design and experimental validation of a reconfigurable, low-cost leaky-wave antenna capable of controlling the propagation direction of single-, dual-, and triple-beam configurations in the FR3 frequency band. The antenna employs slotted periodic patterns to enable directional electromagnetic field leakage, and it is based on a cost-effective and simple 3D-printing fabrication process. Laboratory testing confirms the theoretical and simulated predictions, demonstrating the feasibility of the proposed antenna solution. Full article
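
As background to the beam steering described above (standard leaky-wave theory rather than details taken from the article), the main-beam direction of a periodic leaky-wave antenna is set by the phase constant of the radiating space harmonic:

```latex
\sin\theta_m \;\approx\; \frac{\beta_n(\omega)}{k_0},
\qquad
\beta_n = \beta_0 + \frac{2\pi n}{p} \quad (\text{typically } n = -1),
```

where θ_m is measured from broadside, k_0 is the free-space wavenumber, β_0 is the fundamental phase constant, and p is the period of the slotted pattern; reconfiguring the periodic loading changes β_n and therefore steers the beam.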

17 pages, 4978 KB  
Article
Optimizing Periodic Intervals in Multi-Stage Waveguide Stub Bandstop Filters for Microwave Leakage Suppression
by Yusuke Kusama, Hao-Hui Chen, Yao-Wen Hsu, Kyohei Murayama and Robert Weston Johnston
Electronics 2025, 14(23), 4660; https://doi.org/10.3390/electronics14234660 - 27 Nov 2025
Viewed by 207
Abstract
Waveguide bandstop filters (BSFs) play a key role in preventing electromagnetic wave leakage from gaps or sample entrances and exits, which can compromise safety, work efficiency, and electromagnetic compatibility. This study designs a waveguide BSF using a finite periodic structure of cascaded short-circuited E-plane stubs (chokes) to achieve a stopband with the transmission coefficient |S21| ≤ −30 dB and a 4% relative bandwidth. We investigate the impact of stub width on bandwidth broadening and of stub spacing in cascade connections on spurious passband suppression. Electromagnetic and circuit simulations, validated experimentally, reveal that stub spacing at odd multiples of a quarter guided wavelength (λg/4) minimizes spurious passbands, with wider stubs and larger spacings enhancing stopband characteristics. This offers a clear advantage by reducing the number of resonators and the manufacturing cost. These findings provide new, practical guidelines for designing efficient BSFs that prevent microwave leakage and may also apply to other filters or array antennas that exploit periodicity. Full article
(This article belongs to the Section Microwave and Wireless Communications)
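
For context on the quarter-guided-wavelength spacing rule reported above, the guided wavelength of the dominant TE10 mode in a rectangular waveguide of broad-wall width a is given by the textbook relation (not a value from the article):

```latex
\lambda_g = \frac{\lambda_0}{\sqrt{1 - \left(\lambda_0 / 2a\right)^{2}}},
\qquad
d = (2m + 1)\,\frac{\lambda_g}{4}, \quad m = 0, 1, 2, \dots
```

Spacing adjacent stubs by odd multiples of λg/4 makes the connecting line behave as an impedance inverter between resonators, which is consistent with the spurious-passband suppression observed in the study.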

19 pages, 3438 KB  
Article
Geometry-Aware Cross-Modal Translation with Temporal Consistency for Robust Multi-Sensor Fusion in Autonomous Driving
by Zhengyi Lu, Jinxiang Pang and Zhehai Zhou
Electronics 2025, 14(23), 4663; https://doi.org/10.3390/electronics14234663 - 27 Nov 2025
Viewed by 369
Abstract
Intelligent Transportation Systems (ITSs), particularly autonomous driving, face critical challenges when sensor modalities fail due to adverse conditions or hardware malfunctions, causing severe perception degradation that threatens system-wide reliability. We present a unified geometry-aware cross-modal translation framework that synthesizes missing sensor data while maintaining temporal consistency and quantifying uncertainty. Our pipeline enforces 92.7% frame-to-frame stability via an optical-flow-guided spatio-temporal module with smoothness regularization, preserves fine-grained 3D geometry through pyramid-level multi-scale alignment constrained by the Chamfer distance, surface normals, and edge consistency, and ultimately delivers dropout-tolerant perception by adaptively fusing multi-modal cues according to pixel-wise uncertainty estimates. Extensive evaluation on KITTI-360, nuScenes, and a newly collected Real-World Sensor Failure dataset demonstrates state-of-the-art performance: 35% reduction in Chamfer distance, 5% improvement in BEV (bird’s eye view) segmentation mIoU (mean Intersection over Union) (79.3%), and robust operation maintaining mIoU under complete sensor loss for 45+ s. The framework achieves real-time performance at 17 fps with 57% fewer parameters than competing methods, enabling deployment-ready sensor-agnostic perception for safety-critical autonomous driving applications. Full article
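
The geometric constraint mentioned above is anchored on the Chamfer distance between point sets. The sketch below is a minimal NumPy implementation of the standard symmetric Chamfer distance; the article's pyramid-level multi-scale variant and its surface-normal and edge terms are not reproduced here.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3)."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbour distance in both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```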

19 pages, 2140 KB  
Article
AI-Driven Adaptive Segmentation of Timed Up and Go Test Phases Using a Smartphone
by Muntazir Rashid, Arshad Sher, Federico Villagra Povina and Otar Akanyeti
Electronics 2025, 14(23), 4650; https://doi.org/10.3390/electronics14234650 - 26 Nov 2025
Viewed by 402
Abstract
The Timed Up and Go (TUG) test is a widely used clinical tool for assessing mobility and fall risk in older adults and individuals with neurological or musculoskeletal conditions. While it provides a quick measure of functional independence, traditional stopwatch-based timing offers only a single completion time and fails to reveal which movement phases contribute to impairment. This study presents a smartphone-based system that automatically segments the TUG test into distinct phases, delivering objective and low-cost biomarkers of lower-limb performance. This approach enables clinicians to identify phase-specific impairments in populations such as individuals with Parkinson’s disease, and older adults, supporting precise diagnosis, personalized rehabilitation, and continuous monitoring of mobility decline and neuroplastic recovery. Our method combines adaptive preprocessing of accelerometer and gyroscope signals with supervised learning models (Random Forest, Support Vector Machine (SVM), and XGBoost) using statistical features to achieve continuous phase detection and maintain robustness against slow or irregular gait, accommodating individual variability. A threshold-based turn detection strategy captures both sharp and gradual rotations. Validation against video ground truth using group K-fold cross-validation demonstrated strong and consistent performance: start and end points were detected in 100% of trials. The mean absolute error for total time was 0.42 s (95% CI: 0.36–0.48 s). The average error across phases (stand, walk, turn) was less than 0.35 s, and macro F1 scores exceeded 0.85 for all models, with the SVM achieving the highest score of 0.882. Combining accelerometer and gyroscope features improved macro F1 by up to 12%. Statistical tests (McNemar, Bowker) confirmed significant differences between models, and calibration metrics indicated reliable probabilistic outputs (ROC-AUC > 0.96, Brier score < 0.08). These findings show that a single smartphone can deliver accurate, interpretable, and phase-aware TUG analysis without complex multi-sensor setups, enabling practical and scalable mobility assessment for clinical use. Full article
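
To illustrate the kind of pipeline described above, the sketch below pairs simple statistical features over inertial windows with an RBF-kernel SVM and group-wise cross-validation, so that windows from one participant never appear in both training and test folds. The sampling rate, window length, and feature set are assumptions for illustration, not the study's exact configuration.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

def window_features(acc, gyr, fs=100, win_s=1.0):
    """Mean, standard deviation, and range per axis over fixed windows of
    3-axis accelerometer (acc) and gyroscope (gyr) signals."""
    step = int(fs * win_s)
    feats = []
    for start in range(0, len(acc) - step + 1, step):
        a = acc[start:start + step]
        g = gyr[start:start + step]
        feats.append(np.concatenate([a.mean(axis=0), a.std(axis=0), np.ptp(a, axis=0),
                                     g.mean(axis=0), g.std(axis=0), np.ptp(g, axis=0)]))
    return np.asarray(feats)

def cross_validated_macro_f1(X, y, groups, n_splits=5):
    """Group K-fold evaluation of an SVM phase classifier (macro F1)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = []
    for tr, te in GroupKFold(n_splits=n_splits).split(X, y, groups):
        clf.fit(X[tr], y[tr])
        scores.append(f1_score(y[te], clf.predict(X[te]), average="macro"))
    return float(np.mean(scores))
```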

11 pages, 428 KB  
Article
RMF-A: An Availability Assurance Framework for Quantitative Evaluation of Operational Resilience
by Cheon-Ho Min and Jin Kwak
Electronics 2025, 14(23), 4644; https://doi.org/10.3390/electronics14234644 - 26 Nov 2025
Viewed by 294
Abstract
Recent data center incidents have revealed that certification under ISO 22301 and ISO/IEC 27001 does not guarantee real operational resilience. This study presents the Availability Assurance Framework (RMF-A), an extension of the NIST Risk Management Framework that introduces an Availability Assurance Phase. RMF-A combines ISO-based management controls with NIST’s evidence-driven assessment using the Availability Evidence Model (AEM) and the Availability Assurance Index (AAI). AEM defines measurable indicators—recovery rate (RR), recovery time (RTO), and Detection Effectiveness (DET)—and AAI aggregates them into a quantitative assurance score. Validation using three open datasets—Google Cluster Trace, Azure Cloud Trace, and LANL HPC Logs—showed consistent assurance results: Google (AAI = 0.758, ATO-Conditional), Azure (AAI = 0.720, ATO-Conditional), and LANL HPC (AAI = 0.744, ATO-Conditional). The results confirm that RMF-A provides a reproducible, evidence-based approach to quantify operational resilience and ensure continuous availability. Full article
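
The abstract does not state the aggregation rule behind the Availability Assurance Index; purely as an illustrative assumption, a weighted combination of the three normalized indicators could take the form

```latex
\mathrm{AAI} \;=\; w_{RR}\,\mathrm{RR}
\;+\; w_{RT}\left(1 - \min\!\left(1, \frac{\mathrm{RT}}{\mathrm{RTO}}\right)\right)
\;+\; w_{DET}\,\mathrm{DET},
\qquad w_{RR} + w_{RT} + w_{DET} = 1,
```

where RT is the measured recovery time and RTO its target; the indicator definitions and weights actually used are specified in the article.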

21 pages, 4092 KB  
Article
Enabling Scalable and Manufacturable Large-Scale Antenna Arrays Through Hexagonal Subarray Implementation over Goldberg Polyhedra
by Santiago Loza-Morcillo and José Luis Blanco-Murillo
Electronics 2025, 14(23), 4618; https://doi.org/10.3390/electronics14234618 - 25 Nov 2025
Viewed by 548
Abstract
We introduce a scalable and manufacturable approach to conformal large-scale antenna arrays, leveraging Goldberg Polyhedra configurations with hexagonal subarrays to enable cost-effective, high-performance beam steering. Planar array designs face challenges in phase control and beam deformation when steering away from the broadside, leading to increased beamwidth and degraded angular resolution. Our near-spherical Goldberg structures offer a fabrication-friendly, periodic architecture that supports industrial scalability while enabling efficient 360° digital beamforming with minimal distortion. Simulation results confirm significant reductions in sidelobe levels and improved energy concentration, providing enhanced multibeam capabilities and simplified digital beamforming (DBF) control. This approach paves the way for next-generation radar and satellite systems requiring precise directional control, minimal interference, and robust, flexible beam steering performance. Full article
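
The beam-deformation issue noted above for planar arrays follows from aperture projection: when a planar aperture is scanned to an angle θ0 from broadside, its effective width shrinks by cos θ0, so as a standard first-order relation (not a result from the article) the half-power beamwidth broadens roughly as

```latex
\theta_{3\,\mathrm{dB}}(\theta_0) \;\approx\; \frac{\theta_{3\,\mathrm{dB}}(0)}{\cos\theta_0}.
```

A near-spherical arrangement of hexagonal subarrays mitigates this because, for any steering direction, some facets are oriented close to broadside.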

13 pages, 835 KB  
Article
Layer-Pipelined CNN Accelerator Design on 2.5D FPGAs
by Mengxuan Wang and Chang Wu
Electronics 2025, 14(23), 4587; https://doi.org/10.3390/electronics14234587 - 23 Nov 2025
Viewed by 334
Abstract
With the rapid advancement of 2.5D FPGA technology, the integration of multiple FPGA dies enables larger design capacity and higher computing power. This progress provides a high-speed hardware platform well-suited for neural network acceleration. In this paper, we present a high-performance accelerator design for large-scale neural networks on 2.5D FPGAs. First, we propose a layer pipeline architecture that utilizes multiple accelerator cores, each equipped with individual high-bandwidth DDR memory. To address inter-die data dependencies, we introduce a block convolution mechanism that enables independent and efficient computation across dies. Furthermore, we propose a design space exploration scheme to optimize computational efficiency under resource constraints. Experimental results demonstrate that our proposed accelerator achieves 4860.87 GOPS when running VGG-16 on the Alveo U250 board, significantly outperforming existing layer pipeline designs on the same platform. Full article
(This article belongs to the Special Issue Advances in High-Performance and Parallel Computing)
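
The block-convolution mechanism mentioned above can be pictured as convolving each spatial tile independently with its own zero padding, so no feature-map halo has to cross die boundaries. The sketch below is a generic CPU illustration of that idea using SciPy, not the accelerator's actual dataflow.

```python
import numpy as np
from scipy.signal import convolve2d

def block_conv2d(x: np.ndarray, k: np.ndarray, block: int = 64) -> np.ndarray:
    """Convolve each block of x independently with kernel k.

    'same'-mode convolution zero-pads every tile on its own, so tiles have no
    data dependency on their neighbours; this is the approximation block
    convolution accepts at tile borders in exchange for die-local computation."""
    out = np.zeros(x.shape, dtype=float)
    h, w = x.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = x[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = convolve2d(tile, k, mode="same")
    return out
```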

38 pages, 25113 KB  
Article
A Two-Stage End-to-End Framework for Robust Scene Text Spotting with Self-Calibrated Detection and Contextual Recognition
by Yuning Cheng, Jinhong Huang, Io San Tai, Subrota Kumar Mondal, Tianqi Wang and Hussain Mohammed Dipu Kabir
Electronics 2025, 14(23), 4594; https://doi.org/10.3390/electronics14234594 - 23 Nov 2025
Viewed by 610
Abstract
End-to-end scene text detection and recognition, which involves detecting and recognizing text in natural images, still faces significant challenges, particularly in handling text of arbitrary shapes, complex backgrounds, and computational efficiency requirements. This paper proposes a novel and viable end-to-end OCR framework that synergistically combines a powerful detection network with advanced recognition models. For text detection, we develop a method called Text Contrast Self-Calibrated Network (TextCSCN), which employs pixel-wise supervised contrastive learning to extract more discriminative features. TextCSCN addresses long-range dependency modeling and limited receptive field issues through self-calibrated convolutions and Global Convolutional Networks (GCNs). We further introduce an efficient Mamba-based bidirectional module for boundary refinement, enhancing both accuracy and speed. For text recognition, our framework employs a Swin Transformer backbone with Bidirectional Feature Pyramid Networks (BiFPNs) for optimized multi-scale feature extraction. We propose a Pre-Gated Contextual Attention Gate (PCAG) mechanism to effectively fuse visual and linguistic information while minimizing noise and uncertainty in multi-modal integration. Experiments on challenging benchmarks including TotalText and CTW1500 demonstrate the effectiveness of our approach. Our detection module achieves state-of-the-art performance with an F-score of 88.21% on TotalText, and the complete end-to-end system shows comparable improvements in recognition accuracy, establishing new benchmarks for scene text spotting. Full article

31 pages, 1411 KB  
Article
A Source-to-Source Compiler to Enable Hybrid Scheduling for High-Level Synthesis
by Yuhan She, Yanlong Huang, Jierui Liu, Ray C. C. Cheung and Hong Yan
Electronics 2025, 14(23), 4578; https://doi.org/10.3390/electronics14234578 - 22 Nov 2025
Viewed by 279
Abstract
High-Level Synthesis (HLS) has gained considerable attention for its ability to quickly generate hardware descriptions from untimed specifications. Most state-of-the-art commercial HLS tools employ static scheduling, which excels in compute-intensive applications but struggles with control-dominant designs. While some open-source tools propose dynamic and hybrid scheduling techniques to synthesize dataflow-like architectures to improve speed, they lack well-established optimizations from static scheduling like datapath optimization and resource sharing, leading to frequency degradation and area overhead. Moreover, existing hybrid scheduling relies on extra dynamic synthesis support, either by dynamic or static HLS tools, and thereby loses generality. In this work, we propose another solution to achieve hybrid scheduling: a source-to-source compiler that exposes dynamism at the source code level, which reduces both frequency and area overhead while remaining fully compatible with modern static HLS tools without needing extra dynamic synthesis support. Experiments show significant improvements (1.26× speedup) on wall clock time (WCT) compared to VitisHLS and a better area–frequency–latency trade-off compared to dynamic (1.83× WCT speedup and 0.46× area) and hybrid (2.14× WCT speedup and 0.72× area) scheduling-based tools. Full article
(This article belongs to the Special Issue Emerging Applications of FPGAs and Reconfigurable Computing System)

27 pages, 659 KB  
Review
From Vulnerability to Robustness: A Survey of Patch Attacks and Defenses in Computer Vision
by Xinyun Liu and Ronghua Xu
Electronics 2025, 14(23), 4553; https://doi.org/10.3390/electronics14234553 - 21 Nov 2025
Viewed by 602
Abstract
Adversarial patch attacks have emerged as a powerful and practical threat to machine learning models in vision-based tasks. Unlike traditional perturbation-based adversarial attacks, which often require imperceptible changes to the entire input, patch attacks introduce localized and visible modifications that can consistently mislead deep neural networks across varying conditions. Their physical realizability makes them particularly concerning for real-world security-critical applications. In response, a growing body of research has proposed diverse defense strategies, including input preprocessing, robust model training, detection-based approaches, and certified defense mechanisms. In this paper, we provide a comprehensive review of patch-based adversarial attacks and corresponding defense techniques. First, we introduce a new task-oriented taxonomy that systematically categorizes patch attack methods according to their downstream vision applications (e.g., classification, detection, segmentation), and then we summarize defense mechanisms based on three major strategies: Patch Localization and Removal-based Defenses, Input Transformation and Reconstruction-based Defenses, and Model Modification and Training-based Defenses. This unified framework provides an integrated perspective that bridges attack and defense research. Furthermore, we highlight open challenges, such as balancing robustness and model utility, addressing adaptive attackers, and ensuring physical-world resilience. Finally, we outline promising research directions to inspire future work toward building trustworthy and robust vision systems against patch-based adversarial threats. Full article
(This article belongs to the Special Issue Artificial Intelligence Safety and Security)

29 pages, 6088 KB  
Article
Lightweight AI for Sensor Fault Monitoring
by Bektas Talayoglu, Jerome Vande Velde and Bruno da Silva
Electronics 2025, 14(22), 4532; https://doi.org/10.3390/electronics14224532 - 19 Nov 2025
Viewed by 423
Abstract
Sensor faults can produce incorrect data and disrupt the operation of entire systems. In critical environments, such as healthcare, industrial automation, or autonomous platforms, these faults can lead to serious consequences if not detected early. This study explores how faults in MEMS microphones can be classified using lightweight ML models suitable for devices with limited resources. A dataset was created for this work, including both real faults (normal, clipping, stuck, and spikes) caused by issues like acoustic overload and undervoltage, and synthetic faults (drift and bias). The goal was to simulate a range of fault behaviors, from clear malfunctions to more subtle signal changes. Convolutional Neural Networks (CNNs) and hybrid models that use CNNs for feature extraction with classifiers like Decision Trees, Random Forest, MLP, Extremely Randomized Trees, and XGBoost were evaluated based on accuracy, F1-score, inference time, and model size towards real-time use in embedded systems. Experiments showed that using 2-s windows improved accuracy and F1-scores. These findings help design ML solutions for sensor fault classification in resource-limited embedded systems and IoT applications. Full article

21 pages, 3728 KB  
Article
A Multi-Core Benchmark Framework for Linux-Based Embedded Systems Using Synthetic Task-Set Generation
by Yixiao Xing, Yixiao Li and Hiroaki Takada
Electronics 2025, 14(22), 4515; https://doi.org/10.3390/electronics14224515 - 19 Nov 2025
Viewed by 432
Abstract
Accurately evaluating multi-core embedded systems remains a major challenge, as existing benchmarking methods and tools fail to reproduce realistic workloads with inter-core contention. This study introduces a benchmark framework for Linux-based embedded systems that integrates a synthetic task-set generation model capable of reproducing both computational and contention characteristics observed in real-world applications. Applying this benchmark to three Linux kernel variants on a 16-core embedded platform, we identified distinct scalability patterns and contention sensitivities among kernel configurations. The results mainly demonstrate the framework's capability to reveal performance characteristics under Linux, but the methodology itself is designed to be portable and extensible to various multi-core platforms, including RTOS-based ones. Full article
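
The abstract does not detail the synthetic task-set generation model. As general background, a common baseline for drawing per-task utilizations that sum to a chosen total is the UUniFast algorithm of Bini and Buttazzo, sketched below; the contention characteristics that are this framework's focus would be layered on top of such a generator.

```python
import random

def uunifast(n_tasks: int, total_util: float) -> list[float]:
    """Draw n_tasks utilizations uniformly at random so they sum to total_util."""
    utils, remaining = [], total_util
    for i in range(1, n_tasks):
        # Keep a random fraction of the remaining utilization for later tasks.
        next_remaining = remaining * random.random() ** (1.0 / (n_tasks - i))
        utils.append(remaining - next_remaining)
        remaining = next_remaining
    utils.append(remaining)
    return utils
```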

14 pages, 2150 KB  
Article
A Flexible Multi-Core Hardware Architecture for Stereo-Based Depth Estimation CNNs
by Steven Colleman, Andrea Nardi-Dei, Marc C. W. Geilen, Sander Stuijk and Toon Goedemé
Electronics 2025, 14(22), 4425; https://doi.org/10.3390/electronics14224425 - 13 Nov 2025
Viewed by 319
Abstract
Stereo-based depth estimation is becoming increasingly important in applications such as self-driving vehicles, earth observation, cartography, and robotics. Modern approaches to depth estimation employ artificial intelligence techniques, particularly convolutional neural networks (CNNs). However, stereo-based depth estimation networks involve dual processing paths for left and right input images, which merge at intermediate layers, posing challenges for efficient deployment on modern hardware accelerators. Specifically, modern depth-first and layer-fused execution strategies, which are commonly used to reduce I/O communication and on-chip memory demands, are not readily compatible with such non-linear network structures. To address this limitation, we propose a flexible multi-core hardware architecture tailored for stereo-based depth estimation CNNs. The architecture supports layer-fused execution while efficiently managing dual-path computation and its fusion, enabling improved resource utilization. Experimental results demonstrate a latency reduction of up to 24% compared to state-of-the-art depth-first implementations that do not incorporate stereo-specific optimizations. Full article
(This article belongs to the Special Issue Multimedia Signal Processing and Computer Vision)

19 pages, 2716 KB  
Article
Analysis of a Hybrid Intrabody Communications Scheme for Wireless Cortical Implants
by Assefa K. Teshome and Daniel T. H. Lai
Electronics 2025, 14(22), 4410; https://doi.org/10.3390/electronics14224410 - 12 Nov 2025
Viewed by 290
Abstract
Implantable technologies targeting the cerebral cortex and deeper brain structures are increasingly utilised in human–machine interfacing, advanced neuroprosthetics, and clinical interventions for neurological conditions. These systems require highly efficient and low-power methods for exchanging information between the implant and external electronics. Traditional approaches often rely on inductively coupled data transfer (ic-DT), where the same coils used for wireless power are modulated for communication. Other designs use high-frequency antenna-based radio systems, typically operating in the 401–406 MHz MedRadio band or the 2.4 GHz ISM band. A promising alternative is intrabody communication (IBC), which leverages the bioelectrical characteristics of body tissue to enable signal propagation. This work presents a theoretical investigation into two schemes—inductive coupling and galvanically coupled IBC (gc-IBC)—as applied to cortical data links, considering frequencies from 1 to 10 MHz and implant depths of up to 7 cm. We propose a hybrid solution where gc-IBC supports data transmission and inductive coupling facilitates wireless power delivery. Our findings indicate that gc-IBC can accommodate wider bandwidths than ic-DT and offers significantly reduced path loss, approximately 20 dB lower than those of conventional RF-based antenna systems. Full article
(This article belongs to the Special Issue Applications of Sensor Networks and Wireless Communications)

10 pages, 2053 KB  
Article
A Terahertz Dual-Band Transmitter in 40 nm CMOS for a Wideband Sparse Synthetic Bandwidth Radar
by Aguan Hong, Lina Su, Yanjun Wang and Xiang Yi
Electronics 2025, 14(22), 4392; https://doi.org/10.3390/electronics14224392 - 11 Nov 2025
Viewed by 312
Abstract
This paper presents a terahertz (THz) dual-band transmitter for a wideband sparse synthetic bandwidth radar. The transmitter employs an innovative single-path-reuse dual-band architecture. This architecture utilizes a proposed quad-transformer-coupled voltage-controlled oscillator (VCO) as an on-chip local oscillator source. It also incorporates an innovative dual-harmonic generator and a dual-band antenna, which work together within the single signal path to generate both the fundamental frequency and its second harmonic, thereby creating the dual bands required for a sparse synthetic bandwidth radar. The transmitter was fabricated in a TSMC 40 nm CMOS technology; measurement results show that it achieves a peak equivalent isotropically radiated power (EIRP) of −7.95 dBm in the low-frequency band (121.34–126.85 GHz) and −7.86 dBm in the high-frequency band (242.68–253.7 GHz), validating the proposed architecture's capability to generate dual-band signals simultaneously. The entire chip occupies a compact area of only 0.54 × 0.62 mm² and consumes 136 mW of DC power. Full article
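
For readers less familiar with the figure of merit quoted above, EIRP combines the power delivered to the antenna with the antenna gain (a standard definition, independent of this design):

```latex
\mathrm{EIRP}\ [\mathrm{dBm}] \;=\; P_{t}\ [\mathrm{dBm}] \;+\; G_{t}\ [\mathrm{dBi}],
```

so the −7.95 dBm and −7.86 dBm values reported for the two bands already include the gain of the on-chip dual-band antenna.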

34 pages, 11286 KB  
Article
Degradation of Multi-Task Prompting Across Six NLP Tasks and LLM Families
by Federico Di Maio and Manuel Gozzi
Electronics 2025, 14(21), 4349; https://doi.org/10.3390/electronics14214349 - 6 Nov 2025
Viewed by 999
Abstract
This study investigates how increasing prompt complexity affects the performance of Large Language Models (LLMs) across multiple Natural Language Processing (NLP) tasks. We introduce an incremental evaluation framework where six tasks—JSON formatting, English-Italian translation, sentiment analysis, emotion classification, topic extraction, and named entity recognition—are progressively combined within a single prompt. Six representative open-source LLMs from different families (Llama 3.1 8B, Gemma 3 4B, Mistral 7B, Qwen3 4B, Granite 3.1 3B, and DeepSeek R1 7B) were systematically evaluated using local inference environments to ensure reproducibility. Results show that performance degradation is highly architecture-dependent: while Qwen3 4B maintained stable performance across all tasks, Gemma 3 4B and Granite 3.1 3B exhibited severe collapses in fine-grained semantic tasks. Interestingly, some models (e.g., Llama 3.1 8B and DeepSeek R1 7B) demonstrated positive transfer effects, improving in certain tasks under multitask conditions. Statistical analyses confirmed significant differences across models for structured and semantic tasks, highlighting the absence of a universal degradation rule. These findings suggest that multitask prompting resilience is shaped more by architectural design than by model size alone, and they motivate adaptive, model-specific strategies for prompt composition in complex NLP applications. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)
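
A minimal sketch of the incremental protocol described above is shown below: task instructions are progressively combined into a single prompt and each combination is evaluated against a locally hosted model. The instruction wording and the local_llm client are illustrative assumptions rather than the paper's exact prompts or tooling.

```python
# Hypothetical task instructions; the paper's exact prompt wording is not given here.
TASKS = [
    "Return your answer as valid JSON.",
    "Translate the input from English to Italian.",
    "Classify the sentiment (positive/negative/neutral).",
    "Classify the emotion expressed.",
    "Extract the main topic.",
    "List all named entities.",
]

def build_prompt(text: str, n_tasks: int) -> str:
    """Combine the first n_tasks instructions into one multi-task prompt."""
    instructions = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(TASKS[:n_tasks]))
    return f"Perform the following tasks on the input text:\n{instructions}\n\nInput: {text}"

# Incremental evaluation loop (local_llm and score_response are hypothetical helpers):
# for n in range(1, len(TASKS) + 1):
#     response = local_llm(build_prompt(sample_text, n))
#     score_response(response, n)
```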

33 pages, 22059 KB  
Review
Resistive Sensing in Soft Robotic Grippers: A Comprehensive Review of Strain, Tactile, and Ionic Sensors
by Donya Mostaghniyazdi and Shahab Edin Nodehi
Electronics 2025, 14(21), 4290; https://doi.org/10.3390/electronics14214290 - 31 Oct 2025
Viewed by 2128
Abstract
Soft robotic grippers have emerged as crucial tools for safe and adaptive manipulation of delicate and diverse objects, enabled by their compliant structures. These grippers need embedded sensing that offers proprioceptive and exteroceptive feedback in order to function consistently. Resistive sensing is unique among transduction mechanisms since it is easy to use, scalable, and compatible with deformable materials. This review systematically examines the three main classes of resistive sensors used in soft robotic grippers: ionic sensors, which are emerging multimodal devices that can capture both mechanical and environmental cues; tactile sensors, which detect contact, pressure distribution, and slip; and strain sensors, which monitor deformation and actuation states. The survey compares their operating principles, material systems, fabrication techniques, performance metrics, and integration strategies. The results show trade-offs among sensitivity, linearity, durability, and scalability across sensor categories, with ionic sensors showing promise as a new development for multipurpose soft grippers. Remaining difficulties, including hysteresis, long-term stability, and signal-processing complexity, are also discussed. The review concludes that advances in hybrid material systems, additive manufacturing, and AI-enhanced signal interpretation will be crucial to move resistive sensing from laboratory prototypes to reliable, practical applications in domains such as healthcare, food handling, and human–robot collaboration. Full article

18 pages, 1539 KB  
Article
A Model of Output Power Control Method for Fault Ride-Through in a Single-Phase NPC Inverter-Based Power Conditioning System with IPOS DAB Converter and Battery
by Reo Emoto, Hiroaki Yamada and Tomokazu Mishima
Electronics 2025, 14(21), 4291; https://doi.org/10.3390/electronics14214291 - 31 Oct 2025
Viewed by 367
Abstract
Grid-tied inverters must satisfy fault ride-through (FRT) requirements to ensure grid stability during voltage disturbances. However, most existing FRT-related studies have focused on reactive current injection or voltage support functions, with few addressing how the active power reference should be dynamically controlled during voltage dips. In addition, few systems enable bidirectional power transfer or provide comprehensive verification under deep voltage dips. To address this issue, this paper proposes an output power control method for FRT in a single-phase neutral-point-clamped (NPC) inverter-based PCS consisting of an input-parallel output-series (IPOS) dual-active-bridge (DAB) converter and a battery. The proposed PCS dynamically reduces the output power reference according to the retained voltage while maintaining the inverter current within the rated limit, thereby ensuring stable operation. Computer simulations were conducted using Altair PSIM to verify the effectiveness of the proposed method. The results confirmed that the PCS satisfied the FRT requirements for all post-fault voltage levels. The injected current returned to its pre-fault value within 20 ms and 90 ms for 20% and 0% voltage dips, respectively, complying with the required recovery times. The proposed control method enhances grid resilience and maintains power quality in single-phase low-voltage distribution systems. Full article
(This article belongs to the Special Issue DC–DC Power Converter Technologies for Energy Storage Integration)
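
The power-reference reduction described above can be read as keeping the injected current within its rated value while the grid voltage is depressed; in single-phase RMS terms, one illustrative formulation (not the paper's exact control law) is

```latex
P^{\ast} \;=\; \min\!\left(P_{\mathrm{nom}},\; V_{\mathrm{dip}}\, I_{\mathrm{rated}}\right),
```

where V_dip is the retained grid voltage, so the active-power reference falls with the dip depth once the inverter current limit becomes binding.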

38 pages, 9535 KB  
Article
Novel Design and Experimental Validation of a Technique for Suppressing Distortion Originating from Various Sources in Multiantenna Full-Duplex Systems
by Keng-Hwa Liu, Juinn-Horng Deng and Min-Siou Yang
Electronics 2025, 14(21), 4300; https://doi.org/10.3390/electronics14214300 - 31 Oct 2025
Viewed by 335
Abstract
Complex distortion cancellation methods are often used at the radio frequency (RF) front end of multiantenna full-duplex transceivers to mitigate signal distortion; however, these methods have high computational complexity and limited practicality. To address these problems, the present study explored the complexities associated with such transceivers to develop a practical multistep approach for suppressing distortions arising from in-phase and quadrature (I/Q) imbalance, nonlinear power amplifier (PA) responses, and multipath self-interference caused by simultaneous transmissions on the same frequency. In this approach, the I/Q imbalance is estimated and then compensated for, following which nonlinear PA distortion is estimated and pre-compensated for. Subsequently, an auxiliary RF transmitter is combined with linearly regenerating self-interference signals to achieve full-duplex self-interference cancellation. The proposed method was implemented on a software-defined radio platform, with the distortion factor calibration specifically optimized for multiantenna full-duplex transceivers. The experimental results indicate that the image signal caused by I/Q imbalance can be suppressed by up to 60 dB through iterative computation. By combining I/Q imbalance (IQI) compensation and digital predistortion (DPD) preprocessing, the nonlinear distortion spectrum can be reduced by 25 dB. Furthermore, integrating IQI, DPD, and self-interference preprocessing achieves up to 180 dB suppression of self-interference signals. Experimental results also demonstrate that the proposed method achieves approximately 20 dB suppression of self-interference. Thus, the method has high potential for enhancing the performance of multiantenna RF full-duplex systems. Full article
(This article belongs to the Section Microwave and Wireless Communications)
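
As background to the image-suppression figures above, transmitter I/Q imbalance is commonly described by the widely linear model below (a textbook formulation; the article's estimator and calibration procedure are its own):

```latex
y(t) = \alpha\, x(t) + \beta\, x^{*}(t),
\qquad
\mathrm{IRR} = 20 \log_{10} \frac{|\alpha|}{|\beta|},
\qquad
\hat{x}(t) = \frac{\alpha^{*} y(t) - \beta\, y^{*}(t)}{|\alpha|^{2} - |\beta|^{2}},
```

where the last expression recovers the ideal signal exactly once α and β have been estimated, which is why iterative estimation can push the image signal down by tens of dB.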

20 pages, 4224 KB  
Article
Reconfigurable Intelligence Surface Assisted Multiuser Downlink Communication with User Scheduling
by Zhengjun Dai and Xianyi Rui
Electronics 2025, 14(21), 4253; https://doi.org/10.3390/electronics14214253 - 30 Oct 2025
Viewed by 388
Abstract
The integration of Reconfigurable Intelligent Surfaces (RISs) into wireless networks is a promising paradigm for enhancing spectral efficiency and coverage in beyond-5G systems. However, in multiuser downlink scenarios, the joint optimization of discrete RIS phase shifts and user scheduling presents a high-dimensional combinatorial challenge due to their tight coupling, which is often intractable with conventional methods. Furthermore, conventional RISs are limited by their unidirectional signal reflection, creating coverage blind spots. To address these issues, this paper first investigates a multi-user scheduling system assisted by a conventional RIS. We employed a vector projection relaxation method to transform the complex joint optimization problem, and then used an algorithm based on particle swarm optimization to jointly optimize the discrete phase shift and user scheduling. Simulation results demonstrate that this proposed algorithm significantly improves the system’s achievable data rate compared to existing benchmarks. Subsequently, to overcome the fundamental coverage limitation of conventional RISs, we extend our framework to two advanced systems: double-RIS and Simultaneously Transmitting and Reflecting RIS (STAR-RIS). For the STAR-RIS system, leveraging its energy-splitting protocol, we develop a novel joint optimization algorithm for phase shifts, amplitudes, and user scheduling based on an alternating optimization framework. This constitutes another significant contribution, as it effectively manages the added complexity of simultaneous transmission and reflection control. Simulations confirm that the STAR-RIS-assisted system, optimized by our algorithm, not only eliminates coverage blind spots but also surpasses the performance of traditional RIS, offering new perspectives for optimizing next-generation wireless communication networks. Full article
(This article belongs to the Section Microwave and Wireless Communications)

19 pages, 4017 KB  
Article
LACX: Locality-Aware Shared Data Migration in NUMA + CXL Tiered Memory
by Hayong Jeong, Binwon Song, Minwoo Jo and Heeseung Jo
Electronics 2025, 14(21), 4235; https://doi.org/10.3390/electronics14214235 - 29 Oct 2025
Viewed by 542
Abstract
In modern high-performance computing (HPC) and large-scale data processing environments, the efficient utilization and scalability of memory resources are critical determinants of overall system performance. Architectures such as non-uniform memory access (NUMA) and tiered memory systems frequently suffer performance degradation due to remote accesses stemming from shared data among multiple tasks. This paper proposes LACX, a shared data migration technique leveraging Compute Express Link (CXL), to address these challenges. LACX preserves the migration cycle of automatic NUMA balancing (AutoNUMA) while identifying shared data characteristics and migrating such data to CXL memory instead of DRAM, thereby maximizing DRAM locality. The proposed method utilizes existing kernel structures and data to efficiently identify and manage shared data without incurring additional overhead, and it effectively avoids conflicts with AutoNUMA policies. Evaluation results demonstrate that, although remote accesses to shared data can degrade performance in low-tier memory scenarios, LACX significantly improves overall memory bandwidth utilization and system performance in high-tier memory and memory-intensive workload environments by distributing DRAM bandwidth. This work presents a practical, lightweight approach to shared data management in tiered memory environments and highlights new directions for next-generation memory management policies. Full article
(This article belongs to the Special Issue Future Technologies for Data Management, Processing and Application)

12 pages, 633 KB  
Article
Optimized FreeMark Post-Training White-Box Watermarking of Tiny Neural Networks
by Riccardo Adorante, Tullio Facchinetti and Danilo Pietro Pau
Electronics 2025, 14(21), 4237; https://doi.org/10.3390/electronics14214237 - 29 Oct 2025
Viewed by 313
Abstract
Neural networks are powerful, high-accuracy systems whose trained parameters represent valuable intellectual property. Building models that reach top-level performance is complex and requires substantial investments of time and money, so protecting these assets is increasingly important. Extensive research has been carried out on Neural Network Watermarking, exploring the possibility of inserting a recognizable marker in a host model either in the form of a concealed bit-string or as a characteristic output, making it possible to confirm network ownership even in the presence of malicious attempts at erasing the embedded marker from the model. This study examines the applicability of Opt-FreeMark, a non-invasive post-training white-box watermarking technique, obtained by modifying and optimizing an already existing state-of-the-art technique for tiny neural networks. Here, “Tiny” refers to models intended for ultra-low-power deployments, such as those running on edge devices like sensors and micro-controllers. Watermark robustness is also demonstrated by simulating common model-modification attacks that try to eliminate it from the model while preserving performance; the results presented in the paper indicate that the watermarking scheme effectively protects the networks against these manipulations. Full article

26 pages, 3558 KB  
Article
Avocado: An Interpretable Fine-Grained Intrusion Detection Model for Advanced Industrial Control Network Attacks
by Xin Liu, Tao Liu and Ning Hu
Electronics 2025, 14(21), 4233; https://doi.org/10.3390/electronics14214233 - 29 Oct 2025
Viewed by 425
Abstract
Industrial control systems (ICS), as critical infrastructure supporting national operations, are increasingly threatened by sophisticated stealthy network attacks. These attacks often break malicious behaviors into multiple highly camouflaged packets, which are embedded into large-scale background traffic with low frequency, making them semantically and temporally indistinguishable from normal traffic and thus evading traditional detection. Existing methods largely rely on flow-level statistics or long-sequence modeling, resulting in coarse detection granularity, high latency, and poor byte-level interpretability, falling short of industrial demands for real-time and actionable detection. To address these challenges, we propose Avocado, a fine-grained, multi-level intrusion detection model. Avocado’s core innovation lies in contextual flow-feature fusion: it models each packet jointly with its surrounding packet sequence, enabling independent abnormality detection and precise localization. Moreover, a shared-query multi-head self-attention mechanism is designed to quantify byte-level importance within packets. Experimental results show that Avocado significantly outperforms state-of-the-art flow-level methods on NGAS and CLIA-M221 datasets, improving packet-level detection ACC by 1.55% on average, and reducing FPR and FNR to 3.2%, 3.6% (NGAS), and 3.7%, 4.3% (CLIA-M221), respectively, demonstrating its superior performance in both detection and interpretability. Full article
(This article belongs to the Special Issue Novel Approaches for Deep Learning in Cybersecurity)
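
The abstract does not spell out the shared-query construction, but the byte-importance scoring builds on standard scaled dot-product attention:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,
```

where, in a shared-query variant, the same query is presumably reused across heads so that the attention weights over packet bytes can be read directly as importance scores; the exact formulation is given in the article.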

38 pages, 9358 KB  
Article
Generation of a Multi-Class IoT Malware Dataset for Cybersecurity
by Mazdak Maghanaki, Soraya Keramati, F. Frank Chen and Mohammad Shahin
Electronics 2025, 14(21), 4196; https://doi.org/10.3390/electronics14214196 - 27 Oct 2025
Viewed by 1146
Abstract
This study introduces a modular, behaviorally curated malware dataset suite consisting of eight independent sets, each specifically designed to represent a single malware class: Trojan, Mirai (botnet), ransomware, rootkit, worm, spyware, keylogger, and virus. In contrast to earlier approaches that aggregate all malware into large, monolithic collections, this work emphasizes the selection of features unique to each malware type. Feature selection was guided by established domain knowledge and detailed behavioral telemetry obtained through sandbox execution and a subsequent report analysis on the AnyRun platform. The datasets were compiled from two primary sources: (i) the AnyRun platform, which hosts more than two million samples and provides controlled, instrumented sandbox execution for malware, and (ii) publicly available GitHub repositories. To ensure data integrity and prevent cross-contamination of behavioral logs, each sample was executed in complete isolation, allowing for the precise capture of both static attributes and dynamic runtime behavior. Feature construction was informed by operational signatures characteristic of each malware category, ensuring that the datasets accurately represent the tactics, techniques, and procedures distinguishing one class from another. This targeted design enabled the identification of subtle but significant behavioral markers that are frequently overlooked in aggregated datasets. Each dataset was balanced to include benign, suspicious, and malicious samples, thereby supporting the training and evaluation of machine learning models while minimizing bias from disproportionate class representation. Across the full suite, 10,000 samples and 171 carefully curated features were included. This constitutes one of the first dataset collections intentionally developed to capture the behavioral diversity of multiple malware categories within the context of Internet of Things (IoT) security, representing a deliberate effort to bridge the gap between generalized malware corpora and class-specific behavioral modeling. Full article

25 pages, 22171 KB  
Article
Physics-Informed Co-Optimization of Fuel-Cell Flying Vehicle Propulsion and Control Systems with Onboard Catalysis
by Yifei Bao, Chaoyi Chen, Hao Zhang and Nuo Lei
Electronics 2025, 14(21), 4150; https://doi.org/10.3390/electronics14214150 - 23 Oct 2025
Viewed by 480
Abstract
Fuel-cell flying vehicles suffer from limited endurance, while ammonia, decomposed onboard to supply hydrogen, offers a carbon-free, high-density solution to extend flight missions. However, the system’s performance is governed by a multi-scale coupling between propulsion and control systems. To this end, this paper introduces a novel optimization paradigm, termed physics-informed gradient-enhanced multi-objective optimization (PI-GEMO), to simultaneously optimize the ammonia decomposition unit (ADU) catalyst composition, powertrain sizing, and flight control parameters. The PI-GEMO framework leverages a physics-informed neural network (PINN) as a differentiable surrogate model, which is trained not only on sparse simulation data but also on the governing differential equations of the system. This enables the use of analytical gradient information extracted from the trained PINN via automatic differentiation to intelligently guide the evolutionary search process. A comprehensive case study on a flying vehicle demonstrates that the PI-GEMO framework not only discovers a superior set of Pareto-optimal solutions compared to traditional methods but also critically ensures the physical plausibility of the results. Full article
(This article belongs to the Special Issue Eco-Safe Intelligent Mobility Development and Application)
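
As general background on the surrogate described above (the specific governing equations and weighting are the article's own), a physics-informed neural network u_θ is trained on a composite loss that penalizes both data mismatch and the residual of the governing differential equations at collocation points:

```latex
\mathcal{L}(\theta) = \mathcal{L}_{\mathrm{data}}(\theta)
+ \lambda \, \frac{1}{N} \sum_{i=1}^{N} \bigl\| \mathcal{N}[u_{\theta}](x_i) \bigr\|^{2},
```

where N[·] denotes the differential operator of the system model. Because u_θ is differentiable, gradients of the surrogate are available through automatic differentiation, which is what PI-GEMO uses to guide the evolutionary search.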

20 pages, 7704 KB  
Article
Seamless User-Generated Content Processing for Smart Media: Delivering QoE-Aware Live Media with YOLO-Based Bib Number Recognition
by Alberto del Rio, Álvaro Llorente, Sofia Ortiz-Arce, Maria Belesioti, George Pappas, Alejandro Muñiz, Luis M. Contreras and Dimitris Christopoulos
Electronics 2025, 14(20), 4115; https://doi.org/10.3390/electronics14204115 - 21 Oct 2025
Viewed by 687
Abstract
The increasing availability of User-Generated Content during large-scale events is transforming spectators into active co-creators of live narratives while simultaneously introducing challenges in managing heterogeneous sources, ensuring content quality, and orchestrating distributed infrastructures. A trial was conducted to evaluate automated orchestration, media enrichment, and real-time quality assessment in a live sporting scenario. A key innovation of this work is the use of a cloud-native architecture based on Kubernetes, enabling dynamic and scalable integration of smartphone streams and remote production tools into a unified workflow. The system also included advanced cognitive services, such as a Video Quality Probe for estimating perceived visual quality and an AI Engine based on YOLO models for detection and recognition of runners and bib numbers. Together, these components enable a fully automated workflow for live production, combining real-time analysis and quality monitoring, capabilities that previously required manual or offline processing. The results demonstrated consistently high Mean Opinion Score (MOS) values above 3 for 72.92% of the time, confirming acceptable perceived quality under real network conditions, while the AI Engine achieved strong performance with a Precision of 93.6% and Recall of 80.4%. Full article

29 pages, 1671 KB  
Article
Towards Secure Legacy Manufacturing: A Policy-Driven Zero Trust Architecture Aligned with NIST CSF 2.0
by Cheon-Ho Min, Deuk-Hun Kim, Haomiao Yang and Jin Kwak
Electronics 2025, 14(20), 4109; https://doi.org/10.3390/electronics14204109 - 20 Oct 2025
Viewed by 994
Abstract
As smart manufacturing environments continue to evolve, operational technology systems are increasingly integrated with external networks and cloud-based platforms. However, many manufacturing facilities still use legacy systems running on end-of-support/life operating systems with discontinued security updates. It is difficult to mitigate the cyber threats and risks for these systems using perimeter-based security models that isolate them from other networks. To address these constraints, a Zero Trust-based security architecture tailored for legacy manufacturing environments with practical field applicability is proposed. Our architecture builds upon the six core functions outlined in National Institute of Standards and Technology Cybersecurity Framework 2.0—identify, protect, detect, respond, recover, and govern—adapting them specifically to manufacturing environment security challenges. To achieve this, the architecture combines asset identification, policy-driven access control, secure SMB gateway transfers, automated anomaly detection and response, clean image recovery, and organizational governance procedures. This study validates the effectiveness and scalability of the proposed architecture through scenario-based simulations. When combining the EoSL defense hardening and gateway-based perimeter control, the architecture achieves approximately 99% overall threat suppression and a 98% reduction in critical-asset infection rates, demonstrating its strong resilience and scalability in large-scale legacy OT environments. Full article
(This article belongs to the Special Issue Industrial Process Control and Flexible Manufacturing Systems)
Show Figures

Figure 1

44 pages, 8751 KB  
Article
DataSense: A Real-Time Sensor-Based Benchmark Dataset for Attack Analysis in IIoT with Multi-Objective Feature Selection
by Amir Firouzi, Sajjad Dadkhah, Sebin Abraham Maret and Ali A. Ghorbani
Electronics 2025, 14(20), 4095; https://doi.org/10.3390/electronics14204095 - 19 Oct 2025
Viewed by 2511
Abstract
The widespread integration of Internet-connected devices into industrial environments has enhanced connectivity and automation but has also increased the exposure of industrial cyber–physical systems to security threats. Detecting anomalies is essential for ensuring operational continuity and safeguarding critical assets, yet the dynamic, real-time [...] Read more.
The widespread integration of Internet-connected devices into industrial environments has enhanced connectivity and automation but has also increased the exposure of industrial cyber–physical systems to security threats. Detecting anomalies is essential for ensuring operational continuity and safeguarding critical assets, yet the dynamic, real-time nature of such data poses challenges for developing effective defenses. This paper introduces DataSense, a comprehensive dataset designed to advance security research in industrial networked environments. DataSense contains synchronized sensor and network stream data, capturing interactions among diverse industrial sensors, commonly used connected devices, and network equipment, enabling vulnerability studies across heterogeneous industrial setups. The dataset was generated through the controlled execution of 50 realistic attacks spanning seven major categories: reconnaissance, denial of service, distributed denial of service, web exploitation, man-in-the-middle, brute force, and malware. This process produced a balanced mix of benign and malicious traffic that reflects real-world conditions. To enhance its utility, we introduce an original feature selection approach that identifies features most relevant to improving detection rates while minimizing resource usage. Comprehensive experiments with a broad spectrum of machine learning and deep learning models validate the dataset’s applicability, making DataSense a valuable resource for developing robust systems for detecting anomalies and preventing intrusions in real time within industrial environments. Full article
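To illustrate the flavor of a multi-objective feature selection that trades detection relevance against resource usage, the sketch below scores features by mutual information with the label minus a normalized cost penalty. It is a simplified stand-in assuming scikit-learn is available; the weighting `alpha`, the toy data, and the cost vector are all invented for the example and are not the paper's actual procedure.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_features(X, y, cost, k=10, alpha=0.5):
    """Rank features by relevance (mutual information with the label)
    penalized by a per-feature extraction cost, then keep the top k."""
    relevance = mutual_info_classif(X, y, random_state=0)
    cost = np.asarray(cost, dtype=float)
    score = relevance - alpha * cost / cost.max()
    return np.argsort(score)[::-1][:k]

# Hypothetical toy data: 200 flows, 12 features, binary benign/attack label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = ((X[:, 0] + 0.1 * rng.normal(size=200)) > 0).astype(int)
print(select_features(X, y, cost=rng.uniform(0.1, 1.0, size=12), k=5))
```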
(This article belongs to the Special Issue AI-Driven IoT: Beyond Connectivity, Toward Intelligence)
Show Figures

Figure 1

16 pages, 6589 KB  
Article
An Enhanced Steganography-Based Botnet Communication Method in BitTorrent
by Gyeonggeun Park, Youngho Cho and Gang Qu
Electronics 2025, 14(20), 4081; https://doi.org/10.3390/electronics14204081 - 17 Oct 2025
Viewed by 575
Abstract
In a botnet attack, significant damage can occur when an attacker gains control over a large number of compromised network devices. Botnets have evolved from traditional centralized architectures to decentralized Peer-to-Peer (P2P) and hybrid forms. Recently, a steganography-based botnet (Stego-botnet) has emerged, which [...] Read more.
In a botnet attack, significant damage can occur when an attacker gains control over a large number of compromised network devices. Botnets have evolved from traditional centralized architectures to decentralized Peer-to-Peer (P2P) and hybrid forms. Recently, a steganography-based botnet (Stego-botnet) has emerged, which conceals command and control (C&C) messages within cover media such as images or video files shared over social networking sites (SNS). This type of Stego-botnet can evade conventional detection systems, as identifying hidden messages embedded in media transmitted via SNS platforms is inherently challenging. However, the inherent file size limitations of SNS platforms restrict the achievable payload capacity of such Stego-botnets. Moreover, the centralized characteristics of conventional botnet architectures expose attackers to a higher risk of identification. To overcome these challenges, researchers have explored network steganography techniques leveraging P2P networks such as BitTorrent, Google Suggest, and Skype. Among these, a hidden communication method utilizing Bitfield messages in BitTorrent has been proposed, demonstrating improved concealment compared to prior studies. Nevertheless, existing approaches still fail to achieve sufficient payload capacity relative to traditional digital steganography techniques. In this study, we extend P2P-based network steganography methods—particularly within the BitTorrent protocol—to address these limitations. We propose a novel botnet C&C communication model that employs network steganography over BitTorrent and validate its feasibility through experimental implementation. Furthermore, our results show that the proposed Stego-botnet achieves a higher payload capacity and outperforms existing Stego-botnet models in terms of both efficiency and concealment performance. Full article
Show Figures

Figure 1

25 pages, 2128 KB  
Article
A Low-Cost UAV System and Dataset for Real-Time Weed Detection in Salad Crops
by Alina L. Machidon, Andraž Krašovec, Veljko Pejović, Daniele Latini, Sarathchandrakumar T. Sasidharan, Fabio Del Frate and Octavian M. Machidon
Electronics 2025, 14(20), 4082; https://doi.org/10.3390/electronics14204082 - 17 Oct 2025
Viewed by 1076
Abstract
The global food crises and growing population necessitate efficient agricultural land use. Weeds cause up to 40% yield loss in major crops, resulting in over USD 100 billion in annual economic losses. Camera-equipped UAVs offer a solution for automatic weed detection, but the [...] Read more.
The global food crises and growing population necessitate efficient agricultural land use. Weeds cause up to 40% yield loss in major crops, resulting in over USD 100 billion in annual economic losses. Camera-equipped UAVs offer a solution for automatic weed detection, but the high computational and energy demands of deep learning models limit their use to expensive, high-end UAVs. In this paper, we present a low-cost UAV system built from off-the-shelf components, featuring a custom-designed on-board computing system based on the NVIDIA Jetson Nano. This system efficiently manages real-time image acquisition and inference using the energy-efficient Squeeze U-Net neural network for weed detection. Our approach ensures the pipeline operates in real time without affecting the drone’s flight autonomy. We also introduce the AgriAdapt dataset, a novel collection of 643 high-resolution aerial images of salad crops with weeds, which fills a key gap by providing realistic UAV data for benchmarking segmentation models under field conditions. Several deep learning models are trained and validated on the newly introduced AgriAdapt dataset, demonstrating its suitability for effective weed segmentation in UAV imagery. Quantitative results show that the dataset supports a range of architectures, from larger models such as DeepLabV3 to smaller, lightweight networks like Squeeze U-Net (with only 2.5 M parameters), achieving high accuracy (around 90%) across the board. These contributions distinguish our work from earlier UAV-based weed detection systems by combining a novel dataset with a comprehensive evaluation of accuracy, latency, and energy efficiency, thus directly targeting deep learning applications for real-time UAV deployment. Our results demonstrate the feasibility of deploying a low-cost, energy-efficient UAV system for real-time weed detection, making advanced agricultural technology more accessible and practical for widespread use. Full article
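The real-time constraint above comes down to keeping per-frame segmentation latency within the camera's frame budget. The sketch below times a single inference pass using a tiny placeholder network in place of Squeeze U-Net; the architecture, input size, and threshold are assumptions for illustration only.

```python
import time
import torch
import torch.nn as nn

# Tiny stand-in for the Squeeze U-Net segmentation network used in the paper;
# the real model and AgriAdapt weights are not reproduced here.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),            # 1-channel weed-mask logits
)
model.eval()

def segment(frame: torch.Tensor) -> torch.Tensor:
    """Return a binary weed mask for one RGB frame (C, H, W)."""
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))
    return (logits.sigmoid() > 0.5).squeeze(0)

frame = torch.rand(3, 256, 256)          # hypothetical camera frame
t0 = time.perf_counter()
mask = segment(frame)
print(mask.shape, f"{1000 * (time.perf_counter() - t0):.1f} ms")
```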
(This article belongs to the Special Issue Unmanned Aircraft Systems with Autonomous Navigation, 2nd Edition)
Show Figures

Figure 1

36 pages, 552 KB  
Review
Review of Applications of Regression and Predictive Modeling in Wafer Manufacturing
by Hsuan-Yu Chen and Chiachung Chen
Electronics 2025, 14(20), 4083; https://doi.org/10.3390/electronics14204083 - 17 Oct 2025
Cited by 1 | Viewed by 2745
Abstract
Semiconductor wafer manufacturing is one of the most complex and data-intensive industrial processes, comprising 500–1000 tightly interdependent steps, each requiring nanometer-level precision. As device nodes approach 3 nm and beyond, even minor deviations in parameters such as oxide thickness or critical dimensions can [...] Read more.
Semiconductor wafer manufacturing is one of the most complex and data-intensive industrial processes, comprising 500–1000 tightly interdependent steps, each requiring nanometer-level precision. As device nodes approach 3 nm and beyond, even minor deviations in parameters such as oxide thickness or critical dimensions can lead to catastrophic yield loss, challenging traditional physics-based control methods. In response, the industry has increasingly adopted regression analysis and predictive modeling as essential analytical frameworks. Classical regression, long used to support design of experiments (DOE), process optimization, and yield analysis, has evolved to enable multivariate modeling, virtual metrology, and fault detection. Predictive modeling extends these capabilities through machine learning and AI, leveraging massive sensor and metrology data streams for real-time process monitoring, yield forecasting, and predictive maintenance. These data-driven tools are now tightly integrated into advanced process control (APC), digital twins, and automated decision-making systems, transforming fabs into agile, intelligent manufacturing environments. This review synthesizes foundational and emerging methods, industry applications, and case studies, emphasizing their role in advancing Industry 4.0 initiatives. Future directions include hybrid physics–ML models, explainable AI, and autonomous manufacturing. Together, regression and predictive modeling provide semiconductor fabs with a robust ecosystem for optimizing performance, minimizing costs, and accelerating innovation in an increasingly competitive, high-stakes industry. Full article
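As a minimal example of the virtual-metrology style of regression mentioned above (predicting a metrology value from in-situ sensor summaries so that not every wafer needs physical measurement), the sketch below fits a ridge regressor on synthetic data. The feature count, target, and noise level are invented; a production fab would use far richer sensor traces.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical virtual-metrology setup: predict a metrology value (e.g. oxide
# thickness) per wafer from summarized in-situ sensor features.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                       # per-wafer sensor features
thickness = 50 + X @ rng.normal(size=8) + 0.5 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, thickness, random_state=1)
vm_model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"R^2 on held-out wafers: {vm_model.score(X_te, y_te):.3f}")
```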
(This article belongs to the Special Issue Advances in Semiconductor Devices and Applications)
Show Figures

Figure 1

27 pages, 4875 KB  
Article
A Comprehensive Radar-Based Berthing-Aid Dataset (R-BAD) and Onboard System for Safe Vessel Docking
by Fotios G. Papadopoulos, Antonios-Periklis Michalopoulos, Efstratios N. Paliodimos, Ioannis K. Christopoulos, Charalampos Z. Patrikakis, Alexandros Simopoulos and Stylianos A. Mytilinaios
Electronics 2025, 14(20), 4065; https://doi.org/10.3390/electronics14204065 - 16 Oct 2025
Viewed by 619
Abstract
Ship berthing operations are inherently challenging for maritime vessels, particularly within restricted port areas and under unfavorable weather conditions. Contrary to autonomous open-sea navigation, autonomous ship berthing remains a significant technological challenge for the maritime industry. Lidar and optical camera systems have been [...] Read more.
Ship berthing operations are inherently challenging for maritime vessels, particularly within restricted port areas and under unfavorable weather conditions. In contrast to autonomous open-sea navigation, autonomous ship berthing remains a significant technological challenge for the maritime industry. Lidar and optical camera systems have been deployed as auxiliary tools to support informed berthing decisions; however, these sensing modalities are severely affected by weather and light conditions, respectively, while cameras in particular are inherently incapable of providing direct range measurements. In this paper, we introduce a comprehensive Radar-Based Berthing-Aid Dataset (R-BAD), aimed at supporting the development of safe berthing systems onboard ships. The proposed R-BAD dataset includes a large collection of Frequency-Modulated Continuous Wave (FMCW) radar data in point cloud format alongside timestamped and synchronized video footage. The dataset comprises more than 69 h of recorded ship operations and is freely accessible. We also propose an onboard support system for radar-aided vessel docking, which enables obstacle detection, clustering, tracking and classification during ferry berthing maneuvers. The proposed dataset covers all docking/undocking scenarios (arrivals, departures, port idle, and cruising operations) and was used to train various machine/deep learning models with substantial performance, showcasing its validity for further autonomous navigation systems development. The berthing-aid system was tested in real-world conditions onboard an operational Ro-Ro/Passenger Ship and demonstrated superior, weather-resilient, repeatable and robust performance in detection, tracking and classification tasks, confirming its technology readiness for integration into future autonomous berthing-aid systems. Full article
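One step the onboard pipeline above relies on, grouping radar returns into candidate obstacles, can be sketched as a density-based clustering pass over a single point-cloud frame. The parameters, the frame content, and the use of DBSCAN are assumptions for illustration; the full pipeline described in the paper also includes tracking and classification stages.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_targets(points: np.ndarray, eps: float = 1.5, min_samples: int = 5):
    """Group radar point-cloud returns (x, y in metres) into candidate targets
    and report each cluster's closest range, e.g. the quay wall while berthing."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    closest_range = {}
    for lbl in set(labels) - {-1}:                  # -1 marks noise points
        members = points[labels == lbl]
        closest_range[int(lbl)] = float(np.min(np.linalg.norm(members, axis=1)))
    return closest_range

# Hypothetical single radar frame: one dense wall-like return plus scatter.
rng = np.random.default_rng(2)
wall = np.column_stack([rng.uniform(-5, 5, 60), rng.normal(12.0, 0.3, 60)])
noise = rng.uniform(-30, 30, size=(15, 2))
print(cluster_targets(np.vstack([wall, noise])))
```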
Show Figures

Figure 1

24 pages, 1535 KB  
Article
Enhanced Distributed Multimodal Federated Learning Framework for Privacy-Preserving IoMT Applications: E-DMFL
by Dagmawit Tadesse Aga and Madhuri Siddula
Electronics 2025, 14(20), 4024; https://doi.org/10.3390/electronics14204024 - 14 Oct 2025
Viewed by 844
Abstract
The rapid growth of Internet of Medical Things (IoMT) devices offers promising avenues for real-time, personalized healthcare while also introducing critical challenges related to data privacy, device heterogeneity, and deployment scalability. This paper presents E-DMFL (Enhanced Distributed Multimodal Federated Learning), an Enhanced Distributed [...] Read more.
The rapid growth of Internet of Medical Things (IoMT) devices offers promising avenues for real-time, personalized healthcare while also introducing critical challenges related to data privacy, device heterogeneity, and deployment scalability. This paper presents E-DMFL, an Enhanced Distributed Multimodal Federated Learning framework designed to address these issues. Our approach combines systems analysis principles with intelligent model design, integrating PyTorch-based modular orchestration and TensorFlow-style data pipelines to enable multimodal edge-based training. E-DMFL incorporates gated attention fusion, differential privacy, Shapley-value-based modality selection, and peer-to-peer communication to facilitate secure and adaptive learning in non-IID environments. We evaluate the framework using the EarSAVAS dataset, which includes synchronized audio and motion signals from ear-worn sensors. E-DMFL achieves a test accuracy of 92.0% in just six communication rounds. The framework also supports energy-efficient and real-time deployment through quantization-aware training and battery-aware scheduling. These results demonstrate the potential of combining systems-level design with federated learning (FL) innovations to support practical, privacy-aware IoMT applications. Full article
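The gated attention fusion mentioned above can be pictured as a small learned gate that weighs each modality embedding before combining them. The block below is a generic PyTorch sketch of that idea, not the exact E-DMFL module; the embedding size and the two-modality setup (audio, motion) are assumptions taken from the dataset description.

```python
import torch
import torch.nn as nn

class GatedAttentionFusion(nn.Module):
    """Minimal gated fusion of two modality embeddings: a learned gate
    produces per-modality weights that sum to one before combination."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))

    def forward(self, audio: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([audio, motion], dim=-1))   # (batch, 2)
        return w[:, :1] * audio + w[:, 1:] * motion

fusion = GatedAttentionFusion(dim=64)
fused = fusion(torch.randn(8, 64), torch.randn(8, 64))
print(fused.shape)  # torch.Size([8, 64])
```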
Show Figures

Figure 1

21 pages, 534 KB  
Article
Quantum Enabled Data Authentication Without Classical Control Interaction
by Piotr Zawadzki, Grzegorz Dziwoki, Marcin Kucharczyk, Jan Machniewski, Wojciech Sułek, Jacek Izydorczyk, Weronika Izydorczyk, Piotr Kłosowski, Adam Dustor, Wojciech Filipowski, Krzysztof Paszek and Anna Zawadzka
Electronics 2025, 14(20), 4037; https://doi.org/10.3390/electronics14204037 - 14 Oct 2025
Viewed by 359
Abstract
We present a quantum-assisted data authentication protocol that integrates classical information-theoretic security with quantum communication techniques. We assume only that the participants have access to open classical and quantum channels, and share a random static key material. Building on the Wegman–Carter paradigm, our [...] Read more.
We present a quantum-assisted data authentication protocol that integrates classical information-theoretic security with quantum communication techniques. We assume only that the participants have access to open classical and quantum channels and share random static key material. Building on the Wegman–Carter paradigm, our scheme employs universal hashing for message authentication and leverages quantum channels to securely transmit random nonces, eliminating the need for key recycling. The protocol utilizes polar codes within Wyner’s wiretap channel model to ensure confidentiality and reliability, even in the presence of an all-powerful adversary. Security analysis demonstrates that the protocol inherits strong guarantees from both classical and quantum frameworks, provided the quantum channel maintains low loss and noise. Full article
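For readers unfamiliar with the Wegman–Carter paradigm referenced above, the sketch below shows its classical core: a polynomial universal hash of the message, keyed by shared secret material and masked with a fresh nonce. The field size, block split, and masking step are illustrative choices only; in the proposed protocol the nonce is delivered over the quantum channel rather than generated locally.

```python
# Illustrative Wegman-Carter-style tag: a polynomial universal hash of the
# message blocks over a prime field, masked by a one-time nonce.
P = (1 << 61) - 1          # Mersenne prime used as the field modulus

def poly_hash(message: bytes, key: int) -> int:
    """Evaluate the message, split into 8-byte blocks, as a polynomial at `key`."""
    acc = 0
    for i in range(0, len(message), 8):
        block = int.from_bytes(message[i:i + 8], "big")
        acc = (acc * key + block) % P
    return acc

def tag(message: bytes, key: int, nonce: int) -> int:
    """One-time authentication tag: universal hash masked by the fresh nonce."""
    return (poly_hash(message, key) + nonce) % P

print(tag(b"telemetry frame 42", key=123456789, nonce=987654321))
```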
(This article belongs to the Special Issue Recent Advances in Information Security and Data Privacy)
Show Figures

Figure 1

14 pages, 4877 KB  
Article
Performance Improvement of Polarization Image Sensor with Multilayer On-Pixel Polarizer Structure for High-Sensitivity Millimeter-Wave Electro-Optic Imaging
by Ryoma Okada, Maya Mizuno, Hironari Takehara, Makito Haruta, Hiroyuki Tashiro, Jun Ohta and Kiyotaka Sasagawa
Electronics 2025, 14(20), 4026; https://doi.org/10.3390/electronics14204026 - 14 Oct 2025
Viewed by 535
Abstract
In this paper, we demonstrated a high-sensitivity polarization image sensor for millimeter-wave electric field imaging using electro-optic crystals. We developed a three-layer on-pixel polarizer structure fabricated with a 0.35-µm standard CMOS process, achieving an extinction ratio of 5.7, which corresponds to a 73% [...] Read more.
In this paper, we demonstrated a high-sensitivity polarization image sensor for millimeter-wave electric field imaging using electro-optic crystals. We developed a three-layer on-pixel polarizer structure fabricated with a 0.35-µm standard CMOS process, achieving an extinction ratio of 5.7, which corresponds to a 73% improvement over the previous two-layer structure. Crosstalk reduction was implemented by applying a bias voltage to the n-well pixel separation, and the extinction ratio was further improved. Using the improved sensor, we demonstrated a 7.6 dB SNR improvement in 30 GHz electric field imaging compared with previous sensors, despite a 30% reduction in transmittance. Angular dependence analysis confirmed adequate performance within the optical system’s constraints. These results enable high-speed and high-sensitivity millimeter-wave imaging applications. Full article
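The headline numbers above follow from standard definitions, which the short check below reproduces: the extinction ratio is the parallel-to-crossed transmission ratio, and an SNR gain in decibels is ten times the log of the linear ratio. The specific linear values used here are assumptions chosen only to be numerically consistent with the reported 73% and 7.6 dB figures.

```python
import math

def extinction_ratio(p_parallel: float, p_crossed: float) -> float:
    """Transmitted power with the polarizer parallel divided by crossed."""
    return p_parallel / p_crossed

def improvement_db(snr_new: float, snr_old: float) -> float:
    """SNR gain in decibels between two linear-scale measurements."""
    return 10 * math.log10(snr_new / snr_old)

# Hypothetical values consistent with the reported figures: a two-layer
# polarizer near 3.3 improved to 5.7 is roughly a 73% relative gain.
print(5.7 / 3.3 - 1)                 # ~0.73 relative improvement
print(improvement_db(5.75, 1.0))     # ~7.6 dB, i.e. ~5.75x linear SNR gain
```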
Show Figures

Figure 1

15 pages, 5169 KB  
Article
Twisting Soft Sleeve Actuator: Design and Experimental Evaluation
by Mohammed Abboodi and Marc Doumit
Electronics 2025, 14(20), 4020; https://doi.org/10.3390/electronics14204020 - 14 Oct 2025
Viewed by 697
Abstract
Soft wearable actuators must align with anatomical joints, conform to limb geometry, and operate at low pneumatic pressures. Yet most twisting mechanisms rely on bulky attachment interfaces and relatively high actuation pressures, limiting practicality in assistive applications. This study introduces the first Twisting [...] Read more.
Soft wearable actuators must align with anatomical joints, conform to limb geometry, and operate at low pneumatic pressures. Yet most twisting mechanisms rely on bulky attachment interfaces and relatively high actuation pressures, limiting practicality in assistive applications. This study introduces the first Twisting Soft Sleeve Actuator (TSSA), a self-contained, wearable actuator that produces controlled bidirectional torsion. The design integrates helically folded bellows with internal stabilization layers to suppress radial expansion and enhance torque transmission. The TSSA is fabricated from thermoplastic polyurethane using a Bowden-type fused filament fabrication (FFF) process optimized for airtightness and flexibility. Performance was characterized using a modular test platform that measured angular displacement and output force under positive pressure (up to 75 kPa) and vacuum (down to −85 kPa). A parametric study evaluated the effects of fold width, fold angle, wall thickness, and twist angle. Results demonstrate bidirectional, self-restoring torsion with clockwise rotation of approximately 30 degrees and a peak output force of about 40 N at 75 kPa, while reverse torsional motion occurred under vacuum actuation. The TSSA enables anatomically compatible, low-pressure torsion, supporting scalable, multi-degree-of-freedom sleeve systems for wearable robotics and rehabilitation. Full article
Show Figures

Figure 1

19 pages, 3266 KB  
Article
Empirically Informed Multi-Agent Simulation of Distributed Energy Resource Adoption and Grid Overload Dynamics in Energy Communities
by Lu Cong, Kristoffer Christensen, Magnus Værbak, Bo Nørregaard Jørgensen and Zheng Grace Ma
Electronics 2025, 14(20), 4001; https://doi.org/10.3390/electronics14204001 - 13 Oct 2025
Viewed by 654
Abstract
The rapid proliferation of residential electric vehicles (EVs), rooftop photovoltaics (PVs), and behind-the-meter batteries is transforming energy communities while introducing new operational stresses to local distribution grids. Short-duration transformer overloads, often overlooked in conventional hourly or optimization-based planning models, can accelerate asset aging [...] Read more.
The rapid proliferation of residential electric vehicles (EVs), rooftop photovoltaics (PVs), and behind-the-meter batteries is transforming energy communities while introducing new operational stresses to local distribution grids. Short-duration transformer overloads, often overlooked in conventional hourly or optimization-based planning models, can accelerate asset aging before voltage limits are reached. This study introduces a second-by-second, multi-agent-based simulation (MABS) framework that couples empirically calibrated Distributed Energy Resource (DER) adoption trajectories with real-time-price (RTP)–driven household charging decisions. Using a real 160-household feeder in Denmark (2024–2025), five progressively integrated DER scenarios are evaluated, ranging from EV-only adoption to fully synchronized EV–PV–battery coupling. Results reveal that uncoordinated EV charging under RTP shifts demand to early-morning hours, causing the first transformer overload within four months. PV deployment alone offers limited relief, while adding batteries delays overload onset by 55 days. Only fully coordinated EV–PV–battery adoption postponed the first overload by three months and reduced total overload hours in 2025 by 39%. The core novelty of this work lies in combining empirically grounded adoption behavior, second-level temporal fidelity, and agent-based grid dynamics to expose transient overload mechanisms invisible to coarser models. The framework provides a diagnostic and planning tool for distribution system operators to evaluate tariff designs, bundled incentives, and coordinated DER deployment strategies that enhance transformer longevity and grid resilience in future energy communities. Full article
(This article belongs to the Special Issue Wind and Renewable Energy Generation and Integration)
Show Figures

Figure 1

17 pages, 926 KB  
Article
Pilot Design Based on the Distribution of Inter-User Interference for Grant-Free Access
by Hao Wang, Xiujun Zhang and Shidong Zhou
Electronics 2025, 14(20), 3988; https://doi.org/10.3390/electronics14203988 - 12 Oct 2025
Viewed by 419
Abstract
Massive random access (MRA) involves massive devices sporadically and randomly sending short-packet messages through a shared wireless channel. It is a crucial scenario in 6G communications to support the Internet-of-Things. Grant-free access, where devices complete transmission without grants, is a promising scheme for [...] Read more.
Massive random access (MRA) involves a massive number of devices sporadically and randomly sending short-packet messages through a shared wireless channel. It is a crucial scenario in 6G communications to support the Internet-of-Things. Grant-free access, where devices complete transmission without grants, is a promising scheme for MRA. In grant-free access, the design of pilot sequences has a significant effect on joint activity detection and channel estimation (JADCE) and, consequently, system performance. Inter-user interference (IUI), caused by non-orthogonal pilots, is random owing to the random set of active users, and existing studies on pilot design for grant-free access often attempt to reduce the mean IUI. However, the performance of JADCE is affected not only by the mean IUI but also by the tail behavior of the IUI distribution. In this paper, we propose a metric for pilot design, exploiting the distribution of IUI to reflect the impact of pilots on JADCE more precisely. We further develop a pilot design algorithm based on the proposed metric, with modified approximate message passing (AMP) adopted as the JADCE algorithm. Simulation results demonstrate that the proposed pilot design reduces the probability of missed detection of active users and channel estimation error, compared with existing pilot designs. Full article
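The shift from a mean-IUI criterion to a distribution-aware one can be illustrated by estimating a tail quantile of the interference a reference user sees over random active sets. The sketch below is a simplified stand-in for the paper's metric: the activity probability, pilot length, quantile level, and Gaussian pilot initialization are all assumptions.

```python
import numpy as np

def iui_tail_metric(pilots: np.ndarray, p_active: float = 0.1,
                    n_trials: int = 2000, q: float = 0.95) -> float:
    """Empirical q-quantile of the inter-user interference seen by user 0
    over random active sets, rather than the mean used by classical designs."""
    rng = np.random.default_rng(0)
    n_users = pilots.shape[1]
    corr = np.abs(pilots.conj().T @ pilots) ** 2      # pairwise |correlation|^2
    samples = []
    for _ in range(n_trials):
        active = rng.random(n_users - 1) < p_active   # which other users transmit
        samples.append(corr[0, 1:][active].sum())
    return float(np.quantile(samples, q))

# Hypothetical Gaussian pilot matrix: length-32 pilots for 128 potential users.
rng = np.random.default_rng(1)
pilots = rng.normal(size=(32, 128)) / np.sqrt(32)
print(iui_tail_metric(pilots))
```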
(This article belongs to the Section Microwave and Wireless Communications)
Show Figures

Figure 1

16 pages, 340 KB  
Article
Adapting a Previously Proposed Open-Set Recognition Method for Time-Series Data: A Biometric User Identification Case Study
by András Pál Halász, Nawar Al Hemeary, Lóránt Szabolcs Daubner, János Juhász, Tamás Zsedrovits and Kálmán Tornai
Electronics 2025, 14(20), 3983; https://doi.org/10.3390/electronics14203983 - 11 Oct 2025
Viewed by 491
Abstract
Conventional classifiers are generally unable to identify samples from classes absent during the model’s training. However, such samples frequently emerge in real-world scenarios, necessitating the extension of classifier capabilities. Open-Set Recognition (OSR) models are designed to address this challenge. Previously, we developed a [...] Read more.
Conventional classifiers are generally unable to identify samples from classes absent during the model’s training. However, such samples frequently emerge in real-world scenarios, necessitating the extension of classifier capabilities. Open-Set Recognition (OSR) models are designed to address this challenge. Previously, we developed a robust OSR method that employs generated—“fake”—features to model the space of unknown classes encountered during deployment. Like most OSR models, this method was initially designed for image datasets. However, it is essential to extend OSR techniques to other data types, given their widespread use in practice. In this work, we adapt our model to time-series data while preserving its core efficiency advantage. Thanks to the model’s modular design, only the feature extraction component required modification. We implemented three approaches: a one-dimensional convolutional network for accurate representation, a lightweight method based on predefined statistical features, and a frequency-domain neural network. Further, we evaluated combinations of these methods. Experiments on a biometric time-series dataset, used here as a case study, demonstrate that our model achieves excellent open-set detection and closed-set accuracy. Combining feature extraction strategies yields the best performance, while individual methods offer flexibility: CNNs deliver high accuracy, whereas handcrafted features enable resource-efficient deployment. This adaptability makes the proposed framework suitable for scenarios with varying computational constraints. Full article
Show Figures

Figure 1

20 pages, 794 KB  
Article
Replay-Based Domain Incremental Learning for Cross-User Gesture Recognition in Robot Task Allocation
by Kanchon Kanti Podder, Pritom Dutta and Jian Zhang
Electronics 2025, 14(19), 3946; https://doi.org/10.3390/electronics14193946 - 6 Oct 2025
Viewed by 613
Abstract
Reliable gesture interfaces are essential for coordinating distributed robot teams in the field. However, models trained in a single domain often perform poorly when confronted with new users, different sensors, or unfamiliar environments. To address this challenge, we propose a memory-efficient replay-based domain [...] Read more.
Reliable gesture interfaces are essential for coordinating distributed robot teams in the field. However, models trained in a single domain often perform poorly when confronted with new users, different sensors, or unfamiliar environments. To address this challenge, we propose a memory-efficient replay-based domain incremental learning (DIL) framework, ReDIaL, that adapts to sequential domain shifts while minimizing catastrophic forgetting. Our approach employs a frozen encoder to create a stable latent space and a clustering-based exemplar replay strategy to retain compact, representative samples from prior domains under strict memory constraints. We evaluate the framework on a multi-domain air-marshalling gesture recognition task, where an in-house dataset serves as the initial training domain and the NATOPS dataset provides 20 cross-user domains for sequential adaptation. During each adaptation step, training data from the current NATOPS subject is interleaved with stored exemplars to retain prior knowledge while accommodating new knowledge variability. Across 21 sequential domains, our approach attains 97.34% accuracy on the domain incremental setting, exceeding pooled fine-tuning (91.87%), incremental fine-tuning (80.92%), and Experience Replay (94.20%) by +5.47, +16.42, and +3.14 percentage points, respectively. Performance also approaches the joint-training upper bound (98.18%), which represents the ideal case where data from all domains are available simultaneously. These results demonstrate that memory-efficient latent exemplar replay provides both strong adaptation and robust retention, enabling practical and trustworthy gesture-based human–robot interaction in dynamic real-world deployments. Full article
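The clustering-based exemplar selection described above can be sketched as follows: cluster the frozen-encoder latents of a finished domain and keep the sample nearest each centroid as that domain's replay memory. The budget, latent dimensionality, and use of k-means below are assumptions for illustration, not the exact ReDIaL procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_exemplars(latents: np.ndarray, budget: int, seed: int = 0) -> np.ndarray:
    """Pick `budget` representative samples by clustering the frozen-encoder
    latents and keeping the sample closest to each cluster centroid."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=seed).fit(latents)
    exemplar_idx = []
    for c, centre in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(latents[members] - centre, axis=1)
        exemplar_idx.append(members[np.argmin(dists)])
    return np.array(exemplar_idx)

# Hypothetical latent features for one gesture domain (500 samples, 128-dim).
latents = np.random.default_rng(3).normal(size=(500, 128))
print(select_exemplars(latents, budget=20)[:5])
```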
(This article belongs to the Special Issue Coordination and Communication of Multi-Robot Systems)
Show Figures

Figure 1

52 pages, 3207 KB  
Review
Cognitive Bias Mitigation in Executive Decision-Making: A Data-Driven Approach Integrating Big Data Analytics, AI, and Explainable Systems
by Leonidas Theodorakopoulos, Alexandra Theodoropoulou and Constantinos Halkiopoulos
Electronics 2025, 14(19), 3930; https://doi.org/10.3390/electronics14193930 - 3 Oct 2025
Cited by 1 | Viewed by 5576
Abstract
Cognitive biases continue to pose significant challenges in executive decision-making, often leading to strategic inefficiencies, misallocation of resources, and flawed risk assessments. While traditional decision-making relies on intuition and experience, these methods are increasingly proving inadequate in addressing the complexity of modern business [...] Read more.
Cognitive biases continue to pose significant challenges in executive decision-making, often leading to strategic inefficiencies, misallocation of resources, and flawed risk assessments. While traditional decision-making relies on intuition and experience, these methods are increasingly proving inadequate in addressing the complexity of modern business environments. Despite the growing integration of big data analytics into executive workflows, existing research lacks a comprehensive examination of how AI-driven methodologies can systematically mitigate biases while maintaining transparency and trust. This paper addresses these gaps by analyzing how big data analytics, artificial intelligence (AI), machine learning (ML), and explainable AI (XAI) contribute to reducing heuristic-driven errors in executive reasoning. Specifically, it explores the role of predictive modeling, real-time analytics, and decision intelligence systems in enhancing objectivity and decision accuracy. Furthermore, this study identifies key organizational and technical barriers—such as biases embedded in training data, model opacity, and resistance to AI adoption—that hinder the effectiveness of data-driven decision-making. By reviewing empirical findings from A/B testing, simulation experiments, and behavioral assessments, this research examines the applicability of AI-powered decision support systems in strategic management. The contributions of this paper include a detailed analysis of bias mitigation mechanisms, an evaluation of current limitations in AI-driven decision intelligence, and practical recommendations for fostering a more data-driven decision culture. By addressing these research gaps, this study advances the discourse on responsible AI adoption and provides actionable insights for organizations seeking to enhance executive decision-making through big data analytics. Full article
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)
Show Figures

Figure 1

13 pages, 20849 KB  
Article
Real-Time True Wireless Stereo Wearing Detection Using a PPG Sensor with Edge AI
by Raehyeong Kim, Joungmin Park, Jaeseong Kim, Jongwon Oh and Seung Eun Lee
Electronics 2025, 14(19), 3911; https://doi.org/10.3390/electronics14193911 - 30 Sep 2025
Viewed by 1136
Abstract
True wireless stereo (TWS) earbuds are evolving into multifunctional wearable devices, offering opportunities not only for audio streaming but also for health-related applications. A fundamental requirement for such devices is the ability to accurately detect whether they are being worn, yet conventional proximity [...] Read more.
True wireless stereo (TWS) earbuds are evolving into multifunctional wearable devices, offering opportunities not only for audio streaming but also for health-related applications. A fundamental requirement for such devices is the ability to accurately detect whether they are being worn, yet conventional proximity sensors remain limited in both reliability and functionality. This work explores the use of photoplethysmography (PPG) sensors, which are widely applied in heart rate and blood oxygen monitoring, as an alternative solution for wearing detection. A PPG sensor was embedded into a TWS prototype to capture blood flow changes, and the wearing status was classified in real time using a lightweight k-nearest neighbor (k-NN) algorithm on an edge AI processor. Experimental evaluation showed that incorporating a validity check enhanced classification performance, achieving F1 scores above 0.95 across all wearing conditions. These results indicate that PPG-based sensing can serve as a robust alternative to proximity sensors and expand the capabilities of TWS devices. Full article
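A minimal version of the detection pipeline above, assuming handcrafted window features and scikit-learn's k-NN in place of the on-chip edge AI implementation, is sketched below. The feature set, validity threshold, and synthetic worn/idle windows are invented for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def ppg_features(window: np.ndarray) -> np.ndarray:
    """Lightweight features from one PPG window: mean, std, peak-to-peak."""
    return np.array([window.mean(), window.std(), window.max() - window.min()])

def is_valid(window: np.ndarray, min_pp: float = 0.05) -> bool:
    """Validity check: reject windows whose amplitude is too small to judge."""
    return (window.max() - window.min()) >= min_pp

# Hypothetical training windows: worn (pulsatile) vs. not worn (flat noise).
rng = np.random.default_rng(4)
t = np.linspace(0, 2, 200)
worn = [np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.normal(size=t.size) for _ in range(20)]
idle = [0.02 * rng.normal(size=t.size) for _ in range(20)]
X = np.array([ppg_features(w) for w in worn + idle])
y = np.array([1] * 20 + [0] * 20)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
test = worn[0]
print(knn.predict([ppg_features(test)])[0] if is_valid(test) else "invalid window")
```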
Show Figures

Figure 1

24 pages, 4993 KB  
Article
Skeletal Image Features Based Collaborative Teleoperation Control of the Double Robotic Manipulators
by Hsiu-Ming Wu and Shih-Hsun Wei
Electronics 2025, 14(19), 3897; https://doi.org/10.3390/electronics14193897 - 30 Sep 2025
Viewed by 457
Abstract
In this study, a vision-based remote and synchronized control scheme is proposed for the double six-DOF robotic manipulators. Using an Intel RealSense D435 depth camera and MediaPipe skeletal image feature technique, the operator’s 3D hand pose is captured and mapped to the robot’s [...] Read more.
In this study, a vision-based remote and synchronized control scheme is proposed for double six-DOF robotic manipulators. Using an Intel RealSense D435 depth camera and the MediaPipe skeletal image feature technique, the operator’s 3D hand pose is captured and mapped to the robot’s workspace via coordinate transformation. Inverse kinematics is then applied to compute the necessary joint angles for synchronized motion control. Implemented on double robotic manipulators with the MoveIt framework, the system successfully achieves a collaborative teleoperation control task, transferring an object from one robotic manipulator to the other. Further, moving average filtering techniques are used to enhance trajectory smoothness and stability. The framework demonstrates the feasibility and effectiveness of non-contact, vision-guided multi-robot control for applications in teleoperation, smart manufacturing, and education. Full article
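The moving-average smoothing step mentioned above is straightforward to sketch: keep a short buffer of recent 3D hand positions and command the averaged point. The window length and sample coordinates below are assumptions; in the described system, this kind of filtering is applied to skeleton-derived hand poses before inverse kinematics.

```python
from collections import deque
import numpy as np

class PoseSmoother:
    """Moving-average filter over the last `window` 3D hand positions,
    a simple way to steady a jittery, vision-derived teleoperation target."""
    def __init__(self, window: int = 5):
        self.buffer = deque(maxlen=window)

    def update(self, xyz) -> np.ndarray:
        self.buffer.append(np.asarray(xyz, dtype=float))
        return np.mean(self.buffer, axis=0)

smoother = PoseSmoother(window=5)
# Hypothetical jittery wrist positions (metres) from a depth-camera pipeline.
for raw in [(0.30, 0.10, 0.52), (0.31, 0.12, 0.50), (0.29, 0.11, 0.53)]:
    print(smoother.update(raw))
```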
(This article belongs to the Section Systems & Control Engineering)
Show Figures

Figure 1

20 pages, 643 KB  
Article
Improving Physical Layer Security for Multi-Hop Transmissions in Underlay Cognitive Radio Networks with Various Eavesdropping Attacks
by Kyusung Shim and Beongku An
Electronics 2025, 14(19), 3867; https://doi.org/10.3390/electronics14193867 - 29 Sep 2025
Viewed by 390
Abstract
This paper investigates physical layer security (PHY-security) for multi-hop transmission in underlay cognitive radio networks under various eavesdropping attacks. To enhance secrecy performance, we propose two opportunistic scheduling schemes. The first scheme, called the minimal node selection (MNS) scheme, selects the node in [...] Read more.
This paper investigates physical layer security (PHY-security) for multi-hop transmission in underlay cognitive radio networks under various eavesdropping attacks. To enhance secrecy performance, we propose two opportunistic scheduling schemes. The first scheme, called the minimal node selection (MNS) scheme, selects the node in each cluster that minimizes the eavesdropper’s channel capacity. The second scheme, named the optimal node selection (ONS) scheme, chooses the node that maximizes secrecy capacity by using both the main and eavesdropper channel information. To reveal the relationship between network parameters and secrecy performance, we derive closed-form expressions for the secrecy outage probability (SOP) under different scheduling schemes and eavesdropping scenarios. Numerical results show that the ONS scheme provides the most robust secrecy performance among the considered schemes. Furthermore, we analyze the impact of key network parameters on secrecy performance. In detail, although the proposed ONS scheme requires more channel information than the MNS scheme, under a 20 dB interference threshold, the secrecy performance of the ONS scheme is 15% more robust than that of the MNS scheme. Full article
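The difference between the two schedulers above is easiest to see side by side: MNS needs only the eavesdropper's channel and picks the node the eavesdropper hears worst, while ONS also uses the main channel and picks the node with the largest secrecy capacity. The per-node capacities below are made-up numbers chosen so the two rules select different nodes.

```python
import numpy as np

def mns_select(c_eve: np.ndarray) -> int:
    """Minimal node selection: pick the node the eavesdropper hears worst."""
    return int(np.argmin(c_eve))

def ons_select(c_main: np.ndarray, c_eve: np.ndarray) -> int:
    """Optimal node selection: pick the node with the largest secrecy capacity."""
    secrecy = np.maximum(c_main - c_eve, 0.0)
    return int(np.argmax(secrecy))

# Hypothetical per-node capacities (bits/s/Hz) within one cluster.
c_main = np.array([3.1, 2.4, 4.0, 3.6])
c_eve = np.array([1.0, 0.2, 2.5, 1.1])
print(mns_select(c_eve), ons_select(c_main, c_eve))   # the two rules can differ
```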
(This article belongs to the Section Networks)
Show Figures

Figure 1

20 pages, 3174 KB  
Article
Techno-Economic Optimization of a Grid-Connected Hybrid-Storage-Based Photovoltaic System for Distributed Buildings
by Tao Ma, Bo Wang, Cangbin Dai, Muhammad Shahzad Javed and Tao Zhang
Electronics 2025, 14(19), 3843; https://doi.org/10.3390/electronics14193843 - 28 Sep 2025
Viewed by 569
Abstract
With growing urban populations and rapid technological advancement, major cities worldwide are facing pressing challenges from surging energy demands. Interestingly, substantial unused space within residential buildings offers potential for installing renewable energy systems coupled with energy storage. This study innovatively proposes a grid-connected [...] Read more.
With growing urban populations and rapid technological advancement, major cities worldwide are facing pressing challenges from surging energy demands. Interestingly, substantial unused space within residential buildings offers potential for installing renewable energy systems coupled with energy storage. This study innovatively proposes a grid-connected photovoltaic (PV) system integrated with pumped hydro storage (PHS) and battery storage for residential applications. A novel optimization algorithm is employed to achieve techno-economic optimization of the hybrid system. The results indicate a remarkably short payback period of about 5 years, significantly outperforming previous studies. Additionally, a threshold is introduced to activate pumping and water storage during off-peak nighttime electricity hours, strategically directing surplus power to either the pump or battery according to system operation principles. This nighttime water storage strategy not only promises considerable cost savings for residents, but also helps to mitigate grid stress under time-of-use pricing schemes. Overall, this study demonstrates that, through optimized system sizing, costs can be substantially reduced. Importantly, with the nighttime storage strategy, the payback period can be shortened even further, underscoring the novelty and practical relevance of this research. Full article
(This article belongs to the Section Systems & Control Engineering)
Show Figures

Figure 1

19 pages, 3437 KB  
Article
Comparing CNN and ViT for Open-Set Face Recognition
by Ander Galván, Mariví Higuero, Ane Sanz, Asier Atutxa, Eduardo Jacob and Mario Saavedra
Electronics 2025, 14(19), 3840; https://doi.org/10.3390/electronics14193840 - 27 Sep 2025
Cited by 1 | Viewed by 1485
Abstract
At present, there is growing interest in automated biometric identification applications. For these, it is crucial to have a system capable of accurately identifying a specific group of people while also detecting individuals who do not belong to that group. In face identification [...] Read more.
At present, there is growing interest in automated biometric identification applications. For these, it is crucial to have a system capable of accurately identifying a specific group of people while also detecting individuals who do not belong to that group. In face identification models that use Deep Learning (DL) techniques, this context is referred to as Open-Set Recognition (OSR), which is the focus of this work. This scenario presents a substantial challenge for this type of system, as it involves the need to effectively identify unknown individuals who were not part of the system’s training data. In this context, where the accuracy of this type of system is considered crucial, selecting the model to be used in each scenario becomes key. It is within this context that our work arises. Here, we present the results of a rigorous comparative analysis examining the precision of some of the most widely used models today for face identification, specifically some Convolutional Neural Network (CNN) models compared with a Vision Transformer (ViT) model. All models were pre-trained on the same large dataset and evaluated in an OSR scenario. The results show that ViT achieves the highest precision, outperforming CNN baselines and demonstrating better generalization for unknown identities. These findings support recent evidence that ViT is a promising alternative to CNN for this type of application. Full article
Show Figures

Figure 1

14 pages, 331 KB  
Article
Flow Matching for Simulation-Based Inference: Design Choices and Implications
by Massimiliano Giordano Orsini, Alessio Ferone, Laura Inno, Angelo Casolaro and Antonio Maratea
Electronics 2025, 14(19), 3833; https://doi.org/10.3390/electronics14193833 - 27 Sep 2025
Viewed by 1566
Abstract
Inverse problems are ubiquitous across many scientific fields, and involve the determination of the causes or parameters of a system from observations of its effects or outputs. These problems have been deeply studied through the use of simulated data, thereby under the lens [...] Read more.
Inverse problems are ubiquitous across many scientific fields, and involve the determination of the causes or parameters of a system from observations of its effects or outputs. These problems have been deeply studied through the use of simulated data, thereby under the lens of simulation-based inference. Recently, the natural combination of Continuous Normalizing Flows (CNFs) and Flow Matching Posterior Estimation (FMPE) has emerged as a novel, powerful, and scalable posterior estimator, capable of inferring the distribution of free parameters in a significantly reduced computational time compared to conventional techniques. While CNFs provide substantial flexibility in designing machine learning solutions, modeling decisions during their implementation can strongly influence predictive performance. To the best of our knowledge, no prior work has systematically analyzed how such modeling choices affect the robustness of posterior estimates in this framework. The aim of this work is to address this research gap by investigating the sensitivity of CNFs trained with FMPE under different modeling decisions, including data preprocessing, noise conditioning, and noisy observations. As a case study, we consider atmospheric retrieval of exoplanets and perform an extensive experimental campaign on the Ariel Data Challenge 2023 dataset. Through a comprehensive posterior evaluation framework, we demonstrate that (i) Z-score normalization outperforms min–max scaling across tasks; (ii) noise conditioning improves accuracy, coverage, and uncertainty estimation; and (iii) noisy observations significantly degrade predictive performance, thus underscoring reduced robustness under the assumed noise conditions. Full article
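For orientation, the sketch below shows a single training step of conditional flow matching with straight-line probability paths, in the spirit of FMPE: a small network regresses the constant velocity of the interpolation between a noise sample and a posterior sample, conditioned on an observation embedding. The network sizes, dimensions, and linear path choice are assumptions, not the configuration used in the paper's experiments.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Predicts a velocity field over parameters `theta`, conditioned on time
    and an observation embedding (a stand-in for a spectrum summary)."""
    def __init__(self, theta_dim=4, obs_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(theta_dim + obs_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, theta_dim),
        )

    def forward(self, theta_t, t, x_obs):
        return self.net(torch.cat([theta_t, t, x_obs], dim=-1))

def flow_matching_loss(model, theta_1, x_obs):
    t = torch.rand(theta_1.shape[0], 1)
    theta_0 = torch.randn_like(theta_1)            # base (noise) sample
    theta_t = (1 - t) * theta_0 + t * theta_1      # point on the straight path
    target_v = theta_1 - theta_0                   # its constant velocity
    return ((model(theta_t, t, x_obs) - target_v) ** 2).mean()

model = VelocityNet()
loss = flow_matching_loss(model, theta_1=torch.randn(32, 4), x_obs=torch.randn(32, 16))
print(loss.item())
```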
(This article belongs to the Special Issue Digital Signal and Image Processing for Multimedia Technology)
Show Figures

Figure 1

31 pages, 1002 KB  
Article
Strengthening Small Object Detection in Adapted RT-DETR Through Robust Enhancements
by Manav Madan and Christoph Reich
Electronics 2025, 14(19), 3830; https://doi.org/10.3390/electronics14193830 - 27 Sep 2025
Viewed by 4357
Abstract
RT-DETR (Real-Time DEtection TRansformer) has recently emerged as a promising model for object detection in images, yet its performance on small objects remains limited, particularly in terms of robustness. While various approaches have been explored, developing effective solutions for reliable small object detection [...] Read more.
RT-DETR (Real-Time DEtection TRansformer) has recently emerged as a promising model for object detection in images, yet its performance on small objects remains limited, particularly in terms of robustness. While various approaches have been explored, developing effective solutions for reliable small object detection remains a significant challenge. This paper introduces an adapted variant of RT-DETR, specifically designed to enhance robustness in small object detection. The model was first designed on one dataset and subsequently transferred to others to validate generalization. Key contributions include replacing components of the feed-forward neural network (FFNN) within a hybrid encoder with Hebbian, randomized, and Oja-inspired layers; introducing a modified loss function; and applying multi-scale feature fusion with fuzzy attention to refine encoder representations. The proposed model is evaluated on the Al-Cast Detection X-ray dataset, which contains small components from high-pressure die-casting machines, and the PCB quality inspection dataset, which features tiny hole anomalies. The results show that the optimized model achieves an mAP of 0.513 for small objects—an improvement from the 0.389 of the baseline RT-DETR model on the Al-Cast dataset—confirming its effectiveness. In addition, this paper contributes a mini-literature review of recent RT-DETR enhancements, situating our work within current research trends and providing context for future development. Full article
(This article belongs to the Special Issue Applications of Computer Vision, 3rd Edition)
Show Figures

Graphical abstract

20 pages, 2856 KB  
Article
Privacy-Preserving Federated Review Analytics with Data Quality Optimization for Heterogeneous IoT Platforms
by Jiantao Xu, Liu Jin and Chunhua Su
Electronics 2025, 14(19), 3816; https://doi.org/10.3390/electronics14193816 - 26 Sep 2025
Viewed by 782
Abstract
The proliferation of Internet of Things (IoT) devices has created a distributed ecosystem where users generate vast amounts of review data across heterogeneous platforms, from smart home assistants to connected vehicles. This data is crucial for service improvement but is plagued by fake [...] Read more.
The proliferation of Internet of Things (IoT) devices has created a distributed ecosystem where users generate vast amounts of review data across heterogeneous platforms, from smart home assistants to connected vehicles. This data is crucial for service improvement but is plagued by fake reviews, data quality inconsistencies, and significant privacy risks. Traditional centralized analytics fail in this landscape due to data privacy regulations and the sheer scale of distributed data. To address this, we propose FedDQ, a federated learning framework for Privacy-Preserving Federated Review Analytics with Data Quality Optimization. FedDQ introduces a multi-faceted data quality assessment module that operates locally on each IoT device, evaluating review data based on textual coherence, behavioral patterns, and cross-modal consistency without exposing raw data. These quality scores are then used to orchestrate a quality-aware aggregation mechanism at the server, prioritizing contributions from high-quality, reliable clients. Furthermore, our framework incorporates differential privacy and models system heterogeneity to ensure robustness and practical applicability in resource-constrained IoT environments. Extensive experiments on multiple real-world datasets show that FedDQ significantly outperforms baseline federated learning methods in accuracy, convergence speed, and resilience to data poisoning attacks, achieving up to a 13.8% improvement in F1-score under highly heterogeneous and noisy conditions while preserving user privacy. Full article
(This article belongs to the Special Issue Emerging IoT Sensor Network Technologies and Applications)
Show Figures

Figure 1

31 pages, 18957 KB  
Article
Hierarchical Hybrid Control and Communication Topology Optimization in DC Microgrids for Enhanced Performance
by Yuxuan Tang, Azeddine Houari, Lin Guan and Abdelhakim Saim
Electronics 2025, 14(19), 3797; https://doi.org/10.3390/electronics14193797 - 25 Sep 2025
Viewed by 533
Abstract
Bus voltage regulation and accurate power sharing constitute two pivotal control objectives in DC microgrids. The conventional droop control method inherently suffers from steady-state voltage deviation. Centralized control introduces vulnerability to single-point failures, with significantly degraded stability under abnormal operating conditions. Distributed control [...] Read more.
Bus voltage regulation and accurate power sharing constitute two pivotal control objectives in DC microgrids. The conventional droop control method inherently suffers from steady-state voltage deviation. Centralized control introduces vulnerability to single-point failures, with significantly degraded stability under abnormal operating conditions. Distributed control strategies mitigate this vulnerability but require careful balancing between control effectiveness and communication costs. Therefore, this paper proposes a hybrid hierarchical control architecture integrating multiple control strategies to achieve near-zero steady-state deviation voltage regulation and precise power sharing in DC microgrids. Capitalizing on the complementary advantages of different control methods, an operation-condition-adaptive hierarchical control (OCAHC) strategy is proposed. The proposed method improves reliability over centralized control under communication failures, and achieves better performance than distributed control under normal conditions. With a fault-detection logic module, the OCAHC framework enables automatic switching to maintain high control performance across different operating scenarios. For the inherent trade-off between consensus algorithm performance and communication costs, a communication topology optimization model is established with communication cost as the objective, subject to constraints including communication intensity, algorithm convergence under both normal and N-1 conditions, and control performance requirements. An accelerated optimization approach employing node-degree computation and equivalent topology reduction is proposed to enhance computational efficiency. Finally, case studies on a DC microgrid with five DGs verify the effectiveness of the proposed model and methods. Full article
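The steady-state deviation attributed to conventional droop control above, and the role of a secondary correction in removing it, can be seen from the droop law itself. The sketch below uses made-up bus voltage, droop resistance, and correction values purely to show the mechanism; it is not the proposed OCAHC controller.

```python
def droop_reference(v_nom: float, i_out: float, r_droop: float,
                    dv_secondary: float = 0.0) -> float:
    """Conventional droop sets V_ref = V_nom - R_d * I_out, which leaves a
    load-dependent steady-state deviation; a secondary correction term
    (here `dv_secondary`, e.g. from a consensus layer) restores the bus voltage."""
    return v_nom - r_droop * i_out + dv_secondary

# Hypothetical 400 V DC bus with a converter delivering 25 A.
v_only_droop = droop_reference(400.0, 25.0, r_droop=0.2)
v_corrected = droop_reference(400.0, 25.0, r_droop=0.2, dv_secondary=5.0)
print(v_only_droop, v_corrected)   # 395.0 V with droop alone, 400.0 V corrected
```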
(This article belongs to the Special Issue Power Electronics Controllers for Power System)
Show Figures

Figure 1
