Electronics, Volume 15, Issue 3 (February-1 2026) – 226 articles

Cover Story: Hybrid AC/DC microgrids interfaced by solid-state transformers (SSTs) must often ride through distribution-grid faults by transitioning to islanded operation. Yet prolonged outages can deplete local DER and storage, and a faulted MV segment can block the inter-microgrid power exchange needed to balance surplus and demand. This paper therefore proposes an SST-enabled emergency power-sharing scheme that activates a common LVDC feeder to bypass the faulted MV section and route power between microgrids, supplying priority loads without oversizing SSTs or adding extra conversion stages. A reduced-scale SST prototype validates stable grid-following and grid-forming operation with accurate regulation and high power density.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 3489 KB  
Article
A Deployment Strategy for Reconfigurable Intelligent Surfaces with Joint Phase and Position Optimization
by Guangsong Yang, Hongbo Huang, Chuwei Sun, Yiliang Wu, Xinjie Xu and Shan Huang
Electronics 2026, 15(3), 718; https://doi.org/10.3390/electronics15030718 - 6 Feb 2026
Abstract
The actual implementation of fifth-generation (5G) and beyond networks faces persistent challenges, including environmental interference and limited coverage, which compromise transmission stability and network feasibility. Reconfigurable Intelligent Surfaces (RISs) have emerged as a promising technology to dynamically reconfigure wireless propagation environments and enhance communication quality. To fully unlock the potential of RIS, this paper proposes a novel deployment strategy based on Double Deep Q-Networks (DDQNs) that jointly optimizes the RIS placement and phase shift configuration to maximize the system sum-rate. Specifically, the coverage area is discretized into a grid, and at each candidate location, a DDQN-based method is developed to solve the corresponding non-convex phase optimization problem. Simulation results reveal that our proposed strategy significantly surpasses conventional benchmark schemes, resulting in a sum-rate improvement of up to 38.41%. The study provides a practical and efficient pre-deployment framework for RIS-enhanced wireless networks. Full article
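The DDQN at the core of this deployment strategy is not specified beyond the abstract; as a generic illustration of the Double DQN idea it relies on, the online network selects the next action while the target network evaluates it, which curbs Q-value overestimation. A minimal sketch with hypothetical Q-tables standing in for the two networks (the state and phase-shift action counts below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 8          # hypothetical: grid cells x discretized phase choices
gamma = 0.9                          # discount factor

# Stand-in Q-tables for the online and target networks (illustrative only).
q_online = rng.random((n_states, n_actions))
q_target = rng.random((n_states, n_actions))

def ddqn_target(reward, next_state):
    """Double DQN target: the online net picks the action,
    the target net evaluates it (reduces overestimation bias)."""
    a_star = int(np.argmax(q_online[next_state]))
    return reward + gamma * q_target[next_state, a_star]

y = ddqn_target(reward=1.0, next_state=2)
```

In a real agent both tables would be neural networks and the target network would be a delayed copy of the online one.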

24 pages, 8246 KB  
Article
Overvoltage Suppression Filter Development for GaN Inverter-Fed Electrical Drive with Long Cable Based on Impedance Measurement
by Kaspars Kroičs and Jānis Voitkāns
Electronics 2026, 15(3), 717; https://doi.org/10.3390/electronics15030717 - 6 Feb 2026
Abstract
Wide-bandgap transistors have short voltage rise and fall times, thus leading to overvoltage at the end of the cable connecting the inverter and the motor. In this paper, the overvoltage reduction possibilities have been investigated analytically, experimentally, and based on a simulation model. High-frequency models of the motor and the cable have been created based on impedance measurements. Different solutions for overvoltage reduction have been compared and an improved combined filter for the inverter with high switching frequency has been proposed. The overvoltage that was initially 80 percent has been reduced to below 10 percent by applying the filtering solution. Full article
(This article belongs to the Special Issue Advanced Technologies in Power Electronics)

12 pages, 2327 KB  
Article
Transformer Based on Multi-Domain Feature Fusion for AI-Generated Image Detection
by Qiaoyue Man and Young-Im Cho
Electronics 2026, 15(3), 716; https://doi.org/10.3390/electronics15030716 - 6 Feb 2026
Abstract
With the rapid advancement of Generative Adversarial Networks (GANs), diffusion models, and other deep generative techniques, AI-generated images have achieved unprecedented levels of visual realism, posing severe challenges to the authenticity, security, and credibility of digital content. This paper proposes a novel hybrid transformer model that integrates spatial and frequency domains. It leverages CLIP to extract semantic inconsistencies in the image’s spatial domain while employing wavelet transforms to capture multi-scale frequency anomalies in AI-generated images. After cross-domain feature fusion, global modeling is performed within the Swin-Transformer architecture, enabling robust authenticity detection of AI-generated images. Extensive experiments demonstrate that our detector maintains high accuracy across diverse datasets. Full article
(This article belongs to the Special Issue Artificial Intelligence, Computer Vision and 3D Display)
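The wavelet branch of the detector above is not detailed in the abstract; as an illustrative sketch of how multi-scale frequency anomalies can be captured, a one-level 2D Haar transform splits a grayscale image into an approximation (LL) and three detail sub-bands (LH/HL/HH), whose energies serve as crude frequency descriptors. The paper's actual wavelet family and feature head are not given here:

```python
import numpy as np

def haar_subbands(img):
    """One-level 2D Haar transform of an even-sized grayscale image:
    returns LL (approximation) and LH/HL/HH (detail) sub-bands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # horizontal difference
    ll = (a[0::2] + a[1::2]) / 2.0
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

rng = np.random.default_rng(1)
img = rng.random((8, 8))                      # stand-in for an image patch
bands = haar_subbands(img)
# Per-band energies as a simple multi-scale frequency feature vector.
features = [float(np.sum(b ** 2)) for b in bands]
```

Generator artifacts often show up as unusual energy in the high-frequency (HH) band, which is why such features are commonly fused with spatial-domain embeddings.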

20 pages, 1488 KB  
Article
AI-Driven Hybrid Deep Learning and Swarm Intelligence for Predictive Maintenance of Smart Manufacturing Robots in Industry 4.0
by Deepak Kumar, Santosh Reddy Addula, Mary Lind, Steven Brown and Segun Odion
Electronics 2026, 15(3), 715; https://doi.org/10.3390/electronics15030715 - 6 Feb 2026
Abstract
Industry 4.0 technologies, which combine big data analytics, robotics, and intelligent decision systems to enable new ways to increase automation in the industrial sector, have undergone significant transformations. In this research, a Hybrid Attention-Gated Recurrent Unit (At-GRU) model, combined with Sand Cat Optimization (SCO), is proposed to enhance fault identification and predictive maintenance capabilities. The model utilizes multivariate sensor data from cyber-physical and IoT-enabled robotic platforms to learn operational patterns and predict failures with enhanced reliability. The At-GRU provides deeper temporal feature extraction, thereby improving classification performance. The robustness of the proposed model is validated on a benchmark dataset for industrial robots, and the results demonstrate impressive predictive capacity, surpassing other prediction methods and predictive maintenance approaches. The performance evaluation also indicates a lower computational cost due to the lightweight gating architecture of the GRU combined with attention. The robotic motion is further optimized by the SCO algorithm, which reduces energy usage, execution delay, and trajectory deviations while ensuring smooth operation. Overall, the proposed work offers an intelligent and scalable solution for next-generation industrial automation systems and demonstrates the real-world applicability and benefits of incorporating hybrid artificial intelligence models into real-time robot control for smart manufacturing environments. Full article

27 pages, 9745 KB  
Article
A Novel Water-Flow Live-Insect Monitoring Device for Measuring the Light-Trap Attraction Rate of Insects
by Jiarui Fang, Lei Shu, Ru Han, Kailiang Li and Wei Lin
Electronics 2026, 15(3), 714; https://doi.org/10.3390/electronics15030714 - 6 Feb 2026
Abstract
The light-trap attraction rate (LTARI) is an important metric for characterizing diel activity patterns and supports studies in insect behavioral ecology and pest management. However, conventional automatic light-trap devices often rely on lethal methods (e.g., high-voltage grids or infrared heating), causing high mortality of non-target insects and severe image obstruction due to stacking of insect bodies. These issues disturb natural populations and bias attempts to quantify LTARI. Our primary objective is to develop and evaluate a non-lethal monitoring system as a methodological basis for future LTARI research, rather than to provide head-to-head quantitative comparisons with conventional traps. To address the above limitations, we propose a live-insect monitoring instrument that integrates a wind-suction trap with a Water-Flow Dispersion and Transport Structure (WF-DTS). The non-destructive trapping–dispersion–release process limits body stacking, allows captured insects to be released, and yields a community-level post-capture survival rate of 94% under the conditions tested. Experimental results show that the prototype maintains image integrity with clearly isolated single insects and achieves a detection performance of 95.6% (mAP@0.5) using the YOLOv8s model. At the inference stage, only the standard resizing and normalization operations of YOLOv8s are applied, without additional denoising, background subtraction, or data augmentation. These observations suggest that the WF-DTS generates images that are easier to segment and classify than those from conventional devices. The high detection accuracy is largely attributable to the physical dispersion of specimens and the uniform white matte background provided by the hardware design. Overall, the system constitutes a non-lethal hardware–software platform that may reduce backend processing complexity and provide a methodological basis for more accurate LTARI estimation in future, dedicated field studies. 
Full article

23 pages, 2820 KB  
Article
Empirical Modeling of Current Drawn by High-Speed Circuits for Power Integrity Simulations
by Raul Fizesan
Electronics 2026, 15(3), 713; https://doi.org/10.3390/electronics15030713 - 6 Feb 2026
Abstract
Firm electromagnetic compatibility (EMC) requirements for electronic devices demand low electromagnetic interference (EMI) from high-speed circuits, especially in the automotive industry. Applying cost-effective anti-perturbative measures that reduce noise emission requires signal integrity and power integrity (SI/PI) tools for developing high-speed printed circuit board (PCB) designs. This paper presents an efficient method for modeling and analyzing the current drawn by digital ICs based on SPICE modeling data. The profile of the current drawn by the ICs from the power supply comprises the static supply current and the dynamic supply current. The method enables power integrity engineers, in particular PhD students and researchers who aim to build an intuitive understanding of PI phenomena during the pre-layout phase, to see the hidden impact of the supply current on power rail noise through time-domain simulations, using a simulation model that integrates the Finite-Difference Time-Domain (FDTD) method for modeling the power and ground planes with Voltage Regulator Modules (VRMs) and decoupling capacitors. A comparison of simulation results between the proposed models and SPICE IC models is included to validate the proposed model. Full article
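The decomposition of the drawn current into static and dynamic components can be pictured with a toy waveform: a constant static draw plus a triangular dynamic pulse at each clock edge. This is an illustrative stand-in, not the paper's SPICE-derived profile; all amplitudes and timings below are invented:

```python
import numpy as np

def supply_current(t, i_static, i_peak, period, t_rise, t_fall):
    """Toy supply-current profile: constant static draw plus a
    triangular dynamic pulse once per clock period (illustrative only)."""
    tau = t % period
    dyn = np.where(tau < t_rise,
                   i_peak * tau / t_rise,                       # rising edge
                   np.where(tau < t_rise + t_fall,
                            i_peak * (1 - (tau - t_rise) / t_fall),  # falling edge
                            0.0))                                # quiescent
    return i_static + dyn

# 40 ns window, four cycles of a hypothetical 100 MHz clock.
t = np.linspace(0, 4e-8, 400, endpoint=False)
i = supply_current(t, i_static=0.05, i_peak=1.2, period=1e-8,
                   t_rise=1e-9, t_fall=2e-9)
```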

26 pages, 3460 KB  
Article
Interpretable Graph-Embedding Framework Based on Joint Feature Similarity for Drug–Drug Interaction Prediction
by Xiaowei Li, Cheng Chen, Zihao Zhao, Qingyong Wang and Lichuan Gu
Electronics 2026, 15(3), 712; https://doi.org/10.3390/electronics15030712 - 6 Feb 2026
Abstract
Deep learning methods have been extensively used for drug–drug interaction (DDI) prediction, aiding the development of effective and safe combination therapies. Most studies focus on either the internal molecular structure or external contextual information of individual drugs to improve feature diversity and validity. However, the latent similarities between drug pairs, which are essential for accurate predictions, have largely been overlooked. Therefore, we propose an interpretable predictive approach for graph embedding called PINGE, which relies solely on the interaction network of drugs. Specifically, we constrain the joint features of drug pairs to their interactions, allowing those with similar types to achieve cosine similarity. This similarity in direction helps the joint features converge to the same class during prediction. Additionally, each known drug can link to multiple others, enhancing its diversity. Extensive experiments demonstrate that PINGE outperforms current advanced prediction methods on both KEGG and Drugbank datasets, achieving improvements of 0.7% and 2.4% in ACC while providing network structure-based explanations for predictions. Furthermore, PINGE surpasses advanced baselines by 1% and 1.1% in AUC on the human drug–target dataset and HuRI protein–protein interaction dataset, showcasing excellent versatility. Full article
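The cosine-similarity constraint on joint drug-pair features can be illustrated in isolation: vectors pointing in nearly the same direction score close to 1 regardless of magnitude, which is what lets same-type pairs converge to the same class during prediction. The embeddings below are hypothetical stand-ins, not PINGE's learned features:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: agreement in direction, independent of magnitude."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical joint features for three drug pairs; the first two share
# an interaction type, so their vectors point in nearly the same direction.
pair_a = np.array([0.9, 0.1, 0.0])
pair_b = np.array([1.8, 0.3, 0.1])   # same interaction type, different scale
pair_c = np.array([0.0, 0.2, 1.0])   # different interaction type

same_type = cosine(pair_a, pair_b)
diff_type = cosine(pair_a, pair_c)
```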

31 pages, 5682 KB  
Article
ST-GC-GRU: A Hybrid Deep Learning Approach for Shield Attitude Prediction Based on a Spatial–Temporal Graph
by Wen Liu, Jia Chen, Shanshan Wang, Xue Wang, Xingao Yan, Chenning Zhang and Liang Zeng
Electronics 2026, 15(3), 711; https://doi.org/10.3390/electronics15030711 - 6 Feb 2026
Abstract
The accurate estimation of shield attitude deviation directly affects the quality of tunnel construction. However, existing recurrent neural network (RNN)-based methods cannot efficiently capture the spatial correlation between different timestamps (DT) and perform poorly on drastically changing attitude data, which makes attitude deviation difficult to estimate when attitude changes are frequent. This study proposes a shield machine attitude prediction model (ST-GC-GRU) based on a spatial–temporal graph. Unlike traditional attitude prediction methods, it first introduces an improved GCN (ST-GCN: spatial–temporal graph) and a time decomposition technique to enhance the representation of attitude change information, thereby modeling the comprehensive spatial–temporal dependence of shield operation data more rationally. Across extensive tests on real data, the method shows better prediction performance than previous methods and effectively improves the model's low-confidence predictions when dealing with large attitude changes. The results indicate that the proposed method outperforms the other seven prediction models on four attitude deviation values. The model and research results can serve as a reference for developing adaptive control technology in shield tunnel construction. Full article

18 pages, 3943 KB  
Article
Reference-Free Texture Image Retrieval Based on User-Adaptive Psychophysical Perception Modeling
by Shaojun Xu, Yulong Chen, Yichi Zhang and Yao Zheng
Electronics 2026, 15(3), 710; https://doi.org/10.3390/electronics15030710 - 6 Feb 2026
Abstract
Texture image retrieval based on subjective visual descriptions remains a significant challenge due to the “semantic gap”, where conventional Content-Based Image Retrieval (CBIR) methods rely on low-level features or reference images that often diverge from human perception. To bridge this gap, this paper proposes a reference-free, perception-driven retrieval framework that enables users to query textures directly via abstract perceptual attributes. First, we constructed a human-centric perceptual feature space through controlled psychophysical experiments, quantifying 12 explicit texture attributes (e.g., granularity, directionality) using a 9-point Likert scale. Second, addressing the variability in visual sensitivity across user demographics, we developed a user-adaptive mechanism incorporating dual perceptual libraries tailored for art-major and non-art-major groups. Retrieval is formulated as a perception-aligned similarity optimization problem within this normalized space. Experimental evaluations on the Describable Textures Dataset (DTD) demonstrate that our method achieves superior perceptual consistency compared to both handcrafted descriptors (GLCM, LBP, HOG) and deep learning baselines (VGG16, ResNet50). Notably, the framework attained high PAP@3 performance across both user groups, validating its effectiveness in decoding fuzzy human intent without the need for query images. This work provides a robust solution for semantic-based texture retrieval in human–computer interaction scenarios. Full article
(This article belongs to the Section Computer Science & Engineering)

18 pages, 947 KB  
Article
A Classifier with Unknown Pattern Recognition for Domain Name System Tunneling Detection in Dynamic Networks
by Huijuan Dong, Zengwei Zheng and Shenfei Pei
Electronics 2026, 15(3), 709; https://doi.org/10.3390/electronics15030709 - 6 Feb 2026
Abstract
Domain Name System (DNS) tunneling, a stealthy attack that exploits DNS infrastructure, poses critical threats to dynamic networks and is evolving with emerging attack patterns. This study aims to accurately classify multi-pattern legitimate and malicious traffic and to identify previously unseen attack patterns. We focus on two core research questions: how to accurately classify known-pattern DNS queries, and how to reliably identify unknown-pattern samples. The objective is to develop an unsupervised classification approach that integrates multi-pattern adaptation with the recognition of unknown patterns. We formalize the task as Emerging Pattern Classification and propose the Medium Neighbors Forest, a forest-based model that uses a "medium neighbor" mechanism and clustering to identify unknown patterns. Experiments verify that the proposed model effectively identifies unseen patterns, offering a new perspective on DNS tunneling detection. Full article
(This article belongs to the Special Issue AI for Cybersecurity and Emerging Technologies for Secure Systems)

30 pages, 4319 KB  
Article
Cross-Border Digital Identity System Based on Ethereum Layer 2 Architecture
by Yu-Heng Hsieh, Ching-Hsi Tseng, Bang-Yi Luo and Shyan-Ming Yuan
Electronics 2026, 15(3), 708; https://doi.org/10.3390/electronics15030708 - 6 Feb 2026
Abstract
Modern passport systems face significant challenges in secure data sharing, real-time verification, and user-controlled authorization, particularly in cross-border scenarios. Existing digital passport solutions, often built on permissioned blockchains, suffer from limited transparency, scalability, and high operational costs. This paper proposes a decentralized passport management system based on an Ethereum Layer 2 architecture that combines global governance with high-throughput and cost-efficient passport operations. The system adopts a hybrid design in which a Global Passport Registry smart contract is deployed on the Ethereum mainnet for cross-country coordination, while passport issuance, access control, and identity management are handled on Layer 2 networks through country-operated Passport Managers and user-specific Personal Passport smart contracts. Extensive performance evaluations show that Ethereum Layer 1 throughput saturates at approximately 40–50 transactions per second (TPS), whereas the proposed Layer 2 deployment consistently exceeds 150 TPS and reaches up to 300 TPS under higher-performance environments, significantly surpassing the estimated system requirement of 70 TPS. These improvements result in faster response times, reduced congestion, and substantially lower transaction costs, demonstrating that public Ethereum Layer 2 infrastructures can effectively support a scalable, self-sovereign, privacy-preserving, and globally verifiable digital passport system suitable for real-world deployment. Full article
(This article belongs to the Special Issue Data Privacy Protection in Blockchain Systems)

36 pages, 24812 KB  
Review
Artificial Intelligence-Enhanced Droop Control for Renewable Energy-Based Microgrids: A Comprehensive Review
by Michael Addai and Petr Musilek
Electronics 2026, 15(3), 707; https://doi.org/10.3390/electronics15030707 - 6 Feb 2026
Abstract
The integration of renewable energy sources into modern power systems requires advanced control strategies to maintain stability, reliability, and efficiency. This paper presents a comprehensive review of the application of artificial intelligence techniques, including machine learning, deep learning, and reinforcement learning, in improving droop control for renewable energy integration. These artificial intelligence-based methods address key challenges such as frequency and voltage regulation, power sharing, and grid compliance under conditions of high renewable penetration. Machine learning approaches, such as support vector machines, are used to optimize droop parameters for dynamic grid conditions, while deep learning models, including recurrent neural networks, capture complex system dynamics to enhance the stability of distributed energy systems. Reinforcement learning algorithms enable adaptive, autonomous control, improving multi-objective optimization within microgrids. In addition, emerging directions such as transfer learning and real-time data analytics are explored for their potential to enhance scalability and resilience. Overall, this review synthesizes recent advances to demonstrate the growing impact of artificial intelligence in droop control and outlines future pathways toward more intelligent and sustainable power systems. Full article

18 pages, 4986 KB  
Article
Dynamic Behaviors and Stability Analysis of Closed-Loop Controlled LLC Resonant Converters
by Xue-Fei Wei, Bin Zeng, Mian Jiang and Chun-Ge Huang
Electronics 2026, 15(3), 706; https://doi.org/10.3390/electronics15030706 - 6 Feb 2026
Abstract
The LLC resonant converter constitutes a high-order switching system characterized by multiple operational modes and region-dependent switching sequences. This complexity poses significant challenges to system modeling and dynamic analysis. Furthermore, its inherent high-order nonlinearity tends to induce detrimental nonlinear phenomena, including bifurcation and chaos, which are particularly undesirable in power electronic systems that demand the utmost priority for stability and reliability. To address these concerns, this work focuses on investigating the dynamic behaviors and stability of LLC resonant converter control systems. This study aims to elucidate the origins and evolution of these nonlinear characteristics, thereby facilitating the design of higher-performance power electronic systems. First, a continuous-time model of the closed-loop controlled LLC resonant converter system was established using the sigmoid function modeling method. This model allows direct application of continuous system theory to analyze dynamic behavior, significantly reducing analytical complexity. Second, the system’s bifurcation characteristics and stability were comprehensively investigated through Floquet theory, bifurcation diagrams, and Lyapunov exponent spectra. Results reveal that PFM-controlled LLC resonant converters exhibit rich nonlinear dynamics under variations in key parameters. Experiments successfully captured the observed nonlinear phenomena, validating the evolution of system dynamics and stability. This work provides a novel perspective for stability analysis and parameter design in multi-resonant converter systems. Full article

28 pages, 3003 KB  
Article
Adaptive Frequency Control for Multi-Relay MC-WPT Systems Based on Clustering and Reinforcement Learning
by Xiaodong Qing, Zhongming Yu, Menghao Shan, Zhao Chen, Tingfa Yang and Zhigang Zhang
Electronics 2026, 15(3), 705; https://doi.org/10.3390/electronics15030705 - 6 Feb 2026
Abstract
Magnetically coupled resonant wireless power transfer (MC-WPT) systems with multi-relay coupling structures can significantly extend the transmission distance. However, system performance is highly sensitive to the spatial positions and coupling conditions of the relay coils. Any misalignment can alter the energy transfer path, causing shifts in the optimal operating frequency and reductions in efficiency. This makes conventional single-frequency or static-tuning strategies unsuitable for handling complex variations in coupling states. To address this issue, this paper investigates a three-relay MC-WPT system and proposes an adaptive frequency control and energy routing method that combines clustering and Q-learning for scenarios with severe coil misalignment. First, a physical model based on coupled-mode theory is established to describe the relationships among coupling coefficients, operating frequency, and transmission efficiency. High-dimensional coupling state data are then collected under different relay coil misalignment conditions. Next, principal component analysis (PCA) and clustering algorithms are used to extract representative coupling patterns and identify the system’s optimal efficiency points, forming an offline database that includes mappings of optimal frequencies. Furthermore, Q-learning is introduced to enable adaptive frequency control through online state recognition. Finally, under severe coil misalignment, frequency retuning of non-misaligned coils is applied to actively shield misaligned coils and reconstruct the energy transfer path. Simulation and experimental results show that the proposed method can achieve real-time frequency control and dynamic energy routing in multi-relay MC-WPT systems without additional hardware. 
The system transmission efficiency is significantly improved under all relay misalignment scenarios, effectively addressing the optimal frequency shift problem in multi-relay coupling structures and providing a new approach for intelligent and efficient MC-WPT systems under complex coupling conditions. Full article
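The Q-learning component described above can be sketched as a standard tabular update over recognized coupling clusters (states) and candidate frequency bins (actions). The cluster count, frequency grid, learning rate, and reward below are invented placeholders, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_clusters, n_freqs = 5, 10          # hypothetical: coupling clusters x frequency bins
alpha, gamma, eps = 0.1, 0.9, 0.2    # learning rate, discount, exploration rate
Q = np.zeros((n_clusters, n_freqs))

def choose_freq(state):
    """Epsilon-greedy frequency selection for the recognized coupling cluster."""
    if rng.random() < eps:
        return int(rng.integers(n_freqs))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    """Tabular Q-learning update; the reward could be measured efficiency."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

# One illustrative interaction: cluster 3 recognized, efficiency 0.8 observed.
s = 3
a = choose_freq(s)
update(s, a, reward=0.8, next_state=s)
```

In the paper's setting, the state would come from PCA-plus-clustering on measured coupling data and the reward from the resulting transmission efficiency.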

22 pages, 1612 KB  
Article
Lightweight 1D-CNN-Based Battery State-of-Charge Estimation and Hardware Development
by Seungbum Kang, Yoonjae Lee, Gahyeon Jang and Seongsoo Lee
Electronics 2026, 15(3), 704; https://doi.org/10.3390/electronics15030704 - 6 Feb 2026
Abstract
This paper presents the FPGA implementation and verification of a lightweight one-dimensional convolutional neural network (1D-CNN) pipeline for real-time battery state-of-charge (SoC) estimation in automotive battery management systems. The proposed model employs separable 1D convolution and global average pooling, and applies aggressive structured pruning to reduce the number of parameters from 3121 to 358, representing an 88.5% reduction, without significant accuracy loss. Using quantization-aware training (QAT), the network is trained and executed in INT8, which reduces weight storage to one-quarter of the 32-bit baseline while maintaining high estimation accuracy with a Mean Absolute Error (MAE) of 0.0172. The hardware adopts a time-multiplexed single MAC architecture with FSM control, occupying 98,410 gates under a 28 nm process. Evaluations on an FPGA testbed with representative drive-cycle inputs show that the proposed INT8 pipeline achieves performance comparable to the floating-point reference with negligible precision drop, demonstrating its suitability for in-vehicle BMS deployment. Full article
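The INT8 step can be illustrated with a simple symmetric per-tensor scheme: weights are mapped to [-127, 127] with a single scale factor, cutting storage to a quarter of FP32. This is a generic post-hoc sketch, not the paper's quantization-aware training; the weight values are random stand-ins, though the 358-parameter count is taken from the abstract:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: w ≈ scale * q, q in [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(3)
weights = rng.standard_normal(358).astype(np.float32)   # pruned parameter count from the paper
q, scale = quantize_int8(weights)
dequant = q.astype(np.float32) * scale                  # reconstruction for error checking

# INT8 storage is one quarter of FP32, matching the cited 4x reduction.
ratio = weights.nbytes / q.nbytes
```

QAT additionally simulates this rounding during training so the network learns weights that survive it, which is how the small MAE is preserved.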
1 page, 126 KB  
Correction
Correction: Rustam et al. Denial of Service Attack Classification Using Machine Learning with Multi-Features. Electronics 2022, 11, 3817
by Furqan Rustam, Muhammad Faheem Mushtaq, Ameer Hamza, Muhammad Shoaib Farooq, Anca Delia Jurcut and Imran Ashraf
Electronics 2026, 15(3), 703; https://doi.org/10.3390/electronics15030703 - 6 Feb 2026
Viewed by 88
Abstract
In the original publication [...] Full article
23 pages, 4628 KB  
Article
Design and Analysis of FSM-Based AES Encryption on FPGA Versus MATLAB Environment
by Sunny Arief Sudiro, Fauziah, Ragiel Hadi Prayitno, Bayu Kumoro Yakti, Sarifuddin Madenda and Michel Paindavoine
Electronics 2026, 15(3), 702; https://doi.org/10.3390/electronics15030702 - 5 Feb 2026
Viewed by 191
Abstract
The present paper compares and analyzes the design of AES-128 encryption and decryption using a Finite State Machine (FSM) architecture on FPGA and MATLAB platforms. This study aims to evaluate performance disparities in terms of execution time, throughput, and hardware efficiency under identical input data and key conditions. The FSM-based AES algorithm was modeled in MATLAB for functional validation and synthesized on an Artix-7 FPGA using VHDL. The experimental results confirmed that both platforms produced identical ciphertext and plaintext outputs, verifying the correctness of the processes employed. However, the FPGA demonstrated significantly better performance in terms of execution speed. Encryption and decryption times were measured in microseconds on the FPGA, while similar operations on the MATLAB platform required hundreds of milliseconds. The FPGA implementation achieved a throughput of 872.53 Mbps for encryption and 858.49 Mbps for decryption, with area usage of 1263 and 1428 slices, respectively. This yields efficiencies of 0.691 and 0.601 Mbps/slice, figures considered efficient according to established benchmarks. Compared to previous MATLAB-only and FPGA pipelined implementations, the current design strikes a balance between resource usage and performance, making it ideal for lightweight cryptographic applications in embedded systems. These results provide practical insights into selecting platforms for secure, real-time data processing. Full article
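The efficiency figures follow directly from the reported throughput and area; a quick check of the abstract's numbers:

```python
# Throughput-per-area efficiency (Mbps/slice) from the reported results.
# All input values are taken from the abstract; this reproduces the arithmetic.
enc_mbps, enc_slices = 872.53, 1263
dec_mbps, dec_slices = 858.49, 1428
enc_eff = enc_mbps / enc_slices
dec_eff = dec_mbps / dec_slices
print(f"encryption: {enc_eff:.3f} Mbps/slice")  # 0.691
print(f"decryption: {dec_eff:.3f} Mbps/slice")  # 0.601
```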
(This article belongs to the Section Computer Science & Engineering)
30 pages, 2475 KB  
Article
Machine Learning–Driven MPPT Control of PEM Fuel Cells with DC–DC Boost Converter Integration
by Ayşe Kocalmış Bilhan, Cem Haydaroğlu, Heybet Kılıç and Mahmut Temel Özdemir
Electronics 2026, 15(3), 701; https://doi.org/10.3390/electronics15030701 - 5 Feb 2026
Viewed by 154
Abstract
Proton exchange membrane fuel cells (PEMFCs) are attractive energy sources for clean and efficient power generation; however, their nonlinear characteristics and sensitivity to operating condition variations make maximum power point tracking (MPPT) a challenging control problem. Conventional MPPT techniques often exhibit slow convergence, steady-state oscillations, and degraded performance under dynamic fuel flow variations. This paper proposes a machine learning–driven MPPT control strategy for a PEMFC system integrated with a DC–DC boost converter. The MPPT problem is formulated as a supervised classification task, where machine learning classifiers generate duty-cycle commands to regulate the converter and ensure operation at the maximum power point. A detailed PEMFC–converter model is developed in MATLAB/Simulink-2025b, and a dataset of 3000 labeled samples is generated under varying fuel flow conditions. Several classification algorithms, including decision trees, support vector machines (SVM), k-nearest neighbors (kNN), and ensemble learning methods, are systematically evaluated within an identical simulation framework. Simulation results show that the proposed machine learning-based MPPT controller significantly improves dynamic and steady-state performance. Ensemble Boosted Trees achieve the best overall response with a settling time of approximately 32 ms, peak power overshoot below 4.5%, and steady-state power ripple limited to 1.5%. Quadratic SVM and weighted kNN classifiers also demonstrate stable tracking behavior with power ripple below 2.1%, while overly complex models such as Cubic SVM suffer from large oscillations and reduced accuracy. These results confirm that classification-based machine learning offers an effective, fast, and robust MPPT solution for PEMFC systems under dynamic operating conditions. Full article
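As a rough illustration of formulating MPPT as a classification task, the toy sketch below maps a sensed operating point to a discrete duty-cycle class with a nearest-neighbor vote; the feature values, labels, and neighbor count are illustrative assumptions, not the paper's trained classifiers:

```python
import math

# Toy sketch: a labeled dataset maps sensed operating points
# (stack voltage, current) to discrete duty-cycle classes.
train = [
    ((45.0, 20.0), 0.40), ((48.0, 18.0), 0.45),
    ((52.0, 15.0), 0.50), ((55.0, 12.0), 0.55),
]

def knn_duty_cycle(v, i, k=1):
    """Return the majority duty-cycle class of the k nearest samples."""
    ranked = sorted(train, key=lambda s: math.dist(s[0], (v, i)))
    votes = [d for _, d in ranked[:k]]
    return max(set(votes), key=votes.count)

print(knn_duty_cycle(46.0, 19.5))  # nearest sample is (45, 20) -> 0.4
```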
21 pages, 2066 KB  
Article
A Multi-Behavior and Sequence-Aware Recommendation Method
by Dan Yin and Tianshuo Wang
Electronics 2026, 15(3), 700; https://doi.org/10.3390/electronics15030700 - 5 Feb 2026
Viewed by 97
Abstract
This paper proposes a multi-behavior and sequence-aware recommendation method that effectively integrates diverse user–item interaction behaviors and their sequential dependencies to enhance recommendation accuracy. Unlike existing studies that treat different user–item interactions independently, our approach jointly models diverse behaviors and their natural sequential dependencies to better capture user preferences and alleviate the data sparsity caused by single-behavior modeling. In contrast to traditional single-behavior models, our approach constructs a multi-behavior heterogeneous graph and defines multiple meta-path patterns to capture implicit relationships between users and items. By generating subgraph instances, we extract fine-grained interaction patterns and employ a LightGCN with residual connections to learn user representations under different behavioral sequences. Furthermore, an attention mechanism is introduced to fuse features across subgraphs, enabling more expressive preference modeling. Experimental results on two real-world datasets, Taobao and Tmall, demonstrate that our method outperforms state-of-the-art single- and multi-behavior recommendation models, achieving up to 10.0% and 11.1% improvements in HR@10 and NDCG@10 on Taobao and 9.0% and 10.6% on Tmall, respectively. These results confirm the effectiveness of leveraging both multi-behavior information and sequence dependencies in capturing deeper user preferences for more accurate recommendations. Full article
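HR@K and NDCG@K, the metrics quoted above, have compact definitions under the common protocol of one held-out ground-truth item per user; a minimal sketch (the evaluation protocol is assumed, not taken from the paper):

```python
import math

def hit_ratio_at_k(ranked_items, target, k=10):
    """HR@K: 1 if the held-out item appears in the top K, else 0."""
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k=10):
    """NDCG@K with a single relevant item: DCG = 1/log2(rank+1), IDCG = 1."""
    if target in ranked_items[:k]:
        rank = ranked_items.index(target) + 1
        return 1.0 / math.log2(rank + 1)
    return 0.0

ranked = [7, 3, 42, 9, 1, 8, 5, 2, 6, 4]
print(hit_ratio_at_k(ranked, 42))       # 1.0 (item 42 is in the top 10)
print(round(ndcg_at_k(ranked, 42), 3))  # rank 3: 1/log2(4) = 0.5
```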
30 pages, 2477 KB  
Article
Fast Algorithms for Short-Length Type VI Discrete Cosine Transform
by Valentyna Kitsela, Marina Polyakova and Aleksandr Cariow
Electronics 2026, 15(3), 699; https://doi.org/10.3390/electronics15030699 - 5 Feb 2026
Viewed by 97
Abstract
In this paper, new fast algorithms for computing the discrete cosine transform type VI (DCT-VI) are proposed, with a special emphasis on short input sequences of three to eight samples. Fast algorithms for small discrete trigonometric transformations are directly used for efficient processing of small data sets and also serve as fundamental building blocks for constructing algorithms for larger trigonometric transforms. By exploiting the intrinsic structural properties of the DCT-VI matrices of different sizes, the proposed methods significantly reduce arithmetic complexity compared to the conventional matrix–vector multiplication approach. The paper presents a detailed mathematical formulation of the algorithms, supported by data-flow graphs that illustrate the computational structure and facilitate the precise estimation of arithmetic operations. Optimized pseudocode implementations incorporating variable reuse are also introduced to facilitate practical realization in software environments. Performance analysis demonstrates a substantial reduction in the number of multiplications (up to 66%) and a slight decrease in additions (approximately 9%) for input sizes ranging from three to eight, thereby improving the execution speed of the considered transform. The proposed algorithms are well-suited for applications in video coding, data compression, and digital signal processing, where computational efficiency is critical. Full article
(This article belongs to the Section Circuit and Signal Processing)
21 pages, 3659 KB  
Article
A Battery State-of-Charge Prediction Method Based on a Hammerstein Model Integrated with a Hippopotamus Optimization Algorithm and Neural Network
by Liang Zhang, Bilong Yang, Ling Lyu, Sihan Che, Haoqiang Li and Weifei Wang
Electronics 2026, 15(3), 698; https://doi.org/10.3390/electronics15030698 - 5 Feb 2026
Viewed by 102
Abstract
Accurate estimation of the state of charge (SOC) of lithium-ion batteries is critical for assessing the safety and remaining range of electric vehicles. However, due to the complex and variable operating environment of batteries and their highly nonlinear internal mechanisms, achieving high-precision SOC prediction remains a central challenge in current research. To this end, this paper proposes a nonlinear Hammerstein model in which the backpropagation (BP) neural network is optimized by the Hippopotamus Optimization algorithm (HO), thereby enhancing the accuracy of SOC prediction. The resulting HO-BP-Hammerstein model is evaluated for SOC prediction accuracy on real-world data. Experimental results demonstrate the superiority of the proposed method through comparative accuracy analysis of various SOC prediction approaches under different operating conditions, confirming its significant engineering application value. Full article
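A Hammerstein model is, generically, a static nonlinearity feeding a linear dynamic block; the sketch below illustrates that structure with a polynomial nonlinearity and a first-order filter (the coefficients are assumptions for illustration, not values identified from battery data):

```python
def hammerstein(u_seq, f, b=0.2, a=0.8):
    """y[n] = a*y[n-1] + b*f(u[n]): linear dynamics driven by f(u)."""
    y, out = 0.0, []
    for u in u_seq:
        y = a * y + b * f(u)
        out.append(y)
    return out

nonlin = lambda u: u + 0.1 * u**2          # static nonlinearity block
ys = hammerstein([1.0, 1.0, 1.0], nonlin)  # step response through the model
print([round(v, 4) for v in ys])           # [0.22, 0.396, 0.5368]
```

In the paper's variant, the role of the static nonlinearity is played by the HO-optimized BP network rather than a fixed polynomial.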
37 pages, 501 KB  
Article
Comparative Analysis of Attribute-Based Encryption Schemes for Special Internet of Things Applications
by Łukasz Pióro, Krzysztof Kanciak and Zbigniew Zieliński
Electronics 2026, 15(3), 697; https://doi.org/10.3390/electronics15030697 - 5 Feb 2026
Viewed by 137
Abstract
Attribute-based encryption (ABE) is an advanced public key encryption mechanism that enables the precise control of access to encrypted data based on attributes assigned to users and data. Attribute-based access control (ABAC), which is built on ABE, is crucial in providing dynamic, fine-grained, and context-aware security management in modern Internet of Things (IoT) applications. ABAC controls access based on attributes associated with users, devices, resources, and environmental conditions rather than fixed roles, making it highly adaptable to the complex and heterogeneous nature of IoT ecosystems. ABE can significantly improve the security and manageability of modern military IoT systems. Nevertheless, its practical implementation requires obtaining a range of performance data and assessing the additional overhead, particularly regarding data transmission efficiency. This paper provides a comparative analysis of the performance of two cryptographic schemes for attribute-based encryption in the context of special Internet of Things (IoT) applications. This applies to special environments, both military and civilian, where infrastructure is unreliable and dynamic and decisions must be made locally and in near-real time. From a security perspective, there is a need for strong authentication, precise access control, and a zero-trust approach at the network edge as well. The CIRCL scheme, based on traditional pairing-based ABE (CP-ABE), is compared with the newer Covercrypt scheme, a hybrid key encapsulation mechanism with access control (KEMAC) that provides quantum resistance. The main goal is to determine which scheme scales better and meets the performance requirements for two different scenarios: large corporate networks (where scalability is key) and tactical edge networks (where minimal bandwidth and post-quantum security are paramount). 
The benchmark results are used to compare the operating costs in detail, such as the key generation time, message encryption and decryption times, public key size, and ciphertext overhead, showing that Covercrypt provides a reduction in ciphertext overhead in tactical scenarios, while CIRCL offers faster decryption throughput in large-scale enterprise environments. It is concluded that the optimal choice depends on the specific constraints of the operating environment. Full article
(This article belongs to the Special Issue Computer Networking Security and Privacy)
23 pages, 11570 KB  
Article
Geometric Graph Learning Network for Node Classification
by Lei Wang, Xitong Xu and Zhuqiang Li
Electronics 2026, 15(3), 696; https://doi.org/10.3390/electronics15030696 - 5 Feb 2026
Viewed by 99
Abstract
Graph attention improves neighbor discrimination, but it remains limited by local receptive fields and by a strong dependence on the input topology, which is often unreliable on heterophilous graphs. We propose Geometric Graph Learning Network (G2LNet), a structure-learning framework that infers message-passing probabilities from an explicit geometric topology learned in latent Euclidean or hyperbolic spaces. G2LNet combines (i) a geometric mapping module, (ii) distance- or inner-product-based relation operators with perceptual connectivity to control the influence of the given graph, and (iii) end-to-end constraint objectives enforcing stability, sparsity, and (optional) symmetry of the learned topology. This design yields unified local, non-local, and graph-free neighborhoods, enabling systematic analysis of when non-local aggregation helps. Experiments on node classification across nine publicly available benchmark datasets demonstrate that G2LNet’s controlled variant consistently achieves higher accuracy than representative strong baseline models, both local and non-local, on most datasets. This establishes a robust alternative for smaller-scale node classification tasks. Full article
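For readers unfamiliar with hyperbolic latent spaces, the Poincaré-ball distance is one standard choice for embedding graphs; a minimal sketch (G2LNet's actual operators are not reproduced here):

```python
import math

def poincare_dist(u, v):
    """Geodesic distance between two points inside the unit Poincare ball."""
    du = sum(x * x for x in u)                       # ||u||^2
    dv = sum(x * x for x in v)                       # ||v||^2
    duv = sum((a - b) ** 2 for a, b in zip(u, v))    # ||u - v||^2
    return math.acosh(1 + 2 * duv / ((1 - du) * (1 - dv)))

# Points near the boundary are far apart even when Euclidean-close,
# which gives tree-like graphs room to embed with low distortion.
print(round(poincare_dist((0.0, 0.0), (0.5, 0.0)), 4))  # ln 3 ~ 1.0986
print(round(poincare_dist((0.9, 0.0), (0.0, 0.9)), 4))
```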
20 pages, 500 KB  
Article
TrafSched: Integrating Bayesian Adaptation with LLMs for Traffic Scheduling Optimization
by Wentian Fan, Li Xu, Yongcheng Zeng, Siyu Xia, Xinyu Cui, Junyan Shi, Shu Lin, Mengyao Zhang, Yiwei Guo, Xin Zhang and Haifeng Zhang
Electronics 2026, 15(3), 695; https://doi.org/10.3390/electronics15030695 - 5 Feb 2026
Viewed by 176
Abstract
Railway timetabling requires resolving complex scheduling conflicts arising from shared tracks, station capacity limits, and strict safety intervals. Existing optimization or learning-based approaches often struggle to scale or generalize across diverse operational scenarios. We present TrafSched, a novel hybrid decision framework that combines a curated strategy library, multi-dimensional conflict prioritization, Bayesian strategy adaptation, and an optional Large Language Model (LLM) integration module. TrafSched iteratively detects and resolves conflicts through adaptive strategy selection and backtracking, enabling robust exploration of feasible timetables without costly model retraining. Experiments on real-world-scale datasets involving 50–120 trains show that TrafSched consistently outperforms heuristic and reinforcement learning baselines, achieving up to 85.05% conflict-resolution success in the most challenging cases. These results demonstrate TrafSched’s effectiveness and scalability for modern railway scheduling operations. Full article
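Bayesian strategy adaptation of the kind described can be sketched with Beta-Bernoulli Thompson sampling over a strategy library: each strategy keeps a Beta posterior over its success rate, and sampling favors strategies that keep resolving conflicts. The strategy names and success rates below are illustrative assumptions, not TrafSched's actual library:

```python
import random

random.seed(0)
# Beta(a, b) posterior per strategy, starting from the uniform prior.
posteriors = {"retiming": [1, 1], "reorder": [1, 1], "reroute": [1, 1]}

def pick_strategy():
    """Thompson sampling: draw from each posterior, pick the best draw."""
    samples = {s: random.betavariate(a, b) for s, (a, b) in posteriors.items()}
    return max(samples, key=samples.get)

def update(strategy, resolved_conflict):
    a, b = posteriors[strategy]
    posteriors[strategy] = [a + 1, b] if resolved_conflict else [a, b + 1]

true_rates = {"retiming": 0.8, "reorder": 0.4, "reroute": 0.2}  # hidden
for _ in range(500):
    s = pick_strategy()
    update(s, random.random() < true_rates[s])

best = max(posteriors, key=lambda s: posteriors[s][0] / sum(posteriors[s]))
print(best)  # with enough trials, the posterior concentrates on "retiming"
```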
18 pages, 455 KB  
Article
Manifold Optimization for Physical Layer Security in Double-RIS-Assisted Communications
by Jin Li, Siyao Chen, Ziyi Wang, Haofei Lu, Yonghao Chen, Yunchao Song, Yuanjian Liu and Weigang Wang
Electronics 2026, 15(3), 694; https://doi.org/10.3390/electronics15030694 - 5 Feb 2026
Viewed by 103
Abstract
Reconfigurable intelligent surfaces (RISs) are an emerging wireless communication technology that has attracted significant attention, particularly in the field of physical layer security (PLS). This paper proposes a novel double-RIS-aided PLS communication system for a scenario where the direct links between the access point (AP) and the legitimate user/eavesdropper are blocked. A manifold optimization-based alternating optimization (MOAO) algorithm jointly optimizes the transmit beamforming and the phase-shift matrices of the RISs to enhance the system’s secrecy rate performance. The maximum ratio transmission method is adopted to optimize the beamforming vector, and the manifold optimization-based algorithm is utilized to simultaneously optimize the phase-shift matrices of the two RISs. Meanwhile, we also propose a successive convex approximation (SCA)-based algorithm as a benchmark scheme for comparison with the MOAO algorithm. Simulation results show that the MOAO algorithm achieves a significantly improved secrecy rate while exhibiting a reduced computational complexity on the order of O(N₁² + N₂²) compared with the SCA-based benchmark. Full article
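Manifold optimization over RIS phase shifts typically works on the complex-circle manifold, where each phase-shift entry satisfies |θ_m| = 1. The sketch below shows one generic Riemannian gradient step with retraction; it is an assumption-level illustration of the manifold machinery, not the paper's MOAO objective:

```python
def riemannian_step(theta, egrad, step=0.1):
    """One descent step on the complex-circle manifold (|t| = 1 per entry)."""
    new = []
    for t, g in zip(theta, egrad):
        # Project the Euclidean gradient onto the tangent space at t:
        # rgrad = g - Re(g * conj(t)) * t
        rgrad = g - (g * t.conjugate()).real * t
        t_new = t - step * rgrad          # move along the tangent direction
        new.append(t_new / abs(t_new))    # retract back onto |t| = 1
    return new

theta = [complex(1, 0), complex(0, 1)]
egrad = [complex(0, 1), complex(1, 0)]    # illustrative gradient values
theta = riemannian_step(theta, egrad)
print([round(abs(t), 6) for t in theta])  # unit modulus is preserved
```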
18 pages, 2458 KB  
Article
An Interpretable CPU Scheduling Method Based on a Multiscale Frequency-Domain Convolutional Transformer and a Dendritic Network
by Xiuwei Peng, Honghua Wang, Guohui Zhou, Jun Jiang, Hao Fang, Zhengxing Wu and Xiaohui Li
Electronics 2026, 15(3), 693; https://doi.org/10.3390/electronics15030693 - 5 Feb 2026
Viewed by 163
Abstract
In modern operating systems, CPU scheduling policy selection and evaluation still rely mainly on heuristic methods, especially at the single-processor level or the abstract ready-queue level, and there is still a lack of systematic modeling and interpretable analysis for complex workload patterns. Traditional approaches are easy to implement and respond quickly in specific scenarios, but they often fail to remain stable under dynamic workloads and high-dimensional features, which can harm generalization. In this work, we build a simulation dataset that covers five typical scheduling policies, redesign a deep learning framework for scheduling policy identification, and propose the MCFCTransformer-DD model. The model extends the standard Transformer with multiscale convolution, frequency-domain augmentation, and cross-attention to capture both low-frequency and high-frequency signals, learn local and global patterns, and model multivariate dependencies. We also introduce a Dendrite Network, or DD, into scheduling policy identification and decision support for the first time, and its gated dendritic structure provides a more transparent nonlinear decision boundary that reduces the black-box nature of deep models and helps mitigate overfitting. Experiments show that MCFCTransformer-DD achieves 94.50% accuracy, a 94.65% F1 score, and an AUROC of 1.00, which indicates strong policy identification performance and strong potential for decision support. Full article
25 pages, 7527 KB  
Article
Heterogeneous Multi-Domain Dataset Synthesis to Facilitate Privacy and Risk Assessments in Smart City IoT
by Matthew Boeding, Michael Hempel, Hamid Sharif and Juan Lopez, Jr.
Electronics 2026, 15(3), 692; https://doi.org/10.3390/electronics15030692 - 5 Feb 2026
Viewed by 198
Abstract
The emergence of the Smart Cities paradigm and the rapid expansion and integration of Internet of Things (IoT) technologies within this context have created unprecedented opportunities for high-resolution behavioral analytics, urban optimization, and context-aware services. However, this same proliferation intensifies privacy risks, particularly those arising from cross-modal data linkage across heterogeneous sensing platforms. To address these challenges, this paper introduces a comprehensive, statistically grounded framework for generating synthetic, multimodal IoT datasets tailored to Smart City research. The framework produces behaviorally plausible synthetic data suitable for preliminary privacy risk assessment and as a benchmark for future re-identification studies, as well as for evaluating algorithms in mobility modeling, urban informatics, and privacy-enhancing technologies. As part of our approach, we formalize probabilistic methods for synthesizing three heterogeneous and operationally relevant data streams—cellular mobility traces, payment terminal transaction logs, and Smart Retail nutrition records—capturing the behaviors of a large number of synthetically generated urban residents over a 12-week period. The framework integrates spatially explicit merchant selection using K-Dimensional (KD)-tree nearest-neighbor algorithms, temporally correlated anchor-based mobility simulation reflective of daily urban rhythms, and dietary-constraint filtering to preserve ecological validity in consumption patterns. In total, the system generates approximately 116 million mobility pings, 5.4 million transactions, and 1.9 million itemized purchases, yielding a reproducible benchmark for evaluating multimodal analytics, privacy-preserving computation, and secure IoT data-sharing protocols. To show the validity of this dataset, the underlying distributions of these residents were successfully validated against reported distributions in published research. 
We present preliminary uniqueness and cross-modal linkage indicators; comprehensive re-identification benchmarking against specific attack algorithms is planned as future work. This framework can be easily adapted to various scenarios of interest in Smart Cities and other IoT applications. By aligning methodological rigor with the operational needs of Smart City ecosystems, this work fills critical gaps in synthetic data generation for privacy-sensitive domains, including intelligent transportation systems, urban health informatics, and next-generation digital commerce infrastructures. Full article
37 pages, 853 KB  
Review
Quality Assessment of Artificial Intelligence Systems: A Metric-Based Approach
by Oleksandr Gordieiev, Daria Gordieieva, Austen Rainer, Anatoliy Gorbenko and Olga Tarasyuk
Electronics 2026, 15(3), 691; https://doi.org/10.3390/electronics15030691 - 5 Feb 2026
Viewed by 109
Abstract
This paper addresses the growing need for reliable methods to evaluate the quality of artificial intelligence (AI) systems as they become widely used in both critical domains and everyday applications. The study aims to develop a metric-based approach to assessing AI system quality by harmonising product quality and quality in use models in line with updated international standards. To achieve this, the authors analyse existing ISO/IEC 25000 series standards, identifying inconsistencies between older and newer versions, and propose an updated quality model that incorporates both perspectives. Building on guidance documents, international standards, and contemporary research, the study introduces a set of metrics designed to measure new subcharacteristics of AI system quality, particularly where standardised metrics have not yet been developed. The proposed approach bridges the gap between established quality models (ISO/IEC 25010:2023, ISO/IEC 25019:2023, ISO/IEC 25059:2023) and standardised measurement practices (ISO/IEC 25023:2016, ISO/IEC 25022:2016), enabling more consistent and practical evaluation of AI systems. These metrics can be applied by researchers and practitioners to improve the quality of AI systems, enhance their reliability, and reduce risks associated with insufficient quality. Future work will focus on empirical validation of the proposed approach to confirm its applicability and usefulness across diverse AI applications. Full article
(This article belongs to the Section Artificial Intelligence)
25 pages, 4153 KB  
Review
Advances in Battery Technologies for Next-Generation Energy Storage Systems
by Toufik Sebbagh, Theodore Azemtsop Manfo and Mustafa Ergin Şahin
Electronics 2026, 15(3), 690; https://doi.org/10.3390/electronics15030690 - 5 Feb 2026
Viewed by 481
Abstract
Advancements in energy storage systems (ESS) are essential to attaining a sustainable and resilient energy future. Despite significant advancements in battery technologies, including lithium-ion, sodium-ion, and redox flow batteries, numerous problems remain. These include low energy density, thermal instability, resource scarcity, high lifecycle costs, and ineffective recycling methods. Furthermore, the complexity of connecting battery systems to the grid while maintaining operational safety creates further impediments to implementation. Recent advancements, such as hybrid energy storage systems (HESS), better battery chemistries, and intelligent modeling tools based on MATLAB/Simulink R2025b, have shown promise in terms of performance, cost reduction, and more effective energy management. However, the scalability, recyclability, and real-world applicability of these systems require further exploration. The goal here is to provide a comprehensive overview of current and emerging battery technologies, focusing on technical performance, environmental sustainability, lifecycle cost modeling, and grid compatibility. This comprises a techno-economic study that employs process-based cost modeling (PBCM) and levelized cost of storage (LCOS), a thorough examination of green battery chemistries, and system-level modeling of battery and hybrid configurations. The study seeks to provide academics and stakeholders with a comprehensive framework that considers both the innovations and limitations of current ESS technologies in the context of global decarbonization targets. Full article
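Levelized cost of storage (LCOS) divides discounted lifetime costs by discounted lifetime energy discharged; a minimal sketch with illustrative inputs (not figures from the review):

```python
def lcos(capex, opex_per_yr, energy_per_yr, years, rate):
    """LCOS = discounted lifetime cost / discounted energy discharged."""
    disc_cost = capex + sum(opex_per_yr / (1 + rate) ** t
                            for t in range(1, years + 1))
    disc_energy = sum(energy_per_yr / (1 + rate) ** t
                      for t in range(1, years + 1))
    return disc_cost / disc_energy       # e.g. $/kWh discharged

# Illustrative system: $300k CAPEX, $6k/yr OPEX, 400 MWh/yr discharged,
# 10-year life, 5% discount rate.
print(round(lcos(capex=300_000, opex_per_yr=6_000, energy_per_yr=400_000,
                 years=10, rate=0.05), 4))  # ~0.1121 $/kWh
```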
(This article belongs to the Special Issue Energy Saving Management Systems: Challenges and Applications)
25 pages, 5101 KB  
Article
Embodied Visual Perception for Driver Fatigue Monitoring Systems: A Hierarchical Decoupling Framework for Robust Fatigue Detection and Scenario Understanding
by Siyu Chen, Juhua Huang, Yinyin Liu, Saier Ye and Yuqi Bai
Electronics 2026, 15(3), 689; https://doi.org/10.3390/electronics15030689 - 5 Feb 2026
Viewed by 121
Abstract
As intelligent vehicle technologies evolve, reliable driver monitoring systems have become increasingly critical for ensuring the safety of human drivers and operational reliability. This paper proposes a novel visual computing framework for Driver Fatigue Monitoring Systems (DFMSs) based on hierarchical decoupling and scenario element analysis, specifically designed for intelligent transportation environments. By treating the monitoring system as an engineering-level embodied perception–decision system deployed within the vehicle, rather than a purely disembodied vision module, the framework decouples low-level algorithmic perception from application-layer decision logic, enabling a more granular evaluation of visual computing performance in real-world scenarios. We leverage Python 3.9-driven automated test case generation to simulate diverse environmental variables, improving testing efficiency by 50% over traditional manual methods. The system utilizes deep learning-based visual computing to achieve high-fidelity monitoring of eye closure (PERCLOS, EAR), yawning (MAR), and head pose dynamics, enabling real-time assessment of the driver’s state within the embodied system loop. Comparative benchmarking reveals that our framework significantly outperforms existing models in visual understanding accuracy, achieving perfect confidence scores (1.000) for eye closure and smoking behavior detection, while drastically reducing false positives in mobile phone usage detection (misidentification rate: 0.016 vs. 0.805). These results demonstrate that an embodied approach to visual perception enhances the robustness and reliability of driver monitoring systems deployed in real vehicles, providing a scalable pathway for the development of next-generation intelligent transportation safety standards. Full article
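The eye metrics named above (EAR, PERCLOS) have standard definitions in the fatigue-detection literature; a minimal sketch with illustrative landmark values (the closure threshold is an assumption, not the paper's calibration):

```python
import math

def ear(p1, p2, p3, p4, p5, p6):
    """Eye aspect ratio from six eye landmarks:
    (||p2-p6|| + ||p3-p5||) / (2 * ||p1-p4||)."""
    vert = math.dist(p2, p6) + math.dist(p3, p5)
    horiz = 2 * math.dist(p1, p4)
    return vert / horiz

def perclos(ear_series, closed_thresh=0.2):
    """PERCLOS: fraction of frames in which the eye counts as closed."""
    closed = sum(1 for e in ear_series if e < closed_thresh)
    return closed / len(ear_series)

open_eye = ear((0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1))
print(round(open_eye, 3))                  # (2+2)/(2*3) -> 0.667
print(perclos([0.3, 0.1, 0.3, 0.1, 0.1]))  # 3 of 5 frames closed -> 0.6
```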