Electronics, Volume 14, Issue 18 (September-2 2025) – 181 articles

Cover Story: This paper enhances a robotic system for smart manufacturing by integrating a UFactory® xArm 5™ robotic arm with an Intel® RealSense™ D435 camera and the Mask R-CNN algorithm. This fusion of depth sensing and deep learning enables highly accurate object detection, manipulation, and placement, achieving up to 99% manipulation accuracy, with a 20% improvement in success rates when depth data are integrated. The system's resilience is tested under adversarial noise, showing robust performance with only minor reductions in accuracy. Furthermore, the study analyzes cybersecurity threats specific to robotic systems and highlights the importance of adhering to international safety and security standards to ensure reliable industrial automation.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
32 pages, 4771 KB  
Review
Industrial Process Automation Through Machine Learning and OPC-UA: A Systematic Literature Review
by Henry O. Velesaca, Juan A. Holgado-Terriza and Jose M. Gutierrez-Guerrero
Electronics 2025, 14(18), 3749; https://doi.org/10.3390/electronics14183749 - 22 Sep 2025
Viewed by 211
Abstract
This systematic literature review examines the integration of Machine Learning techniques within industrial system architectures that use OPC-UA for process automation. Through an analysis of primary studies published between 2018 and 2024, the review identifies key trends, methodologies, and implementations across various industrial applications. It finds a marked increase in research focused on hybrid architectures that integrate Machine Learning with OPC-UA, particularly in applications such as predictive maintenance and quality control. However, despite reported high accuracy rates (often above 95%) in controlled environments, there is limited evidence on the robustness of these solutions in real-world, large-scale deployments, which highlights the need for further empirical validation and benchmarking in diverse industrial contexts. Implementation patterns range from cloud-based deployments to edge computing solutions, with OPC-UA serving both as a communication protocol and as an information modeling framework, in particular through its finite state machine specification. The review also highlights current challenges and opportunities, providing valuable insights for researchers and practitioners working on intelligent industrial automation. Full article
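As a minimal illustration of the ML-over-OPC-UA integration pattern this review surveys, the sketch below polls a process variable with the open-source FreeOpcUa `opcua` client and feeds a rolling window of samples to a pre-trained scikit-learn model. The endpoint URL, node id, model file, polling interval, and alert convention are placeholders, not taken from any reviewed study.

```python
# Sketch: feed OPC-UA process data to a pre-trained ML model (all identifiers are
# illustrative placeholders, not values from any study covered by the review).
import time
from collections import deque

import joblib                      # loads a pre-trained scikit-learn model
from opcua import Client           # FreeOpcUa synchronous client

ENDPOINT = "opc.tcp://192.0.2.10:4840"   # placeholder OPC-UA server address
NODE_ID = "ns=2;i=1001"                  # placeholder node, e.g. spindle vibration
WINDOW = 32                              # samples per prediction

model = joblib.load("predictive_maintenance.joblib")  # hypothetical model file
window = deque(maxlen=WINDOW)

client = Client(ENDPOINT)
client.connect()
try:
    node = client.get_node(NODE_ID)
    for _ in range(600):                          # poll for ~10 minutes
        window.append(node.get_value())           # read the process variable
        if len(window) == WINDOW:
            # assumed convention: 1 = anomalous window -> flag for maintenance
            flag = model.predict([list(window)])[0]
            print("maintenance alert" if flag == 1 else "normal", flush=True)
        time.sleep(1.0)
finally:
    client.disconnect()
```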

18 pages, 7516 KB  
Article
Flexible Control Bandwidth Design-Oriented Stability Enhancement Method for Multi-Machine Grid-Connected Inverter Interconnected Systems
by Runkun Zhan, Jiayuan Gao, Shuai Huang, Binyan Li, Fei Jiang, Wanli Yang and Qianhong Wu
Electronics 2025, 14(18), 3748; https://doi.org/10.3390/electronics14183748 - 22 Sep 2025
Viewed by 218
Abstract
Under the high penetration of new energy, multi-machine interconnected grid-connected inverters (GCIs) are prone to losing stability due to their interaction with the power grid. To enhance their adaptability to the grid, a stability improvement method for multi-machine interconnected GCI systems with flexible control bandwidth design is proposed. First, based on the design principles of the converter control bandwidth, the transfer function models of the voltage outer loop and current inner loop are constructed, and inner- and outer-loop control bandwidths that meet the requirements are designed. Secondly, to mitigate the adverse effects caused by the interaction of control bandwidths among multi-machine interconnected inverters, the influence mechanism of the inner- and outer-loop bandwidths on the stability of the multi-machine interconnection system is analyzed under different bandwidth ratios, and a flexible control bandwidth allocation scheme for the different inverters in the multi-machine interconnection scenario is designed, thereby enhancing the stability of the multi-machine interconnected GCIs through differentiated control bandwidth design. Unlike methods that add extra control loops, the proposed method does not require sampling new physical variables or modifying the control structure; it only requires adjusting the controller parameters of the multi-machine interconnection system, specifically an optimized distribution of the control bandwidths, to enhance system stability. Finally, simulation results are presented to verify the correctness and effectiveness of the proposed control method. Full article
(This article belongs to the Special Issue Intelligent Control Strategies for Power Electronics)

19 pages, 1934 KB  
Article
XGBoost-Based Very Short-Term Load Forecasting Using Day-Ahead Load Forecasting Results
by Kyung-Min Song, Tae-Geun Kim, Seung-Min Cho, Kyung-Bin Song and Sung-Guk Yoon
Electronics 2025, 14(18), 3747; https://doi.org/10.3390/electronics14183747 - 22 Sep 2025
Viewed by 168
Abstract
Accurate very short-term load forecasting (VSTLF) is critical to ensure the secure operation of power systems under increasing uncertainty due to renewables. This study proposes an eXtreme Gradient Boosting (XGBoost)-based VSTLF model that incorporates day-ahead load forecast (DALF) results and load variation features. The DALF results provide trend information for the target time, while the load variation, i.e., the difference between successive historical loads, captures residual patterns. A load reconstitution method is also adapted to mitigate the forecasting uncertainty caused by behind-the-meter (BTM) photovoltaic (PV) generation. Input features for the proposed VSTLF model are selected using Kendall's tau correlation coefficient and a feature importance score to remove irrelevant variables. A case study with real data from the Korean power system confirms the proposed model's high forecasting accuracy and robustness. Full article
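To make the feature design concrete, the sketch below builds DALF and load-variation features on a synthetic 15-minute load series and fits an XGBoost regressor for a one-step-ahead forecast. The column names, lags, and hyperparameters are assumptions, not the paper's configuration, and the Kendall's-tau and importance-based feature selection steps are omitted.

```python
# Sketch: DALF + load-variation features feeding an XGBoost VSTLF regressor.
# All column names, lags, and hyperparameters are illustrative assumptions.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

# Synthetic one-week history: 15-min load with a daily cycle plus a day-ahead forecast.
idx = pd.date_range("2025-01-01", periods=7 * 96, freq="15min")
rng = np.random.default_rng(0)
load = 100 + 20 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 2, len(idx))
dalf = load + rng.normal(0, 3, len(idx))              # imperfect day-ahead forecast
df = pd.DataFrame({"load": load, "dalf": dalf}, index=idx)

feats = pd.DataFrame(index=df.index)
feats["dalf"] = df["dalf"]                            # trend information for the target time
for lag in (1, 2, 3, 4):
    feats[f"load_lag{lag}"] = df["load"].shift(lag)   # recent actual loads
feats["load_var"] = df["load"].shift(1) - df["load"].shift(2)  # load variation feature
feats["target"] = df["load"]
feats = feats.dropna()

X, y = feats.drop(columns="target"), feats["target"]
model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X[:-1], y[:-1])                             # hold out the last step
print("next-step forecast:", model.predict(X.tail(1))[0])
```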

39 pages, 2251 KB  
Article
Real-Time Phishing Detection for Brand Protection Using Temporal Convolutional Network-Driven URL Sequence Modeling
by Marie-Laure E. Alorvor and Sajjad Dadkhah
Electronics 2025, 14(18), 3746; https://doi.org/10.3390/electronics14183746 - 22 Sep 2025
Viewed by 216
Abstract
Phishing, especially brand impersonation attacks, is a critical cybersecurity threat that harms user trust and organizational security. This paper establishes a lightweight model for real-time detection that relies on URL-only sequences, addressing the limitations of multimodal methods that leverage HTML, images, or metadata. The approach is based on a Temporal Convolutional Network with Attention (TCNWithAttention) that processes character-level URLs to capture both local and long-range dependencies, while providing interpretability through attention visualization and Shapley additive explanations (SHAP). The model was trained and tested on the balanced GramBeddings dataset (800,000 URLs) and validated on the PhiUSIIL dataset of real-world phishing URLs. It achieved 97.54% accuracy on the GramBeddings dataset and 81% recall on the PhiUSIIL dataset, demonstrated strong generalization, fast inference, and CPU-only deployability, and outperformed CNN, BiLSTM, and BERT baselines. Explanations highlighted phishing indicators such as deceptive subdomains, brand impersonation, and suspicious tokens, and also confirmed genuine patterns in legitimate domains. A Streamlit application facilitates single and batch URL analysis and logs user feedback to maintain usability; to our knowledge, this is the first phishing detection framework to integrate TCN, attention, and SHAP, bridging academic innovation with practical cybersecurity practice. Full article
(This article belongs to the Special Issue Emerging Technologies for Network Security and Anomaly Detection)
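As a rough illustration of the architecture named in the abstract, the sketch below builds a character-level TCN-with-attention binary classifier in PyTorch. The vocabulary size, channel widths, dilation schedule, and maximum URL length are assumptions, the model is untrained, and it is not the authors' released implementation.

```python
# Sketch of a character-level TCN-with-attention URL classifier (PyTorch).
# Vocabulary size, channel widths, dilations, and max length are assumptions.
import torch
import torch.nn as nn

class TCNWithAttention(nn.Module):
    def __init__(self, vocab_size=128, emb=32, channels=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        # Dilated 1-D convolutions capture local and longer-range character n-grams.
        self.tcn = nn.Sequential(
            nn.Conv1d(emb, channels, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
        )
        self.attn = nn.Linear(channels, 1)       # additive attention over positions
        self.out = nn.Linear(channels, 1)        # phishing / legitimate logit

    def forward(self, char_ids):                 # char_ids: (batch, seq_len) int64
        h = self.tcn(self.embed(char_ids).transpose(1, 2)).transpose(1, 2)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # per-character weights
        ctx = (w.unsqueeze(-1) * h).sum(dim=1)               # weighted context vector
        return self.out(ctx).squeeze(-1), w                  # logit + weights for plots

def encode(url: str, max_len: int = 200) -> torch.Tensor:
    ids = [min(ord(c), 127) for c in url[:max_len]]
    return torch.tensor(ids + [0] * (max_len - len(ids))).unsqueeze(0)

model = TCNWithAttention()
logit, weights = model(encode("http://paypa1-login.example.com/verify"))
print(torch.sigmoid(logit))   # untrained score; train with BCEWithLogitsLoss
```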

19 pages, 1196 KB  
Article
Multi-Sensor Fractional Order Information Fusion Suboptimal Filter with Time Delay
by Tianyi Li, Liang Chen, Yanfeng Zhu, Guanran Wang and Xiaojun Sun
Electronics 2025, 14(18), 3745; https://doi.org/10.3390/electronics14183745 - 22 Sep 2025
Viewed by 96
Abstract
A distributed weighted fusion fractional order filter is proposed for multi-sensor, multi-delay fractional order systems. Firstly, the time-delay system is transformed into a non-time-delay system using the state augmentation method, and the optimal augmented fractional Kalman filter is derived. Secondly, to reduce the computational burden, a suboptimal fractional order Kalman filter is presented; compared with the optimal augmented method, it greatly reduces the computational complexity, which is convenient for real-time applications. Then, to derive the weighting coefficients for distributed fusion, the formula for the filtering error cross-covariance matrix between any two sensor subsystems is derived. Finally, the distributed weighted fusion fractional order filter is presented. It is locally optimal and globally suboptimal: compared with each local filter it has higher accuracy, while compared with the centralized fusion filter it has lower accuracy but greater fault tolerance, making it more suitable for practical applications. Simulation results verify the effectiveness of the proposed algorithm. Full article
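For orientation, the weighting step behind such distributed fusion is usually the classical matrix-weighted fusion rule in the linear minimum-variance sense, which relies on exactly the pairwise filtering error cross-covariances the abstract mentions. The notation below is ours and is given only as a reference formula, not as a reproduction of the paper's derivation.

```latex
% Matrix-weighted distributed fusion in the linear minimum-variance sense
% (illustrative notation): \hat{x}_i are local estimates, P_{ij} their error
% (cross-)covariances, L the number of sensor subsystems.
\hat{x}_{o} \;=\; \sum_{i=1}^{L} A_i \hat{x}_i ,
\qquad \sum_{i=1}^{L} A_i = I ,
\qquad
\begin{bmatrix} A_1 & A_2 & \cdots & A_L \end{bmatrix}
 \;=\; \bigl(e^{\mathsf T}\Sigma^{-1}e\bigr)^{-1} e^{\mathsf T}\Sigma^{-1},
\\[4pt]
\Sigma \;=\; \bigl[P_{ij}\bigr]_{i,j=1}^{L},
\qquad
e \;=\; \begin{bmatrix} I & I & \cdots & I \end{bmatrix}^{\mathsf T},
\qquad
P_{o} \;=\; \bigl(e^{\mathsf T}\Sigma^{-1}e\bigr)^{-1} \preceq P_{ii}.
```

Here the fused covariance P_o is never worse than any local covariance P_ii, which matches the "locally optimal, globally suboptimal" characterization above.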

65 pages, 2973 KB  
Systematic Review
Machine Learning and Neural Networks for Phishing Detection: A Systematic Review (2017–2024)
by Jacek Lukasz Wilk-Jakubowski, Lukasz Pawlik, Grzegorz Wilk-Jakubowski and Aleksandra Sikora
Electronics 2025, 14(18), 3744; https://doi.org/10.3390/electronics14183744 - 22 Sep 2025
Viewed by 184
Abstract
Phishing remains a persistent and evolving cyber threat, constantly adapting its tactics to bypass traditional security measures. The advent of Machine Learning (ML) and Neural Networks (NN) has significantly enhanced the capabilities of automated phishing detection systems. This comprehensive review systematically examines the landscape of ML- and NN-based approaches for identifying and mitigating phishing attacks. Our analysis, based on a rigorous search methodology, focuses on articles published between 2017 and 2024 across relevant subject areas in computer science and mathematics. We categorize existing research by phishing delivery channels, including websites, electronic mail, social networking, and malware. Furthermore, we delve into the specific machine learning models and techniques employed, such as various algorithms, classification and ensemble methods, neural network architectures (including deep learning), and feature engineering strategies. This review provides insights into the prevailing research trends, identifies key challenges, and highlights promising future directions in the application of machine learning and neural networks for robust phishing detection. Full article

41 pages, 2098 KB  
Review
Learning-Based Viewport Prediction for 360-Degree Videos: A Review
by Mahmoud Z. A. Wahba, Sara Baldoni and Federica Battisti
Electronics 2025, 14(18), 3743; https://doi.org/10.3390/electronics14183743 - 22 Sep 2025
Viewed by 277
Abstract
Nowadays, virtual reality is experiencing widespread adoption, and its popularity is expected to grow in the next few decades. A relevant portion of virtual reality content is represented by 360-degree videos, which allow users to be surrounded by the video content and to explore it without limitations. However, 360-degree videos are extremely demanding in terms of storage and streaming requirements. At the same time, users are not able to enjoy the 360-degree content all at once due to the inherent limitations of the human visual system. For this reason, viewport prediction techniques have been proposed: they aim at forecasting where the user will look, thus allowing the transmission of the sole viewport content or the assignment of a different quality level for viewport and non-viewport regions. In this context, artificial intelligence plays a pivotal role in the development of high-performance viewport prediction solutions. In this work, we analyze the evolution of viewport prediction based on machine and deep learning techniques in the last decade, focusing on their classification based on the employed processing technique, as well as the input and output formats. Our review shows common gaps in the existing approaches, thus paving the way for future research. An increase in viewport prediction accuracy and reliability will foster the diffusion of virtual reality content in real-life scenarios. Full article
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)

25 pages, 562 KB  
Article
VeriFlow: A Framework for the Static Verification of Web Application Access Control via Policy-Graph Consistency
by Tao Zhang, Fuzhong Hao, Yunfan Wang, Bo Zhang and Guangwei Xie
Electronics 2025, 14(18), 3742; https://doi.org/10.3390/electronics14183742 - 22 Sep 2025
Viewed by 233
Abstract
The evolution of industrial automation toward Industry 3.0 and 4.0 has driven the emergence of Industrial Edge-Cloud Platforms, which increasingly depend on web interfaces for managing and monitoring critical operational technology. This convergence introduces significant security risks, particularly from Broken Access Control (BAC)—a vulnerability consistently ranked as the top web application risk by the Open Web Application Security Project (OWASP). BAC flaws in industrial contexts can lead not only to data breaches but also to disruptions of physical processes. To address this urgent need for robust web-layer defense, this paper presents VeriFlow, a static verification framework for access control in web applications. VeriFlow reformulates access control verification as a consistency problem between two core artifacts: (1) a Formal Access Control Policy (P), which declaratively defines intended permissions, and (2) a Navigational Graph, which models all user-driven UI state transitions. By annotating the graph with policy P, VeriFlow verifies a novel Path-Permission Safety property, ensuring that no sequence of legitimate UI interactions can lead a user from an authorized state to an unauthorized one. A key technical contribution is a static analysis method capable of extracting navigational graphs directly from the JavaScript bundles of Single-Page Applications (SPAs), circumventing the limitations of traditional dynamic crawlers. In empirical evaluations, VeriFlow outperformed baseline tools in vulnerability detection, demonstrating its potential to deliver strong security guarantees that are provable within its abstracted navigational model. By formally checking policy-graph consistency, it systematically addresses a class of vulnerabilities often missed by dynamic tools, though its effectiveness is subject to the model-reality gap inherent in static analysis. Full article
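The Path-Permission Safety property can be pictured as a reachability query over the annotated navigational graph: starting from any UI state a role may legitimately occupy, no modeled transition sequence should enter a state the policy denies. The toy graph, policy, and role names below are invented for illustration and are not drawn from the paper's evaluation.

```python
# Sketch of a path-permission safety check: from an authorized start state, flag any
# modeled UI transition that reaches a state the policy forbids for the given role.
# The navigational graph, policy, and role names are illustrative only.
from collections import deque

# Navigational graph: UI state -> reachable UI states (user-driven transitions).
nav_graph = {
    "login": ["dashboard"],
    "dashboard": ["profile", "admin_panel"],   # a stray link to an admin-only view
    "profile": ["dashboard"],
    "admin_panel": ["user_management"],
    "user_management": [],
}

# Formal access control policy P: which roles are permitted in which states.
policy = {
    "login": {"viewer", "admin"},
    "dashboard": {"viewer", "admin"},
    "profile": {"viewer", "admin"},
    "admin_panel": {"admin"},
    "user_management": {"admin"},
}

def path_permission_violations(role: str, start: str):
    """Return forbidden states reachable from an authorized start state."""
    seen, queue, violations = {start}, deque([start]), []
    while queue:
        state = queue.popleft()
        for nxt in nav_graph.get(state, []):
            if nxt in seen:
                continue
            seen.add(nxt)
            if role not in policy.get(nxt, set()):
                violations.append((state, nxt))    # offending transition
            else:
                queue.append(nxt)                  # keep exploring allowed states only
    return violations

print(path_permission_violations("viewer", "dashboard"))
# -> [('dashboard', 'admin_panel')]: a viewer can navigate into an admin-only state.
```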

24 pages, 1518 KB  
Article
Smart Matter-Enabled Air Vents for Trombe Wall Automation and Control
by Gabriel Conceição, Tiago Coelho, Afonso Mota, Ana Briga-Sá and António Valente
Electronics 2025, 14(18), 3741; https://doi.org/10.3390/electronics14183741 - 22 Sep 2025
Viewed by 246
Abstract
Improving energy efficiency in buildings is critical for supporting sustainable growth in the construction sector. In this context, the implementation of passive solar solutions in the building envelope plays an important role. The Trombe wall is a passive solar system with great potential for passive solar heating, and its performance can be further enhanced when Internet of Things technology is applied. This study employs a multi-domain smart system based on Matter-enabled IoT technology to maximize Trombe wall functionality using purpose-designed 3D-printed ventilation grids. The system includes ESP32-C6 microcontrollers with temperature sensors and ventilation grids actuated by servo motors, and it is automated by a Raspberry Pi 5 running Home Assistant OS with the Matter Server. The integration of the Matter protocol provides end-to-end interoperability and secure communication, avoiding traditional MQTT-based setups. This work demonstrates the technical feasibility of implementing smart ventilation control for Trombe walls on a Matter-enabled infrastructure, and the system proves capable of executing real-time vent management based on predefined temperature thresholds. This setup lays the foundation for scalable and interoperable thermal automation in passive solar systems, paving the way for future optimizations and additional implementations aimed at improving indoor thermal comfort in smart and more efficient buildings. Full article
(This article belongs to the Special Issue Parallel and Distributed Computing for Emerging Applications)
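A minimal sketch of the kind of threshold rule the vents execute is shown below: open the grids when the cavity air is sufficiently warmer than the room, close them when the difference collapses, with a hysteresis band to avoid servo chatter. The thresholds and the controller interface are assumptions, not the firmware or Home Assistant automation described in the paper.

```python
# Sketch of threshold-based Trombe wall vent control with hysteresis.
# Temperature thresholds and the actuator interface are illustrative assumptions.
from dataclasses import dataclass

OPEN_DELTA_C = 3.0    # open when the cavity is at least 3 °C warmer than the room
CLOSE_DELTA_C = 1.0   # close again below a 1 °C difference (hysteresis band)

@dataclass
class VentState:
    is_open: bool = False

def control_vent(cavity_temp_c: float, room_temp_c: float, vent: VentState) -> VentState:
    """Decide whether the servo-driven grid should be open for this sample."""
    delta = cavity_temp_c - room_temp_c
    if not vent.is_open and delta >= OPEN_DELTA_C:
        vent.is_open = True      # e.g., command the servo endpoint to the open angle
    elif vent.is_open and delta <= CLOSE_DELTA_C:
        vent.is_open = False     # command the servo back to the closed position
    return vent

vent = VentState()
for cavity, room in [(21.0, 20.0), (24.5, 20.0), (22.0, 21.5)]:
    vent = control_vent(cavity, room, vent)
    print(f"cavity={cavity} °C, room={room} °C -> vent open: {vent.is_open}")
```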

29 pages, 2989 KB  
Article
Design and Validation of an IoT-Integrated Fuzzy Logic Controller for High-Altitude NFT Hydroponic Systems: A Case Study in Cusco, Peru
by Julio C. Escalante-Mamani, Erwin J. Sacoto-Cabrera, Roger Jesus Coaquira-Castillo, L. Walter Utrilla Mego, Julio Cesar Herrera-Levano, Yesenia Concha-Ramos and Edison Moreno-Cardenas
Electronics 2025, 14(18), 3740; https://doi.org/10.3390/electronics14183740 - 22 Sep 2025
Viewed by 269
Abstract
Hydroponics in recirculation systems faces significant challenges in regulating critical parameters, such as pH and electrical conductivity (EC), especially in adverse environmental conditions, such as high altitudes. This paper presents the design and validation of a fuzzy controller integrated with IoT for NFT-type hydroponic systems, implemented on low-cost hardware and tested in the city of Cusco (3339 m.a.s.l.). Unlike previous studies that are limited to simulations or laboratory tests, the proposal was validated under real growing conditions, demonstrating its practical viability. The system incorporates a fuzzy controller based on simple rules, an IoT module with ESP32 for remote monitoring via Blynk, and an accessible and replicable architecture. The results demonstrate stable performance in pH and EC regulation, with adequate response times, minimal overshoot, and reduced errors, achieving levels comparable to those of higher-cost commercial solutions. The main contribution of this study is the demonstration that an intelligent, economical, and replicable system can be applied in agricultural environments with limited resources, offering a viable alternative for improving productivity in high-altitude hydroponic systems. Full article
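To make the control idea concrete, the sketch below hand-rolls a tiny Sugeno-style fuzzy rule base for pH correction using triangular membership functions and weighted-average defuzzification. The membership breakpoints, rule table, and dosing scale are illustrative assumptions rather than the tuned values used in the Cusco deployment.

```python
# Sketch of a simple fuzzy pH correction rule base (Sugeno-style defuzzification).
# Breakpoints, rules, and dosing times are illustrative assumptions only.
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ph_dose_seconds(ph: float) -> float:
    """Return pump-on time: positive = dose base (pH up), negative = dose acid."""
    low = tri(ph, 4.0, 5.0, 5.8)      # pH too low
    ok = tri(ph, 5.5, 6.0, 6.5)       # pH in an assumed target band for NFT crops
    high = tri(ph, 6.2, 7.0, 8.0)     # pH too high
    # Rule consequents as singletons: dose base, do nothing, dose acid.
    actions = {"base": 3.0, "none": 0.0, "acid": -3.0}
    num = low * actions["base"] + ok * actions["none"] + high * actions["acid"]
    den = low + ok + high
    return num / den if den else 0.0   # weighted-average defuzzification

for ph in (5.2, 6.0, 6.9):
    print(f"pH {ph} -> dose {ph_dose_seconds(ph):+.2f} s")
```

The same pattern extends to EC correction by adding a second rule base for nutrient dosing.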

36 pages, 6788 KB  
Article
Performing-Arts-Based ICH-Driven Interaction Design Framework for Rehabilitation Game
by Jing Zhao, Xinran Zhang, Yiming Ma, Yi Liu, Siyu Huo, Xiaotong Mu, Qian Xiao and Yuhong Han
Electronics 2025, 14(18), 3739; https://doi.org/10.3390/electronics14183739 - 22 Sep 2025
Viewed by 244
Abstract
The lack of deep engagement strategies that include cultural contextualization in current rehabilitation game design can result in limited user motivation and low adherence in long-term rehabilitation. Integrating cultural semantics into interactive rehabilitation design offers new opportunities to enhance user engagement and emotional resonance in digital rehabilitation therapy, especially at a deeper level than visual theming alone. This study introduces a framework comprising a "Rehabilitation Mechanism–Interaction Design–Cultural Feature" triadic mapping model and a structured design procedure. Following the framework, a hand function rehabilitation game is designed based on Chinese string puppetry, as well as body rehabilitation games based on shadow puppetry and Tai Chi. The hand rehabilitation game uses Leap Motion for gesture-based input and Unity3D for real-time visual feedback and task execution. Functional training gestures such as grasping, wrist rotation, and pinching are mapped to culturally meaningful puppet actions within the game. Through task-oriented engagement and narrative immersion, the design improves cognitive accessibility, emotional motivation, and sustained participation. Evaluations were conducted with rehabilitation professionals and target users. The results demonstrate that the system is promising for integrating motor function training with emotional engagement, validating the feasibility of the proposed triadic mapping framework in rehabilitation game design. This study provides a replicable design strategy for human–computer interaction (HCI) researchers working at the intersection of healthcare, cultural heritage, and interactive media. Full article
(This article belongs to the Special Issue Innovative Designs in Human–Computer Interaction)
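The triadic mapping can be read, at its simplest, as a dispatch table from recognized therapeutic gestures to culturally meaningful puppet animations. The sketch below shows that layer with placeholder gesture labels and animation names; the Unity3D call is stubbed out, and none of the identifiers come from the paper's implementation.

```python
# Sketch of the "rehabilitation gesture -> cultural puppet action" dispatch layer.
# Gesture labels, animation names, and the engine call are illustrative placeholders.
from typing import Callable, Dict

def play_animation(name: str) -> None:
    print(f"trigger puppet animation: {name}")   # stand-in for a game-engine call

# Rehabilitation Mechanism -> Interaction Design -> Cultural Feature mapping.
GESTURE_TO_ACTION: Dict[str, Callable[[], None]] = {
    "grasp":        lambda: play_animation("puppet_lifts_prop"),     # grip training
    "wrist_rotate": lambda: play_animation("puppet_turns_head"),     # pronation/supination
    "pinch":        lambda: play_animation("puppet_plucks_string"),  # fine motor control
}

def on_gesture(label: str) -> None:
    action = GESTURE_TO_ACTION.get(label)
    if action is None:
        return            # ignore gestures outside the therapy protocol
    action()

for g in ("grasp", "pinch", "wave"):
    on_gesture(g)
```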

11 pages, 617 KB  
Article
An Explainable AI Framework for Online Diabetes Risk Prediction with a Personalized Chatbot Assistant
by Ehesan Maimaitijiang, Muyesaier Aihaiti and Yasin Mamatjan
Electronics 2025, 14(18), 3738; https://doi.org/10.3390/electronics14183738 - 22 Sep 2025
Viewed by 297
Abstract
Background and Objective: Diabetes is a prevalent chronic disease that presents considerable health risks, making prompt diagnosis and treatment essential to avert complications. Traditional Artificial Intelligence (AI) models for diabetes prediction often operate as black boxes, and this lack of interpretability limits their effectiveness in clinical use cases. We introduce a novel online recommendation framework using explainable AI (XAI) to predict type II diabetes risk and provide clear, actionable analyses with a personalized chatbot assistant. Methods: We built the model with the CatBoost classifier and SHapley Additive exPlanations (SHAP), chosen for their accurate predictions and interpretable explanations, and analyzed 16 individual risk factors from a dataset of 520 patients. We applied the Synthetic Minority Over-sampling Technique (SMOTE) to reduce the effect of data imbalance. We also developed an interactive interface that allows users to input data, visualize personalized risk profiles, and understand the driving factors behind predictions. Finally, large language models (LLMs) were integrated into the interface to deliver patient-specific recommendations for improving health and lifestyle through a personalized chatbot assistant. Results: The model demonstrated strong predictive performance, with an Area Under the ROC Curve (AUC) of 0.99, a Cohen's Kappa score of 0.978, and an F1 score of 0.99. For the minority class, applying SMOTE improved performance metrics, resulting in an AUC of 0.98 and an F1 score of 0.91 for female patients. Conclusions: This study proposes an explainable AI framework for predicting diabetes risk online and providing patient-specific advice through a personalized chatbot assistant, which will help to facilitate better decision-making and improved management of diabetes risk. Full article
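A compact sketch of the modeling pipeline described above (SMOTE balancing, CatBoost classification, SHAP attributions) is given below. The synthetic data, feature count, and hyperparameters are assumptions for illustration and do not reproduce the paper's 520-patient dataset or reported metrics.

```python
# Sketch: SMOTE-balanced training, CatBoost classification, SHAP explanations.
# Data here is synthetic; feature names and hyperparameters are assumptions.
import numpy as np
from catboost import CatBoostClassifier
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
import shap

rng = np.random.default_rng(1)
X = rng.normal(size=(520, 16))                       # 16 risk factors (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 520) > 1.2).astype(int)  # imbalanced label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # rebalance classes

model = CatBoostClassifier(iterations=300, depth=6, verbose=False)
model.fit(X_bal, y_bal)
print("test accuracy:", model.score(X_te, y_te))

explainer = shap.TreeExplainer(model)                # per-patient risk attributions
shap_values = explainer.shap_values(X_te)
print("SHAP matrix shape:", np.shape(shap_values))
```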

14 pages, 769 KB  
Article
A Novel Low-Power Ternary 6T SRAM Design Using XNOR-Based CIM Architecture in Advanced FinFET Technologies
by Adnan A. Patel, Sohan Sai Dasaraju, Achyuth Gundrapally and Kyuwon Ken Choi
Electronics 2025, 14(18), 3737; https://doi.org/10.3390/electronics14183737 - 22 Sep 2025
Viewed by 290
Abstract
The increasing demand for high-performance and low-power hardware in artificial intelligence (AI) applications—such as speech recognition, facial recognition, and object detection—has driven the exploration of advanced memory designs. Convolutional neural networks (CNNs) and deep neural networks (DNNs) require intensive computational resources, leading to significant challenges in terms of memory access time and power consumption. Compute-in-Memory (CIM) architectures have emerged as an alternative by executing computations directly within memory arrays, thereby reducing the expensive data transfer between memory and processor units. In this work, we present a 6T SRAM-based CIM architecture implemented using FinFET technology, aiming to reduce both power consumption and access delay. We explore and simulate three different SRAM cell structures—PLNA (P-Latch N-Access), NLPA (N-Latch P-Access), and SE (Single-Ended)—to assess their suitability for CIM operations. Compared to a reference 10T XNOR-based CIM design, our results show that the proposed structures achieve an average power consumption approximately 70% lower, along with significant delay reduction, without compromising functional integrity. A comparative analysis is presented to highlight the trade-offs between the three configurations, providing insights into their potential applications in low-power AI accelerator design. Full article

19 pages, 3275 KB  
Article
Design and Analysis of Compact K/Ka-Band CMOS Four-Way Power Splitters for K/Ka-Band LEO Satellite Communications and 28/39 GHz 5G NR
by Yo-Sheng Lin and Chin-Yi Huang
Electronics 2025, 14(18), 3736; https://doi.org/10.3390/electronics14183736 - 21 Sep 2025
Viewed by 210
Abstract
We present the design and analysis of three CMOS 4-way power splitters operating in the K/Ka-band (18–27 GHz/27–40 GHz) for low Earth orbit (LEO) satellite communications and 26.5–29.5/37–40 GHz 5G radio applications. The first power splitter (PS1) consists of a two-way power splitter using circular double-helical transmission lines (DH-TLs) cascaded with two two-way power splitters using noninverting circular sole-helical coupled-TL (SH-CL). The second power splitter (PS2) consists of a two-way power splitter using circular DH-TLs cascaded with two two-way power splitters using inverting circular SH-CL. The third power splitter (PS3) consists of three two-way power splitters using DH-TLs. For each two-way power splitter, a parallel input capacitor is included to satisfy the requirement for two equivalent quarter-wavelength (λ/4) TLs, ensuring a low input reflection coefficient. λ/10-DH-TL-based-double-λ/4-TLs, λ/12-noninverting-SH-CL-based-double-λ/4-TLs, and λ/9-inverting-SH-CL-based-double-λ/4-TLs are utilized to attain compact chip size and low amplitude inequality (AI) and phase deviation (PD). Prominent results are attained. For instance, the chip size of PS1 is 0.057 mm2. At 33 GHz, PS1 attains S11 of −16 dB, S22 of −21.2 dB, S33 of −19.7 dB, S23 of −15.3 dB, S21 of −7.862 dB, S31 of −7.803 dB, AI23 of −0.059 dB, and PD23 of 0.197°. The chip size of PS2 is 0.071 mm2. At 33 GHz, PS2 attains S11 of −13.5 dB, S22 of −16.1 dB, S33 of −16.7 dB, S23 of −34.8 dB, S21 of −8.1 dB, S31 of −8.146 dB, AI23 of 0.046 dB, and PD23 of −0.581°. To the authors’ knowledge, the overall performance of PS1, PS2, and PS3 ranks among the best published in the literature for K- and Ka-band four-way power splitters. Full article

18 pages, 30087 KB  
Article
ChatCAS: A Multimodal Ceramic Multi-Agent Studio for Consultation, Image Analysis and Generation
by Yongyi Han, Diandong Liu, Yi Ren, Zepeng Lei, Lianshan Sun and Jinping Li
Electronics 2025, 14(18), 3735; https://doi.org/10.3390/electronics14183735 - 21 Sep 2025
Viewed by 269
Abstract
Many traditional ceramic techniques are inscribed on UNESCO’s Intangible Cultural Heritage lists; yet, expert scarcity, long training cycles, and stylistic homogenization impede intergenerational transmission and innovation. Although large language models offer new opportunities, research tailored to ceramics remains limited. To address this gap, we first construct EvalCera, the first open-source domain large language model evaluation dataset for ceramic knowledge, image analysis, and generation, and conduct large-scale assessments of existing general large language models on ceramic tasks, revealing their limitations. We then release the first ceramics-focused training corpus for large language models and, using it, develop CeramicGPT, the first domain-specific large language model for ceramics. Finally, we built ChatCAS, a workflow multi-agent system built on CeramicGPT and GPT-4o. Experiments show that our model and agents achieve the best performance on EvalCera (A) and (B) text tasks as well as (C) image generation. The code is publicly available. Full article

51 pages, 1073 KB  
Review
A Review of Click-Through Rate Prediction Using Deep Learning
by Shuaa Alotaibi and Bandar Alotaibi
Electronics 2025, 14(18), 3734; https://doi.org/10.3390/electronics14183734 - 21 Sep 2025
Viewed by 220
Abstract
Online advertising is vital for reaching target audiences and promoting products. In 2020, US online advertising revenue increased by 12.2% to $139.8 billion. The industry is projected to reach $487.32 billion by 2030. Artificial intelligence has improved click-through rates (CTR), enabling personalized advertising content by analyzing user behavior and providing real-time predictions. This review examines the latest CTR prediction solutions, particularly those based on deep learning, over the past three years. This timeframe was chosen because CTR prediction has rapidly advanced in recent years, particularly with transformer architectures, multimodal fusion techniques, and industrial applications. By focusing on the last three years, the review highlights the most relevant developments not covered in earlier surveys. This review classifies CTR prediction methods into two main categories: CTR prediction techniques employing text and CTR prediction approaches utilizing multivariate data. The methods that use multivariate data to predict CTR are further categorized into four classes: graph-based methods, feature-interaction-based techniques, customer-behavior approaches, and cross-domain methods. The review also outlines current challenges and future research opportunities. The review highlights that graph-based and multimodal methods currently dominate state-of-the-art CTR prediction, while feature-interaction and cross-domain approaches provide complementary strengths. These key takeaways frame open challenges and emerging research directions. Full article

15 pages, 1789 KB  
Article
Averaging-Based Method for Real-Time Estimation of Voltage Effective Value in Grid-Connected Inverters
by Byunggyu Yu
Electronics 2025, 14(18), 3733; https://doi.org/10.3390/electronics14183733 - 21 Sep 2025
Viewed by 200
Abstract
Accurate and timely estimation of the root-mean-square (RMS) voltage is essential for grid-connected inverter systems, where it underpins reference generation, synchronization, and protection functions. Conventional RMS estimation methods, based on squaring, averaging, and taking the square root of values over full-cycle windows, achieve high accuracy but incur significant latency and computational overhead, thereby limiting their suitability for real-time control. Frequency-domain approaches, such as the FFT or wavelet analysis, offer harmonic decomposition but are too complex for cost-sensitive embedded controllers. To address these challenges, this paper proposes an averaging-based RMS estimation method that exploits the proportionality between the mean absolute value of a sinusoidal waveform and its RMS. The method computes a moving average of the absolute voltage over a half-cycle window synchronized to the phase-locked loop (PLL) frequency, followed by a fixed scaling factor. This recursive implementation reduces the computational burden to a few arithmetic operations per sample while maintaining synchronization with off-nominal frequencies. Time-domain simulations under nominal (60 Hz) and deviated frequencies (57 Hz and 63 Hz) demonstrate that the proposed estimator achieves steady-state accuracy comparable to that of conventional and adaptive methods but converges within a half-cycle, thereby reducing latency by nearly 50%. These results confirm the method's suitability for fast, reliable, and resource-efficient real-time inverter control in modern distribution grids. To provide a comprehensive evaluation, the paper first reviews conventional RMS estimation methods and their inherent limitations, followed by a detailed presentation of the proposed averaging-based approach. Simulation results under both nominal and off-nominal frequency conditions are then presented, along with a comparative analysis highlighting the advantages of the proposed method. Full article
(This article belongs to the Special Issue Optimal Integration of Energy Storage and Conversion in Smart Grids)
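The estimator rests on a standard identity for sinusoids: the mean absolute value of a sine with peak Vm is (2/pi)*Vm, while its RMS is Vm/sqrt(2), so RMS = (pi/(2*sqrt(2))) * mean(|v|) ≈ 1.1107 * mean(|v|). The sketch below applies that scaling to a half-cycle moving average of a synthetic 60 Hz waveform; the sample rate is an assumption and, unlike the paper's method, the window here is not synchronized to a PLL.

```python
# Sketch of averaging-based RMS estimation for a sinusoid:
#   mean(|v|) = (2/pi) * Vm  and  Vrms = Vm / sqrt(2)
#   =>  Vrms = (pi / (2*sqrt(2))) * mean(|v|)  ~=  1.1107 * mean(|v|)
# Sample rate is assumed; the window is fixed rather than PLL-synchronized.
import numpy as np

FS = 10_000                      # sampling rate [Hz] (assumed)
F_GRID = 60.0                    # nominal grid frequency [Hz]
SCALE = np.pi / (2 * np.sqrt(2)) # form-factor scaling for a pure sinusoid

t = np.arange(0, 0.1, 1 / FS)
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * F_GRID * t)   # 230 Vrms test waveform

half_cycle = int(round(FS / (2 * F_GRID)))               # samples per half cycle
kernel = np.ones(half_cycle) / half_cycle
rms_est = SCALE * np.convolve(np.abs(v), kernel, mode="valid")

print("estimate after one half-cycle:", rms_est[0])      # close to 230 V
print("steady-state mean estimate:", rms_est.mean())
```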

14 pages, 9751 KB  
Article
Improving the Efficiency of a 10 MHz Voltage Regulator Using a PCB-Embedded Inductor
by GiWon Kim, Jisoo Hwang and SoYoung Kim
Electronics 2025, 14(18), 3732; https://doi.org/10.3390/electronics14183732 - 21 Sep 2025
Viewed by 209
Abstract
This study presents the design and experimental evaluation of a 10 MHz voltage regulator module (VRM) that incorporates a solenoid inductor embedded within a printed circuit board (PCB). To verify the performance of the inductor, a test PCB was fabricated and characterized using a vector network analyzer (VNA), with the measurement data processed using the 2x-thru de-embedding technique. A 10 MHz VRM was then implemented to assess the impact of the embedded inductor on system efficiency. Comparative measurements were conducted between two VRMs: one employing a surface-mounted (SMT) inductor and the other a PCB-embedded inductor. The SMT-based system achieved a peak efficiency of 65.24% at a load current of 800 mA, whereas the PCB-embedded inductor version reached 70.43% at 900 mA, an improvement of 5.19 percentage points. The VRM with the embedded inductor also experienced less efficiency degradation under heavy load conditions, demonstrating superior energy delivery stability. These findings confirm the practical benefits of integrating solenoid inductors within a PCB for high-frequency, high-efficiency power conversion. Full article

18 pages, 3331 KB  
Article
DeepFocusNet: An Attention-Augmented Deep Neural Framework for Robust Colorectal Cancer Classification in Whole-Slide Histology Images
by Shah Md Aftab Uddin, Muhammad Yaseen, Md Kamran Hussain Chowdhury, Rubina Akter Rabeya, Shah Muhammad Imtiyaj Uddin and Hee-Cheol Kim
Electronics 2025, 14(18), 3731; https://doi.org/10.3390/electronics14183731 - 21 Sep 2025
Viewed by 310
Abstract
Colorectal cancer is a major cause of cancer-related mortality globally, which emphasises the critical need for state-of-the-art diagnostic tools for early identification and categorisation. We use a deep learning methodology to classify colorectal cancer histology images into eight categories automatically. To improve classification accuracy and maximise feature extraction, we develop DeepFocusNet, an attention-augmented architecture, using a dataset of 5000 high-resolution (150 × 150) histological images. To improve model generalisation, our progressive training approach combines data augmentation, fine-tuning, and freezing of early layers. Additionally, a dedicated tiling technique breaks large-scale histology images (5000 × 5000) into smaller windows for classification and reassembles the results into full-scale heatmaps and multi-class overlays. The attention mechanisms improve the model's performance and interpretability and are shown to focus on the most important histopathological traits. The resulting system provides pathologists with high-resolution probability maps and 97% classification accuracy, offering precise and efficient diagnostic support. Empirical results demonstrate the robustness of our methodology and its potential for real-world clinical applications in AI-assisted histopathology. Full article
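The tiling step can be sketched as follows: slice the large slide into 150 × 150 windows, score each window with the classifier, and reassemble the scores into a coarse probability map. The classifier below is a stub standing in for DeepFocusNet, and the toy slide is random noise; only the window size follows the abstract.

```python
# Sketch of window-level tiling and probability-map assembly for a large slide.
# classify_tile is a placeholder stub, not the trained DeepFocusNet model.
import numpy as np

TILE = 150   # window size mentioned in the abstract

def classify_tile(tile: np.ndarray) -> float:
    """Stand-in for the classifier: return a tumour probability for one tile."""
    return float(tile.mean() / 255.0)        # placeholder score

def probability_map(slide: np.ndarray) -> np.ndarray:
    rows, cols = slide.shape[0] // TILE, slide.shape[1] // TILE
    heatmap = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            tile = slide[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE]
            heatmap[r, c] = classify_tile(tile)
    return heatmap                            # one probability per 150x150 window

slide = np.random.randint(0, 256, size=(1500, 1500), dtype=np.uint8)  # toy slide
print(probability_map(slide).shape)           # -> (10, 10)
```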

25 pages, 1804 KB  
Article
Adversarial Reconstruction with Spectral-Augmented and Graph Joint Embedding for Network Anomaly Detection
by Liwei Yu, Jing Wu, Qimei Chen and Guiao Yang
Electronics 2025, 14(18), 3730; https://doi.org/10.3390/electronics14183730 - 21 Sep 2025
Viewed by 262
Abstract
Network anomaly detection is widely used in network analysis and security prevention, and reconstruction-based approaches have achieved remarkable results in this area. However, attributed networks exhibit highly nonlinear relationships and temporal dependencies, which make anomalies more complex and ambiguous, so anomaly detection still faces challenges. To this end, this study proposes an adversarial reconstruction framework with spectral-augmented and graph joint embedding for anomaly detection (GAN-SAGE), which integrates an autoencoder (AE) based on a frequency-feature-enhanced graph transformer (GT) into the generator of a generative adversarial network (GAN), improving the network representation through adversarial training. The first stage of the encoding process captures the frequency-domain information of the input time-series data through spectral augmentation, and the second stage enhances the modeling of spatial structure and graph interaction dependencies through multi-attribute coupling and GTs. We conducted extensive experiments on the AIOps, SWaT, and WADI datasets, demonstrating the effectiveness of GAN-SAGE compared to state-of-the-art methods. The F1-score of GAN-SAGE improved by an average of 9.64%, 18.73%, and 19.79% on the three datasets, respectively. Full article

16 pages, 993 KB  
Article
A Multi-Feature Domain Interaction Learning Framework for Anomalous Network Detection
by Wei Sun, Fucun Zhang, Liang Guo and Xiao Liu
Electronics 2025, 14(18), 3729; https://doi.org/10.3390/electronics14183729 - 20 Sep 2025
Viewed by 249
Abstract
Network anomaly detection aims to identify abnormal traffic patterns that may indicate faults or cyber threats. This task requires modeling complex network flows composed of heterogeneous features, such as static headers, packet sequences, and statistical summaries. However, most existing methods focus on temporal modeling and treat flows as uniform sequences, overlooking feature heterogeneity and dependencies across domains. As a result, they often miss subtle anomalies that can be reflected by cross-domain correlations, highlighting the need for more structured modeling. We propose a domain-aware framework for network anomaly detection that explicitly models the heterogeneity of flow-level features and their cross-domain interactions. To address the limitations of prior work in handling heterogeneous flow features, we design an Intra-Domain Expert Network (IDEN) that uses convolutional and feed-forward layers to independently extract patterns from distinct domains. We further introduce an Inter-Domain Expert Network (EDEN) that uses attention mechanisms to capture dependencies across domains and produces integrated flow representations. These refined representations are passed to a Transformer-based temporal module to detect anomalies over time, including gradually evolving or coordinated behaviors. Experiments on multiple public datasets show that our method achieves higher detection accuracy, demonstrating the value of explicitly modeling intra-domain structure and inter-domain dependencies. Full article

22 pages, 7478 KB  
Article
A Blockchain-Based System for Monitoring Sobriety and Tracking Location of Traffic Drivers
by Mihaela Gavrilă, Mădălina-Giorgiana Murariu, Delia-Elena Bărbuță, Marin Fotache, Lucian Trifina and Daniela Tărniceriu
Electronics 2025, 14(18), 3728; https://doi.org/10.3390/electronics14183728 - 20 Sep 2025
Viewed by 234
Abstract
This paper presents the design and implementation of a blockchain-secured system for monitoring driver sobriety and real-time geolocation. The proposed platform integrates a Modular Sensor Battery (MSB) for detecting alcohol concentration in exhaled air, a centralized Data Collection Platform (DC Platform) for real-time data visualization and storage, and a complementary physiological monitoring device—the IoT Fit-Bit Smart Band (IFSB)—which captures heart rate and blood oxygen saturation as alternative indicators when breath-based sensing may be compromised. The MSB, the DC Platform, integration with the IoT FitBit Smart Band, and the blockchain-based data management architecture represent the authors’ direct contribution to both the conceptual design and technical implementation. These elements are introduced as part of a unified, fully integrated system designed to enable non-invasive sobriety monitoring and secure data integrity in vehicular contexts. To ensure data authenticity, a custom Ethereum smart contract stores cryptographic hashes of sensor readings, enabling decentralized, tamper-evident verification without exposing sensitive medical information. The system was validated in a controlled experimental environment, confirming its operational robustness and demonstrating its potential to improve road safety through secure, real-time sobriety detection and geolocation tracking. Full article
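The tamper-evidence mechanism boils down to anchoring a digest of each sensor reading on-chain rather than the reading itself, so the data stays off-chain while any later modification becomes detectable. The sketch below shows the off-chain side of that idea with SHA-256 over a canonical JSON payload; the field names are invented, and an Ethereum contract would typically store a Keccak-256 digest submitted through web3 tooling, which is omitted here.

```python
# Sketch: hash a sensor reading into a fixed digest that a smart contract could store.
# Field names are illustrative; SHA-256 is used here purely for demonstration.
import hashlib
import json
import time

def reading_digest(device_id: str, bac_mg_l: float, lat: float, lon: float) -> str:
    payload = {
        "device": device_id,
        "breath_alcohol_mg_per_l": bac_mg_l,
        "lat": lat,
        "lon": lon,
        "ts": int(time.time()),
    }
    # Canonical serialization so verification reproduces the exact same bytes.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

digest = reading_digest("MSB-07", 0.12, 47.156, 27.590)
print("digest to anchor in the smart contract:", digest)
# Verification: recompute the digest from the stored reading and compare it with the
# on-chain value; any mismatch indicates the reading was altered after the fact.
```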

26 pages, 700 KB  
Article
Exploring AI in Healthcare Systems: A Study of Medical Applications and a Proposal for a Smart Clinical Assistant
by Răzvan Daniel Zota, Ionuț Alexandru Cîmpeanu and Mihai Adrian Lungu
Electronics 2025, 14(18), 3727; https://doi.org/10.3390/electronics14183727 - 20 Sep 2025
Viewed by 404
Abstract
The rising complexity and operational demands of modern healthcare systems have significantly increased resource usage and associated costs. This trend highlights the need for innovative approaches to optimize workflows and enhance decision-making. From this perspective, the present study explores how artificial intelligence (AI) can contribute to improving efficiency and information access in the medical field. The article begins with an introduction and a concise literature review focused on the integration of AI in healthcare platforms, and three main research questions are presented. Our research then evaluates and compares five existing medical applications, assessing each platform to determine whether and how AI technologies have been integrated into its functionality. The findings from this analysis informed the design of a novel AI-based architecture, which we propose in section three of the article. This proposed architecture aims to assist medical professionals by providing streamlined access to relevant patient information using machine learning (ML) techniques; at the end of that section we also address the initial research questions. In the final section of the article, we conclude that the insights gained from analyzing existing medical chatbot platforms have informed the design of our AI-based solution, aimed at supporting both patients and healthcare professionals through an integrated and intelligent system. The findings highlight the necessity for systems that not only align with user expectations but also integrate seamlessly within clinical workflows. Future research should prioritize advancing the reliability, personalization, and regulatory compliance of these platforms, thereby fostering enhanced patient engagement and enabling healthcare professionals to deliver care that is both more efficient and more accessible. Full article
(This article belongs to the Special Issue Artificial Intelligence and Big Data Processing in Healthcare)

24 pages, 3908 KB  
Article
Transform Domain Based GAN with Deep Multi-Scale Features Fusion for Medical Image Super-Resolution
by Huayong Yang, Qingsong Wei and Yu Sang
Electronics 2025, 14(18), 3726; https://doi.org/10.3390/electronics14183726 - 20 Sep 2025
Viewed by 314
Abstract
High-resolution (HR) medical images provide clearer anatomical details and facilitate early disease diagnosis, yet acquiring HR scans is often limited by imaging conditions, device capabilities, and patient factors. We propose a transform domain deep multiscale feature fusion generative adversarial network (MSFF-GAN) for medical image super-resolution (SR). Considering the advantages of generative adversarial networks (GANs) and convolutional neural networks (CNNs), MSFF-GAN integrates a deep multi-scale convolution network into the GAN generator, which is composed primarily of a series of cascaded multi-scale feature extraction blocks in a coarse-to-fine manner to restore the medical images. Two tailored blocks are designed: a multiscale information distillation (MSID) block that adaptively captures long- and short-path features across scales, and a granular multiscale (GMS) block that expands receptive fields at fine granularity to strengthen multiscale feature extraction with reduced computational cost. Unlike conventional methods that predict HR images directly in the spatial domain, which often yield excessively smoothed outputs with missing textures, we formulate SR as the prediction of coefficients in the non-subsampled shearlet transform (NSST) domain. This transform domain modeling enables better preservation of global anatomical structure and local texture details. The predicted coefficients are inverted to reconstruct HR images, and the transform domain subbands are also fed to the discriminator to enhance its discrimination ability and improve perceptual fidelity. Extensive experiments on medical image datasets demonstrate that MSFF-GAN outperforms state-of-the-art approaches in structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), while more effectively preserving global anatomy and fine textures. These results validate the effectiveness of combining multiscale feature fusion with transform domain prediction for high-quality medical image super-resolution. Full article
(This article belongs to the Special Issue New Trends in AI-Assisted Computer Vision)

30 pages, 696 KB  
Article
SPADR: A Context-Aware Pipeline for Privacy Risk Detection in Text Data
by Sultan Asiri, Randa Alshehri, Fatima Kamran, Hend Laznam, Yang Xiao and Saleh Alzahrani
Electronics 2025, 14(18), 3725; https://doi.org/10.3390/electronics14183725 - 19 Sep 2025
Viewed by 476
Abstract
Large language models (LLMs) are powerful, but they can unintentionally memorize and leak sensitive information found in their training or input data. To address this issue, we propose SPADR, a semantic privacy anomaly detection and remediation pipeline designed to detect and remove privacy risks from text. SPADR addresses limitations in existing redaction methods by identifying deeper forms of sensitive content, including implied relationships, contextual clues, and non-standard identifiers that traditional NER systems often overlook. SPADR combines semantic anomaly scoring using a denoising autoencoder with named entity recognition and graph-based analysis to detect both direct and hidden privacy risks. It is flexible enough to work on both training data (to prevent memorization) and user input (to prevent leakage at inference time). We evaluate SPADR on the Enron Email Dataset, where it significantly reduces document-level privacy leakage while maintaining strong semantic utility. The enhanced version, SPADR (S2), reduces the PII leak rate from 100% to 16.06% and achieves a BERTScore F1 of 88.03%. Compared to standard NER-based redaction systems, SPADR offers more accurate and context-aware privacy protection. This work highlights the importance of semantic and structural understanding in building safer, privacy-respecting AI systems. Full article
21 pages, 2532 KB  
Article
Heuristic-Based Computing-Aware Routing for Dynamic Networks
by Zhiyi Lin, Lingjie Wang, Wenxin Ning, Yuxiang Zhao, Li Yu and Jian Jiang
Electronics 2025, 14(18), 3724; https://doi.org/10.3390/electronics14183724 - 19 Sep 2025
Viewed by 201
Abstract
The development of computing power networks has profoundly reshaped network routing architectures. As a result, the computing-aware network routing problem has emerged, which seeks to route diverse computational tasks to appropriate computing resources in a dynamic network. In this study, we propose a heuristic-based computing-aware routing algorithm that finds the optimal routing path by considering dynamic network performance and computing resource status simultaneously. Our approach models the dynamic network with time-varying node and edge weights, obtained by mapping basic performance indicators to weights according to quality-of-service requirements; this allows the routing process to serve the user's experience more effectively. Moreover, we design a novel heuristic-based algorithm that transforms the computing-aware routing problem into a single-source shortest path problem to obtain the comprehensively optimal routing path. Experimental results on both simulated networks and a real dedicated network in Zhejiang demonstrate that the proposed method obtains the comprehensively optimal routing path at a lower computation time cost than enumerative search. Furthermore, the proposed computing-aware routing method is shown to be robust to changes in network dynamics, computing resources, and service load. Full article
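A minimal sketch of the shortest-path reduction described above: node weights stand in for computing-resource status and edge weights for network performance, and Dijkstra's algorithm finds the combined least-cost path. The weight values and the additive cost model are assumptions, not the paper's QoS-based mapping or its heuristic.

```python
import heapq

def shortest_path(nodes, edges, source, target):
    """Single-source shortest path (Dijkstra) over a graph whose cost combines
    edge weights (network performance) and node weights (computing-resource status)."""
    adjacency = {n: [] for n in nodes}
    for u, v, w in edges:
        adjacency[u].append((v, w))
    dist = {n: float("inf") for n in nodes}
    prev = {}
    dist[source] = nodes[source]          # pay the source node's own cost
    heap = [(dist[source], source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        if u == target:
            break
        for v, w in adjacency[u]:
            # Total cost = accumulated cost + edge weight + destination node weight.
            nd = d + w + nodes[v]
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, n = [], target
    while n != source:
        path.append(n)
        n = prev[n]
    path.append(source)
    return list(reversed(path)), dist[target]

if __name__ == "__main__":
    # Node weights model computing-resource load; edge weights model link quality (illustrative values).
    nodes = {"A": 0.1, "B": 0.5, "C": 0.2, "D": 0.3}
    edges = [("A", "B", 1.0), ("A", "C", 2.0), ("B", "D", 2.5), ("C", "D", 1.0)]
    print(shortest_path(nodes, edges, "A", "D"))  # (['A', 'C', 'D'], 3.6)
```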
15 pages, 1606 KB  
Article
Multi-Branch Knowledge-Assisted Proximal Policy Optimization for Design of MS-to-MS Vertical Transition with Multi-Layer Pixel Structures
by Ze-Ming Wu, Zheng Li, Ruo-Yu Liang, Xiao-Chun Li, Ken Ning and Jun-Fa Mao
Electronics 2025, 14(18), 3723; https://doi.org/10.3390/electronics14183723 - 19 Sep 2025
Viewed by 152
Abstract
This article proposes a wideband microstrip-to-microstrip (MS-to-MS) vertical transition with multi-layer pixel structures, alongside a multi-branch knowledge-assisted proximal policy optimization (MB-KPPO) method for its automatic design. The proposed transition consists of three-layer pixel structures whose high number of design degrees of freedom enables a wide bandwidth. MB-KPPO adopts a multi-branch policy network instead of the single-branch policy network used in standard PPO to improve design efficiency. In addition, MB-KPPO integrates a fully connected shape generation mechanism to incorporate physical requirements. An MS-to-MS vertical multi-layer pixel transition is designed and fabricated using PCB technology. Measurement results show that the multi-layer transition covers a frequency range of 3.5 to 17.8 GHz, a bandwidth roughly 25% wider than that of the single-layer pixel transition, extended towards higher frequencies. Full article
(This article belongs to the Section Microwave and Wireless Communications)
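As a rough illustration of the multi-branch policy idea, the following PyTorch sketch uses a shared trunk with one output branch per pixel layer, each emitting on/off probabilities for its layer's pixel grid. The dimensions, the three-branch split, and the Bernoulli parameterization are assumptions; this is not the MB-KPPO network or its knowledge-assisted training procedure.

```python
import torch
import torch.nn as nn

class MultiBranchPolicy(nn.Module):
    """Illustrative multi-branch policy head: a shared trunk with one output branch
    per metal layer of the pixel structure (not the paper's exact network)."""
    def __init__(self, state_dim: int = 128, pixels_per_layer: int = 64, n_layers: int = 3):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # One branch per pixel layer; each outputs logits for its own pixel grid.
        self.branches = nn.ModuleList(
            [nn.Linear(256, pixels_per_layer) for _ in range(n_layers)]
        )

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)
        # Each branch decides its own layer's pixel pattern (on/off probabilities).
        return [torch.sigmoid(branch(h)) for branch in self.branches]

if __name__ == "__main__":
    policy = MultiBranchPolicy()
    probs = policy(torch.randn(1, 128))
    print([p.shape for p in probs])  # three tensors of shape [1, 64]
```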
24 pages, 4963 KB  
Article
A Hybrid Deep Learning and Optical Flow Framework for Monocular Capsule Endoscopy Localization
by İrem Yakar, Ramazan Alper Kuçak, Serdar Bilgi, Onur Ferhanoglu and Tahir Cetin Akinci
Electronics 2025, 14(18), 3722; https://doi.org/10.3390/electronics14183722 - 19 Sep 2025
Viewed by 310
Abstract
Pose estimation and localization within the gastrointestinal tract, particularly the small bowel, are crucial for invasive medical procedures. However, the task is challenging due to the complex anatomy, homogeneous textures, and limited distinguishable features. This study proposes a hybrid deep learning (DL) method combining Convolutional Neural Network (CNN)-based pose estimation and optical flow to address these challenges in a simulated small bowel environment. Initial pose estimation was used to assess the performance of simultaneous localization and mapping (SLAM) in such complex settings, using a custom endoscope prototype with a laser, micromotor, and miniaturized camera; the results showed limited feature detection and unreliable matches due to repetitive textures. To address this issue, a hybrid CNN-based approach enhanced with Farneback optical flow was applied. Using consecutive images, three models were compared: a hybrid ResNet-50 with Farneback optical flow, ResNet-50, and NASNetLarge pretrained on ImageNet. The hybrid model achieved the lowest RMSE of 0.03 cm, outperforming both ResNet-50 (0.39 cm) and NASNetLarge (1.46 cm), while feature-based SLAM failed to provide reliable results. The hybrid model also achieved a competitive inference speed of 241.84 ms per frame, outperforming ResNet-50 (316.57 ms) and NASNetLarge (529.66 ms). To assess the impact of the optical flow choice, Lucas–Kanade optical flow was also implemented within the same framework and compared with the Farneback-based results. These results demonstrate that combining optical flow with ResNet-50 enhances pose estimation accuracy and stability, especially in textureless environments where traditional methods struggle. The proposed method offers a robust, real-time alternative to SLAM, with potential applications in clinical capsule endoscopy. The results are positioned as a proof-of-concept, highlighting the feasibility and clinical potential of the proposed framework. Future work will extend the framework to real patient data and optimize it for real-time hardware. Full article
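To show the optical flow component in isolation, here is a minimal OpenCV sketch that computes dense Farneback flow between two consecutive grayscale frames; in a hybrid setup such as the one described above, the flow field (or statistics derived from it) would accompany CNN features in the pose regressor. The parameter values and the synthetic frames are assumptions, not the paper's configuration.

```python
import cv2
import numpy as np

def farneback_flow(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    """Dense Farneback optical flow between two consecutive grayscale frames.
    Returns an (H, W, 2) array of per-pixel displacements; parameter values are
    common defaults, not necessarily those used in the paper."""
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

if __name__ == "__main__":
    # Synthetic frames stand in for consecutive capsule-endoscopy images.
    prev_frame = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
    next_frame = np.roll(prev_frame, shift=2, axis=1)  # simulate a small horizontal shift
    flow = farneback_flow(prev_frame, next_frame)
    # In a hybrid setup, the flow field (or summary statistics of it) could be fed
    # alongside CNN features to the pose estimator.
    print(flow.shape, float(flow[..., 0].mean()))
```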
20 pages, 1266 KB  
Review
Research Trends and Challenges of Integrated Constant On-Time (COT) Buck Converters
by Seok-Tae Koh and Sunghyun Bae
Electronics 2025, 14(18), 3721; https://doi.org/10.3390/electronics14183721 - 19 Sep 2025
Viewed by 320
Abstract
Constant on-time (COT) buck converters offer fast transient response and a simple architecture but face challenges such as switching frequency variation, instability with low equivalent series resistance (ESR) capacitors, and DC output voltage offset. This paper reviews advanced COT control techniques developed to overcome these limitations. We examine methods for frequency stabilization (e.g., adaptive on-time, phase-locked loops), stability with low-ESR capacitors (e.g., passive and active ripple injection, virtual inductor current), and improved DC regulation (e.g., offset cancellation). This review also covers techniques for optimizing transient response and multiphase architectures for high-current applications. Full article
(This article belongs to the Section Circuit and Signal Processing)
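As a first-order illustration of why adaptive on-time stabilizes the switching frequency: in an ideal, lossless COT buck the duty cycle is D = V_out / V_in and D = t_on * f_sw, so f_sw = V_out / (V_in * t_on), and choosing t_on proportional to V_out / V_in holds f_sw roughly constant across the input range. The sketch below uses illustrative numbers, not values from the reviewed designs.

```python
def cot_switching_frequency(v_in: float, v_out: float, t_on: float) -> float:
    """First-order (lossless) estimate: f_sw = V_out / (V_in * t_on)."""
    return v_out / (v_in * t_on)

def adaptive_on_time(v_in: float, v_out: float, f_target: float) -> float:
    """Adaptive on-time: scale t_on with V_out / V_in to hold f_sw near f_target."""
    return v_out / (v_in * f_target)

if __name__ == "__main__":
    v_out, f_target = 1.2, 1e6                              # 1.2 V output, 1 MHz target (illustrative)
    t_on_fixed = adaptive_on_time(5.0, v_out, f_target)     # on-time chosen once at V_in = 5 V
    for v_in in (3.3, 5.0, 12.0):
        f_fixed = cot_switching_frequency(v_in, v_out, t_on_fixed)
        f_adapt = cot_switching_frequency(v_in, v_out, adaptive_on_time(v_in, v_out, f_target))
        print(f"Vin={v_in:5.1f} V  fixed t_on -> {f_fixed/1e6:.2f} MHz, adaptive t_on -> {f_adapt/1e6:.2f} MHz")
```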
22 pages, 410 KB  
Article
Adapting Co-Design for Crisis Contexts: Lessons Learned Engaging Nonprofits
by Delvin Varghese, Joshua Paolo Seguin, Meriem Tebourbi, Tom Bartindale, Charishma Ratnam, Rebecca Powell, Rebecca Wickes and Patrick Olivier
Electronics 2025, 14(18), 3720; https://doi.org/10.3390/electronics14183720 - 19 Sep 2025
Viewed by 274
Abstract
During crisis situations such as the COVID-19 pandemic, nonprofit organizations must rapidly adapt their community engagement approaches, yet traditional co-design methods often fall short in such time-sensitive, multi-stakeholder contexts. This paper examines how design methods need to evolve when working with nonprofits during crises by analyzing our intensive six-month collaboration with five Australian nonprofits serving migrant youth communities. Through Action Research involving over 130 co-design sessions, workshops, and stakeholder meetings, we developed and iteratively refined a social media engagement playbook. Our findings reveal three key methodological innovations: (1) adapting co-design methods for crisis contexts through flexible, asynchronous engagement; (2) managing multiple stakeholder relationships through what we term “nonprofit ecologies”, i.e., understanding organizations’ overlapping roles and relationships; and (3) balancing immediate needs with long-term goals through infrastructuring approaches that build sustainable capacity. This research contributes practical methods for conducting collaborative design during crises while advancing a theoretical understanding of how traditional design approaches must adapt to support nonprofits in complex, time-sensitive situations. Full article
(This article belongs to the Special Issue Advances in Human-Computer Interaction: Challenges and Opportunities)