Electronics, Volume 14, Issue 10 (May-2 2025) – 192 articles

Cover Story: GaAs Schottky technology offers superior performance for THz mixers at room temperature, making it a compelling choice for high-data-rate wireless receivers. However, InGaAs mixers offer a notable advantage in terms of reduced power requirements, enabling optical pumping through the incorporation of photodiodes acting as the LO. This study presents the first comparison of GaAs and InGaAs diodes under electrical and optical pumping in a wireless system transmitting up to 80 Gbps at 0.3 THz. The THz transmitter and the optical receiver’s LO use MUTC-PDs driven by free-running lasers. Results show that, despite higher conversion loss, InGaAs mixers achieve a BER similar to that of GaAs due to the purity of the optically generated LO signals.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
16 pages, 3659 KiB  
Article
Online SSA-Based Real-Time Degradation Assessment for Inter-Turn Short Circuits in Permanent Magnet Traction Motors
by Zhenglin Cheng, Xueming Li, Kan Liu, Zhiwen Chen and Fengbing Jiang
Electronics 2025, 14(10), 2095; https://doi.org/10.3390/electronics14102095 - 21 May 2025
Viewed by 95
Abstract
Inter-turn short circuits (ITSCs) in permanent magnet synchronous motors (PMSMs) pose significant risks due to their subtle early symptoms and rapid degradation. To address this, we propose an online real-time diagnostic method for assessing the degradation state. This method employs the Sparrow Search Algorithm (SSA) for the online real-time identification of fault characteristic parameters. Following an analysis of the fault mechanisms of inter-turn short circuits, a mathematical model has been developed that includes the short-circuit turns ratio and insulation resistance. An evaluation index has also been developed to assess the degree of fault-related degradation. To address the strong nonlinearity of parameters in the fault model, the SSA is employed for the real-time joint identification of parameters that characterize the relationship between fault location and degradation degree. Simulation experiments demonstrate that the SSA achieves convergence within 40 iterations, with a relative error below 5% and an absolute error of less than 0.007, outperforming traditional algorithms such as PSO. This marks a significant improvement in the early detection of degradation caused by inter-turn short circuits and provides technical support for greater reliability and safety in rail transit traction systems. Full article
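As a rough illustration of the identification step described above, the sketch below fits a hypothetical two-parameter fault model (short-circuit turns ratio and insulation resistance) to measured data by minimizing a squared-error fitness; SciPy's differential evolution is used here only as a stand-in for the SSA, and the model, bounds, and data are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical fault model: predicts a current signature from the
# short-circuit turns ratio (mu) and insulation resistance (r_f).
def fault_model(mu, r_f, t):
    return mu * np.exp(-t / (r_f + 1e-6)) + 0.1 * np.sin(2 * np.pi * 50 * t)

t = np.linspace(0, 0.1, 500)
true_params = (0.15, 2.0)  # ground truth used only to fake "measurements"
measured = fault_model(*true_params, t) + np.random.normal(0, 0.005, t.size)

# Fitness: squared error between measured and simulated current.
def fitness(params):
    mu, r_f = params
    return np.sum((fault_model(mu, r_f, t) - measured) ** 2)

# Differential evolution plays the role the paper assigns to the SSA:
# a population-based search over the nonlinear parameter space.
result = differential_evolution(fitness, bounds=[(0.0, 1.0), (0.1, 10.0)], seed=0)
print("identified (mu, r_f):", result.x)
```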
23 pages, 977 KiB  
Article
Development of High-Quality Cryptographic Constructions Based on Many-Valued Logic Affine Transformations
by Mikolaj Karpinski, Artem Sokolov, Aizhan Tokkuliyeva, Volodymyr Radush, Nadiia Kazakova, Aigul Shaikhanova, Nataliya Zagorodna and Anna Korchenko
Electronics 2025, 14(10), 2094; https://doi.org/10.3390/electronics14102094 - 21 May 2025
Viewed by 102
Abstract
The S-box is a key component of modern ciphers, determining the quality and performance of the cryptographic algorithms in which it is applied. Many constructions for synthesizing high-quality S-boxes have been established, and those based on Galois fields theory—for example, the Nyberg construction applied in the AES cryptographic algorithm—are particularly important. An integral component of the Nyberg construction is the affine transformation, which is used to improve the avalanche and correlation properties of the S-box. In this paper, a new approach is adopted for synthesizing affine transformations for S-boxes based on the quaternary matrices over the Galois field GF(4). We describe four basic structures that serve as the foundation for synthesizing a complete class of 648 affine transformation matrices of order n = 3 and a class of 7776 matrices of order n = 4 and introduce a recurrent structure to facilitate the synthesis of matrices for higher orders. Using these matrices in combination with the Nyberg construction, it is possible to construct bijective S-boxes that outperform the original Nyberg construction and many other known S-boxes in terms of strict avalanche criterion (SAC) and bit independence criterion strict avalanche criterion (BIC SAC) values, while maintaining a maximal level of nonlinearity and good cryptographic properties. We also propose modified GF(4) affine transformations that can be applied to specialized S-boxes which already satisfy the SAC for both component Boolean and 4-functions, as well as the criterion of minimal correlation between input and output, allowing us to enhance their nonlinearity to the value of Nf = 96. We integrate the synthesized S-boxes into the AES algorithm and evaluate their practical performance. The encryption outputs successfully pass the NIST statistical test suite in 96 out of 100 cases, outperforming both the original AES S-box and other reference constructions, confirming the practical strength of the proposed method. Full article
23 pages, 1701 KiB  
Article
Left Meets Right: A Siamese Network Approach to Cross-Palmprint Biometric Recognition
by Mohamed Ezz
Electronics 2025, 14(10), 2093; https://doi.org/10.3390/electronics14102093 - 21 May 2025
Viewed by 71
Abstract
What if you could identify someone’s right palmprint just by looking at their left—and vice versa? That is exactly what I set out to do. I built a specially adapted Siamese network that only needs one palm to reliably recognize the other, making biometric systems far more flexible in everyday settings. My solution rests on two simple but powerful ideas. First, Anchor Embedding through Feature Aggregation (AnchorEFA) creates a “super-anchor” by averaging four palmprint samples from the same person. This pooled anchor smooths out noise and highlights the consistent patterns shared between left and right palms. Second, I use a Concatenated Similarity Measurement—combining Euclidean distance with Element-wise Absolute Difference (EAD)—so the model can pick up both big structural similarities and tiny textural differences. I tested this approach on three public datasets (POLYU_Left_Right, TongjiS1_Left_Right, and CASIA_Left_Right) and saw a clear jump in accuracy compared to traditional methods. In fact, my four-sample AnchorEFA plus hybrid similarity metric did not just beat the baseline—it set a new benchmark for cross-palmprint recognition. In short, recognizing a palmprint from its opposite pair is not just feasible—it is practical, accurate, and ready for real-world use. This work opens the door to more secure, user-friendly biometric systems that still work even when only one palmprint is available. Full article
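A minimal sketch of the two ideas highlighted in the abstract, AnchorEFA and the concatenated similarity measurement, assuming precomputed embeddings; the embedding dimension and random vectors are placeholders, not the paper's network outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 128  # embedding dimension (illustrative)

def anchor_efa(embeddings):
    """AnchorEFA idea: average several same-person embeddings into one anchor."""
    return np.mean(embeddings, axis=0)

def concatenated_similarity(anchor, probe):
    """Hybrid measurement: scalar Euclidean distance plus element-wise |difference|."""
    euclidean = np.array([np.linalg.norm(anchor - probe)])
    ead = np.abs(anchor - probe)             # Element-wise Absolute Difference
    return np.concatenate([euclidean, ead])  # fed to a small classifier in practice

# Four left-palm embeddings of one subject form the anchor ...
left_samples = rng.normal(size=(4, D))
anchor = anchor_efa(left_samples)
# ... and a right-palm embedding of the same subject is the probe.
probe = rng.normal(size=D)

features = concatenated_similarity(anchor, probe)
print(features.shape)  # (1 + D,) similarity feature vector
```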
23 pages, 2867 KiB  
Article
A Novel Image Encryption Scheme Based on a Quantum Logistic Map, Hyper-Chaotic Lorenz Map, and DNA Dynamic Encoding
by Peiyi Wang, Yi Xiang and Lanlan Huang
Electronics 2025, 14(10), 2092; https://doi.org/10.3390/electronics14102092 - 21 May 2025
Viewed by 81
Abstract
In the digital information age, although digital images are widely used, the security issues associated with them have become increasingly severe. Consequently, ensuring secure image transmission has become a critical challenge in contemporary information security research. Chaotic systems are characterized by non-periodic behavior, strong dependence on initial conditions, and other favorable characteristics, and have been widely employed in the scrambling and diffusion processes of image encryption. Compared to classical chaotic maps, a quantum Logistic map exhibits better randomness and stronger sensitivity to initial values, effectively overcoming the attractor problem inherent in classical Logistic maps, thereby significantly enhancing the robustness of encryption methodologies. This article focuses on an innovative integration of a quantum Logistic map, a hyper-chaotic Lorenz map, and DNA dynamic encoding technology to design and implement a highly secure and efficient image encryption scheme. First, high-quality random number sequences are produced using the quantum Logistic map and are then employed to perform a scrambling operation on the image. Next, by integrating the chaotic sequences yielded by the hyper-chaotic Lorenz map with DNA dynamic encoding and operation rules, we implement a diffusion process, thereby increasing the strength of the image encryption. Experimental simulation results and multiple security analyses demonstrate that our encryption methodology achieves excellent encryption performance and effectively resists a variety of attack strategies, and it holds significant potential for research on protecting image information through encryption. Full article
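For orientation, the sketch below shows a scrambling step driven by a chaotic sequence; it uses the classical logistic map purely as a stand-in for the quantum Logistic map in the paper, and the key values and toy image are illustrative.

```python
import numpy as np

def logistic_sequence(x0, r, n, burn_in=1000):
    """Classical logistic map x_{k+1} = r*x_k*(1-x_k); stands in for the paper's quantum map."""
    x = x0
    for _ in range(burn_in):          # discard transients
        x = r * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def scramble(image, x0=0.3731, r=3.99):
    flat = image.flatten()
    perm = np.argsort(logistic_sequence(x0, r, flat.size))  # chaotic sequence -> permutation
    return flat[perm].reshape(image.shape), perm

def unscramble(scrambled, perm):
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[perm] = scrambled.flatten()
    return flat.reshape(scrambled.shape)

img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)   # toy "image"
enc, perm = scramble(img)
assert np.array_equal(unscramble(enc, perm), img)
```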
28 pages, 7671 KiB  
Article
A 57–64 GHz Receiver Front End in 40 nm CMOS
by Ioannis-Dimitrios Psycharis, Vasileios Tsourtis and Grigorios Kalivas
Electronics 2025, 14(10), 2091; https://doi.org/10.3390/electronics14102091 - 21 May 2025
Viewed by 72
Abstract
The global allocation of over 5 GHz of spectral bandwidth around the 60 GHz frequency band offers significant potential for ultra-high data rate wireless communication over short distances and enables the implementation of high-resolution frequency-modulated continuous-wave (FMCW) radar applications. In this study, a front-end receiver covering frequencies from 57 to 64 GHz was designed and characterized in a 40 nm CMOS process. The proposed architecture includes a Low-Noise Amplifier (LNA), a novel double-balanced mixer offering variable conversion gain, and a low-power class-C Voltage-Controlled Oscillator (VCO). From post-layout simulation results, the LNA presents a noise figure (NF) of less than 4.8 dB and a gain of more than 19 dB, while the input compression point (P1dB) reaches −15.6 dBm. The double-balanced mixer delivers a noise figure of less than 11 dB, a conversion gain of 14 dB, and an input-referred compression point of −13 dBm. The VCO achieves a phase noise of approximately −93 dBc/Hz at 1 MHz offset from 60 GHz and a tuning range of about 8 GHz, dissipating only 6.6 mW. Overall, the receiver demonstrates a maximum conversion gain of more than 39 dB, a noise figure of less than 9.2 dB, an input-referred compression point of −37 dBm, and a power dissipation of 56 mW. Full article
17 pages, 4722 KiB  
Article
Research on Bearing Fault Diagnosis Based on Vibration Signals and Deep Learning Models
by Bin Yuan, Lingkai Lu and Suifan Chen
Electronics 2025, 14(10), 2090; https://doi.org/10.3390/electronics14102090 - 21 May 2025
Viewed by 87
Abstract
To overcome the limitations of characteristic parameter identification and inadequate fault recognition rates in bearings, a bearing fault diagnosis method combining the improved whale optimization algorithm (IWOA), variational mode decomposition (VMD), and kernel extreme learning machine (KELM) is proposed. Firstly, to improve the convergence behavior and global search capability of the WOA, we introduced adaptive weight, a variable spiral shape parameter, and a Cauchy neighborhood perturbation strategy to improve the performance of the original algorithm. Secondly, to enhance the effectiveness of feature extraction, the IWOA was used to optimize the number of modal components and penalty coefficients in the VMD algorithm; then, we could obtain the optimal modal components and construct feature vectors based on the optimal modal components. Next, we used the IWOA to optimize the two key parameters, the regularization coefficient C and kernel parameter γ of KELM, and the feature vector was used as the input of KELM to achieve fault diagnosis. Finally, data collected from different experimental platforms were used for experimental analysis. The results indicate that the IWOA-VMD-KELM bearing fault diagnosis model significantly improved its accuracy compared to other models, achieving accuracies of 98.8% and 98.4% on the CWRU dataset and Southeast University dataset, respectively. Full article
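A compact sketch of the KELM classifier at the end of this pipeline, showing the role of the regularization coefficient C and kernel parameter γ that the IWOA tunes; the RBF kernel choice, toy features, and labels are illustrative, and the VMD/IWOA stages are omitted.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: beta = (K + I/C)^-1 T."""
    def __init__(self, C=100.0, gamma=0.1):   # the two parameters IWOA would tune
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

# Toy stand-in for VMD-derived feature vectors and one-hot fault labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]
model = KELM().fit(X, T)
print("training accuracy:", np.mean(model.predict(X).argmax(1) == y))
```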
19 pages, 23359 KiB  
Article
Enhanced Graph Diffusion Learning with Transformable Patching via Curriculum Contrastive Learning for Session Recommendation
by Jin Li, Rong Gao, Lingyu Yan, Quanfeng Yao, Xianjun Peng and Jiwei Hu
Electronics 2025, 14(10), 2089; https://doi.org/10.3390/electronics14102089 - 21 May 2025
Viewed by 57
Abstract
The fusion modeling of intra-session item information representation and inter-session item transition patterns for session recommendation has shown performance advantages. However, existing research still suffers from the following challenges: (1) the time-varying effects of complex relationships between item transitions within sessions need to be explored in depth; and (2) effective representations of inter-session item transition patterns are lacking. To address these challenges, we propose a new session recommendation model, named EGDLTP-CCL. Specifically, we first design a patch-enhanced gated neural network representation of session item transition patterns, which accurately captures the time-varying impacts of the complex relationships between item transitions within sessions through a designed transformer patching strategy. Then, we develop an energy-constraint-based graph diffusion model to capture inter-session item transition patterns, which mitigates the poor simulation of real inter-session item transition patterns by introducing an energy-constraint strategy into the graph diffusion model. In addition, the patch-enhanced gated neural network and the energy-constrained graph diffusion model are treated as two different views in a contrastive learning framework, and we introduce a curriculum learning strategy that explores how to effectively select and train negative samples within this framework, thereby improving performance on the contrastive learning task. Finally, we jointly train the recommendation task and the curriculum contrastive learning task based on a multi-task learning strategy to further improve recommendation performance. Experiments on real-world datasets show that EGDLTP-CCL significantly outperforms state-of-the-art methods. Full article
27 pages, 3651 KiB  
Article
Advanced Big Data Solutions for Detector Calibrations for High-Energy Physics
by Abdulameer Nour Jalal, Stefan Oniga and Balazs Ujvari
Electronics 2025, 14(10), 2088; https://doi.org/10.3390/electronics14102088 - 21 May 2025
Viewed by 53
Abstract
This investigation examines the Dead Hot Map (DHM) method and timing calibration for Run 14 Au+Au collisions in the PHENIX experiment. The DHM method guarantees data integrity by identifying and omitting defective detector towers (nonfunctional, hot, and very hot towers) via a set of criteria and statistical evaluations. This procedure entails hit distribution analysis, pseudorapidity adjustments, and normalization, resulting in an enhanced map of functional detector components. Timing calibration mitigates the issues associated with time-of-flight measurement inaccuracies, such as slewing effects and inter-sector timing differences. Numerous corrections are implemented, encompassing slewing, tower-specific offsets, and sector-by-sector adjustments, resulting in a final resolution of 500 picoseconds for the electromagnetic calorimeter. These calibrations improve the accuracy of photon and π0 measurements, essential for investigating quark–gluon plasma in high-energy nuclear collisions. Full article
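As a toy illustration of the dead/hot flagging idea (not the PHENIX-specific procedure, which also applies pseudorapidity adjustments and normalization), towers with no hits can be marked dead and towers far above the typical occupancy marked hot:

```python
import numpy as np

rng = np.random.default_rng(1)
hits = rng.poisson(lam=500, size=1000)          # hit counts per calorimeter tower (toy data)
hits[rng.choice(1000, 10, replace=False)] = 0   # simulate dead channels
hits[rng.choice(1000, 5, replace=False)] *= 20  # simulate hot channels

live = hits[hits > 0]
mean, sigma = live.mean(), live.std()

dead = hits == 0                    # no response at all
hot = hits > mean + 5 * sigma       # far above the typical occupancy
very_hot = hits > mean + 10 * sigma

print(f"dead: {dead.sum()}, hot: {hot.sum()}, very hot: {very_hot.sum()}")
good_towers = ~(dead | hot)         # mask used when building the map
```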
18 pages, 11805 KiB  
Article
VL-PAW: A Vision–Language Dataset for Pear, Apple and Weed
by Gwang-Hyun Yu, Le Hoang Anh, Dang Thanh Vu, Jin Lee, Zahid Ur Rahman, Heon-Zoo Lee, Jung-An Jo and Jin-Young Kim
Electronics 2025, 14(10), 2087; https://doi.org/10.3390/electronics14102087 - 21 May 2025
Viewed by 64
Abstract
Vision–language models (VLMs) have achieved remarkable success in natural image domains, yet their potential remains underexplored in agriculture due to the lack of high-quality, joint image–text datasets. To address this limitation, we introduce VL-PAW (Vision–Language dataset for Pear, Apple, and Weed), a dataset comprising 3.9 K image–caption pairs for two key agricultural tasks: weed species classification and fruit inspection. We fine-tune the CLIP model on VL-PAW and gain several insights. First, the model demonstrates impressive zero-shot performance, achieving 98.21% accuracy in classifying coarse labels. Second, for fine-grained categories, the vision–language model outperforms vision-only models in both few-shot settings and entire dataset training (1-shot: 56.79%; 2-shot: 72.82%; 3-shot: 74.49%; 10-shot: 83.85%). Third, using intuitive captions enhances fine-grained fruit inspection performance compared to using class names alone. These findings demonstrate the applicability of VLMs in future agricultural querying systems. Full article
(This article belongs to the Collection Image and Video Analysis and Understanding)
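The zero-shot evaluation described above follows the standard CLIP recipe; a minimal sketch with a Hugging Face CLIP checkpoint is shown below, where the checkpoint name, prompts, and image path are illustrative and VL-PAW itself is not assumed to be available.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Coarse-label prompts in the spirit of the dataset (illustrative wording).
prompts = ["a photo of a pear", "a photo of an apple", "a photo of a weed"]
image = Image.open("sample.jpg")  # any field image

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-text similarity -> class probabilities
print(dict(zip(prompts, probs[0].tolist())))
```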
16 pages, 1825 KiB  
Article
A Time Synchronization Hop-Count-Control Algorithm Based on Synchronization Error Convergence Probability Estimation
by Haibo Duan, Fanrong Shi, Sijie Wang, Qiushi Cui and Min Zeng
Electronics 2025, 14(10), 2086; https://doi.org/10.3390/electronics14102086 - 21 May 2025
Viewed by 58
Abstract
High-precision time synchronization is regarded as the foundation for ensuring the stable operation of microgrids and for the coordinated sensing, communication, and computing among network devices. In multi-hop wireless sensor networks, it is observed that both the accumulation of synchronization errors and the associated communication overhead are increased with the number of hops; however, in single-hop mode, it is found that the rate of error convergence is insufficient to satisfy the requirements for rapid synchronization. To address these challenges, a hop-control time synchronization algorithm based on the estimation of synchronization error convergence probability is proposed. In the proposed method, the convergence probability of the synchronization error at each node and its rate of change are estimated online, so that the synchronization hop count can be dynamically adjusted: during the synchronization establishment phase, a larger hop count is employed to accelerate error convergence; during the synchronization maintenance phase, the minimal hop count is utilized to maintain long-term high-precision synchronization, thereby ensuring synchronization accuracy and significantly reducing communication overhead. MATLAB simulation results are reported to have demonstrated that the proposed algorithm exhibits marked advantages in convergence speed, synchronization accuracy, and energy consumption as compared to fixed-hop-count and pure single-hop schemes, thereby providing an effective solution for efficient time synchronization in multi-hop wireless sensor networks. Full article
(This article belongs to the Special Issue Real-Time Monitoring and Intelligent Control for a Microgrid)
20 pages, 2183 KiB  
Review
Bulk-Driven CMOS Differential Stages for Ultra-Low-Voltage Ultra-Low-Power Operational Transconductance Amplifiers: A Comparative Analysis
by Muhammad Omer Shah, Andrea Ballo and Salvatore Pennisi
Electronics 2025, 14(10), 2085; https://doi.org/10.3390/electronics14102085 - 21 May 2025
Viewed by 68
Abstract
Energy-efficient integrated circuits require scaled-down supply voltages, posing challenges for analog design, particularly for operational transconductance amplifiers (OTAs) essential in high-accuracy CMOS feedback systems. Below 1 V, gate-driven OTAs are limited in common-mode input range and minimum supply voltage. This work investigates CMOS Bulk-Driven (BD) sub-threshold techniques as an efficient alternative for ultra-low voltage (ULV) and ultra-low power (ULP) designs. Although BD overcomes MOS threshold voltage limitations, historical challenges like lower transconductance, latch-up, and layout complexity hindered its use. Recent advancements in CMOS processes and the need for ULP solutions have revived industrial interest in BD. Through theoretical analysis and computer simulations, we explore BD topologies for ULP OTA input stages, classifying them as tailed/tail-less and class A/AB, evaluating their effectiveness for robust analog design, while offering valuable insights for circuit designers. Full article
(This article belongs to the Special Issue Advanced CMOS Technologies and Applications)
13 pages, 12297 KiB  
Article
Study of Wash-Induced Performance Variability in Embroidered Antenna Sensors for Physiological Monitoring
by Mariam El Gharbi, Jamal Abounasr, Raúl Fernández-García and Ignacio Gil
Electronics 2025, 14(10), 2084; https://doi.org/10.3390/electronics14102084 - 21 May 2025
Viewed by 59
Abstract
This paper presents a study on the repeatability of washing effects on two antenna-based sensors for breathing monitoring. One sensor is an embroidered meander antenna-based sensor integrated into a T-shirt, and the other is a loop antenna integrated into a belt. Both sensors were subjected to five washing cycles, and their performance was assessed after each wash. The embroidered meander antenna was specifically compared before and after washing to monitor a male volunteer’s different breathing patterns, that is, eupnea, apnea, hypopnea, and hyperpnea. Stretching tests were also conducted to evaluate the impact of mechanical deformation on sensor behavior. The results highlight the changes in sensor performance across multiple washes and stretching conditions, offering insights into the durability and reliability of these embroidered and loop antennas for practical applications in wearable health monitoring. The findings emphasize the importance of considering both washing and mechanical stress in the design of robust antenna-based sensors. Full article
(This article belongs to the Special Issue Wearable Device Design and Its Latest Applications)
15 pages, 1285 KiB  
Article
Neural-Network-Based Interference Cancellation for MRC and EGC Receivers in Large Intelligent Surfaces for 6G
by Mário Marques da Silva, Gelson Pembele and Rui Dinis
Electronics 2025, 14(10), 2083; https://doi.org/10.3390/electronics14102083 - 21 May 2025
Viewed by 292
Abstract
Large Intelligent Surfaces (LISs) have emerged as a promising technology for enhancing spectral efficiency and communication capacity in the Sixth Generation of Cellular Communications (6G). Low-complexity receiver architectures for LISs rely on Maximum Ratio Combining (MRC) and Equal Gain Combining (EGC) receivers, often complemented by iterative detection techniques for interference mitigation. In this work, we propose a novel approach where a neural network replaces iterative interference cancellation, learning to estimate the transmitted signals directly from the received data, mitigating interference without requiring iterative cancellation. Moreover, this also eliminates the need for channel matrix inversion at each frequency component, as required for Zero Forcing (ZF) and Minimum Mean Squared Error (MMSE) receivers, reducing computational complexity while still achieving a good performance improvement. The neural network parameters are optimized to balance performance and computational cost. Full article
(This article belongs to the Special Issue Advances in MIMO Systems)
16 pages, 1143 KiB  
Article
AlleyFloodNet: A Ground-Level Image Dataset for Rapid Flood Detection in Economically and Flood-Vulnerable Areas
by Ook Lee and Hanseon Joo
Electronics 2025, 14(10), 2082; https://doi.org/10.3390/electronics14102082 - 21 May 2025
Viewed by 82
Abstract
Urban flooding in economically and environmentally vulnerable areas—such as alleyways, lowlands, and semi-basement residences—poses serious threats. Previous studies on flood detection have largely relied on aerial or satellite-based imagery. While some studies used ground-level images, datasets capturing localized flooding in economically vulnerable urban areas remain limited. To address this, we constructed AlleyFloodNet, a dataset designed for rapid flood detection in flood-vulnerable urban areas, with ground-level images collected from diverse regions worldwide. In particular, this dataset includes data from flood-vulnerable urban areas under diverse realistic conditions, such as varying water levels, colors, and lighting. By fine-tuning several deep learning models on AlleyFloodNet, the ConvNeXt-Large model achieved excellent performance, with an accuracy of 96.56%, precision of 95.45%, recall of 97.67%, and an F1 score of 96.55%. Comparative experiments with existing ground-level image datasets confirmed that datasets specifically designed for economically and flood-vulnerable urban areas, like AlleyFloodNet, are more effective for detecting floods in these regions. By successfully fine-tuning deep learning models, AlleyFloodNet not only addresses the limitations of existing flood monitoring datasets but also provides foundational resources for developing practical, real-time flood detection and alert systems for urban populations vulnerable to flooding. Full article
(This article belongs to the Special Issue Advanced Edge Intelligence in Smart Environments)
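Fine-tuning a pretrained ConvNeXt-Large for a two-class flood/no-flood task can be sketched as below; the torchvision weights, hyperparameters, and dummy batch are illustrative, and the AlleyFloodNet data pipeline is omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and swap the classification head for 2 classes.
model = models.convnext_large(weights=models.ConvNeXt_Large_Weights.DEFAULT)
in_features = model.classifier[2].in_features      # 1536 for ConvNeXt-Large
model.classifier[2] = nn.Linear(in_features, 2)    # flood / no flood

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch (real code would iterate a DataLoader).
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```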
13 pages, 2984 KiB  
Article
Rectified Artificial Neural Networks for Long-Term Force Sensing in Piezoelectric Touch Panels
by Yong Liu, Xuemeng Li, Weihang Ma, Hongbei Meng and Shuo Gao
Electronics 2025, 14(10), 2081; https://doi.org/10.3390/electronics14102081 - 21 May 2025
Viewed by 58
Abstract
Human–machine interfaces based on force touch panels have attracted enormous attention due to their high human–machine interaction efficiency. Many studies have been devoted to diverse force touch technologies, and broad applications in terms of both actual use and research have been developed, such as 3D touch and force-based keystroke authentication. These fruitful results rest on the assumption that users’ touch habits remain unchanged over time, so that a stationary customized force-sensing model can be built. However, during long-term use, users’ touch habits change due to time drift and specific events, causing a decrease in the performance of stationary force-sensing models. To address this issue, a rectified artificial neural network for long-term force sensing in piezoelectric touch panels is presented in this paper. With additional information on the touching time and the occurrence of specific events, the force level predictions were rectified, achieving an accuracy of 97.62% on a long-term dataset. The proposed technique enables customized force sensing for long-term use and enhances human–machine interaction efficiency. Full article
32 pages, 911 KiB  
Article
TB-Collect: Efficient Garbage Collection for Non-Volatile Memory Online Transaction Processing Engines
by Jianhao Wei, Qian Zhang, Yiwen Xiang and Xueqing Gong
Electronics 2025, 14(10), 2080; https://doi.org/10.3390/electronics14102080 - 21 May 2025
Viewed by 67
Abstract
Existing databases supporting Online Transaction Processing (OLTP) workloads based on non-volatile memory (NVM) almost all use Multi-Version Concurrency Control (MVCC) protocol to ensure data consistency. MVCC allows multiple transactions to execute concurrently without lock conflicts, reducing the wait time between read and write operations, and thereby significantly increasing the throughput of NVM OLTP engines. However, it requires garbage collection (GC) to clean up the obsolete tuple versions to prevent storage overflow, which consumes additional system resources. Furthermore, existing GC approaches in NVM OLTP engines are inefficient because they are based on methods designed for dynamic random access memory (DRAM) OLTP engines, without considering the significant differences in read/write bandwidth and cache line size between NVM and DRAM. These approaches either involve excessive random NVM access (traversing tuple versions) or lead to too many additional NVM write operations, both of which degrade the performance and durability of NVM. In this paper, we propose TB-Collect, a high-performance GC approach specifically designed for NVM OLTP engines. On the one hand, TB-Collect separates tuple headers and contents, storing data in an append-only manner, which greatly reduces NVM writes. On the other hand, TB-Collect performs GC at the block level, eliminating the need to traverse tuple versions and improving the utilization of reclaimed space. We have implemented TB-Collect on DBx1000 and MySQL. Experimental results show that TB-Collect achieves 1.15 to 1.58 times the throughput of existing methods when running TPCC and YCSB workloads. Full article
(This article belongs to the Section Computer Science & Engineering)
25 pages, 5209 KiB  
Article
Enhancing Indoor Positioning with GNSS-Aided In-Building Wireless Systems
by Shuya Zhou, Xinghe Chu and Zhaoming Lu
Electronics 2025, 14(10), 2079; https://doi.org/10.3390/electronics14102079 - 21 May 2025
Viewed by 72
Abstract
Wireless indoor positioning systems are challenged by the reliance on densely deployed hardware and exhaustive site surveys, leading to elevated deployment and maintenance costs that limit scalability. This paper introduces a novel positioning framework that enhances the existing In-Building Wireless (IBW) infrastructure by retransmitting Global Navigation Satellite System (GNSS) signals. Pseudorange residuals extracted from raw GNSS measurements, when mapped against known cable lengths, facilitate anchor identification and precise ranging. In parallel, directional and inertial measurements are derived from the channel state information (CSI) of cellular reference signals. Building upon these observations, we develop a Hybrid Adaptive Filter-Graph Fusion (HAF-GF) algorithm for high-precision positioning, wherein the adaptive filter modulates observation noise based on Line-of-Sight (LoS) conditions, while a factor graph optimization over multiple positional constraints ensures global consistency and accelerates convergence. Ray tracing-based simulations in a complex office environment validate the efficacy of the proposed approach, demonstrating a 30% improvement in positioning accuracy and at least a threefold increase in deployment efficiency compared to conventional methods. Full article
(This article belongs to the Special Issue Mobile Positioning and Tracking Using Wireless Networks)
20 pages, 970 KiB  
Article
Design of Dual-Mode Multi-Band Doherty Power Amplifier Employing Impedance-and-Phase Constrained Optimization
by Meiyu Tao, Yunqin Chen, Wa Kong, Shaohua Ni, Zhaowen Zheng and Jing Xia
Electronics 2025, 14(10), 2078; https://doi.org/10.3390/electronics14102078 - 21 May 2025
Viewed by 69
Abstract
To expand the operating frequency bands of the Doherty power amplifier (DPA), this paper proposes a dual-mode multi-band DPA design method employing impedance-and-phase constrained optimization based on reciprocal gate bias. By introducing the concept of reciprocal gate bias, the operating mode is switched by swapping the gate biases of the carrier and peaking amplifiers of the DPA, which effectively extends the operating frequency band without modifying the load modulation network. Furthermore, multiple impedance constraint circles are used to cover the optimum load impedance region obtained from the load-pull simulation, and the phases required for the impedance transformation network (ITN) across the multiple bands are determined based on the impedance transformation requirements when the DPA operates in the power back-off (PBO) and saturation states. The ITNs that satisfy the impedance and phase constraints can then be optimized and designed. For verification, a dual-mode multi-band DPA, operating in Mode I at 1.96–2.10 GHz and 2.75–2.86 GHz, and in Mode II at 2.49–2.61 GHz and 3.20–3.36 GHz, is designed and fabricated. Measured results show that the output power of the DPA exceeds 43 dBm with corresponding saturated drain efficiencies (DEs) higher than 50% in both modes. For 6 dB PBO, the DEs are 49.4–55.7% and 49.8–51.7% in Mode I, whereas in Mode II they range from 51.2% to 52.4% and from 50.4% to 53.5%. Moreover, good linearity can be achieved after linearization for 20 MHz modulated signals. Full article
(This article belongs to the Section Microwave and Wireless Communications)
22 pages, 971 KiB  
Article
A Personalized Itinerary Recommender System: Considering Sequential Pattern Mining
by Chieh-Yuan Tsai and Jing-Hao Wang
Electronics 2025, 14(10), 2077; https://doi.org/10.3390/electronics14102077 - 20 May 2025
Viewed by 117
Abstract
Personalized itinerary recommendations are essential as many people choose traveling as their primary leisure pursuit. Unlike model-based and optimization-based methods, sequential-pattern-mining-based methods, which build on users’ previous visiting experience, can generate more personalized itineraries and avoid the difficulties associated with those two approaches. Although sequential-pattern-mining-based methods have shown promise in generating personalized itineraries, the following three challenges remain. First, they often overlook user diversity in time and category preferences, leading to less personalized itinerary suggestions. Second, they typically evaluate sequences only by POI preference, ignoring the crucial factors of optimal visiting times and travel distance. Third, they tend to recommend feasible but not optimal itineraries without exploring extended combinations that could better meet user constraints. To overcome these difficulties, a novel personalized itinerary recommendation system for social media is proposed. First, a user preference profile, which contains time and category preferences, is generated for every user, and users with similar preferences are clustered into the same group. A sequential pattern mining algorithm is then adopted to create frequent sequential patterns for each group. Second, to evaluate the suitability of an itinerary, we define an itinerary score that considers POI preference, time matching, and travel distance. Third, based on the tentative itineraries generated from the sequential pattern mining process, the Sequential-Pattern-Mining-based Itinerary Recommendation (SPM-IR) algorithm is developed to create more candidate itineraries under user-specified constraints. The top-N candidate sequences ranked by the proposed itinerary score are then returned to the target user as the itinerary recommendation. A real-life dataset from geotagged social media is used to demonstrate the benefits of the proposed personalized itinerary recommendation system. Empirical evaluations show that 94.82% of the generated itineraries outperformed real-life itineraries in POI preference, time matching, and travel-distance-based itinerary scores. Ablation studies confirmed the contribution of time and category preferences and highlighted the importance of time matching in itinerary evaluation. Full article
(This article belongs to the Special Issue Application of Data Mining in Social Media)
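A toy version of a composite itinerary score that mixes POI preference, time matching, and travel distance is sketched below; the weights, normalizations, and sample POIs are illustrative and are not the paper's exact definitions.

```python
import math

def itinerary_score(itinerary, user_pref, w=(0.5, 0.3, 0.2)):
    """Weighted mix of POI preference, time matching, and (inverse) travel distance."""
    pref = sum(user_pref.get(p["category"], 0.0) for p in itinerary) / len(itinerary)
    time_match = sum(1.0 for p in itinerary
                     if p["best_hours"][0] <= p["arrival_hour"] <= p["best_hours"][1]) / len(itinerary)
    dist = sum(math.dist(a["coord"], b["coord"]) for a, b in zip(itinerary, itinerary[1:]))
    w_pref, w_time, w_dist = w
    return w_pref * pref + w_time * time_match + w_dist * (1.0 / (1.0 + dist))

user_pref = {"museum": 0.9, "park": 0.6, "cafe": 0.3}
itinerary = [
    {"category": "museum", "arrival_hour": 10, "best_hours": (9, 17), "coord": (0.0, 0.0)},
    {"category": "park",   "arrival_hour": 13, "best_hours": (8, 19), "coord": (0.4, 0.1)},
    {"category": "cafe",   "arrival_hour": 15, "best_hours": (7, 22), "coord": (0.5, 0.3)},
]
print(f"score: {itinerary_score(itinerary, user_pref):.3f}")
```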
13 pages, 1463 KiB  
Article
Weak-Light-Enhanced AlGaN/GaN UV Phototransistors with a Buried p-GaN Structure
by Haiping Wang, Feiyu Zhang, Xuzhi Zhao, Haifan You, Zhan Ma, Jiandong Ye, Hai Lu, Rong Zhang, Youdou Zheng and Dunjun Chen
Electronics 2025, 14(10), 2076; https://doi.org/10.3390/electronics14102076 - 20 May 2025
Viewed by 111
Abstract
We propose a novel ultraviolet (UV) phototransistor (PT) architecture based on an AlGaN/GaN high electron mobility transistor (HEMT) with a buried p-GaN layer. In the dark, the polarization-induced two-dimensional electron gas (2DEG) at the AlGaN/GaN heterojunction interface is depleted by the buried p-GaN and the conduction channel is closed. Under UV illumination, the depletion region shrinks to just beneath the AlGaN/GaN interface and the 2DEG recovers. The retraction distance of the depletion region during device turn-on operation is comparable to the thickness of the AlGaN barrier layer, which is an order of magnitude smaller than that in the conventional p-GaN/AlGaN/GaN PT, whose retraction distance spans the entire GaN channel layer. Consequently, the proposed device demonstrates significantly enhanced weak-light detection capability and improved switching speed. Silvaco Atlas simulations reveal that under a weak UV intensity of 100 nW/cm2, the proposed device achieves a photocurrent density of 1.68 × 10−3 mA/mm, responsivity of 8.41 × 105 A/W, photo-to-dark-current ratio of 2.0 × 108, UV-to-visible rejection ratio exceeding 108, detectivity above 1 × 1019 cm·Hz1/2/W, and response time of 0.41/0.41 ns. The electron concentration distributions, conduction band variations, and 2DEG recovery behaviors in both the conventional and novel structures under dark and weak UV illumination are investigated in depth via simulations. Full article
(This article belongs to the Special Issue Advances in Semiconductor GaN and Applications)
13 pages, 2240 KiB  
Article
Monocular 3D Tooltip Tracking in Robotic Surgery—Building a Multi-Stage Pipeline
by Sanjeev Narasimhan, Mehmet Kerem Turkcan, Mattia Ballo, Sarah Choksi, Filippo Filicori and Zoran Kostic
Electronics 2025, 14(10), 2075; https://doi.org/10.3390/electronics14102075 - 20 May 2025
Viewed by 489
Abstract
Tracking the precise movement of surgical tools is essential for enabling automated analysis, providing feedback, and enhancing safety in robotic-assisted surgery. Accurate 3D tracking of surgical tooltips is challenging to implement when using monocular videos due to the complexity of extracting depth information. We propose a pipeline that combines state-of-the-art foundation models—Florence2 and Segment Anything 2 (SAM2)—for zero-shot 2D localization of tooltip coordinates using a monocular video input. Localization predictions are refined through supervised training of the YOLOv11 segmentation model to enable real-time applications. The depth estimation model Metric3D computes the relative depth and provides tooltip camera coordinates, which are subsequently transformed into world coordinates via a linear model estimating rotation and translation parameters. An experimental evaluation on the JIGSAWS Suturing Kinematic dataset achieves a 3D Average Jaccard score on tooltip tracking of 84.5 and 91.2 for the zero-shot and supervised approaches, respectively. The results validate the effectiveness of our approach and its potential to enhance real-time guidance and assessment in robotic-assisted surgical procedures. Full article
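The final geometric step of such a pipeline, lifting a 2D tooltip detection plus an estimated depth into camera coordinates and then into world coordinates, can be sketched as follows; the intrinsics and the rotation/translation are placeholders, not values from the paper.

```python
import numpy as np

# Illustrative pinhole intrinsics (focal lengths and principal point) for the endoscope camera.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

def pixel_to_camera(u, v, depth, K):
    """Back-project a pixel and its estimated depth to camera coordinates."""
    return depth * np.linalg.inv(K) @ np.array([u, v, 1.0])

def camera_to_world(p_cam, R, t):
    """Apply the rotation/translation estimated by the linear model."""
    return R @ p_cam + t

u, v, depth = 812.0, 410.0, 0.095             # tooltip pixel and estimated depth (toy values)
R, t = np.eye(3), np.zeros(3)                 # placeholder extrinsics

p_cam = pixel_to_camera(u, v, depth, K)
p_world = camera_to_world(p_cam, R, t)
print(p_cam, p_world)
```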
22 pages, 45371 KiB  
Article
DPCK: An Adaptive Differential Privacy-Based CK-Means Clustering Scheme for Smart Meter Data Analysis
by Shaobo Zhang, Jielu Zhu, Entao Luo, Xiaoyu Zhu and Qing Yang
Electronics 2025, 14(10), 2074; https://doi.org/10.3390/electronics14102074 - 20 May 2025
Viewed by 95
Abstract
K-means, as a commonly used clustering method, has been widely applied in data analysis for smart meters. However, this method requires repeatedly computing the similarity between all data points and cluster centers in each iteration, which leads to high computational overhead. Moreover, the process of analyzing electricity consumption data by K-means can cause the leakage of users’ privacy, and the current differential privacy technique adopts a uniform privacy budget allocation for data, which reduces the availability of the data. In order to reduce the computational overhead of smart meter data analysis and improve data availability while protecting data privacy, this paper proposes an adaptive differential privacy-based CK-means clustering scheme, named DPCK. Firstly, we propose a CK-means method by improving K-means, which not only reduces the computation between data and centers but also avoids repeated computation by calculating the adjacent cluster center set and stability region for each cluster, thus effectively reducing the computational overhead of data analysis. Secondly, we design an adaptive differential privacy mechanism to add Laplace noise by calculating a different privacy budget for each cluster, which improves data availability while protecting data privacy. Finally, theoretical analysis demonstrates that DPCK provides differential privacy protection. Experimental results show that, compared to baseline methods, DPCK effectively reduces the computational overhead of data analysis and improves data availability by 11.3% while protecting user privacy. Full article
(This article belongs to the Section Computer Science & Engineering)
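A minimal sketch of the adaptive-budget idea, clustering load profiles and perturbing each cluster centre with Laplace noise scaled to its share of the total privacy budget, is shown below; the allocation rule, sensitivity, and data are illustrative and not DPCK's exact mechanism.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.random((500, 24))            # toy daily load profiles, normalized to [0, 1]

k, total_epsilon, sensitivity = 5, 1.0, 1.0
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(profiles)

noisy_centers = np.empty_like(km.cluster_centers_)
for c in range(k):
    size = np.sum(km.labels_ == c)
    # Adaptive allocation (illustrative): larger clusters get a larger share of epsilon,
    # so their centres receive less noise and stay more useful.
    eps_c = total_epsilon * size / len(profiles)
    scale = sensitivity / (size * eps_c)    # Laplace scale for an averaged quantity
    noisy_centers[c] = km.cluster_centers_[c] + rng.laplace(0.0, scale, size=24)

print(np.abs(noisy_centers - km.cluster_centers_).mean(axis=1))
```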
15 pages, 2611 KiB  
Article
GPU-Optimized Implementation for Accelerating CSAR Imaging
by Mengting Cui, Ping Li, Zhaohui Bu, Meng Xun and Li Ding
Electronics 2025, 14(10), 2073; https://doi.org/10.3390/electronics14102073 - 20 May 2025
Viewed by 80
Abstract
The direct porting of the Range Migration Algorithm to GPUs for three-dimensional (3D) cylindrical synthetic aperture radar (CSAR) imaging faces difficulties in achieving real-time performance because the architecture and programming models of GPUs differ significantly from those of CPUs. This paper proposes a GPU-optimized implementation for accelerating CSAR imaging. The proposed method first exploits concentric-square-grid (CSG) interpolation to reduce the computational complexity of reconstructing a uniform 2D wave-number domain. Although the CSG method transforms the 2D traversal interpolation into two independent 1D interpolations, the interval search needed to determine the position intervals for interpolation imposes a substantial computational burden. Therefore, binary search is applied to avoid traditional point-to-point matching and improve efficiency. Additionally, leveraging the partition independence of the CSG grid distribution, the 360° data are divided into four streams along the diagonals for parallel processing. Furthermore, high-speed shared memory is utilized instead of high-latency global memory for the Hadamard product in the phase compensation stage. The experimental results demonstrate that the proposed method achieves CSAR imaging on a 1440×100×128 dataset in 0.794 s, with an acceleration ratio of 35.09 compared to the CPU implementation and 5.97 compared to the conventional GPU implementation. Full article
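The interval-search speedup can be illustrated in a few lines: binary search (np.searchsorted here; a CUDA kernel would do the same per thread) locates the source interval for each target point before 1D interpolation, replacing a linear point-to-point scan. The grids and signal below are illustrative.

```python
import numpy as np

# Non-uniform source wave-number samples and the uniform targets to interpolate onto.
src = np.sort(np.random.default_rng(0).uniform(-1.0, 1.0, 1024))
dst = np.linspace(-0.95, 0.95, 4096)

# Binary search: for each target, the index of the source interval containing it,
# replacing an O(N) scan per point with O(log N).
idx = np.searchsorted(src, dst, side="right") - 1
idx = np.clip(idx, 0, src.size - 2)

# Linear interpolation of a sampled signal using the located intervals.
values = np.sinc(src)
w = (dst - src[idx]) / (src[idx + 1] - src[idx])
interp = (1.0 - w) * values[idx] + w * values[idx + 1]
print(np.max(np.abs(interp - np.sinc(dst))))   # interpolation error on the toy signal
```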
21 pages, 4356 KiB  
Article
Horizontal Attack Against EC kP Accelerator Under Laser Illumination
by Dmytro Petryk, Ievgen Kabin, Peter Langendoerfer and Zoya Dyka
Electronics 2025, 14(10), 2072; https://doi.org/10.3390/electronics14102072 - 20 May 2025
Viewed by 80
Abstract
Devices employing cryptographic approaches have to be resistant to physical attacks. Side-Channel Analysis (SCA) and Fault Injection (FI) attacks are frequently used to reveal cryptographic keys. In this paper, we present a combined SCA and laser illumination attack against an Elliptic Curve Scalar Multiplication accelerator, while using different equipment for the measurement of its power traces, i.e., we performed the measurements using a current probe from Riscure and a differential probe from Teledyne LeCroy, with an attack success of 70% and 90%, respectively. Our experiments showed that laser illumination increased the power consumption of the chip, especially its static power consumption, but the success of the horizontal power analysis attacks changed insignificantly. After applying 100% of the laser beam output power and illuminating the smallest area of 143 µm2, we observed an offset of 17 mV in the measured trace. We assume that using a laser with a high laser beam power, as well as concentrating on measuring and analysing only static current, can significantly improve the attack’s success. The attacks exploiting the Static Current under Laser Illumination (SCuLI attacks) are novel, and their potential has not yet been fully investigated. These attacks can be especially dangerous against cryptographic chips manufactured in downscaling technologies. If such attacks are feasible, appropriate countermeasures have to be proposed in the future. Full article
(This article belongs to the Special Issue Advances in Hardware Security Research)
17 pages, 1262 KiB  
Article
Time Series Forecasting via an Elastic Optimal Adaptive GM(1,1) Model
by Teng Li, Jiajia Nie, Guozhi Qiu, Zhen Li, Cun Ji and Xueqing Li
Electronics 2025, 14(10), 2071; https://doi.org/10.3390/electronics14102071 - 20 May 2025
Viewed by 113
Abstract
The GM(1,1) model is a well-established approach for time series forecasting, demonstrating superior effectiveness with limited data and incomplete information. However, its performance often degrades in dynamic systems, leading to obvious prediction errors. To address this impediment, we propose an elastic optimal adaptive GM(1,1) model, dubbed EOAGM, to improve forecasting performance. Specifically, our proposed EOAGM dynamically optimizes the sequence length by discarding outdated data and incorporating new data, reducing the influence of irrelevant historical information. Moreover, we introduce a stationarity test mechanism to identify and adjust sequence data fluctuations, ensuring stability and robustness against volatility. Additionally, the model refines parameter optimization by incorporating predicted values into candidate sequences and assessing their impact on subsequent forecasts, particularly under conditions of data fluctuation or anomalies. Experimental evaluations across multiple real-world datasets demonstrate the superior prediction accuracy and reliability of our model compared to six baseline approaches. Full article
(This article belongs to the Special Issue Future Technologies for Data Management, Processing and Application)
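For reference, a plain GM(1,1) baseline, the model the EOAGM builds on, can be written in a few lines of NumPy (accumulated series, least-squares estimate of the development coefficient a and grey input b, restored forecast); the elastic window, stationarity test, and feedback refinements of the paper are not included, and the sample series is illustrative.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classical GM(1,1): fit on series x0 and forecast `steps` future values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                             # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]    # development coefficient, grey input

    def x1_hat(k):                                 # k = 0, 1, 2, ...
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x0)
    ks = np.arange(n, n + steps)
    return x1_hat(ks) - x1_hat(ks - 1)             # restore by first-order differencing

series = [112.0, 119.5, 128.3, 136.2, 145.9, 155.1]
print(gm11_forecast(series, steps=3))
```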
27 pages, 11612 KiB  
Article
FACDIM: A Face Image Super-Resolution Method That Integrates Conditional Diffusion Models with Prior Attributes
by Jianhua Ren, Yuze Guo and Qiangkui Leng
Electronics 2025, 14(10), 2070; https://doi.org/10.3390/electronics14102070 - 20 May 2025
Viewed by 136
Abstract
Facial image super-resolution seeks to reconstruct high-quality details from low-resolution inputs, yet traditional methods, such as interpolation, convolutional neural networks (CNNs), and generative adversarial networks (GANs), often fall short, suffering from insufficient realism, loss of high-frequency details, and training instability. Furthermore, many existing models inadequately incorporate facial structural attributes and semantic information, leading to semantically inconsistent generated images. To overcome these limitations, this study introduces an attribute-prior conditional diffusion implicit model that enhances the controllability of super-resolution generation and improves detail restoration capabilities. Methodologically, the framework consists of four components: a pre-super-resolution module, a facial attribute extraction module, a global feature encoder, and an enhanced conditional diffusion implicit model. Specifically, low-resolution images are subjected to preliminary super-resolution and attribute extraction, followed by adaptive group normalization to integrate feature vectors. Additionally, residual convolutional blocks are incorporated into the diffusion model to utilize attribute priors, complemented by self-attention mechanisms and skip connections to optimize feature transmission. Experiments conducted on the CelebA and FFHQ datasets demonstrate that the proposed model achieves an increase of 2.16 dB in PSNR and 0.08 in SSIM under an 8× magnification factor compared to SR3, with the generated images displaying more realistic textures. Moreover, manual adjustment of attribute vectors allows for directional control over generation outcomes (e.g., modifying facial features or lighting conditions), ensuring alignment with anthropometric characteristics. This research provides a flexible and robust solution for high-fidelity face super-resolution, offering significant advantages in detail preservation and user controllability. Full article
(This article belongs to the Special Issue AI-Driven Image Processing: Theory, Methods, and Applications)
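As a rough illustration of the adaptive group normalization step mentioned in the abstract, the PyTorch sketch below modulates a normalized feature map with a scale and shift predicted from a conditioning vector (such as an attribute-prior and global-feature embedding). The class name, dimensions, and group count are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaGN(nn.Module):
    """Adaptive group normalization: modulate a GroupNorm output with a
    scale/shift predicted from a conditioning vector (e.g., attribute priors)."""
    def __init__(self, num_channels, cond_dim, num_groups=8):
        super().__init__()
        # num_groups must divide num_channels
        self.norm = nn.GroupNorm(num_groups, num_channels, affine=False)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, x, cond):
        # x: (B, C, H, W) feature map; cond: (B, cond_dim) conditioning vector
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=1)
        scale = scale[:, :, None, None]
        shift = shift[:, :, None, None]
        return self.norm(x) * (1 + scale) + shift

# Quick shape check with random tensors.
x = torch.randn(2, 64, 16, 16)
cond = torch.randn(2, 128)
out = AdaGN(num_channels=64, cond_dim=128)(x, cond)
print(out.shape)  # torch.Size([2, 64, 16, 16])
```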

49 pages, 1114 KiB  
Review
A Survey on the Main Techniques Adopted in Indoor and Outdoor Localization
by Massimo Stefanoni, Imre Kovács, Peter Sarcevic and Ákos Odry
Electronics 2025, 14(10), 2069; https://doi.org/10.3390/electronics14102069 - 20 May 2025
Viewed by 151
Abstract
In modern engineering applications, localization and orientation play an increasingly crucial role in ensuring the successful execution of assigned tasks. Industrial robots, smart home systems, healthcare environments, nuclear facilities, agriculture, and autonomous vehicles are just a few examples of fields where localization technologies are applied. Over the years, these technologies have evolved significantly, with numerous methods being developed, proposed, and refined. This paper aims to provide a comprehensive review of the primary localization and orientation technologies available in the literature, detailing the fundamental principles on which they are based and the key algorithms used to implement them. To achieve accurate and reliable localization, fusion-based approaches are often necessary, integrating data from multiple sensors and systems or estimating hidden states. For this purpose, algorithms such as Kalman Filters, Particle Filters, or Neural Networks are usually adopted. The first part of this article presents an extensive review of localization technologies, including radio frequency, RFID, laser-based systems, vision-based techniques, light-based positioning, IMU-based methods, odometry, and ultrasound-based solutions. The second part focuses on the most widely used algorithms for localization. Finally, summary tables provide an overview of the best and most consistent accuracies reported in the literature for the investigated technologies and systems. Full article
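To make the role of such fusion filters concrete, the following minimal scalar Kalman filter smooths a stream of noisy position readings; the noise variances and the constant-position motion model are illustrative assumptions, since practical localization systems use multidimensional state vectors and carefully tuned noise models.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter over a stream of noisy position readings.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict (constant-position model): only the uncertainty grows.
        p = p + q
        # Update: blend prediction and measurement by the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Example: noisy readings around a true position of 2.0 m.
readings = 2.0 + 0.3 * np.random.randn(50)
print(kalman_1d(readings)[-1])
```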

17 pages, 2553 KiB  
Article
Negative Feedback Matters: Exploring Positive and Negative Correlations for Time Series Anomaly Detection
by Yixuan Jin, Xueting Liu, Bing Hu, Joojo Walker, Ke Wang, Wei Wu and Ting Zhong
Electronics 2025, 14(10), 2068; https://doi.org/10.3390/electronics14102068 - 20 May 2025
Viewed by 176
Abstract
Recently, graph neural networks (GNNs) have demonstrated remarkable success in multivariable time series anomaly detection, particularly by explicitly modeling inter-variable relationships. However, to prevent the distinct pattern of one variable from introducing noise into unrelated variables, existing methods focus solely on leveraging positive correlations among neighbors for relationship modeling while neglecting the role of negative correlations. This limitation hinders their effectiveness in complex scenarios where both positive and negative dependencies are critical. To address this challenge, we propose PNGDN, a novel GNN framework that incorporates both positive and negative correlations to enhance anomaly-detection performance. Notably, PNGDN introduces a correlational graph structure learning module that simultaneously captures positive and negative dependencies. It filters out spurious relationships by applying a unified similarity threshold that screens both positive and negative correlations, allowing the model to focus on truly meaningful correlations among variables. Additionally, an attention-based information propagation mechanism ensures the efficient propagation of information under positive and negative correlations, facilitating accurate predictions for each variable. Extensive experiments on three benchmark time series anomaly detection datasets validate the superior performance of PNGDN. Full article
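As a rough sketch of screening both positive and negative dependencies with a single threshold, the snippet below builds a signed adjacency matrix from pairwise Pearson correlations; PNGDN learns its graph structure end to end, so the static correlation estimate and the threshold value here are illustrative assumptions only.

```python
import numpy as np

def signed_correlation_graph(X, tau=0.6):
    """Build a signed adjacency matrix from a multivariate series X of shape (T, N).
    Edges with |corr| >= tau are kept and their sign marks positive vs. negative
    dependency; weaker edges are treated as spurious and dropped."""
    corr = np.corrcoef(X, rowvar=False)   # N x N Pearson correlation matrix
    np.fill_diagonal(corr, 0.0)           # no self-loops
    adj = np.where(np.abs(corr) >= tau, corr, 0.0)
    return adj

# Example with three synthetic variables: the third is anti-correlated with the first.
T = 200
base = np.random.randn(T)
X = np.column_stack([base, base + 0.1 * np.random.randn(T), -base + 0.1 * np.random.randn(T)])
print(signed_correlation_graph(X))
```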

44 pages, 5183 KiB  
Article
A Blockchain-Based Framework for Secure Data Stream Dissemination in Federated IoT Environments
by Jakub Sychowiec and Zbigniew Zieliński
Electronics 2025, 14(10), 2067; https://doi.org/10.3390/electronics14102067 - 20 May 2025
Viewed by 149
Abstract
An industrial-scale increase in applications of the Internet of Things (IoT), a significant number of which are based on the concept of federation, presents unique security challenges due to their distributed nature and the need for secure communication between components from different administrative domains. A federation may be created for the duration of a mission, such as military operations or Humanitarian Assistance and Disaster Relief (HADR) operations. These missions often occur in very difficult or even hostile environments, posing additional challenges for ensuring reliability and security. The heterogeneity of devices, protocols, and security requirements in different domains further complicates the requirements for the secure distribution of data streams in federated IoT environments. Effective dissemination of data streams in federated environments must also provide the flexibility to filter and search for patterns in real time, so that critical events or threats (e.g., fires and hostile objects) can be detected as the information needs of end users change. The paper presents a novel and practical framework for secure and reliable data stream dissemination in federated IoT environments, leveraging blockchain, Apache Kafka brokers, and microservices. To authenticate IoT devices and verify data streams, we have integrated a hardware and software IoT gateway with the Hyperledger Fabric (HLF) blockchain platform, which records the distinguishing features of IoT devices (fingerprints). In this paper, we analyze our platform’s security, focusing on secure data distribution. We formally discuss potential attack vectors and ways to mitigate them through the platform’s design. We thoroughly assess the effectiveness of the proposed framework by conducting extensive performance tests in two setups: an Amazon Web Services (AWS) cloud-based environment and a resource-constrained Raspberry Pi environment. Implementing our framework in the AWS cloud infrastructure has demonstrated that it is suitable for processing audiovisual streams in environments that require immediate interoperability. The results are promising, as the average time it takes for a consumer to read a verified data stream is on the order of seconds. The measured time for complete processing of an audiovisual stream corresponds to approximately 25 frames per second (fps). The results obtained also confirm the computational stability of our framework. Furthermore, we have confirmed that our environment can be deployed on resource-constrained commercial off-the-shelf (COTS) platforms while maintaining low operational costs. Full article
(This article belongs to the Special Issue Feature Papers in "Computer Science & Engineering", 2nd Edition)
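For orientation, the sketch below shows a consumer-side check in the spirit of the framework: it reads messages from a Kafka topic (using the kafka-python client) and accepts only those whose device fingerprint passes a verification step. The topic name, message format, broker address, and the is_fingerprint_registered stub are hypothetical; in the actual framework the verification queries the Hyperledger Fabric ledger rather than an in-memory set.

```python
import json
from kafka import KafkaConsumer  # kafka-python client

def is_fingerprint_registered(device_fingerprint):
    """Stub for illustration only: the paper's framework would check the
    device fingerprint against records on the Hyperledger Fabric ledger."""
    return device_fingerprint in {"sensor-001", "camera-007"}

consumer = KafkaConsumer(
    "verified-streams",                        # hypothetical topic name
    bootstrap_servers="localhost:9092",        # assumed broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for msg in consumer:
    record = msg.value
    if is_fingerprint_registered(record.get("fingerprint")):
        print("accepted frame from", record.get("fingerprint"))
    else:
        print("rejected: unregistered device")
```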

14 pages, 2069 KiB  
Article
The Role of Facial Action Units in Investigating Facial Movements During Speech
by Aliya A. Newby, Ambika Bhatta, Charles Kirkland III, Nicole Arnold and Lara A. Thompson
Electronics 2025, 14(10), 2066; https://doi.org/10.3390/electronics14102066 - 20 May 2025
Viewed by 167
Abstract
Investigating how facial movements can be used to characterize and quantify speech is important, particularly for aiding those with motor control speech disorders. Here, we sought to investigate how facial action units (AUs), previously used to classify human expressions and emotion, could be used to quantify and understand unimpaired human speech. Fourteen (14) adult participants (30.1 ± 7.9 years old), fluent in English, with no speech impairments, were examined. Within each data collection session, 6 video trials per participant per phoneme were acquired (i.e., 102 trials total/phoneme). The participants were asked to vocalize the vowels /æ/, /ɛ/, /ɪ/, /ɒ/, and /ʊ/; the consonants /b/, /n/, /m/, /p/, /h/, /w/, and /d/; and the diphthongs /eI/, /ʌɪ/, /i/, /a:/, and /u:/. Using the Python Py-Feat library, our analysis identified the AU contributions for each phoneme. The key implication of our methodological findings is that AUs can be used to quantify speech in populations with no speech disability; this approach has the potential to be extended toward providing feedback on, and characterization of, speech changes and improvements in impaired populations. This would be of interest to persons with speech disorders, speech language pathologists, engineers, and physicians. Full article
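As a pointer for readers who want to reproduce the AU extraction step, the snippet below runs Py-Feat's detector on a single frame and averages the action-unit columns; the file name is hypothetical, and method and column names may differ slightly across Py-Feat releases, so treat this as a sketch rather than the authors' pipeline.

```python
from feat import Detector  # Py-Feat; pretrained models are downloaded on first use

# Minimal sketch: detect action units in one video frame and summarize them.
detector = Detector()
result = detector.detect_image("phoneme_frame.jpg")   # hypothetical frame file
au_columns = [c for c in result.columns if c.startswith("AU")]
print(result[au_columns].mean())   # mean AU activation for this frame
```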
