Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

29 pages, 5837 KB  
Article
Enhancing Clustering Efficiency in Heterogeneous Wireless Sensor Network Protocols Using the K-Nearest Neighbours Algorithm
by Abdulla Juwaied, Lidia Jackowska-Strumillo and Artur Sierszeń
Sensors 2025, 25(4), 1029; https://doi.org/10.3390/s25041029 - 9 Feb 2025
Cited by 7 | Viewed by 2051
Abstract
Wireless Sensor Networks are formed by tiny, self-contained, battery-powered computers with radio links that can sense their surroundings for events of interest and store and process the sensed data. Sensor nodes wirelessly communicate with each other to relay information to a central base station. Energy consumption is the most critical parameter in Wireless Sensor Networks (WSNs). Network lifespan is directly influenced by the energy consumption of the sensor nodes. All sensors in the network send and receive data from the base station (BS) using different routing protocols and algorithms. These routing protocols use two main types of clustering: hierarchical clustering and flat clustering. Consequently, effective clustering within Wireless Sensor Network (WSN) protocols is essential for establishing secure connections among nodes, ensuring a stable network lifetime. This paper introduces a novel approach to improve energy efficiency, reduce the length of network connections, and increase network lifetime in heterogeneous Wireless Sensor Networks by employing the K-Nearest Neighbours (KNN) algorithm to optimise node selection and clustering mechanisms for four protocols: Low-Energy Adaptive Clustering Hierarchy (LEACH), Stable Election Protocol (SEP), Threshold-sensitive Energy Efficient sensor Network (TEEN), and Distributed Energy-efficient Clustering (DEC). Simulation results obtained using MATLAB (R2024b) demonstrate the efficacy of the proposed K-Nearest Neighbours algorithm, revealing that the modified protocols achieve shorter distances between cluster heads and nodes, reduced energy consumption, and improved network lifetime compared to the original protocols. The proposed KNN-based approach enhances the network’s operational efficiency and security, offering a robust solution for energy management in WSNs. Full article
(This article belongs to the Section Sensor Networks)
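The modified cluster-formation rules are not reproduced in this listing, but the core idea of shortening node-to-cluster-head links through a nearest-neighbour assignment can be sketched as follows. This is a minimal illustration with a hypothetical 100 m x 100 m field and random node and cluster-head positions, not the paper's simulation setup.

```python
import numpy as np

def assign_nodes_to_cluster_heads(nodes, cluster_heads):
    """Assign each sensor node to its nearest cluster head (1-NN rule).

    nodes: (N, 2) array of node coordinates.
    cluster_heads: (K, 2) array of cluster-head coordinates.
    Returns, for every node, the index of the chosen head and the distance to it.
    """
    # Pairwise Euclidean distances between nodes and cluster heads: shape (N, K)
    diff = nodes[:, None, :] - cluster_heads[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    nearest = dist.argmin(axis=1)          # index of the closest head per node
    return nearest, dist[np.arange(len(nodes)), nearest]

# Example: 100 nodes and 5 cluster heads scattered over a hypothetical 100 m x 100 m field
rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(100, 2))
heads = rng.uniform(0, 100, size=(5, 2))
labels, dists = assign_nodes_to_cluster_heads(nodes, heads)
print("mean node-to-head distance:", dists.mean())
```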

16 pages, 4009 KB  
Article
Curved Fabry-Pérot Ultrasound Detectors: Optical and Mechanical Analysis
by Barbara Rossi, Maria Alessandra Cutolo, Martino Giaquinto, Andrea Cusano and Giovanni Breglio
Sensors 2025, 25(4), 1014; https://doi.org/10.3390/s25041014 - 8 Feb 2025
Cited by 1 | Viewed by 1322
Abstract
Optical fiber-based acoustic detectors for ultrasound imaging in the medical field feature plano-concave Fabry–Pérot cavities integrated on fiber tips, realized via dip-coating. This technique imposes constraints on sensor geometry, potentially limiting performance. Lab-on-Fiber technology enables complex three-dimensional structures with precise control over geometric parameters, such as the curvature radius. A careful investigation of the optical and mechanical aspects involved in the sensors’ performance is crucial for determining the design rules of such probes. In this study, we numerically analyzed the impact of curvature on the optical and acoustic properties of a plano-concave cavity using the Finite Element Method. Performance metrics, including sensitivity, bandwidth, and directivity, were compared to planar Fabry–Pérot configurations. The results suggest that introducing curvature significantly enhances sensitivity by improving light confinement, especially for cavity thicknesses exceeding half the Rayleigh zone (∼45 μm), reaching an enhancement of 2.5 at L = 60 μm compared to planar designs. The curved structure maintains high spectral quality (FOM) despite 2% fabrication perturbations. A mechanical analysis confirms no disadvantages in acoustic response and bandwidth (∼40 MHz). These findings establish curved plano-concave structures as robust and reliable for high-sensitivity polymeric lab-on-fiber ultrasound detectors, offering improved performance and fabrication tolerance for MHz-scale bandwidth applications. Full article
(This article belongs to the Special Issue Feature Papers in Optical Sensors 2025)

28 pages, 4405 KB  
Article
Towards Explainable Artificial Intelligence for GNSS Multipath LSTM Training Models
by He-Sheng Wang, Dah-Jing Jwo and Zhi-Hang Gao
Sensors 2025, 25(3), 978; https://doi.org/10.3390/s25030978 - 6 Feb 2025
Cited by 1 | Viewed by 2357
Abstract
This paper addresses the critical challenge of understanding and interpreting deep learning models in Global Navigation Satellite System (GNSS) applications, specifically focusing on multipath effect detection and analysis. As GNSS systems become increasingly reliant on deep learning for signal processing, the lack of model interpretability poses significant risks for safety-critical applications. We propose a novel approach combining Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) cells with Layer-wise Relevance Propagation (LRP) to create an explainable framework for multipath detection. Our key contributions include: (1) the development of an interpretable LSTM architecture for processing GNSS observables, including multipath variables, carrier-to-noise ratios, and satellite elevation angles; (2) the adaptation of the LRP technique for GNSS signal analysis, enabling attribution of model decisions to specific input features; and (3) the discovery of a correlation between LRP relevance scores and signal anomalies, leading to a new method for anomaly detection. Through systematic experimental validation, we demonstrate that our LSTM model achieves high prediction accuracy across all GNSS parameters while maintaining interpretability. A significant finding emerges from our controlled experiments: LRP relevance scores consistently increase during anomalous signal conditions, with growth rates varying from 7.34% to 32.48% depending on the feature type. In our validation experiments, we systematically introduced signal anomalies in specific time segments of the data sequence and observed corresponding increases in LRP scores: multipath parameters showed increases of 7.34–8.81%, carrier-to-noise ratios exhibited changes of 12.50–32.48%, and elevation angle parameters increased by 16.10%. These results demonstrate the potential of LRP-based analysis for enhancing GNSS signal quality monitoring and integrity assessment. Our approach not only improves the interpretability of deep learning models in GNSS applications but also provides a practical framework for detecting and analyzing signal anomalies, contributing to the development of more reliable and trustworthy navigation systems. Full article
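The paper adapts Layer-wise Relevance Propagation to an LSTM; as a minimal illustration of the underlying propagation rule only, the sketch below applies the standard epsilon-LRP rule to a single dense layer with toy weights and relevances, not the authors' GNSS model.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer.

    a: (n_in,) activations entering the layer.
    W: (n_in, n_out) weights, b: (n_out,) biases.
    R_out: (n_out,) relevance arriving from the layer above.
    Returns (n_in,) relevance redistributed onto the inputs.
    """
    z = a @ W + b                   # forward pre-activations
    denom = z + eps * np.sign(z)    # stabilised denominator
    s = R_out / denom               # relevance per unit of pre-activation
    return a * (W @ s)              # redistribute along each contribution a_j * w_jk

# Toy example: 3 input features, 2 output units
a = np.array([0.5, 1.0, -0.3])
W = np.array([[0.2, -0.1], [0.4, 0.3], [-0.5, 0.2]])
b = np.array([0.05, -0.02])
R_out = np.array([1.0, 0.5])        # relevance assigned to the output units
R_in = lrp_epsilon_dense(a, W, b, R_out)
print(R_in, R_in.sum())             # relevance is approximately conserved
```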

12 pages, 1901 KB  
Article
Advancing Near-Infrared Probes for Enhanced Breast Cancer Assessment
by Mohammad Pouriayevali, Ryley McWilliams, Avner Bachar, Parmveer Atwal, Ramani Ramaseshan and Farid Golnaraghi
Sensors 2025, 25(3), 983; https://doi.org/10.3390/s25030983 - 6 Feb 2025
Cited by 1 | Viewed by 1895
Abstract
Breast cancer remains a leading cause of cancer-related deaths among women, emphasizing the critical need for early detection and monitoring techniques. Conventional imaging modalities such as mammography, MRI, and ultrasound face limitations in sensitivity, specificity, cost, and patient comfort. This study introduces a handheld Near-Infrared Diffuse Optical Tomography (NIR DOT) probe for breast cancer imaging. The NIRscan probe utilizes multi-wavelength light-emitting diodes (LEDs) and a linear charge-coupled device (CCD) sensor to acquire real-time optical data, reconstructing cross-sectional images of breast tissue based on scattering and absorption coefficients. With wavelengths optimized for the differential optical properties of tissue components, the probe enables functional imaging, distinguishing between healthy and malignant tissues. Clinical evaluations have demonstrated its potential for precise tumor localization and monitoring therapeutic responses, achieving a sensitivity of 94.7% and specificity of 84.2%. By incorporating machine learning algorithms and a modified diffusion equation (MDE), the system enhances the accuracy and speed of image reconstruction, supporting rapid, non-invasive diagnostics. This development represents a significant step forward in portable, cost-effective solutions for breast cancer detection, with potential applications in low-resource settings and diverse clinical environments. Full article
(This article belongs to the Special Issue Advanced Sensors for Detection of Cancer Biomarkers and Virus)

17 pages, 15387 KB  
Article
Improving 3D Reconstruction Through RGB-D Sensor Noise Modeling
by Fahira Afzal Maken, Sundaram Muthu, Chuong Nguyen, Changming Sun, Jinguang Tong, Shan Wang, Russell Tsuchida, David Howard, Simon Dunstall and Lars Petersson
Sensors 2025, 25(3), 950; https://doi.org/10.3390/s25030950 - 5 Feb 2025
Cited by 3 | Viewed by 2729
Abstract
High-resolution RGB-D sensors are widely used in computer vision, manufacturing, and robotics. The depth maps from these sensors have inherently high measurement uncertainty that includes both systematic and non-systematic noise. These noisy depth estimates degrade the quality of scans, resulting in less accurate 3D reconstruction, making them unsuitable for some high-precision applications. In this paper, we focus on quantifying the uncertainty in the depth maps of high-resolution RGB-D sensors for the purpose of improving 3D reconstruction accuracy. To this end, we estimate the noise model for a recent high-precision RGB-D structured light sensor called Zivid when mounted on a robot arm. Our proposed noise model takes into account the measurement distance and angle between the sensor and the measured surface. We additionally analyze the effect of background light, exposure time, and the number of captures on the quality of the depth maps obtained. Our noise model seamlessly integrates with well-known classical and modern neural rendering-based algorithms, from KinectFusion to Point-SLAM methods using bilinear interpolation as well as 3D analytical functions. We collect a high-resolution RGB-D dataset and apply our noise model to improve tracking and produce higher-resolution 3D models. Full article
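The fitted Zivid noise model itself is not reproduced here; the sketch below shows, under assumed placeholder coefficients and a hypothetical functional form, how a distance- and angle-dependent noise model of this kind can be used to weight repeated depth measurements during fusion.

```python
import numpy as np

def depth_noise_sigma(distance_m, incidence_angle_rad,
                      a=0.0001, b=0.0025, angle_cap=np.deg2rad(80)):
    """Illustrative axial-noise model: sigma grows with distance squared and with
    the incidence angle between the view ray and the surface normal.
    Coefficients a, b and the functional form are placeholders, not the values
    fitted for the Zivid sensor in the paper.
    """
    angle = np.minimum(incidence_angle_rad, angle_cap)  # avoid blow-up near 90 degrees
    return (a + b * distance_m ** 2) / np.cos(angle)

def fuse_depths(depths, sigmas):
    """Inverse-variance weighting of repeated depth measurements of the same point."""
    w = 1.0 / np.asarray(sigmas) ** 2
    return np.sum(w * np.asarray(depths)) / np.sum(w)

# Example: the same surface point seen from 0.6 m head-on and from 1.2 m at 60 degrees
d = [0.602, 0.597]
s = [depth_noise_sigma(0.6, 0.0), depth_noise_sigma(1.2, np.deg2rad(60))]
print("per-view sigma:", s, "fused depth:", fuse_depths(d, s))
```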

26 pages, 12669 KB  
Review
Recent Progress in Intrinsically Stretchable Sensors Based on Organic Field-Effect Transistors
by Mingxin Zhang, Mengfan Zhou, Jing Sun, Yanhong Tong, Xiaoli Zhao, Qingxin Tang and Yichun Liu
Sensors 2025, 25(3), 925; https://doi.org/10.3390/s25030925 - 4 Feb 2025
Cited by 2 | Viewed by 3243
Abstract
Organic field-effect transistors (OFETs) are an ideal platform for intrinsically stretchable sensors due to their diverse mechanisms and unique electrical signal amplification characteristics. The remarkable advantages of intrinsically stretchable sensors lie in their molecular tunability, lightweight design, mechanical robustness, solution processability, and low Young’s modulus, which enable them to seamlessly conform to three-dimensional curved surfaces while maintaining electrical performance under significant deformations. Intrinsically stretchable sensors have been widely applied in smart wearables, electronic skin, biological detection, and environmental protection. In this review, we summarize the recent progress in intrinsically stretchable sensors based on OFETs, including advancements in functional layer materials, sensing mechanisms, and applications such as gas sensors, strain sensors, stress sensors, proximity sensors, and temperature sensors. We conclude by discussing the challenges and future outlook for stretchable OFET-based sensors. Full article

32 pages, 3991 KB  
Review
Artificial Intelligence in IR Thermal Imaging and Sensing for Medical Applications
by Antoni Z. Nowakowski and Mariusz Kaczmarek
Sensors 2025, 25(3), 891; https://doi.org/10.3390/s25030891 - 1 Feb 2025
Cited by 7 | Viewed by 7869
Abstract
The state of the art in IR thermal imaging methods for applications in medical diagnostics is discussed. A review of advances in IR thermal imaging technology in the years 1960–2024 is presented. The main focus is on artificial intelligence (AI) methods recently applied to the analysis of thermal images. IR thermography is discussed in view of novel applications of machine learning methods for improved diagnostic analysis and medical treatment. The AI approach aims to improve image quality by denoising thermal images, using applications of AI super-resolution algorithms, removing artifacts, object detection, face and characteristic features localization, complex matching of diagnostic symptoms, etc. Full article
(This article belongs to the Collection Medical Applications of Sensor Systems and Devices)

52 pages, 4917 KB  
Review
Exploring the Unseen: A Survey of Multi-Sensor Fusion and the Role of Explainable AI (XAI) in Autonomous Vehicles
by De Jong Yeong, Krishna Panduru and Joseph Walsh
Sensors 2025, 25(3), 856; https://doi.org/10.3390/s25030856 - 31 Jan 2025
Cited by 11 | Viewed by 13264
Abstract
Autonomous vehicles (AVs) rely heavily on multi-sensor fusion to perceive their environment and make critical, real-time decisions by integrating data from various sensors such as radar, cameras, Lidar, and GPS. However, the complexity of these systems often leads to a lack of transparency, posing challenges in terms of safety, accountability, and public trust. This review investigates the intersection of multi-sensor fusion and explainable artificial intelligence (XAI), aiming to address the challenges of implementing accurate and interpretable AV systems. We systematically review cutting-edge multi-sensor fusion techniques, along with various explainability approaches, in the context of AV systems. While multi-sensor fusion technologies have achieved significant advancement in improving AV perception, the lack of transparency and explainability in autonomous decision-making remains a primary challenge. Our findings underscore the necessity of a balanced approach to integrating XAI and multi-sensor fusion in autonomous driving applications, acknowledging the trade-offs between real-time performance and explainability. The key challenges identified span a range of technical, social, ethical, and regulatory aspects. We conclude by underscoring the importance of developing techniques that ensure real-time explainability, specifically in high-stakes applications, to stakeholders without compromising safety and accuracy, as well as outlining future research directions aimed at bridging the gap between high-performance multi-sensor fusion and trustworthy explainability in autonomous driving systems. Full article
(This article belongs to the Special Issue Advances in Physical, Chemical, and Biosensors)

23 pages, 7697 KB  
Review
Recent Advances in Aptamer-Based Microfluidic Biosensors for the Isolation, Signal Amplification and Detection of Exosomes
by Jessica Hu and Dan Gao
Sensors 2025, 25(3), 848; https://doi.org/10.3390/s25030848 - 30 Jan 2025
Cited by 8 | Viewed by 4441
Abstract
Exosomes carry diverse tumor-associated molecular information that can reflect real-time tumor progression, making them a promising tool for liquid biopsy. However, traditional methods for exosome isolation and detection often rely on large, expensive equipment and are time-consuming, limiting their practical applicability in clinical settings. Microfluidic technology offers a versatile platform for exosome analysis, with advantages such as seamless integration, portability and reduced sample volumes. Aptamers, which are single-stranded oligonucleotides with high affinity and specificity for target molecules, have been frequently employed in the development of aptamer-based microfluidics for the isolation, signal amplification, and quantitative detection of exosomes. This review summarizes recent advances in aptamer-based microfluidic strategies for exosome analysis, including (1) strategies for on-chip exosome capture mediated by aptamers combined with nanomaterials or nanointerfaces; (2) aptamer-based on-chip signal amplification techniques, such as enzyme-free hybridization chain reaction (HCR), rolling circle amplification (RCA), and DNA machine-assisted amplification; and (3) various aptamer-assisted detection methods, such as fluorescence, electrochemistry, surface-enhanced Raman scattering (SERS), and magnetism. The limitations and advantages of these methods are also summarized. Finally, future challenges and directions for the clinical analysis of exosomes based on aptamer-based microfluidics are discussed. Full article
(This article belongs to the Special Issue Recent Advances in Microfluidic Sensing Devices)

13 pages, 1543 KB  
Article
SDR-Fi-Z: A Wireless Local Area Network-Fingerprinting-Based Indoor Positioning Method for E911 Vertical Accuracy Mandate
by Rahul Mundlamuri, Devasena Inupakutika and David Akopian
Sensors 2025, 25(3), 823; https://doi.org/10.3390/s25030823 - 30 Jan 2025
Cited by 1 | Viewed by 1377
Abstract
The Enhanced 911 (E911) mandate of the Federal Communications Commission (FCC) drives the evolution of indoor three-dimensional (3D) location/positioning services for emergency calls. Many indoor localization systems exploit location-dependent wireless signaling signatures, often called fingerprints, and machine learning techniques for position estimation. In particular, received signal strength indicators (RSSIs) and Channel State Information (CSI) in Wireless Local Area Networks (WLANs or Wi-Fi) have gained popularity and have been addressed in the literature. While RSSI signatures are easy to collect, the fluctuation of wireless signals resulting from environmental uncertainties leads to considerable variations in RSSIs, which poses a challenge to accurate localization on a single floor, not to mention multi-floor or even three-dimensional (3D) indoor localization. Considering recent E911 mandate attention to vertical location accuracy, this study aimed to investigate CSI from Wi-Fi signals to produce baseline Z-axis location data, which has not been thoroughly addressed. To that end, we utilized CSI measurements and two representative machine learning methods, an artificial neural network (ANN) and a convolutional neural network (CNN), to assess the feasibility of both 3D and vertical-axis positioning for E911 accuracy compliance. Full article
(This article belongs to the Section Navigation and Positioning)
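As a rough illustration of the CNN branch of such a CSI fingerprinting pipeline, a minimal model could look like the sketch below. Layer sizes, the subcarrier count, and the number of floor classes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CsiFloorCNN(nn.Module):
    """Small 1D CNN over CSI amplitude fingerprints (one vector of subcarrier
    amplitudes per sample) predicting a floor/Z-level class.
    Dimensions are illustrative, not taken from the paper.
    """
    def __init__(self, n_subcarriers=64, n_floors=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_floors)

    def forward(self, x):                 # x: (batch, n_subcarriers)
        x = x.unsqueeze(1)                # -> (batch, 1, n_subcarriers)
        return self.classifier(self.features(x).squeeze(-1))

model = CsiFloorCNN()
fake_csi = torch.randn(8, 64)             # 8 fingerprints of 64 subcarrier amplitudes
print(model(fake_csi).shape)              # torch.Size([8, 4])
```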

19 pages, 5606 KB  
Article
Static Calibration of a New Three-Axis Fiber Bragg Grating-Based Optical Accelerometer
by Abraham Perez-Alonzo, Luis Alvarez-Icaza and Gabriel E. Sandoval-Romero
Sensors 2025, 25(3), 835; https://doi.org/10.3390/s25030835 - 30 Jan 2025
Cited by 1 | Viewed by 2954
Abstract
Optical sensors are a promising technology in structural and health monitoring due to their high sensitivity and immunity to electromagnetic interference. Because of their high sensitivity, they can register the responses of buildings to a wide range of motions, including those induced by ambient noise, or detect small structural changes caused by aging or environmental factors. In previous work, an FBG-based accelerometer was introduced that is suitable for use as an autonomous unit since it does not make use of any interrogator equipment. In this paper, we present the results of the characterization of this device, which yielded the best precision and accuracy. The results show the following: (i) improvements in the orthogonality of the sensor axes, which impact their cross-axis sensitivity; (ii) reductions in the electronic noise, which increase the signal-to-noise ratio. The results of our static characterization show that, in the worst case, we can obtain a correlation coefficient R² of 0.9999 when comparing the output voltage with the input acceleration for the X- and Y-axes of the sensor. We developed an analytical, non-iterative, 12-parameter matrix calibration approach based on the least-squares method, which allows compensation for different gains in its axes, offset, and cross-axis. To improve the accuracy of our sensor, we propose a table with correction terms that can be subtracted from the estimated acceleration. The mean error of each estimated acceleration component of the sensor is zero, with a maximum standard deviation of 0.018 m/s². The maximum RMSE for all tested positions is 6.7 × 10⁻³ m/s². Full article
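A minimal sketch of the 12-parameter affine calibration model class described above (a ≈ S·v + b, with a 3 × 3 gain/cross-axis matrix S and a 3-vector offset b, solved non-iteratively by least squares), run on synthetic data rather than the authors' measurements:

```python
import numpy as np

def fit_calibration(voltages, accels):
    """Least-squares fit of the 12-parameter affine model a ≈ S·v + b.

    voltages: (N, 3) raw sensor outputs, accels: (N, 3) reference accelerations.
    Returns S (3x3 gain/cross-axis matrix) and b (3-vector offset),
    i.e. 9 + 3 = 12 parameters, estimated non-iteratively.
    """
    X = np.hstack([voltages, np.ones((len(voltages), 1))])    # (N, 4) design matrix
    theta, *_ = np.linalg.lstsq(X, accels, rcond=None)         # (4, 3) solution
    S, b = theta[:3].T, theta[3]
    return S, b

def apply_calibration(S, b, v):
    return S @ v + b

# Synthetic static positions: reference accelerations and noisy voltage readings
rng = np.random.default_rng(1)
true_S = np.array([[9.7, 0.05, -0.02], [0.03, 9.8, 0.04], [-0.01, 0.02, 9.9]])
true_b = np.array([0.1, -0.05, 0.2])
accels = rng.normal(0, 4, size=(50, 3))
voltages = (accels - true_b) @ np.linalg.inv(true_S).T + rng.normal(0, 0.001, (50, 3))
S, b = fit_calibration(voltages, accels)
print(np.round(S, 2), np.round(b, 2))     # recovers the simulated gains and offsets
```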

23 pages, 6653 KB  
Article
Monitoring Welfare of Individual Broiler Chickens Using Ultra-Wideband and Inertial Measurement Unit Wearables
by Imad Khan, Daniel Peralta, Jaron Fontaine, Patricia Soster de Carvalho, Ana Martos Martinez-Caja, Gunther Antonissen, Frank Tuyttens and Eli De Poorter
Sensors 2025, 25(3), 811; https://doi.org/10.3390/s25030811 - 29 Jan 2025
Cited by 2 | Viewed by 2531
Abstract
Monitoring animal welfare on farms and in research settings is attracting increasing interest, both for ethical reasons and for improving productivity through the early detection of stress or diseases. In contrast to video-based monitoring, which requires good light conditions and has difficulty tracking specific animals, recent advances in the miniaturization of wearable devices allow for the collection of acceleration and location data to track individual animal behavior. However, for broilers, there are several challenges to address when using wearables, such as coping with (i) the large numbers of chickens in commercial farms, (ii) the impact of their rapid growth, and (iii) the small weights that the devices must have to be carried by the chickens without any impact on their health or behavior. To this end, this paper describes a pilot study in which chickens were fitted with devices containing an Inertial Measurement Unit (IMU) and an Ultra-Wideband (UWB) sensor. To establish guidelines for practitioners who want to monitor broiler welfare and activity at different scales, we first compare the attachment methods of the wearables to the broiler chickens, taking into account their effectiveness (in terms of retention time) and their impact on the broiler’s welfare. Then, we establish the technical requirements to carry out such a study, and the challenges that may arise. This analysis involves aspects such as noise estimation, synergy between UWB and IMU, and the measurement of activity levels based on the monitoring of chicken activity. We show that IMU data can be used for detecting activity level differences between individual animals and environmental conditions. UWB data can be used to monitor the positions and movement patterns of up to 200 animals simultaneously with an accuracy of less than 20 cm. We also show that the accuracy depends on installation aspects and that errors are larger at the borders of the monitored area. Attachment with sutures had the longest mean retention of 19.5 days, whereas eyelash glue had the shortest mean retention of 3 days. To conclude the paper, we identify current challenges and future research lines in the field. Full article
(This article belongs to the Special Issue Flexible and Wearable Sensors and Sensing for Agriculture and Food)
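The paper's exact activity metric is not specified in this abstract; the sketch below shows one simple window-based activity index computable from a tri-axial IMU stream, assuming a hypothetical 50 Hz sampling rate and 10 s windows.

```python
import numpy as np

def activity_index(acc, fs=50.0, window_s=10.0):
    """Per-window activity level from tri-axial acceleration.

    acc: (N, 3) accelerometer samples in g, fs: sampling rate in Hz.
    Uses the standard deviation of the acceleration magnitude per window,
    which suppresses the static gravity component.
    """
    mag = np.linalg.norm(acc, axis=1)
    win = int(window_s * fs)
    n = len(mag) // win
    return mag[: n * win].reshape(n, win).std(axis=1)

# Example: one resting and one active minute of synthetic data at 50 Hz
rng = np.random.default_rng(2)
rest = np.tile([0.0, 0.0, 1.0], (3000, 1)) + rng.normal(0, 0.01, (3000, 3))
active = np.tile([0.0, 0.0, 1.0], (3000, 1)) + rng.normal(0, 0.15, (3000, 3))
print(activity_index(rest).mean(), activity_index(active).mean())
```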

35 pages, 8022 KB  
Review
Internet of Robotic Things: Current Technologies, Challenges, Applications, and Future Research Topics
by Jakub Krejčí, Marek Babiuch, Jiří Suder, Václav Krys and Zdenko Bobovský
Sensors 2025, 25(3), 765; https://doi.org/10.3390/s25030765 - 27 Jan 2025
Cited by 4 | Viewed by 5601
Abstract
This article focuses on the integration of the Internet of Things (IoT) and the Internet of Robotic Things, representing a dynamic research area with significant potential for industrial applications. The Internet of Robotic Things (IoRT) integrates IoT technologies into robotic systems, enhancing their efficiency and autonomy. The article provides an overview of the technologies used in IoRT, including hardware components, communication technologies, and cloud services. It also explores IoRT applications in industries such as healthcare, agriculture, and more. The article discusses challenges and future research directions, including data security, energy efficiency, and ethical issues. The goal is to raise awareness of the importance of IoRT and demonstrate how this technology can bring significant benefits across various sectors. Full article

17 pages, 1213 KB  
Article
Transformer-Driven Affective State Recognition from Wearable Physiological Data in Everyday Contexts
by Fang Li and Dan Zhang
Sensors 2025, 25(3), 761; https://doi.org/10.3390/s25030761 - 27 Jan 2025
Cited by 3 | Viewed by 3517
Abstract
The rapid advancement in wearable physiological measurement technology in recent years has brought affective computing closer to everyday life scenarios. Recognizing affective states in daily contexts holds significant potential for applications in human–computer interaction and psychiatry. Addressing the challenge of long-term, multi-modal physiological data in everyday settings, this study introduces a Transformer-based algorithm for affective state recognition, designed to fully exploit the temporal characteristics of signals and the interrelationships between different modalities. Utilizing the DAPPER dataset, which comprises continuous 5-day wrist-worn recordings of heart rate, skin conductance, and tri-axial acceleration from 88 subjects, our Transformer-based model achieved an average binary classification accuracy of 71.5% for self-reported positive or negative affective state sampled at random moments during daily data collection, and 60.29% and 61.55% for the five-class classification based on valence and arousal scores. The results of this study demonstrate the feasibility of applying affective state recognition based on wearable multi-modal physiological signals in everyday contexts. Full article
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (3rd Edition))
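A minimal sketch of a Transformer encoder over windows of multi-modal physiological signals, with illustrative channel counts, window length, and model dimensions rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class AffectTransformer(nn.Module):
    """Transformer encoder over windows of multi-modal wearable signals
    (e.g. heart rate, skin conductance, tri-axial acceleration = 5 channels),
    pooled into a binary positive/negative affect prediction.
    Dimensions are illustrative, not the paper's exact configuration.
    """
    def __init__(self, n_channels=5, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        h = self.encoder(self.embed(x))    # (batch, time, d_model)
        return self.head(h.mean(dim=1))    # temporal average pooling

model = AffectTransformer()
window = torch.randn(4, 300, 5)            # 4 windows of 300 time steps, 5 channels
print(model(window).shape)                 # torch.Size([4, 2])
```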

23 pages, 3741 KB  
Article
Enhanced Pure Pursuit Path Tracking Algorithm for Mobile Robots Optimized by NSGA-II with High-Precision GNSS Navigation
by Xiongwen Jiang, Taiga Kuroiwa, Yu Cao, Linfeng Sun, Haohao Zhang, Takahiro Kawaguchi and Seiji Hashimoto
Sensors 2025, 25(3), 745; https://doi.org/10.3390/s25030745 - 26 Jan 2025
Cited by 3 | Viewed by 2807
Abstract
With the rapid development of automation and intelligent technology, mobile robots have shown wide application potential in many fields, and accurate navigation systems are the key to robots completing tasks. This paper proposes an enhanced pure pursuit path tracking algorithm for mobile robots, which is optimized using NSGA-II, with high-precision GNSS navigation for accurate positioning. The improved algorithm considers the dynamic characteristics and real-world operating conditions of the robot, optimizing steering decisions to enhance path tracking accuracy. Experimental results demonstrate the effectiveness of the algorithm: with a look-ahead distance of 0.5 and a maximum linear velocity of 3, the average absolute pose error (APE) is reduced by 14.63%, while a velocity of 4 reduces the APE by 55.94%. The enhanced algorithm significantly reduces path deviation and improves navigation performance. Full article
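The NSGA-II optimisation itself is not reproduced here; the baseline pure pursuit steering law that the paper enhances can be sketched as follows, with a hypothetical wheelbase and look-ahead point.

```python
import numpy as np

def pure_pursuit_steering(pose, target, wheelbase=1.0):
    """Classic pure pursuit steering command.

    pose: (x, y, heading) of the robot; target: look-ahead point (x, y).
    Returns the front-wheel steering angle for a bicycle-model robot.
    """
    x, y, yaw = pose
    dx, dy = target[0] - x, target[1] - y
    look_ahead = np.hypot(dx, dy)
    alpha = np.arctan2(dy, dx) - yaw              # heading error to the look-ahead point
    curvature = 2.0 * np.sin(alpha) / look_ahead  # pure pursuit curvature command
    return np.arctan(curvature * wheelbase)

# Example: robot at the origin heading along +x, look-ahead point 0.5 m ahead and to the left
print(np.degrees(pure_pursuit_steering((0.0, 0.0, 0.0), (0.45, 0.2))))
```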

17 pages, 7941 KB  
Article
Visual Localization Domain for Accurate V-SLAM from Stereo Cameras
by Eleonora Di Salvo, Sara Bellucci, Valeria Celidonio, Ilaria Rossini, Stefania Colonnese and Tiziana Cattai
Sensors 2025, 25(3), 739; https://doi.org/10.3390/s25030739 - 26 Jan 2025
Cited by 4 | Viewed by 1686
Abstract
Trajectory estimation from stereo image sequences remains a fundamental challenge in Visual Simultaneous Localization and Mapping (V-SLAM). To address this, we propose a novel approach that focuses on the identification and matching of keypoints within a transformed domain that emphasizes visually significant features. Specifically, we propose to perform V-SLAM in a VIsual Localization Domain (VILD), i.e., a domain where visually relevant features are suitably represented for analysis and tracking. This transformed domain adheres to information-theoretic principles, enabling a maximum likelihood estimation of rotation, translation, and scaling parameters by minimizing the distance between the coefficients of the observed image and those of a reference template. The transformed coefficients are obtained from the output of specialized Circular Harmonic Function (CHF) filters of varying orders. Leveraging this property, we employ a first-order approximation of the image-series representation, directly computing the first-order coefficients through the application of first-order CHF filters. The proposed VILD provides a theoretically grounded and visually relevant representation of the image. We utilize VILD for point matching and tracking across the stereo video sequence. The experimental results on real-world video datasets demonstrate that integrating visually-driven filtering significantly improves trajectory estimation accuracy compared to traditional tracking performed in the spatial domain. Full article
(This article belongs to the Special Issue Emerging Advances in Wireless Positioning and Location-Based Services)

16 pages, 2668 KB  
Article
Localization of Capsule Endoscope in Alimentary Tract by Computer-Aided Analysis of Endoscopic Images
by Ruiyao Zhang, Boyuan Peng, Yiyang Liu, Xinkai Liu, Jie Huang, Kohei Suzuki, Yuki Nakajima, Daiki Nemoto, Kazutomo Togashi and Xin Zhu
Sensors 2025, 25(3), 746; https://doi.org/10.3390/s25030746 - 26 Jan 2025
Cited by 2 | Viewed by 1463
Abstract
Capsule endoscopy is a common method for detecting digestive diseases. The location of a capsule endoscope should be constantly monitored through a visual inspection of the endoscopic images by medical staff to confirm the examination’s progress. In this study, we proposed a computer-aided analysis (CADx) method for the localization of a capsule endoscope. At first, a classifier based on a Swin Transformer was proposed to classify each frame of the capsule endoscopy videos into images of the stomach, small intestine, and large intestine, respectively. Then, a K-means algorithm was used to correct outliers in the classification results. Finally, a localization algorithm was proposed to determine the position of the capsule endoscope in the alimentary tract. The proposed method was developed and validated using videos of 204 consecutive cases. The proposed CADx, based on a Swin Transformer, showed a precision of 93.46%, 97.28%, and 98.68% for the classification of endoscopic images recorded in the stomach, small intestine, and large intestine, respectively. Compared with the landmarks identified by endoscopists, the proposed method demonstrated an average transition time error of 16.2 s to locate the intersection of the stomach and small intestine, as well as 13.5 s to locate that of the small intestine and the large intestine, based on the 20 validation videos with an average length of 3261.8 s. The proposed method accurately localizes the capsule endoscope in the alimentary tract and may replace the laborious real-time visual inspection in capsule endoscopic examinations. Full article
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)

19 pages, 2273 KB  
Article
Signal Preprocessing in Instrument-Based Electronic Noses Leads to Parsimonious Predictive Models: Application to Olive Oil Quality Control
by Luis Fernandez, Sergio Oller-Moreno, Jordi Fonollosa, Rocío Garrido-Delgado, Lourdes Arce, Andrés Martín-Gómez, Santiago Marco and Antonio Pardo
Sensors 2025, 25(3), 737; https://doi.org/10.3390/s25030737 - 25 Jan 2025
Viewed by 1945
Abstract
Gas sensor-based electronic noses (e-noses) have gained considerable attention over the past thirty years, leading to the publication of numerous research studies focused on both the development of these instruments and their various applications. Nonetheless, the limited specificity of gas sensors, along with the common requirement for chemical identification, has led to the adaptation and incorporation of analytical chemistry instruments into the e-nose framework. Although instrument-based e-noses exhibit greater specificity to gasses than traditional ones, they still produce data that require correction in order to build reliable predictive models. In this work, we introduce the use of a multivariate signal processing workflow for datasets from a multi-capillary column ion mobility spectrometer-based e-nose. Adhering to the electronic nose philosophy, these workflows prioritized untargeted approaches, avoiding dependence on traditional peak integration techniques. A comprehensive validation process demonstrates that the application of this preprocessing strategy not only mitigates overfitting but also produces parsimonious models, where classification accuracy is maintained with simpler, more interpretable structures. This reduction in model complexity offers significant advantages, providing more efficient and robust models without compromising predictive performance. This strategy was successfully tested on an olive oil dataset, showcasing its capability to improve model parsimony and generalization performance. Full article
(This article belongs to the Special Issue Gas Recognition in E-Nose System)
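The exact multivariate workflow used for the ion mobility spectrometry data is not reproduced here; the sketch below only illustrates the general untargeted idea (baseline correction, a low-dimensional projection, and a simple cross-validated classifier) on mock data, with hypothetical dimensions and classes.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def baseline_correct(signals):
    """Subtract each signal's initial baseline (mean of its first samples)."""
    return signals - signals[:, :20].mean(axis=1, keepdims=True)

# Mock spectra: 60 samples of 500 points each, with one class-dependent region
rng = np.random.default_rng(3)
X = rng.normal(0, 1, size=(60, 500))
X[30:, 180:220] += 2.0
y = np.repeat([0, 1], 30)                  # e.g. two olive oil quality grades

# Parsimonious model: a few principal components feeding a linear classifier
pipeline = make_pipeline(StandardScaler(), PCA(n_components=5),
                         LogisticRegression(max_iter=1000))
scores = cross_val_score(pipeline, baseline_correct(X), y, cv=5)
print("cross-validated accuracy:", scores.mean())
```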

20 pages, 903 KB  
Article
Metasensor: A Proposal for Sensor Evolution in Robotics
by Michele Braccini
Sensors 2025, 25(3), 725; https://doi.org/10.3390/s25030725 - 25 Jan 2025
Viewed by 1669
Abstract
Sensors play a fundamental role in achieving the complex behaviors typically found in biological organisms. However, their potential role in the design of artificial agents is often overlooked. This often results in the design of robots that are poorly adapted to the environment, compared to their biological counterparts. This paper proposes a formalization of a novel architectural component, called a metasensor, which enables a process of sensor evolution reminiscent of what occurs in living organisms. The metasensor layer searches for the optimal interpretation of its input signals and then feeds them to the robotic agent to accomplish the assigned task. Also, the metasensor enables a robot to adapt to new tasks and dynamic, unknown environments without requiring the redesign of its hardware and software. To validate this concept, a proof of concept is presented where the metasensor changes the robot’s behavior from a light avoidance task to an area avoidance task. This is achieved through two different implementations: one hand-coded and the other based on a neural network substrate, in which the network weights are evolved using an evolutionary algorithm. The results demonstrate the potential of the metasensor to modify the behavior of a robot through sensor evolution. These promising results pave the way for novel applications of the metasensor in real-world robotic scenarios, including those requiring online adaptation. Full article
(This article belongs to the Section Sensors and Robotics)
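As a minimal, generic illustration of evolving metasensor weights with an evolutionary algorithm, the sketch below optimises a small re-mapping of raw sensor readings on a toy task; the loop structure, population sizes, and task are assumptions, not the paper's robot experiments.

```python
import numpy as np

def evolve_metasensor(fitness, n_weights=4, pop_size=30, generations=100,
                      sigma=0.1, seed=0):
    """Minimal evolutionary loop over metasensor weights.

    fitness: callable mapping a weight vector to a scalar score (higher is better).
    Returns the best weight vector found.
    """
    rng = np.random.default_rng(seed)
    pop = rng.normal(0, 1, size=(pop_size, n_weights))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the best half
        children = parents + rng.normal(0, sigma, parents.shape)  # mutate copies
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(w) for w in pop])]

# Toy stand-in task: evolve a re-mapping of 4 raw sensor readings so that their
# weighted sum reproduces a target response.
rng = np.random.default_rng(1)
readings = rng.uniform(0, 1, size=(200, 4))
target = readings @ np.array([1.0, -2.0, 0.5, 0.0])
best = evolve_metasensor(lambda w: -np.mean((readings @ w - target) ** 2))
print(np.round(best, 2))
```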

32 pages, 2512 KB  
Review
Mapping of Industrial IoT to IEC 62443 Standards
by Ivan Cindrić, Marko Jurčević and Tamara Hadjina
Sensors 2025, 25(3), 728; https://doi.org/10.3390/s25030728 - 25 Jan 2025
Cited by 7 | Viewed by 3963
Abstract
The increasing adoption of the Industrial Internet of Things (IIoT) has led to significant improvements in operational efficiency but has also brought new challenges for cybersecurity. To address these challenges, a number of standards have been introduced over the years. One of the best-known series of standards for this purpose is ISA/IEC 62443. This paper examines the applicability of the ISA/IEC 62443 series of standards, traditionally used for securing industrial automation and control systems, to the IIoT environment. For each requirement described in the ISA/IEC 62443 standards, relevant research on that subject is reviewed and presented in a table-like manner. Based on this table, areas for future research are identified, including system hardening, asset inventory, safety instrumented system isolation, risk assessment methodologies, change management systems, data storage security, and incident response procedures. Future improvement is then explored for the area of system hardening, for which research and guidelines already exist, but not specifically for IIoT environments. Full article
(This article belongs to the Section Industrial Sensors)

32 pages, 9788 KB  
Article
Experimental Assessment of OSNMA-Enabled GNSS Positioning in Interference-Affected RF Environments
by Alexandru Rusu-Casandra and Elena Simona Lohan
Sensors 2025, 25(3), 729; https://doi.org/10.3390/s25030729 - 25 Jan 2025
Cited by 1 | Viewed by 1912
Abstract
This article investigates the performance of the Galileo Open Service Navigation Message Authentication (OSNMA) system in real-life environments prone to RF interference (RFI), jamming, and/or spoofing attacks. Considering the existing data that indicate a relatively high number of RFI- and spoofing-related incidents reported in Eastern Europe, this study details a data-collection campaign along various roads through urban, suburban, and rural settings, mostly in three border counties in East and South-East of Romania, and presents the results based on the data analysis. The key performance indicators are determined from the perspective of an end user relying only on Galileo OSNMA authenticated signals. The Galileo OSNMA signals were captured using one of the few commercially available GNSS receivers that can perform this OSNMA authentication algorithm incorporating the satellite signals. This work includes a presentation of the receiver’s operation and of the authentication results obtained during test runs that experienced an unusually high number of RFI-related incidents, followed by a detailed analysis of instances when such RFI impaired or fully prevented obtaining an authenticated position, velocity, and time (PVT) solution. The results indicate that Galileo OSNMA demonstrates significant robustness against interference in real-life RF-degraded environments, dealing with both accidental and intentional interference. Full article
(This article belongs to the Section Navigation and Positioning)

17 pages, 6865 KB  
Article
Improving Stroke Treatment Using Magnetic Nanoparticle Sensors to Monitor Brain Thrombus Extraction
by Dhrubo Jyoti, Daniel Reeves, Scott Gordon-Wylie, Clifford Eskey and John Weaver
Sensors 2025, 25(3), 672; https://doi.org/10.3390/s25030672 - 23 Jan 2025
Viewed by 1838
Abstract
(1) Background: Mechanical thrombectomy (MT) successfully treats ischemic strokes by extracting the thrombus, or clot, using a stent retriever to pull it through the blood vessel. However, clot slippage and/or fragmentation can occur. Real-time feedback to a clinician about attachment between the stent and clot could enable more complete removal. We propose a system whereby antibody-targeted magnetic nanoparticles (NPs) are injected via a microcatheter to coat the clot, oscillating magnetic fields excite the particles, and a small coil attached to the catheter picks up a signal that determines the proximity of the clot to the stent. (2) Methods: We used existing simulation code to model the signal from NPs distributed on a hemispherical clot with three orthogonally applied magnetic fields. An in vitro apparatus was built that applied fields and read out signals from a 1.5 mm pickup coil at a variable distance and orientation angle from a sample of 100 nm iron oxide core/shell NPs. (3) Results: Our simulations suggest that the sum of the voltages induced in the pickup coil from three orthogonal applied fields could localize a clot to within 180 µm, regardless of the exact orientation of the pickup coil, with further precision added via rotation-correction formulae. Our experimental system validated simulations; we estimated an in vitro distance recovery precision of 41 µm with a pickup coil 1 mm from the clot. (4) Conclusions: Magnetic NP sensing could be a safe and real-time method to estimate whether a clot is attached to the stent retriever during MT. Full article

22 pages, 1174 KB  
Perspective
Trends in Snapshot Spectral Imaging: Systems, Processing, and Quality
by Jean-Baptiste Thomas, Pierre-Jean Lapray and Steven Le Moan
Sensors 2025, 25(3), 675; https://doi.org/10.3390/s25030675 - 23 Jan 2025
Cited by 4 | Viewed by 4601
Abstract
Recent advances in spectral imaging have enabled snapshot acquisition, as a means to mitigate the impracticalities of spectral imaging, e.g., expert operators and cumbersome hardware. Snapshot spectral imaging, e.g., in technologies like spectral filter arrays, has also enabled higher temporal resolution at the expense of the spatio-spectral resolution, allowing for the observation of temporal events. Designing, realising, and deploying such technologies is yet challenging, particularly due to the lack of clear, user-meaningful quality criteria across diverse applications, sensor types, and workflows. Key research gaps include optimising raw image processing from snapshot spectral imagers and assessing spectral image and video quality in ways valuable to end-users, manufacturers, and developers. This paper identifies several challenges and current opportunities. It proposes considering them jointly and suggests creating a new unified snapshot spectral imaging paradigm that would combine new systems and standards, new algorithms, new cost functions, and quality indices. Full article
(This article belongs to the Collection Advances in Spectroscopy and Spectral Imaging)

24 pages, 2730 KB  
Review
The Future of Clinical Active Shoulder Range of Motion Assessment, Best Practice, and Its Challenges: Narrative Review
by Wolbert van den Hoorn, Arthur Fabre, Giacomo Nardese, Eric Yung-Sheng Su, Kenneth Cutbush, Ashish Gupta and Graham Kerr
Sensors 2025, 25(3), 667; https://doi.org/10.3390/s25030667 - 23 Jan 2025
Cited by 4 | Viewed by 9460
Abstract
Optimising outcomes after shoulder interventions requires objective shoulder range of motion (ROM) assessments. This narrative review examines video-based pose technologies and markerless motion capture, focusing on their clinical application for shoulder ROM assessment. Camera pose-based methods offer objective ROM measurements, though the accuracy varies due to the differences in gold standards, anatomical definitions, and deep learning techniques. Despite some biases, the studies report a high consistency, emphasising that methods should not be used interchangeably if they do not agree with each other. Smartphone cameras perform well in capturing 2D planar movements but struggle with rotational movements and forward flexion, particularly when thoracic compensations are involved. Proper camera positioning, orientation, and distance are key, highlighting the importance of standardised protocols in mobile phone-based ROM evaluations. Although 3D motion capture, per the International Society of Biomechanics recommendations, remains the gold standard, advancements in LiDAR/depth sensing, smartphone cameras, and deep learning show promise for reliable ROM assessments in clinical settings. Full article
(This article belongs to the Special Issue Sensors and Artificial Intelligence in Gait and Posture Analysis)

30 pages, 1550 KB  
Review
The Potential of Wearable Sensors for Detecting Cognitive Rumination: A Scoping Review
by Vitica X. Arnold and Sean D. Young
Sensors 2025, 25(3), 654; https://doi.org/10.3390/s25030654 - 23 Jan 2025
Cited by 3 | Viewed by 4052
Abstract
Cognitive rumination, a transdiagnostic symptom across mental health disorders, has traditionally been assessed through self-report measures. However, these measures are limited by their temporal nature and subjective bias. The rise in wearable technologies offers the potential for continuous, real-time monitoring of physiological indicators associated with rumination. This scoping review investigates the current state of research on using wearable technology to detect cognitive rumination. Specifically, we examine the sensors and wearable devices used, physiological biomarkers measured, standard measures of rumination used, and the comparative validity of specific biomarkers in identifying cognitive rumination. The review was performed according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines on IEEE, Scopus, PubMed, and PsycInfo databases. Studies that used wearable devices to measure rumination-related physiological responses and biomarkers were included (n = 9); seven studies assessed one biomarker, and two studies assessed two biomarkers. Electrodermal Activity (EDA) sensors capturing skin conductance activity emerged as both the most prevalent sensor (n = 5) and the most comparatively valid biomarker for detecting cognitive rumination via wearable devices. Other commonly investigated biomarkers included electrical brain activity measured through Electroencephalogram (EEG) sensors (n = 2), Heart Rate Variability (HRV) measured using Electrocardiogram (ECG) sensors and heart rate fitness monitors (n = 2), muscle response measured through Electromyography (EMG) sensors (n = 1), and movement measured through an accelerometer (n = 1). The Empatica E4 and Empatica Embrace 2 wrist-worn devices were the most frequently used wearables (n = 3). The Rumination Response Scale (RRS) was the most widely used standard scale for assessing rumination. Experimental induction protocols, often adapted from Nolen-Hoeksema and Morrow’s 1993 rumination induction paradigm, were also widely used. In conclusion, the findings suggest that wearable technology offers promise in capturing real-time physiological responses associated with rumination. However, the field is still developing, and further research is needed to validate these findings and explore the impact of individual traits and contextual factors on the accuracy of rumination detection. Full article
(This article belongs to the Special Issue Advanced Wearable Sensors for Medical Applications)

17 pages, 14063 KB  
Article
ATEX-Certified, FPGA-Based Three-Channel Quantum Cascade Laser Sensor for Sulfur Species Detection in Petrochemical Process Streams
by Harald Moser, Johannes Paul Waclawek, Walter Pölz and Bernhard Lendl
Sensors 2025, 25(3), 635; https://doi.org/10.3390/s25030635 - 22 Jan 2025
Cited by 2 | Viewed by 1447
Abstract
In this work, a highly sensitive, selective, and industrially compatible gas sensor prototype is presented. The sensor utilizes three distributed-feedback quantum cascade lasers (DFB-QCLs), employing wavelength modulation spectroscopy (WMS) for the detection of hydrogen sulfide (H₂S), methane (CH₄), methyl mercaptan (CH₃SH), and carbonyl sulfide (COS) in the spectral regions of 8.0 µm, 7.5 µm, and 4.9 µm, respectively. In addition, field-programmable gate array (FPGA) hardware is used for real-time signal generation, laser driving, signal processing, and handling industrial communication protocols. To comply with on-site safety standards, the QCL sensor prototype is housed in an industrial-grade enclosure and equipped with the necessary safety features to ensure certified operation under ATEX/IECEx regulations for hazardous and explosive environments. The system integrates an automated gas sampling and conditioning module, alongside a purge and pressurization system, with intrinsic safety electronic components, thereby enabling reliable explosion prevention and malfunction protection. Detection limits of approximately 0.3 ppmv for H₂S, 60 ppbv for CH₃SH, and 5 ppbv for COS are demonstrated. Noise-equivalent absorption sensitivity (NEAS) levels for H₂S, CH₃SH, and COS were determined to be 5.93 × 10⁻⁹, 4.65 × 10⁻⁹, and 5.24 × 10⁻¹⁰ cm⁻¹ Hz⁻¹/². The suitability of the sensor prototype for simultaneous sulfur species monitoring is demonstrated in process streams of a hydrodesulphurization (HDS) and fluid catalytic cracking (FCC) unit at the project’s industrial partner, OMV AG. Full article
(This article belongs to the Special Issue Photonics for Advanced Spectroscopy and Sensing)
Show Figures

Figure 1

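As a point of reference for the NEAS figures quoted above, the noise-equivalent absorption sensitivity is commonly obtained by scaling the 1σ minimum detectable absorption coefficient to a 1 Hz detection bandwidth; definitions vary between groups, so the sketch below is only illustrative and its numbers are placeholders, not values taken from the paper.

import math

def neas(alpha_min_per_cm, integration_time_s):
    # NEAS in cm^-1 Hz^-1/2, taken here as the 1-sigma minimum detectable
    # absorption coefficient scaled to a 1 Hz bandwidth: alpha_min * sqrt(t).
    return alpha_min_per_cm * math.sqrt(integration_time_s)

# Illustrative numbers only (not taken from the paper):
print(neas(alpha_min_per_cm=5.9e-9, integration_time_s=1.0))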
19 pages, 15983 KB  
Article
Advanced Deep Learning Models for Melanoma Diagnosis in Computer-Aided Skin Cancer Detection
by Ranpreet Kaur, Hamid GholamHosseini and Maria Lindén
Sensors 2025, 25(3), 594; https://doi.org/10.3390/s25030594 - 21 Jan 2025
Cited by 7 | Viewed by 2853
Abstract
The most deadly type of skin cancer is melanoma. A visual examination does not provide an accurate diagnosis of melanoma during its early to middle stages. Therefore, an automated model that assists with early skin cancer detection could be developed. It is possible to limit the severity of melanoma by detecting it early and treating it promptly. This study aims to develop efficient approaches for various phases of melanoma computer-aided diagnosis (CAD), such as preprocessing, segmentation, and classification. The first step of the CAD pipeline includes the proposed hybrid method, which uses morphological operations and context aggregation-based deep neural networks to remove hairlines and improve poor contrast in dermoscopic skin cancer images. An image segmentation network based on deep learning is then used to extract lesion regions for detailed analysis and calculate the optimized classification features. Lastly, a deep neural network is used to distinguish melanoma from benign lesions. The proposed approaches use a benchmark dataset named International Skin Imaging Collaboration (ISIC) 2020. In this work, two forms of evaluation are performed with the classification model. The first experiment involves the incorporation of the results from the preprocessing and segmentation stages into the classification model. The second experiment involves the evaluation of the classifier without employing these stages, i.e., using raw images. From the study results, it can be concluded that a classification model using segmented and cleaned images contributes more to achieving an accurate classification rate of 93.40% with a 1.3 s test time on a single image. Full article
Show Figures

Figure 1

16 pages, 2629 KB  
Article
The Development and Optimisation of a Urinary Volatile Organic Compound Analytical Platform Using Gas Sensor Arrays for the Detection of Colorectal Cancer
by Ramesh P. Arasaradnam, Ashwin Krishnamoorthy, Mark A. Hull, Peter Wheatstone, Frank Kvasnik and Krishna C. Persaud
Sensors 2025, 25(3), 599; https://doi.org/10.3390/s25030599 - 21 Jan 2025
Cited by 3 | Viewed by 3566
Abstract
The profile of Volatile Organic Compounds (VOCs) may help prioritise at-risk groups for early cancer detection. Urine sampling has been shown to provide good diagnostic accuracy whilst being more acceptable to patients than faecal analysis. Thus, in this study, urine samples were examined using an electronic nose with metal oxide gas sensors and a solid-phase microextraction sampling system. A calibration dataset (derived from a previous study) with CRC-positive patients and healthy controls was used to train a radial basis function neural network. However, a blinded analysis failed to detect CRC accurately, necessitating an enhanced data-processing strategy. This new approach categorised samples by significant bowel diseases, including CRC and high-risk polyps. Retraining the neural network showed an area under the ROC curve of 0.88 for distinguishing CRC versus non-significant bowel disease (without CRC, polyps or inflammation). These findings suggest that, with appropriate training sets, urine VOC analysis could be a rapid, low-cost method for early detection of precancerous colorectal polyps and CRC. Full article
Show Figures

Figure 1

19 pages, 8391 KB  
Article
NeuroFlex: Feasibility of EEG-Based Motor Imagery Control of a Soft Glove for Hand Rehabilitation
by Soroush Zare, Sameh I. Beaber and Ye Sun
Sensors 2025, 25(3), 610; https://doi.org/10.3390/s25030610 - 21 Jan 2025
Cited by 3 | Viewed by 4631
Abstract
Motor impairments resulting from neurological disorders, such as strokes or spinal cord injuries, often impair hand and finger mobility, restricting a person’s ability to grasp and perform fine motor tasks. Brain plasticity refers to the inherent capability of the central nervous system to functionally and structurally reorganize itself in response to stimulation, which underpins rehabilitation from brain injuries or strokes. Linking voluntary cortical activity with corresponding motor execution has been identified as effective in promoting adaptive plasticity. This study introduces NeuroFlex, a motion-intent-controlled soft robotic glove for hand rehabilitation. NeuroFlex utilizes a transformer-based deep learning (DL) architecture to decode motion intent from motor imagery (MI) EEG data and translate it into control inputs for the assistive glove. The glove’s soft, lightweight, and flexible design enables users to perform rehabilitation exercises involving fist formation and grasping movements, aligning with natural hand functions for fine motor practices. The results show that the accuracy of decoding the intent of fingers making a fist from MI EEG can reach up to 85.3%, with an average AUC of 0.88. NeuroFlex demonstrates the feasibility of detecting and assisting the patient’s attempted movements using pure thinking through a non-intrusive brain–computer interface (BCI). This EEG-based soft glove aims to enhance the effectiveness and user experience of rehabilitation protocols, providing the possibility of extending therapeutic opportunities outside clinical settings. Full article
Show Figures

Figure 1

21 pages, 9714 KB  
Article
3D Metamaterials Facilitate Human Cardiac MRI at 21.0 Tesla: A Proof-of-Concept Study
by Bilguun Nurzed, Nandita Saha, Jason M. Millward and Thoralf Niendorf
Sensors 2025, 25(3), 620; https://doi.org/10.3390/s25030620 - 21 Jan 2025
Cited by 1 | Viewed by 4185
Abstract
Literature reports highlight the transmission field (B1+) uniformity and efficiency constraints of cardiac magnetic resonance imaging (MRI) at ultrahigh magnetic fields (UHF). This simulation study proposes a 3D Metamaterial (MM) to address these challenges, consisting of unit cells (UC) with split-ring resonator (SRR) layers immersed in the dielectric material glycerol. Implementing the proposed MM design aims to reduce the effective thickness and weight of the dielectric material while shaping B1+ and improving the penetration depth. The latter is dictated by the chosen array size, where small local UC arrays can focus B1+ and larger UC arrays can increase the field of view, at the cost of a lower penetration depth. Designing RF antennas that can effectively transmit at 21.0 T while maintaining patient safety and comfort is challenging. Using Self-Grounded Bow-Tie (SGBT) antennas in conjunction with the proposed MM demonstrated enhanced B1+ efficiency and uniformity across the human heart without signal voids. The study employed dynamic parallel transmission with tailored kT points to homogenize the 3D flip angle over the whole heart. This proof-of-concept study provides the technical foundation for human cardiac MRI at 21.0 T. Such numerical simulations are mandatory precursors for the realization of whole-body human UHF MR instruments. Full article
Show Figures

Graphical abstract

26 pages, 8033 KB  
Article
Time-Series Image-Based Automated Monitoring Framework for Visible Facilities: Focusing on Installation and Retention Period
by Seonjun Yoon and Hyunsoo Kim
Sensors 2025, 25(2), 574; https://doi.org/10.3390/s25020574 - 20 Jan 2025
Cited by 4 | Viewed by 1553
Abstract
In the construction industry, ensuring the proper installation, retention, and dismantling of temporary structures, such as jack supports, is critical to maintaining safety and project timelines. However, inconsistencies between on-site data and construction documentation remain a significant challenge. To address this, this study proposes an integrated monitoring framework that combines computer vision-based object detection and document recognition techniques. The system utilizes YOLOv5 for detecting jack supports in both construction drawings and on-site images captured through wearable cameras, while optical character recognition (OCR) and natural language processing (NLP) extract installation and dismantling timelines from work orders. The proposed framework enables continuous monitoring and ensures compliance with retention periods by aligning on-site data with documented requirements. The analysis includes 23 jack supports monitored daily over 28 days under varying environmental conditions, including lighting changes and structural configurations. The results demonstrate that the system achieves an average detection accuracy of 94.1%, effectively identifying discrepancies and reducing misclassifications caused by structural similarities and environmental variations. To further enhance detection reliability, methods such as color differentiation, construction plan overlays, and vertical segmentation were implemented, significantly improving performance. This study validates the effectiveness of integrating visual and textual data sources in dynamic construction environments. The study supports the development of automated monitoring systems by improving accuracy and safety measures while reducing manual intervention, offering practical insights for future construction site management. Full article
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems)
Show Figures

Figure 1

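The retention-period check described in the abstract above reduces to a date comparison once the detector supplies installation and dismantling timestamps and the OCR/NLP stage supplies the planned dates from work orders. The following minimal sketch uses hypothetical function names and dates and is not the authors' implementation.

from datetime import date, timedelta
from typing import Optional

def check_retention(detected_installed: date, detected_removed: Optional[date],
                    planned_installed: date, required_retention_days: int) -> str:
    # Compare vision-detected jack-support dates against the documented plan.
    if detected_installed > planned_installed:
        return "late installation"
    earliest_removal = planned_installed + timedelta(days=required_retention_days)
    if detected_removed is not None and detected_removed < earliest_removal:
        return "removed before the retention period elapsed"
    return "compliant"

# Hypothetical example: installed on time, dismantled after a 28-day retention period
print(check_retention(date(2024, 3, 1), date(2024, 3, 30), date(2024, 3, 1), 28))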
19 pages, 554 KB  
Article
Unleashing the Potential of Pre-Trained Diffusion Models for Generalizable Person Re-Identification
by Jiachen Li and Xiaojin Gong
Sensors 2025, 25(2), 552; https://doi.org/10.3390/s25020552 - 18 Jan 2025
Cited by 1 | Viewed by 2718
Abstract
Domain-generalizable re-identification (DG Re-ID) aims to train a model on one or more source domains and evaluate its performance on unseen target domains, a task that has attracted growing attention due to its practical relevance. While numerous methods have been proposed, most rely on discriminative or contrastive learning frameworks to learn generalizable feature representations. However, these approaches often fail to mitigate shortcut learning, leading to suboptimal performance. In this work, we propose a novel method called diffusion model-assisted representation learning with a correlation-aware conditioning scheme (DCAC) to enhance DG Re-ID. Our method integrates a discriminative and contrastive Re-ID model with a pre-trained diffusion model through a correlation-aware conditioning scheme. By incorporating ID classification probabilities generated from the Re-ID model with a set of learnable ID-wise prompts, the conditioning scheme injects dark knowledge that captures ID correlations to guide the diffusion process. Simultaneously, feedback from the diffusion model is back-propagated through the conditioning scheme to the Re-ID model, effectively improving the generalization capability of Re-ID features. Extensive experiments on both single-source and multi-source DG Re-ID tasks demonstrate that our method achieves state-of-the-art performance. Comprehensive ablation studies further validate the effectiveness of the proposed approach, providing insights into its robustness. Full article
Show Figures

Figure 1

14 pages, 6733 KB  
Article
Detailed Determination of Delamination Parameters in a Multilayer Structure Using Asymmetric Lamb Wave Mode
by Olgirdas Tumšys, Lina Draudvilienė and Egidijus Žukauskas
Sensors 2025, 25(2), 539; https://doi.org/10.3390/s25020539 - 18 Jan 2025
Cited by 1 | Viewed by 1310
Abstract
A signal-processing algorithm for the detailed determination of delamination in multilayer structures is proposed in this work. The algorithm is based on calculating the phase velocity of the Lamb wave A0 mode and estimating the dispersion of this velocity. Both simulation and experimental studies were conducted to validate the proposed technique. A delamination with a diameter of 81 mm on a segment of a wind turbine blade (WTB) was used to verify the proposed technique. Four cases were used in the simulation study: defect-free, delamination between the first and second layers, delamination between the second and third layers, and defect (hole). The calculated phase velocity variation in the A0 mode was used to determine the location and edge coordinates of the delaminations and defects. It was found that, to estimate the depth at which the delamination lies, it is appropriate to calculate the phase velocity dispersion curves. The difference in the reconstructed phase velocity dispersion curves between the layers simulated at different depths is estimated to be about 60 m/s. The phase velocity values were compared between the delamination of the second and third layers and a hole drilled at the corresponding depth. The obtained simulation results confirmed that the drilled hole can be used as a defect corresponding to delamination. The WTB sample with a drilled hole of 81 mm was used in the experimental study. Using the proposed algorithm, detailed defect parameters were obtained. The results obtained using simulated and experimental signals indicated that the proposed new algorithm is suitable for the determination of delamination parameters in a multilayer structure. Full article
(This article belongs to the Special Issue Acoustic and Ultrasonic Sensing Technology in Non-Destructive Testing)
Show Figures

Figure 1

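The core quantity in the entry above, the A0 phase-velocity dispersion, can be estimated from the unwrapped phase difference between two waveforms recorded a known distance apart, v_p(f) = 2πf·Δx/Δφ(f). The sketch below illustrates only this generic step, not the authors' full algorithm, and assumes the receiver spacing is small enough for the unwrapped phase to be unambiguous.

import numpy as np

def phase_velocity(sig_near, sig_far, dx_m, fs_hz):
    # Phase-velocity dispersion estimate from two waveforms a distance dx apart:
    # v_p(f) = 2*pi*f*dx / delta_phi(f), with delta_phi the unwrapped phase
    # accumulated over the extra propagation distance (sig_far lags sig_near).
    n = len(sig_near)
    f = np.fft.rfftfreq(n, d=1.0 / fs_hz)
    cross = np.fft.rfft(sig_near) * np.conj(np.fft.rfft(sig_far))
    dphi = np.unwrap(np.angle(cross))
    with np.errstate(divide="ignore", invalid="ignore"):
        vp = 2.0 * np.pi * f * dx_m / dphi
    return f, vp

# Hypothetical usage: f, vp = phase_velocity(a0_at_x0, a0_at_x0_plus_dx, dx_m=0.05, fs_hz=1e6)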
14 pages, 6235 KB  
Article
Integrating Machine Learning for Predictive Maintenance on Resource-Constrained PLCs: A Feasibility Study
by Riccardo Mennilli, Luigi Mazza and Andrea Mura
Sensors 2025, 25(2), 537; https://doi.org/10.3390/s25020537 - 17 Jan 2025
Cited by 5 | Viewed by 3604
Abstract
This study investigates the potential of deploying a neural network model on an advanced programmable logic controller (PLC), specifically the Finder Opta™, for real-time inference within the predictive maintenance framework. In the context of Industry 4.0, edge computing aims to process data directly on local devices rather than relying on a cloud infrastructure. This approach minimizes latency, enhances data security, and reduces the bandwidth required for data transmission, making it ideal for industrial applications that demand immediate response times. Despite the limited memory and processing power inherent to many edge devices, this proof-of-concept demonstrates the suitability of the Finder Opta™ for such applications. Using acoustic data, a convolutional neural network (CNN) is deployed to infer the rotational speed of a mechanical test bench. The findings underscore the potential of the Finder Opta™ to support scalable and efficient predictive maintenance solutions, laying the groundwork for future research in real-time anomaly detection. By enabling machine learning capabilities on compact, resource-constrained hardware, this approach promises a cost-effective, adaptable solution for diverse industrial environments. Full article
Show Figures

Figure 1

12 pages, 3243 KB  
Article
Internal Integrated Temperature Sensor for Lithium-Ion Batteries
by Pengfei Yang, Kai Su, Shijie Weng, Jiang Han, Qian Zhang, Zhiqiang Li, Xiaoli Peng and Yong Xiang
Sensors 2025, 25(2), 511; https://doi.org/10.3390/s25020511 - 17 Jan 2025
Cited by 3 | Viewed by 4159
Abstract
Lithium-ion batteries represent a significant component of the field of energy storage, with a diverse range of applications in consumer electronics, portable devices, and numerous other fields. In view of the growing concerns about the safety of batteries, it is of the utmost importance to develop a sensor that is capable of accurately monitoring the internal temperature of lithium-ion batteries. External sensors require additional space and ancillary equipment. Moreover, external sensors cannot accurately measure internal battery temperature due to packaging material interference, causing a temperature discrepancy between the interior and surface. Consequently, this study presents an integrated temperature sensor within the battery, based on a PT1000 resistance temperature detector (RTD). The sensor is integrated with the anode via a flexible printed circuit (FPC), simplifying the assembly process. The PT1000 RTD microsensor’s temperature is linearly related to its resistance (R = 3.71T + 1003.86). It measures a temperature difference of about 15 °C between the inside and the outside of the battery. During a short circuit, the sensor measures the battery’s internal temperature rising to 27 °C within 10 s and to 32 °C within 20 s. A battery with the integrated PT1000 sensor retains 89.8% of its capacity at a 2 C rate, similar to a normal battery. Furthermore, a PT1000 temperature array sensor was designed and employed to enable precise monitoring and localization of internal temperature variations. Full article
(This article belongs to the Section Industrial Sensors)
Show Figures

Figure 1

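The linear calibration reported in the abstract above can be inverted to convert a resistance reading into a temperature. The sketch below assumes R is in ohms and T in degrees Celsius, which is consistent with a PT1000's nominal resistance of about 1000 Ω at 0 °C but is not stated explicitly in the abstract.

def pt1000_temperature(resistance_ohm: float) -> float:
    # Invert the reported linear fit R = 3.71*T + 1003.86
    # (R assumed in ohms, T in degrees Celsius).
    return (resistance_ohm - 1003.86) / 3.71

# Example: a reading of 1104 ohm corresponds to roughly 27 degC,
# the internal temperature reported 10 s into the short-circuit test.
print(round(pt1000_temperature(1104.0), 1))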
21 pages, 7219 KB  
Article
Time Series Data Augmentation for Energy Consumption Data Based on Improved TimeGAN
by Peihao Tang, Zhen Li, Xuanlin Wang, Xueping Liu and Peng Mou
Sensors 2025, 25(2), 493; https://doi.org/10.3390/s25020493 - 16 Jan 2025
Cited by 6 | Viewed by 3558
Abstract
Predicting the time series energy consumption data of manufacturing processes can optimize energy management efficiency and reduce maintenance costs for enterprises. Using deep learning algorithms to establish prediction models for sensor data is an effective approach; however, the performance of these models is significantly influenced by the quantity and quality of the training data. In real production environments, the amount of time series data that can be collected during the manufacturing process is limited, which can lead to a decline in model performance. In this paper, we use an improved TimeGAN model for the augmentation of energy consumption data, which incorporates a multi-head self-attention mechanism layer into the recovery model to enhance prediction accuracy. A hybrid CNN-GRU model is used to predict the energy consumption data from the operational processes of manufacturing equipment. After data augmentation, the prediction model exhibits significant reductions in RMSE and MAE along with an increase in the R2 value. The prediction accuracy of the model is maximized when the amount of generated synthetic data is approximately twice that of the original data. Full article
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1

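The hybrid CNN-GRU predictor mentioned in the abstract above pairs a 1D convolution for local pattern extraction with a recurrent layer for temporal dependencies. The PyTorch sketch below is a generic illustration of that pairing; the layer sizes, window length, and names are assumptions rather than the authors' architecture.

import torch
import torch.nn as nn

class CNNGRUForecaster(nn.Module):
    # 1D convolution extracts local patterns; a GRU models temporal dependencies;
    # a linear head predicts the next energy-consumption value.
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, seq_len, n_features)
        z = self.conv(x.transpose(1, 2))         # -> (batch, 32, seq_len)
        out, _ = self.gru(z.transpose(1, 2))     # -> (batch, seq_len, hidden)
        return self.head(out[:, -1])             # one-step-ahead prediction

model = CNNGRUForecaster()
y = model(torch.randn(8, 48, 1))                 # 8 windows of 48 time steps
print(y.shape)                                   # torch.Size([8, 1])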
33 pages, 1112 KB  
Review
A Comprehensive Review of Vision-Based Sensor Systems for Human Gait Analysis
by Xiaofeng Han, Diego Guffanti and Alberto Brunete
Sensors 2025, 25(2), 498; https://doi.org/10.3390/s25020498 - 16 Jan 2025
Cited by 8 | Viewed by 6501
Abstract
Analysis of the human gait represents a fundamental area of investigation within the broader domains of biomechanics, clinical research, and numerous other interdisciplinary fields. The progression of visual sensor technology and machine learning algorithms has enabled substantial developments in the creation of human gait analysis systems. This paper presents a comprehensive review of the advancements and recent findings in the field of vision-based human gait analysis systems over the past five years, with a special emphasis on the role of vision sensors, machine learning algorithms, and technological innovations. The relevant papers were subjected to analysis using the PRISMA method, and 72 articles that met the criteria for this research project were identified. The analysis details the most commonly used visual sensor systems, machine learning algorithms, human gait analysis parameters, optimal camera placements, and gait parameter extraction methods. The findings of this research indicate that non-invasive depth cameras are gaining increasing popularity within this field. Furthermore, deep learning algorithms, such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, are being employed with increasing frequency. This review seeks to establish the foundations for future innovations that will facilitate the development of more effective, versatile, and user-friendly gait analysis tools, with the potential to significantly enhance human mobility, health, and overall quality of life. This work was supported by [GOBIERNO DE ESPANA/PID2023-150967OB-I00]. Full article
(This article belongs to the Special Issue Advanced Sensors in Biomechanics and Rehabilitation)
Show Figures

Figure 1

18 pages, 1946 KB  
Article
Minimizing Delay and Power Consumption at the Edge
by Erol Gelenbe
Sensors 2025, 25(2), 502; https://doi.org/10.3390/s25020502 - 16 Jan 2025
Cited by 4 | Viewed by 1894
Abstract
Edge computing systems must offer low latency at low cost and low power consumption for sensors and other applications, including the IoT, smart vehicles, smart homes, and 6G. Thus, substantial research has been conducted to identify optimum task allocation schemes in this context using non-linear optimization, machine learning, and market-based algorithms. Prior work has mainly focused on two methodologies: (i) formulating non-linear optimizations that lead to NP-hard problems, which are processed via heuristics, and (ii) using AI-based formulations, such as reinforcement learning, that are then tested with simulations. These prior approaches have two shortcomings: (a) there is no guarantee that optimum solutions are achieved, and (b) they do not provide an explicit formula for the fraction of tasks that are allocated to the different servers to achieve a specified optimum. This paper offers a radically different and mathematically based principled method that explicitly computes the optimum fraction of jobs that should be allocated to the different servers to (1) minimize the average latency (delay) of the jobs that are allocated to the edge servers and (2) minimize the average energy consumption of these jobs at the set of edge servers. These results are obtained with a mathematical model of a multiple-server edge system that is managed by a task distribution platform, whose equations are derived and solved using methods from stochastic processes. This approach has low computational cost and provides simple linear complexity formulas to compute the fraction of tasks that should be assigned to the different servers to achieve minimum latency and minimum energy consumption. Full article
(This article belongs to the Special Issue Feature Papers in the 'Sensor Networks' Section 2024)
Show Figures

Figure 1

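The paper above derives explicit formulas for the optimal task fractions. Purely as an illustration of the underlying optimization problem, the sketch below finds latency-minimizing fractions numerically under an assumed M/M/1 delay model with made-up arrival and service rates, which is not necessarily the model or method used in the paper.

import numpy as np
from scipy.optimize import minimize

lam = 8.0                        # total task arrival rate (tasks/s), assumed
mu = np.array([4.0, 6.0, 10.0])  # service rates of the edge servers, assumed

def avg_delay(p):
    # Mean response time when a fraction p[i] of tasks goes to server i,
    # assuming each server behaves as an M/M/1 queue.
    rho = lam * p
    if np.any(rho >= mu):
        return 1e9               # infeasible: a server would be overloaded
    return float(np.sum(p / (mu - rho)))

cons = ({"type": "eq", "fun": lambda p: np.sum(p) - 1.0},)
bounds = [(0.0, 1.0)] * len(mu)
res = minimize(avg_delay, x0=np.full(len(mu), 1 / len(mu)),
               bounds=bounds, constraints=cons, method="SLSQP")
print(res.x)                     # latency-minimizing task fractions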
26 pages, 21796 KB  
Article
Design of a Cost-Effective Ultrasound Force Sensor and Force Control System for Robotic Extra-Body Ultrasound Imaging
by Yixuan Zheng, Hongyuan Ning, Eason Rangarajan, Aban Merali, Adam Geale, Lukas Lindenroth, Zhouyang Xu, Weizhao Wang, Philipp Kruse, Steven Morris, Liang Ye, Xinyi Fu, Kawal Rhode and Richard James Housden
Sensors 2025, 25(2), 468; https://doi.org/10.3390/s25020468 - 15 Jan 2025
Cited by 4 | Viewed by 2996
Abstract
Ultrasound imaging is widely valued for its safety, non-invasiveness, and real-time capabilities but is often limited by operator variability, affecting image quality and reproducibility. Robot-assisted ultrasound may provide a solution by delivering more consistent, precise, and faster scans, potentially reducing human error and healthcare costs. Effective force control is crucial in robotic ultrasound scanning to ensure consistent image quality and patient safety. However, existing robotic ultrasound systems rely heavily on expensive commercial force sensors or the integrated sensors of commercial robotic arms, limiting their accessibility. To address these challenges, we developed a cost-effective, lightweight, 3D-printed force sensor and a hybrid position–force control strategy tailored for robotic ultrasound scanning. The system integrates patient-to-robot registration, automated scanning path planning, and multi-sensor data fusion, allowing the robot to autonomously locate the patient, target the region of interest, and maintain optimal contact force during scanning. Validation was conducted using an ultrasound-compatible abdominal aortic aneurysm (AAA) phantom created from patient CT data and healthy volunteer testing. In the volunteer testing, during a 1 min scan, 65% of the applied forces were within the range that yields good image quality. Both volunteers reported no discomfort or pain during the whole procedure. These results demonstrate the potential of the system to provide safe, precise, and autonomous robotic ultrasound imaging in real-world conditions. Full article
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
Show Figures

Figure 1

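The hybrid position–force strategy described above regulates contact force along the probe axis while the remaining axes track the planned scan path. The toy sketch below shows only the force-axis regulation with a simple PI law against a linear-spring contact model; the gains, target force, and stiffness are illustrative assumptions, not values from the paper.

def force_pi_step(f_measured, integral, f_target=5.0, kp=0.002, ki=0.0005, dt=0.01):
    # One PI update along the probe axis: returns a depth increment (m) and the
    # updated integral of the force error. Gains and target are illustrative.
    error = f_target - f_measured            # N; positive -> press further in
    integral += error * dt
    return kp * error + ki * integral, integral

# Toy closed loop against a linear-spring contact model (assumed 800 N/m phantom)
depth, integral, stiffness = 0.0, 0.0, 800.0
for _ in range(300):
    force = stiffness * depth
    dz, integral = force_pi_step(force, integral)
    depth += dz
print(round(stiffness * depth, 2))           # settles near the 5 N target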
24 pages, 9651 KB  
Article
Fault Detection in Induction Machines Using Learning Models and Fourier Spectrum Image Analysis
by Kevin Barrera-Llanga, Jordi Burriel-Valencia, Angel Sapena-Bano and Javier Martinez-Roman
Sensors 2025, 25(2), 471; https://doi.org/10.3390/s25020471 - 15 Jan 2025
Cited by 7 | Viewed by 2524
Abstract
Induction motors are essential components in industry due to their efficiency and cost-effectiveness. This study presents an innovative methodology for automatic fault detection by analyzing images generated from the Fourier spectra of current signals using deep learning techniques. A new preprocessing technique incorporating a distinctive background to enhance spectral feature learning is proposed, enabling the detection of four types of faults: healthy motor coupled to a generator with a broken bar (HGB), broken rotor bar (BRB), race bearing fault (RBF), and bearing ball fault (BBF). The dataset was generated from three-phase signals of an induction motor controlled by a Direct Torque Controller under various operating conditions (20–1500 rpm with 0–100% load), resulting in 4251 images. The model, based on a Visual Geometry Group (VGG) architecture with 19 layers, achieved an overall accuracy of 98%, with specific accuracies of 99% for RAF, 100% for BRB, 100% for RBF, and 95% for BBF. Model interpretability was assessed using explainability techniques, which allowed for the identification of specific learning patterns. This analysis introduces a new approach by demonstrating how different convolutional blocks capture particular features: the first convolutional block captures signal shape, while the second identifies background features. Additionally, distinct convolutional layers were associated with each fault type: layer 9 for RAF, layer 13 for BRB, layer 16 for RBF, and layer 14 for BBF. This methodology offers a scalable solution for predictive maintenance in induction motors, effectively combining signal processing, computer vision, and explainability techniques. Full article
(This article belongs to the Special Issue Feature Papers in Fault Diagnosis & Sensors 2024)
Show Figures

Figure 1

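The input representation described above, an image of the Fourier spectrum of the stator current, can be produced along the following lines. The sampling rate, signal content, and image size are placeholders, and the distinctive background preprocessing proposed in the paper is not reproduced here.

import numpy as np
import matplotlib
matplotlib.use("Agg")                          # render off-screen
import matplotlib.pyplot as plt

fs = 10_000                                    # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
# Synthetic stator current: 50 Hz fundamental plus a small sideband (illustrative)
i_a = np.sin(2 * np.pi * 50 * t) + 0.02 * np.sin(2 * np.pi * 44 * t)

spectrum = np.abs(np.fft.rfft(i_a)) / len(i_a)
freqs = np.fft.rfftfreq(len(i_a), d=1 / fs)

plt.figure(figsize=(2.24, 2.24), dpi=100)      # 224 x 224 px image for a CNN input
plt.semilogy(freqs, spectrum)
plt.xlim(0, 200)
plt.axis("off")
plt.savefig("spectrum_sample.png", bbox_inches="tight", pad_inches=0)
plt.close()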
23 pages, 7031 KB  
Article
Fluorescence Lifetime Endoscopy with a Nanosecond Time-Gated CAPS Camera with IRF-Free Deep Learning Method
by Pooria Iranian, Thomas Lapauw, Thomas Van den Dries, Sevada Sahakian, Joris Wuts, Valéry Ann Jacobs, Jef Vandemeulebroucke, Maarten Kuijk and Hans Ingelberts
Sensors 2025, 25(2), 450; https://doi.org/10.3390/s25020450 - 14 Jan 2025
Cited by 2 | Viewed by 1809
Abstract
Fluorescence imaging has been widely used in fields like (pre)clinical imaging and other domains. With advancements in imaging technology and new fluorescent labels, fluorescence lifetime imaging is gradually gaining recognition. Our research department is developing the tauCAM™, based on the Current-Assisted Photonic Sampler, to achieve real-time fluorescence lifetime imaging in the NIR (700–900 nm) region. Incorporating fluorescence lifetime into endoscopy could further improve the differentiation of malignant and benign cells based on their distinct lifetimes. In this work, the capabilities of an endoscopic lifetime imaging system are demonstrated using a rigid endoscope involving various phantoms and an IRF-free deep learning-based method with only six time points. The results show that, in this application, the fluorescence lifetime images obtained with six time points have better lifetime uniformity and precision than those obtained with conventional methods. Full article
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1

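For context on the six-time-point comparison above, the conventional baseline is a least-squares fit of a mono-exponential decay to the gated samples. The sketch below shows that baseline with simulated counts and a hypothetical 4 ns lifetime, not the IRF-free deep learning method the paper proposes.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau):
    # Mono-exponential fluorescence decay model
    return amplitude * np.exp(-t / tau)

# Six gate positions (ns) and simulated photon counts for a 4 ns lifetime
t_gates = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
counts = decay(t_gates, 1000.0, 4.0) + np.random.default_rng(0).normal(0, 5, 6)

popt, _ = curve_fit(decay, t_gates, counts, p0=(counts[0], 3.0))
print(f"estimated lifetime: {popt[1]:.2f} ns")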
23 pages, 622 KB  
Article
MalHAPGNN: An Enhanced Call Graph-Based Malware Detection Framework Using Hierarchical Attention Pooling Graph Neural Network
by Wenjie Guo, Wenbiao Du, Xiuqi Yang, Jingfeng Xue, Yong Wang, Weijie Han and Jingjing Hu
Sensors 2025, 25(2), 374; https://doi.org/10.3390/s25020374 - 10 Jan 2025
Cited by 5 | Viewed by 3563
Abstract
While deep learning techniques have been extensively employed in malware detection, there is a notable challenge in effectively embedding malware features. Current neural network methods primarily capture superficial characteristics, lacking in-depth semantic exploration of functions and failing to preserve structural information at the file level. Motivated by the aforementioned challenges, this paper introduces MalHAPGNN, a novel framework for malware detection that leverages a hierarchical attention pooling graph neural network based on enhanced call graphs. Firstly, to ensure semantic richness, a Bidirectional Encoder Representations from Transformers-based (BERT) attribute-enhanced function embedding method is proposed for the extraction of node attributes in the function call graph. Subsequently, this work designs a hierarchical graph neural network that integrates attention mechanisms and pooling operations, complemented by function node sampling and structural learning strategies. This framework delivers a comprehensive profile of malicious code across semantic, syntactic, and structural dimensions. Extensive experiments conducted on the Kaggle and VirusShare datasets have demonstrated that the proposed framework outperforms other graph neural network (GNN)-based malware detection methods. Full article
(This article belongs to the Special Issue Security of IoT-Enabled Infrastructures in Smart Cities)
Show Figures

Figure 1

14 pages, 2150 KB  
Article
Enhancing Low-Light Images with Kolmogorov–Arnold Networks in Transformer Attention
by Alexandru Brateanu, Raul Balmez, Ciprian Orhei, Cosmin Ancuti and Codruta Ancuti
Sensors 2025, 25(2), 327; https://doi.org/10.3390/s25020327 - 8 Jan 2025
Cited by 6 | Viewed by 2071
Abstract
Low-light image enhancement (LLIE) techniques improve the performance of image sensors by enhancing visibility and details in poorly lit environments and have significantly benefited from recent research into Transformer models. This work presents a novel Transformer attention mechanism inspired by the Kolmogorov–Arnold representation theorem, incorporating learnable non-linearity and multivariate function decomposition. This innovative mechanism is the foundation of KAN-T, our proposed Transformer network. By enhancing feature flexibility and enabling the model to capture broader contextual information, KAN-T achieves superior performance. Our comprehensive experiments, both quantitative and qualitative, demonstrate that the proposed method achieves state-of-the-art performance in low-light image enhancement, highlighting its effectiveness and wide-ranging applicability. The code will be released upon publication. Full article
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1

29 pages, 3092 KB  
Article
A Comparison Study of Person Identification Using IR Array Sensors and LiDAR
by Kai Liu, Mondher Bouazizi, Zelin Xing and Tomoaki Ohtsuki
Sensors 2025, 25(1), 271; https://doi.org/10.3390/s25010271 - 6 Jan 2025
Cited by 1 | Viewed by 2327
Abstract
Person identification is a critical task in applications such as security and surveillance, requiring reliable systems that perform robustly under diverse conditions. This study evaluates the Vision Transformer (ViT) and ResNet34 models across three modalities—RGB, thermal, and depth—using datasets collected with infrared array sensors and LiDAR sensors in controlled scenarios and at varying resolutions (16 × 12 to 640 × 480) to explore their effectiveness in person identification. Preprocessing techniques, including YOLO-based cropping, were employed to improve subject isolation. Results show similar identification performance across the three modalities, particularly at high resolution (i.e., 640 × 480), with RGB image classification reaching 100.0%, depth images 99.54%, and thermal images 97.93%. However, upon deeper investigation, thermal images show more robustness and generalizability by maintaining focus on subject-specific features even at low resolutions. In contrast, RGB data performs well at high resolutions but exhibits reliance on background features as resolution decreases. Depth data shows significant degradation at lower resolutions, suffering from scattered attention and artifacts. These findings highlight the importance of modality selection, with thermal imaging emerging as the most reliable. Future work will explore multi-modal integration, advanced preprocessing, and hybrid architectures to enhance model adaptability and address current limitations. This study highlights the potential of thermal imaging and the need for modality-specific strategies in designing robust person identification systems. Full article
(This article belongs to the Special Issue Intelligent Sensors and Signal Processing in Industry)
Show Figures

Figure 1

30 pages, 14512 KB  
Article
An Inverse FEM for Structural Health Monitoring of a Containership: Sensor Network Optimization for Accurate Displacement, Strain, and Internal Force Reconstruction
by Jacopo Bardiani, Christian Oppezzo, Andrea Manes and Claudio Sbarufatti
Sensors 2025, 25(1), 276; https://doi.org/10.3390/s25010276 - 6 Jan 2025
Cited by 6 | Viewed by 2087
Abstract
In naval engineering, particular attention has been given to containerships, as these structures are constantly exposed to potential damage during service hours and since they are essential for large-scale transportation. To assess the structural integrity of these ships and to ensure the safety of the crew and the cargo being transported, it is essential to adopt structural health monitoring (SHM) strategies that enable real-time evaluations of a ship’s status. To achieve this, this paper introduces an advancement in the field of smart sensing and SHM that improves ship monitoring and diagnostic capabilities. This is accomplished by a framework that combines the inverse finite element method (iFEM) with the definition of an optimal Fiber Bragg Grating-based sensor network for the reconstruction of the full fields of displacement, strain, and, finally, cross-section internal forces. The optimization of the sensor network was performed by defining a multi-objective function that simultaneously considers the accuracy of the displacement field reconstruction and the associated cost of the sensor network. The framework was successfully applied to the mid-portion of a containership, demonstrating its effective applicability in real and complex scenarios. Full article
(This article belongs to the Special Issue Sensor Application for Nondestructive Structural Health Monitoring)
Show Figures

Figure 1

21 pages, 6239 KB  
Article
Electrochemical Sensor for Hydrogen Leakage Detection at Room Temperature
by Gimi Aurelian Rîmbu, Lucian Pîslaru-Dănescu, George-Claudiu Zărnescu, Carmen Alina Ștefănescu, Mihai Iordoc, Aristofan Alexandru Teișanu and Gabriela Telipan
Sensors 2025, 25(1), 264; https://doi.org/10.3390/s25010264 - 5 Jan 2025
Cited by 5 | Viewed by 3091
Abstract
The use of hydrogen as fuel presents many safety challenges due to its flammability and explosive nature, combined with its lack of color, taste, and odor. The purpose of this paper is to present an electrochemical sensor that can achieve rapid and accurate detection of hydrogen leakage. This paper presents both the component elements of the sensor, like the sensing material, sensing element, and signal conditioning, as well as the electronic protection and signaling module for critical concentrations of H2. The sensing material consists of a Vulcan XC72 40% Pt catalyst from FuelCellStore (Bryan, TX, USA). The sensing element is based on a membrane electrode assembly (MEA) that includes a cathode electrode, a Nafion 117 ion-conducting membrane from FuelCellStore (Bryan, TX, USA), and an anode electrode, mounted in a CR2016 coin cell from Xiamen Tob New Energy Technology Co., Ltd. (Xiamen City, Fujian Province, China). The electronic block that conditions the electrical signal delivered by the sensing element uses an INA111 instrumentation operational amplifier from Burr-Brown (Texas Instruments Corporation, Dallas, TX, USA). The main characteristics of the electrochemical sensor for hydrogen leakage detection are operation at room temperature (no heater required), a maximum amperometric response time of 1 s, a fast recovery time of at most 1 s, and an extended hydrogen concentration detection range of up to 20%. Full article
(This article belongs to the Special Issue Advanced Sensors for Gas Monitoring)
Show Figures

Figure 1

25 pages, 13514 KB  
Article
Parallelized Field-Programmable Gate Array Data Processing for High-Throughput Pulsed-Radar Systems
by Aaron D. Pitcher, Mihail Georgiev, Natalia K. Nikolova and Nicola Nicolici
Sensors 2025, 25(1), 239; https://doi.org/10.3390/s25010239 - 3 Jan 2025
Cited by 2 | Viewed by 1253
Abstract
A parallelized field-programmable gate array (FPGA) architecture is proposed to realize an ultra-fast, compact, and low-cost dual-channel ultra-wideband (UWB) pulsed-radar system. This approach resolves the main shortcoming of current FPGA-based radars, namely their low processing throughput, which leads to a significant loss of data provided by the radar receiver. The architecture is integrated with an in-house UWB pulsed radar operating at a sampling rate of 20 gigasamples per second (GSa/s). It is demonstrated that the FPGA data-processing speed matches that of the radar output, thus eliminating data loss. The radar system achieves a remarkable speed of over 9000 waveforms per second on each channel. The proposed architecture is scalable to accommodate higher sampling rates and various waveform periods. It is also multi-functional since the FPGA controls and synchronizes two transmitters and a dual-channel receiver, performs signal reconstruction on both channels simultaneously, and carries out user-defined averaging, trace windowing, and interference suppression for improving the receiver’s signal-to-noise ratio. We also investigate the throughput rate while offloading radar data onto an external device through an Ethernet link. Since the radar data rate significantly exceeds the Ethernet link capacity, we show how the FPGA-based averaging and windowing functions are leveraged to reduce the amount of offloaded data while fully utilizing the radar output. Full article
(This article belongs to the Special Issue Recent Advances in Radar Imaging Techniques and Applications)
Show Figures

Figure 1

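The data-reduction argument above can be made concrete with back-of-the-envelope arithmetic: the per-channel waveform rate is taken from the abstract, while the record length and sample width below are assumptions, since they are not stated there.

WAVEFORMS_PER_S = 9000       # per channel, from the abstract
SAMPLES_PER_WAVEFORM = 8192  # assumed record length
BITS_PER_SAMPLE = 16         # assumed sample width
ETHERNET_BPS = 1e9           # 1 Gbit/s link

raw_bps = 2 * WAVEFORMS_PER_S * SAMPLES_PER_WAVEFORM * BITS_PER_SAMPLE  # two channels
print(f"raw waveform data: {raw_bps / 1e6:.0f} Mbit/s")
for n_avg in (1, 16, 256):
    offload = raw_bps / n_avg   # on-FPGA averaging of n_avg waveforms before offload
    verdict = "fits within" if offload < ETHERNET_BPS else "exceeds"
    print(f"averaging {n_avg:4d} waveforms -> {offload / 1e6:7.1f} Mbit/s "
          f"({verdict} 1 Gbit/s Ethernet)")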
17 pages, 5679 KB  
Article
Fiber Bragg Grating Thermometry and Post-Treatment Ablation Size Analysis of Radiofrequency Thermal Ablation on Ex Vivo Liver, Kidney and Lung
by Sanzhar Korganbayev, Leonardo Bianchi, Clara Girgi, Elva Vergantino, Domiziana Santucci, Eliodoro Faiella and Paola Saccomandi
Sensors 2025, 25(1), 245; https://doi.org/10.3390/s25010245 - 3 Jan 2025
Viewed by 1612
Abstract
Radiofrequency ablation (RFA) is a minimally invasive procedure that utilizes localized heat to treat tumors by inducing localized tissue thermal damage. The present study aimed to evaluate the temperature evolution and spatial distribution, ablation size, and reproducibility of ablation zones in ex vivo liver, kidney, and lung using a commercial device, i.e., Dophi™ R150E RFA system (Surgnova, Beijing, China), and to compare the results with the manufacturer’s specifications. Optical fibers embedding arrays of fiber Bragg grating (FBG) sensors, characterized by 0.1 °C accuracy and 1.2 mm spatial resolution, were employed for thermometry during the procedures. Experiments were conducted for all the organs in two different configurations: single-electrode (200 W for 12 min) and double-electrode (200 W for 9 min). Results demonstrated consistent and reproducible ablation zones across all organ types, with variations in temperature distribution and ablation size influenced by tissue characteristics and RFA settings. Higher temperatures were achieved in the liver; conversely, the lung exhibited the smallest ablation zone and the lowest maximum temperatures. The study found that using two electrodes for 9 min produced larger, more rounded ablation areas compared to a single electrode for 12 min. Our findings support the efficacy of the RFA system and highlight the need for tailored RFA parameters based on organ type and tumor properties. This research provides insights into the characterization of RFA systems for optimizing RFA techniques and underscores the importance of accurate thermometry and precise procedural planning to enhance clinical outcomes. Full article
Show Figures

Figure 1

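The FBG thermometry above relies on the temperature dependence of the Bragg wavelength, Δλ_B/λ_B ≈ (α + ξ)·ΔT. The sketch below uses typical silica-fiber coefficients (roughly 10 pm/°C near 1550 nm) for illustration only; the 0.1 °C accuracy quoted in the abstract comes from sensor calibration, not from these nominal values.

def temperature_shift(delta_lambda_nm, lambda_bragg_nm=1550.0,
                      thermal_expansion=0.55e-6, thermo_optic=6.7e-6):
    # Temperature change from a Bragg wavelength shift:
    # d(lambda)/lambda = (alpha + xi) * dT  ->  dT = d(lambda) / (lambda * (alpha + xi))
    return delta_lambda_nm / (lambda_bragg_nm * (thermal_expansion + thermo_optic))

# A 0.5 nm shift corresponds to roughly 44-45 degC of heating with these coefficients
print(round(temperature_shift(0.5), 1))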
35 pages, 4267 KB  
Article
Uncertainty-Aware Multimodal Trajectory Prediction via a Single Inference from a Single Model
by Ho Suk and Shiho Kim
Sensors 2025, 25(1), 217; https://doi.org/10.3390/s25010217 - 2 Jan 2025
Viewed by 2718
Abstract
In the domain of autonomous driving, trajectory prediction plays a pivotal role in ensuring the safety and reliability of autonomous systems, especially when navigating complex environments. Unfortunately, trajectory prediction suffers from uncertainty problems due to the randomness inherent in the driving environment, but uncertainty quantification in trajectory prediction is not widely addressed, and most studies rely on deep ensemble methods. This study presents a novel uncertainty-aware multimodal trajectory prediction (UAMTP) model that quantifies aleatoric and epistemic uncertainties through a single forward inference. Our approach employs deterministic single forward pass methods, optimizing computational efficiency while retaining robust prediction accuracy. By decomposing trajectory prediction into velocity and yaw components and quantifying uncertainty in both, the UAMTP model generates multimodal predictions that account for environmental randomness and intention ambiguity. Evaluation on datasets collected with the CARLA simulator demonstrates that our model not only outperforms a deep-ensembles-based multimodal trajectory prediction method in terms of accuracy metrics such as minFDE and miss rate but also offers more time to react in collision avoidance scenarios. This research marks a step forward in integrating efficient uncertainty quantification into multimodal trajectory prediction tasks within resource-constrained autonomous driving platforms. Full article
(This article belongs to the Section Vehicular Sensing)
Show Figures

Figure 1

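The single-forward-pass uncertainty idea above is commonly realized by having the network output a mean and a log-variance per quantity (here velocity and yaw) and training with a Gaussian negative log-likelihood, which captures aleatoric uncertainty. The PyTorch sketch below shows only that generic head; it is not the authors' UAMTP architecture, and their epistemic-uncertainty treatment is not covered here.

import torch
import torch.nn as nn

class UncertainHead(nn.Module):
    # Predicts a mean and a log-variance for velocity and yaw in one forward pass.
    def __init__(self, in_dim=128):
        super().__init__()
        self.mean = nn.Linear(in_dim, 2)      # [velocity, yaw]
        self.log_var = nn.Linear(in_dim, 2)   # aleatoric uncertainty per output

    def forward(self, features):
        return self.mean(features), self.log_var(features)

def gaussian_nll(mean, log_var, target):
    # Heteroscedastic loss: a large predicted variance down-weights the squared error
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()

head = UncertainHead()
feats = torch.randn(16, 128)                  # features from some trajectory encoder
mean, log_var = head(feats)
loss = gaussian_nll(mean, log_var, torch.randn(16, 2))
loss.backward()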
40 pages, 2488 KB  
Article
Analysis of Autonomous Penetration Testing Through Reinforcement Learning and Recommender Systems
by Ariadna Claudia Moreno, Aldo Hernandez-Suarez, Gabriel Sanchez-Perez, Linda Karina Toscano-Medina, Hector Perez-Meana, Jose Portillo-Portillo, Jesus Olivares-Mercado and Luis Javier García Villalba
Sensors 2025, 25(1), 211; https://doi.org/10.3390/s25010211 - 2 Jan 2025
Cited by 3 | Viewed by 6190
Abstract
Conducting penetration testing (pentesting) in cybersecurity is a crucial turning point for identifying vulnerabilities within the framework of Information Technology (IT), where real malicious offensive behavior is simulated to identify potential weaknesses and strengthen preventive controls. Given the complexity of the tests, time constraints, and the specialized level of expertise required for pentesting, analysis and exploitation tools are commonly used. Although useful, these tools often introduce uncertainty in findings, resulting in high rates of false positives. To enhance the effectiveness of these tests, Machine Learning (ML) has been integrated, showing significant potential for identifying anomalies across various security areas through detailed detection of underlying malicious patterns. However, pentesting environments are unpredictable and intricate, requiring analysts to make extensive efforts to understand, explore, and exploit them. This study considers these challenges, proposing a recommendation system based on a context-rich, vocabulary-aware transformer capable of processing questions related to the target environment and offering responses based on necessary pentest batteries evaluated by a Reinforcement Learning (RL) estimator. This RL component assesses optimal attack strategies based on previously learned data and dynamically explores additional attack vectors. The system achieved an F1 score and an Exact Match rate over 97.0%, demonstrating its accuracy and effectiveness in selecting relevant pentesting strategies. Full article
(This article belongs to the Special Issue Sensing and Machine Learning Control: Progress and Applications)
Show Figures

Figure 1
