Journal Description
Sensors is an international, peer-reviewed, open access journal on the science and technology of sensors, published semimonthly online by MDPI. The Polish Society of Applied Electromagnetics (PTZE), the Japan Society of Photogrammetry and Remote Sensing (JSPRS), the Spanish Society of Biomedical Engineering (SEIB), the International Society for the Measurement of Physical Behaviour (ISMPB), the Chinese Society of Micro-Nano Technology (CSMNT), and other societies are affiliated with Sensors, and their members receive a discount on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), PubMed, MEDLINE, PMC, Ei Compendex, Inspec, Astrophysics Data System, and other databases.
- Journal Rank: JCR - Q2 (Instruments and Instrumentation) / CiteScore - Q1 (Instrumentation)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 18.6 days after submission; the time from acceptance to publication is 2.4 days (median values for papers published in this journal in the second half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Testimonials: See what our editors and authors say about Sensors.
- Companion journals for Sensors include Chips, Targets, and AI Sensors.
Impact Factor: 3.5 (2024); 5-Year Impact Factor: 3.7 (2024)
Latest Articles
Exploring Multi-Channel GPS Receivers for Detecting Spoofing Attacks on UAVs Using Machine Learning
Sensors 2025, 25(13), 4045; https://doi.org/10.3390/s25134045 (registering DOI) - 28 Jun 2025
Abstract
All current transportation systems (vehicles, trucks, planes, etc.) rely on the Global Positioning System (GPS) as their main navigation technology. GPS receivers collect signals from multiple satellites and are able to provide more or less accurate positioning. For civilian applications, GPS signals are sent without any encryption system. For this reason, they are vulnerable to various attacks, the most prevalent of which is known as GPS spoofing. The main consequence is the loss of position monitoring, which may increase damage risks in terms of crashes or hijacking. In this study, we focus on UAV (unmanned aerial vehicle) positioning attacks. We first review numerous techniques for detecting and mitigating GPS spoofing attacks, finding that various types of attacks may occur. In the literature, many studies have focused on only one type of attack. We believe that studying many types of attacks is crucial for developing efficient mitigation mechanisms. Thus, we have explored a well-known dataset containing authentic UAV signals along with spoofed signals (with three types of attacked signals). As a main contribution, we propose a more interpretable approach to exploit the dataset by extracting individual mission sequences, handling non-stationary features, and converting the GPS raw data into a simplified structured format. Then, we design tree-based machine learning algorithms, namely decision tree (DT), random forest (RF), and extreme gradient boosting (XGBoost), for the purpose of classifying signal types and recognizing spoofing attacks. Our main findings are as follows: (a) random forest has significant capability in detecting and classifying GPS spoofing attacks, outperforming the other models; (b) we have been able to detect most types of attacks and distinguish between them.
Full article
(This article belongs to the Section Navigation and Positioning)
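For readers who want to reproduce the classification step, the following is a minimal sketch of training a tree-based classifier of the kind described above (a random forest) on structured per-sample GPS features; the CSV file and column names are illustrative, not the authors' dataset.

```python
# Minimal sketch of the tree-based classification step described in the abstract.
# The CSV layout and feature names are illustrative, not the authors' actual dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Structured per-sample GPS features with a label column:
# 0 = authentic signal, 1-3 = the three spoofing attack types.
df = pd.read_csv("uav_gps_features.csv")
X = df.drop(columns=["label"])
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```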
Open Access Article
Single-Image Super-Resolution via Cascaded Non-Local Mean Network and Dual-Path Multi-Branch Fusion
by
Yu Xu and Yi Wang
Sensors 2025, 25(13), 4044; https://doi.org/10.3390/s25134044 (registering DOI) - 28 Jun 2025
Abstract
Image super-resolution (SR) aims to reconstruct high-resolution (HR) images from low-resolution (LR) inputs. It plays a crucial role in applications such as medical imaging, surveillance, and remote sensing. However, due to the ill-posed nature of the task and the inherent limitations of imaging sensors, obtaining accurate HR images remains challenging. While numerous methods have been proposed, traditional approaches suffer from oversmoothing and limited generalization; CNN-based models lack the ability to capture long-range dependencies; and Transformer-based solutions, although effective in modeling global context, are computationally intensive and prone to texture loss. To address these issues, we propose a hybrid CNN–Transformer architecture that cascades a pixel-wise self-attention non-local means module (PSNLM) and an adaptive dual-path multi-scale fusion block (ADMFB). The PSNLM is inspired by the non-local means (NLM) algorithm. We use weighted patches to estimate the similarity between pixels centered at each patch while limiting the search region and constructing a communication mechanism across ranges. The ADMFB enhances texture reconstruction by adaptively aggregating multi-scale features through dual attention paths. The experimental results demonstrate that our method achieves superior performance on multiple benchmarks. For instance, in challenging ×4 super-resolution, our method outperforms the second-best method by 0.0201 in terms of the Structural Similarity Index (SSIM) on the BSD100 dataset. On the texture-rich Urban100 dataset, our method achieves a 26.56 dB Peak Signal-to-Noise Ratio (PSNR) and 0.8133 SSIM.
Full article
(This article belongs to the Section Sensing and Imaging)
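The gains are reported in PSNR and SSIM; the snippet below is a generic sketch of how these two metrics can be computed with scikit-image on placeholder images, not the authors' evaluation pipeline.

```python
# Generic PSNR/SSIM evaluation of a super-resolved image against its ground truth;
# not the authors' model, just the metrics quoted in the abstract.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = np.random.rand(256, 256)                                # placeholder HR ground truth in [0, 1]
sr = np.clip(hr + 0.02 * np.random.randn(256, 256), 0, 1)    # placeholder super-resolved output

psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
ssim = structural_similarity(hr, sr, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```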
Open Access Article
Driving Pattern Analysis, Gear Shift Classification, and Fuel Efficiency in Light-Duty Vehicles: A Machine Learning Approach Using GPS and OBD II PID Signals
by
Juan José Molina-Campoverde, Juan Zurita-Jara and Paúl Molina-Campoverde
Sensors 2025, 25(13), 4043; https://doi.org/10.3390/s25134043 (registering DOI) - 28 Jun 2025
Abstract
This study proposes an automatic gear shift classification algorithm for M1 category vehicles using data acquired through the onboard diagnostic system (OBD II) and GPS. The proposed approach is based on the analysis of parameter IDs (PIDs), such as manifold absolute pressure (MAP), revolutions per minute (RPM), vehicle speed (VSS), torque, power, stall times, and longitudinal dynamics, to determine the efficiency and behavior of the vehicle in each of its gears. In addition, the unsupervised K-means algorithm was implemented to analyze vehicle gear changes, identify driving patterns, and segment the data into meaningful groups. Machine learning techniques, including K-Nearest Neighbors (KNN), decision trees, logistic regression, and Support Vector Machines (SVMs), were employed to classify gear shifts accurately. After a thorough evaluation, the KNN (Fine KNN) model proved to be the most effective, achieving an accuracy of 99.7%, an error rate of 0.3%, a precision of 99.8%, a recall of 99.7%, and an F1-score of 99.8%, outperforming the other models in terms of accuracy, robustness, and balance between metrics. A multiple linear regression model was developed to estimate instantaneous fuel consumption (in L/100 km) using the gear predicted by the KNN algorithm and other relevant variables. The model, built on over 66,000 valid observations, achieved an R² of 0.897 and a root mean square error (RMSE) of 2.06, indicating a strong fit. The results showed that higher gears (3, 4, and 5) are associated with lower fuel consumption. In contrast, the neutral gear presented the highest levels of consumption and variability, especially during prolonged idling periods in heavy traffic conditions. In future work, we propose integrating this algorithm into advanced driver assistance systems (ADAS) and exploring its applicability in autonomous vehicles to enhance real-time decision making. Such integration could optimize gear shift timing based on dynamic factors like road conditions, traffic density, and driver behavior, ultimately contributing to improved fuel efficiency and overall vehicle performance.
Full article
(This article belongs to the Section Vehicular Sensing)
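A minimal sketch of the two-stage idea described above, assuming a log file with illustrative OBD II/GPS column names: a KNN gear classifier followed by a linear fuel-consumption model fed with the predicted gear.

```python
# Sketch of the two-stage pipeline: KNN gear classification, then linear regression of
# fuel consumption. Column names and the CSV file are illustrative assumptions.
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("obd_gps_log.csv")  # columns: MAP, RPM, VSS, torque, power, gear, fuel_l_100km
features = ["MAP", "RPM", "VSS", "torque", "power"]

# Stage 1: gear shift classification (a "Fine KNN" roughly corresponds to a small k).
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["gear"], random_state=0)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_tr, y_tr)
print("gear classification accuracy:", knn.score(X_te, y_te))

# Stage 2: fuel-consumption regression using the predicted gear as an extra input.
df["gear_pred"] = knn.predict(df[features])
reg = LinearRegression().fit(df[features + ["gear_pred"]], df["fuel_l_100km"])
print("R^2:", reg.score(df[features + ["gear_pred"]], df["fuel_l_100km"]))
```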
Open Access Article
A Novel Approach to Non-Invasive Intracranial Pressure Wave Monitoring: A Pilot Healthy Brain Study
by
Andrius Karaliunas, Laimonas Bartusis, Solventa Krakauskaite, Edvinas Chaleckas, Mantas Deimantavicius, Yasin Hamarat, Vytautas Petkus, Toma Stulge, Vytenis Ratkunas, Guven Celikkaya, Ingrida Januleviciene and Arminas Ragauskas
Sensors 2025, 25(13), 4042; https://doi.org/10.3390/s25134042 (registering DOI) - 28 Jun 2025
Abstract
Intracranial pressure (ICP) pulse wave morphology, including the ratios of the three characteristic peaks (P1, P2, and P3), offers valuable insights into intracranial dynamics and brain compliance. Traditional invasive methods for ICP pulse wave monitoring pose significant risks, highlighting the need for non-invasive alternatives. This pilot study investigates a novel non-invasive method for monitoring ICP pulse waves through closed eyelids, using a specially designed, liquid-filled, fully passive sensor system named ‘Archimedes 02’. To our knowledge, this is the first technological approach that enables the non-invasive monitoring of ICP pulse waveforms via closed eyelids. This study involved 10 healthy volunteers, aged 26–39 years, who underwent resting-state non-invasive ICP pulse wave monitoring sessions using the ‘Archimedes 02’ device while in the supine position. The recorded signals were processed to extract pulse waves and evaluate their morphological characteristics. The results indicated successful detection of pressure pulse waves, showing the expected three peaks (P1, P2, and P3) in all subjects. The calculated P2/P1 ratios were 0.762 (SD = ±0.229) for the left eye and 0.808 (SD = ±0.310) for the right eye, suggesting normal intracranial compliance across the cohort, despite variations observed in some individuals. Physiological tests—the Valsalva maneuver and the Queckenstedt test, both performed in the supine position—induced statistically significant increases in the P2/P1 and P3/P1 ratios, supporting the notion that non-invasively recorded pressure pulse waves, measured through closed eyelids, reflect intracranial volume and pressure dynamics. Additionally, a transient hypoemic/hyperemic response test performed in the upright position induced signal changes in pressure recordings from the ‘Archimedes 02’ sensor that were consistent with intact cerebral blood flow autoregulation, aligning with established physiological principles. These findings indicate that ICP pulse waves and their dynamic changes can be monitored non-invasively through closed eyelids, offering a potential method for brain monitoring in patients for whom invasive procedures are not feasible.
Full article
(This article belongs to the Special Issue Integrated Sensor Systems for Medical Applications)
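As a rough illustration of the morphology analysis, the sketch below detects three peaks in a synthetic pressure pulse wave and computes the P2/P1 and P3/P1 ratios; the waveform is simulated and the peak detector is generic, not the authors' processing chain.

```python
# Illustrative computation of P2/P1 and P3/P1 ratios from a single pulse wave, assuming
# the three characteristic peaks are separable with a plain peak detector (synthetic data).
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 1, 500)
# Synthetic pulse with three superimposed peaks (P1, P2, P3) for demonstration only.
pulse = (1.00 * np.exp(-((t - 0.20) / 0.05) ** 2) +
         0.80 * np.exp(-((t - 0.40) / 0.06) ** 2) +
         0.55 * np.exp(-((t - 0.60) / 0.07) ** 2))

peaks, _ = find_peaks(pulse, prominence=0.02)
p1, p2, p3 = pulse[peaks[:3]]
print(f"P2/P1 = {p2 / p1:.3f}, P3/P1 = {p3 / p1:.3f}")
```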
Open Access Article
A Study of the Soil–Wall–Indoor Air Thermal Environment in a Solar Greenhouse
by
Zhi Zhang, Yu Li, Liqiang Wang, Weiwei Cheng and Zhonghua Liu
Sensors 2025, 25(13), 4041; https://doi.org/10.3390/s25134041 (registering DOI) - 28 Jun 2025
Abstract
Greenhouses offer optimal environments for crop cultivation during the winter months. The rationale for this study is the synergistic interaction between the soil, the wall, and the indoor air within the greenhouse (i.e., the coupling law of the temperature fields of the three elements in space and time, including the direction of heat transfer and the consistency of the temperature zoning), which maintains a more optimal temperature. However, there is a paucity of research on the impact of different spans on the thermal environment in solar greenhouses and even fewer studies on the synergistic law of soil–wall–indoor air changes in solar greenhouses with different spans. In this study, two solar greenhouses with different spans were analyzed through a combination of experiments, K-means classification optimized using the grey wolf optimizer (GWO), computational fluid dynamics (CFD) simulations, and long short-term memory (LSTM) prediction models. The two solar greenhouses, designated as S1 and S2, had spans of 11 m and 10 m, respectively. The results are as follows: In the two greenhouses, when the span and temperature were the same, the indoor air temperature and soil temperature of the S1 greenhouse were lower than those of the S2 greenhouse. There was an isothermal layer in the north wall of greenhouses S1 and S2 (a stable area where the temperature change over time is less than 0.5 °C); the horizontal distance between the isothermal layer and the inner surface of the wall was more than 400 mm, and that from the outer surface was more than 200 mm. Within the solar greenhouse, this study identified that heat was emitted from the inner surface of the wall (at 0 mm from the inner surface) toward the outer surface of the wall (at 0 mm from the outer surface), as well as at a horizontal distance of 200 mm from the inner surface of the wall. The temperature data from 0:00 to 8:00 at night were selected to analyze the synergistic temperature change of the soil–wall–indoor air system in the S1 greenhouse. The temperature change can be classified into four categories according to K-means classification optimized with the grey wolf algorithm: a high-temperature region, a medium-high temperature region, a medium-low temperature region, and a low-temperature region. The low-temperature region spanned the range of X = (800, 3000) mm, and its height range was Y = (−150, 1200) mm. The CFD model and the LSTM prediction model were shown to perform well, and the findings of this study offer a theoretical basis for the optimization of thermal environment control in solar greenhouses.
Full article
(This article belongs to the Section Smart Agriculture)
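The sketch below illustrates only the clustering step, applying plain K-means with four clusters to synthetic position–temperature samples; the grey-wolf-optimized initialization, CFD simulation, and LSTM model described in the abstract are not reproduced here.

```python
# Plain K-means grouping of spatio-temporal temperature samples into four regions
# (high, medium-high, medium-low, low). The GWO-optimized initialization used in the
# paper is omitted; this is only a structural sketch on synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each row: [x_mm, y_mm, temperature_C], nominally sampled between 0:00 and 8:00.
samples = np.column_stack([
    rng.uniform(0, 3000, 1000),      # horizontal position X (mm)
    rng.uniform(-150, 1200, 1000),   # vertical position Y (mm)
    rng.uniform(5, 25, 1000),        # measured temperature (°C)
])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(samples)
for label in range(4):
    print(f"region {label}: mean T = {samples[km.labels_ == label, 2].mean():.1f} °C")
```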
Open Access Article
LST-BEV: Generating a Long-Term Spatial–Temporal Bird’s-Eye-View Feature for Multi-View 3D Object Detection
by
Qijun Feng, Chunyang Zhao, Pengfei Liu, Zhichao Zhang, Yue Jin and Wanglin Tian
Sensors 2025, 25(13), 4040; https://doi.org/10.3390/s25134040 (registering DOI) - 28 Jun 2025
Abstract
This paper presents a novel multi-view 3D object detection framework, Long-Term Spatial–Temporal Bird's-Eye View (LST-BEV), designed to improve performance in autonomous driving. Traditional 3D detection relies on sensors like LiDAR, but visual perception using multi-camera systems is emerging as a more cost-effective solution. Existing methods struggle with capturing long-range dependencies and cross-task information due to limitations in attention mechanisms. To address this, we propose a Long-Range Cross-Task Detection Head (LRCH) to capture these dependencies and integrate cross-task information for accurate predictions. Additionally, we introduce the Long-Term Temporal Perception Module (LTPM), which efficiently extracts temporal features by combining Mamba and linear attention, overcoming challenges in temporal frame extraction. Experimental results on the nuScenes dataset demonstrate that our proposed LST-BEV outperforms its baseline (SA-BEVPool) by 2.1% mAP and 2.7% NDS, indicating a significant performance improvement.
Full article
(This article belongs to the Section Vehicular Sensing)
Open Access Article
Transformer-Based Detection and Clinical Evaluation System for Torsional Nystagmus
by
Ju-Hyuck Han, Yong-Suk Kim, Jong Bin Lee, Hantai Kim, Jong-Yeup Kim and Yongseok Cho
Sensors 2025, 25(13), 4039; https://doi.org/10.3390/s25134039 (registering DOI) - 28 Jun 2025
Abstract
Motivation: Benign paroxysmal positional vertigo (BPPV) is characterized by torsional nystagmus induced by changes in head position, where accurate quantitative assessment of subtle torsional eye movements is essential for precise diagnosis. Conventional videonystagmography (VNG) techniques face challenges in accurately capturing the rotational components of pupil movements, and existing automated methods typically exhibit limited performance in identifying torsional nystagmus. Methodology: The objective of this study was to develop an automated system capable of accurately and quantitatively detecting torsional nystagmus. We introduce the Torsion Transformer model, designed to directly estimate torsion angles from iris images. This model employs a self-supervised learning framework comprising two main components: a Decoder module, which learns rotational transformations from image data, and a Finder module, which subsequently estimates the torsion angle. The resulting torsion angle data, represented as time-series, are then analyzed using a 1-dimensional convolutional neural network (1D-CNN) classifier to detect the presence of nystagmus. The performance of the proposed method was evaluated using video recordings from 127 patients diagnosed with BPPV. Findings: Our Torsion Transformer model demonstrated robust performance, achieving a sensitivity of 89.99%, specificity of 86.36%, an F1-score of 88.82%, and an area under the receiver operating characteristic curve (AUROC) of 87.93%. These results indicate that the proposed model effectively quantifies torsional nystagmus, with performance levels comparable to established methods for detecting horizontal and vertical nystagmus. Thus, the Torsion Transformer shows considerable promise as a clinical decision support tool in the diagnosis of BPPV. Key Findings: Technical performance improvement in torsional nystagmus detection; System to support clinical decision-making for healthcare professionals.
Full article
(This article belongs to the Section Biomedical Sensors)
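A minimal 1D-CNN over torsion-angle time series, in the spirit of the classifier described above; layer sizes and window length are assumptions, not the authors' architecture.

```python
# Minimal 1D-CNN classifier over torsion-angle windows (nystagmus vs. no nystagmus).
# Layer widths, kernel sizes, and the 512-sample window are illustrative assumptions.
import torch
import torch.nn as nn

class NystagmusCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):                      # x: (batch, 1, time)
        return self.head(self.features(x))

model = NystagmusCNN()
torsion_angles = torch.randn(8, 1, 512)        # 8 windows of 512 torsion-angle samples
print(model(torsion_angles).shape)             # -> torch.Size([8, 2])
```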
Open Access Article
Variability of the Skin Temperature from Wrist-Worn Device for Definition of Novel Digital Biomarkers of Glycemia
by
Agnese Piersanti, Martina Littero, Libera Lucia Del Giudice, Ilaria Marcantoni, Laura Burattini, Andrea Tura and Micaela Morettini
Sensors 2025, 25(13), 4038; https://doi.org/10.3390/s25134038 (registering DOI) - 28 Jun 2025
Abstract
This study exploited the skin temperature signal derived from a wrist-worn wearable device to define potential digital biomarkers for glycemia levels. Characterization of the skin temperature signal measured through the Empatica E4 device was obtained in 16 subjects (data taken from a dataset freely available on PhysioNet) by deriving standard metrics and a set of novel metrics describing both the current and the retrospective behavior of the signal. For each subject and for each metric, values corresponding to periods when glycemia was inside the tight range (70–140 mg/dL) were compared through the Wilcoxon rank-sum test against those above or below the range. For hypoglycemia characterization (below range), the retrospective behavior of skin temperature described by the metric CVT SD (standard deviation of the series of coefficients of variation) proved to be the most effective both in daytime and nighttime (100% and 50% of the analyzed subjects, respectively). On the other hand, for hyperglycemia characterization (above range), differences were observed between daytime and nighttime, with the current behavior of skin temperature, described by M2T (deviation from the reference value of 32 °C), being the most informative during daytime, whereas the retrospective behavior, described by SDT hhmm (standard deviation of the series of means), showed the highest effectiveness during nighttime. The proposed variability features outperformed standard metrics, and in future studies, their integration with other digital biomarkers of glycemia could improve the performance of applications devoted to the non-invasive detection of glycemic events.
Full article
(This article belongs to the Section Wearables)
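A simplified interpretation of the variability metrics named above (M2T, SDT, CVT SD) and of a Wilcoxon rank-sum comparison, computed on synthetic skin-temperature series; the window length and exact metric definitions are assumptions, not the paper's implementation.

```python
# Simplified sketch of skin-temperature variability metrics and a rank-sum comparison.
# Sampling rate, window length, and metric definitions are illustrative assumptions.
import numpy as np
from scipy.stats import ranksums

def variability_metrics(temp, win=60):
    """Window-based metrics over a skin-temperature series (1 sample per second assumed)."""
    windows = temp[: len(temp) // win * win].reshape(-1, win)
    means = windows.mean(axis=1)
    cvs = windows.std(axis=1) / means
    return {
        "M2T": np.abs(temp - 32.0).mean(),  # mean deviation from the 32 degC reference
        "SDT": means.std(),                 # SD of the series of window means
        "CVT_SD": cvs.std(),                # SD of the series of coefficients of variation
    }

rng = np.random.default_rng(1)
temp_in_range = 32.0 + 0.3 * rng.standard_normal(3600)   # glycemia within 70-140 mg/dL
temp_below = 31.2 + 0.6 * rng.standard_normal(3600)      # glycemia below range

print(variability_metrics(temp_in_range))
print(variability_metrics(temp_below))

# Wilcoxon rank-sum comparison of per-window deviations between the two conditions.
dev_in = np.abs(temp_in_range.reshape(-1, 60).mean(axis=1) - 32.0)
dev_below = np.abs(temp_below.reshape(-1, 60).mean(axis=1) - 32.0)
print("rank-sum p-value:", ranksums(dev_in, dev_below).pvalue)
```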
Open Access Article
Four-Dimensional Adjustable Electroencephalography Cap for Solid–Gel Electrode
by
Junyi Zhang, Deyu Zhao, Yue Li, Gege Ming and Weihua Pei
Sensors 2025, 25(13), 4037; https://doi.org/10.3390/s25134037 (registering DOI) - 28 Jun 2025
Abstract
Currently, the electroencephalogram (EEG) cap is limited to a finite number of sizes based on head circumference, lacking the mechanical flexibility to accommodate the full range of skull dimensions. This reliance on head circumference data alone often results in a poor fit between the EEG cap and the user’s head shape. To address these limitations, we have developed a four-dimensional (4D) adjustable EEG cap. This cap features an adjustable mechanism that covers the entire cranial area in four dimensions, allowing it to fit the head shapes of nearly all adults. The system is compatible with 64 channels or lower electrode counts. We conducted a study with numerous volunteers to compare the performance characteristics of the 4D caps with the commercial (COML) caps in terms of contact pressure, preparation time, wearing impedance, and performance in brain–computer interface (BCI) applications. The 4D cap demonstrated the ability to adapt to various head shapes more quickly, reduce impedance during testing, and enhance measurement accuracy, signal-to-noise ratio (SNR), and comfort. These improvements suggest its potential for broader application in both laboratory settings and daily life.
Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—3rd Edition)
Open Access Article
An Experimental Approach for Investigating Freezing of Gait in Parkinson’s Disease Using Virtual Reality and Neural Sensing: A Pilot Study
by
Mandy Miller Koop, Anson B. Rosenfeldt, Kathryn Scelina, Logan Scelina, Colin Waltz, Andrew S. Bazyk, Visar Berki, Kyle Baker, Julio N. Reyes Torres, Enio Kuvliev, Sean Nagel, Benjamin L. Walter, James Liao, David Escobar, Kenneth B. Baker and Jay L. Alberts
Sensors 2025, 25(13), 4036; https://doi.org/10.3390/s25134036 (registering DOI) - 28 Jun 2025
Abstract
Freezing of gait (FOG) is a disabling symptom associated with Parkinson's disease (PD). Its understanding and effective treatment are compromised by the difficulty of reliably triggering FOG in clinical and laboratory environments. The Cleveland Clinic-Virtual Home Environment (CC-VHE) platform was developed to address the challenges of eliciting FOG by combining an omnidirectional treadmill with immersive virtual reality (VR) environments to induce FOG under physical, emotional, and cognitive triggers. Recent developments in deep brain stimulation devices that sense neural signals from the subthalamic nucleus in real time offer the potential to understand the underlying neural mechanism(s) of FOG. This manuscript presents the coupling of the CC-VHE technology, VR paradigms, and the experimental and analytical methods for recording and analyzing synchronous cortical, subcortical, and kinematic data as an approach to begin to understand the nuanced neural pathology associated with FOG. To evaluate the utility and feasibility of coupling VR and neural sensing technology, initial data from one participant are included.
Full article
(This article belongs to the Section Biomedical Sensors)
Open Access Article
How Does the Number of Small Goals Affect National-Level Female Soccer Players in Game-Based Situations? Effects on Technical–Tactical, Physical, and Physiological Variables
by
Dovydas Alaune, Audrius Snieckus, Bruno Travassos, Paweł Chmura, David Pizarro and Diogo Coutinho
Sensors 2025, 25(13), 4035; https://doi.org/10.3390/s25134035 (registering DOI) - 28 Jun 2025
Abstract
This study investigated the impact of varying the number of small goals on elite female soccer players' decision-making, technical–tactical skills, running performance, and perceived exertion during game-based situations (GBSs). Sixteen national-level female players (aged 22.33 ± 2.89 years) participated in three conditions within an 8vs8 game without a goalkeeper (45 × 40 m), each featuring a different number of small goals (1.2 × 0.8 m): (i) 1 small goal (1G); (ii) 2 small goals (2G); and (iii) 3 small goals (3G). Positional tracking sensors, ratings of perceived exertion, and notational analysis were used to evaluate player performance. The results indicated that players covered a greater distance at low intensity during the 2G condition compared to both 1G (p = 0.024) and 3G (p ≤ 0.05). Conversely, the 3G condition promoted a higher distance covered at high intensity compared to 2G (p ≤ 0.05). The 1G condition resulted in fewer accelerations (2G, p = 0.003; 3G, p < 0.001) and decelerations (2G, p = 0.012) compared to conditions with additional goals. However, there were no statistically significant effects on technical–tactical actions. Notably, a trend toward improved decision-making was observed in the 1G condition compared to 2G (ES = −0.64 [−1.39; 0.11]), together with a longer ball possession duration compared to 3G (ES = −0.28 [−0.71; 0.16]). In conclusion, coaches working with elite female soccer players can strategically vary the number of goals to achieve specific physical aims (i.e., using 2G to emphasize acceleration and deceleration or 3G to promote high-intensity distance) with minimal effects on perceived fatigue, technical–tactical variables, and decision-making.
Full article
(This article belongs to the Section Wearables)
Open Access Article
Bias-Reduced Localization for Drone Swarm Based on Sensor Selection
by
Bo Wu, Bazhong Shen, Yonggan Zhang, Li Yang and Zhiguo Wang
Sensors 2025, 25(13), 4034; https://doi.org/10.3390/s25134034 (registering DOI) - 28 Jun 2025
Abstract
To address the problem of accurate localization of high-speed drone swarm intrusions, this paper adopts time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurements, aiming to improve the performance of estimating the motion state of drone swarms. To this end, a two-step strategy is proposed in this study. Firstly, a small number of sensor nodes with random locations are selected in the wireless sensor network, and the constraint-weighted least squares (CWLS) method is used to obtain rough position and speed information for the drone swarm. Based on this rough information, the objective function for node optimization is constructed and solved using the randomized semidefinite program (SDP) algorithm proposed in this paper to screen out the sensor nodes with optimal localization performance. Secondly, the sensor nodes screened in the first step are used to re-localize the drone swarm: the CWLS problem is constructed by combining the TDOA and FDOA measurements, and a bias-elimination scheme is proposed to further improve the localization accuracy of the drone swarm. Simulation results show that the randomized SDP algorithm proposed in this paper achieves the best localization performance and, moreover, that the proposed bias reduction scheme allows the localization error of the drone swarm to reach the Cramér–Rao Lower Bound (CRLB) at a low signal-to-noise ratio (SNR).
Full article
(This article belongs to the Section Sensor Networks)
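For orientation, the sketch below solves a TDOA-only localization by plain nonlinear least squares on synthetic data; the paper's constraint-weighted least squares with joint TDOA/FDOA measurements, sensor selection, and bias reduction is considerably more elaborate.

```python
# Simplified TDOA-only localization by nonlinear least squares on synthetic data.
# This only illustrates the measurement model, not the paper's CWLS/SDP pipeline.
import numpy as np
from scipy.optimize import least_squares

c = 3e8                                    # propagation speed (m/s)
sensors = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000], [500, 1500]], float)
target = np.array([620.0, 410.0])          # true (unknown) emitter position

# TDOA measurements relative to sensor 0, with a little timing noise.
ranges = np.linalg.norm(sensors - target, axis=1)
tdoa = (ranges[1:] - ranges[0]) / c + 1e-9 * np.random.randn(len(sensors) - 1)

def residuals(p):
    r = np.linalg.norm(sensors - p, axis=1)
    return (r[1:] - r[0]) / c - tdoa

estimate = least_squares(residuals, x0=np.array([500.0, 500.0])).x
print("estimated position:", estimate)
```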
Open Access Article
Experimental Study on Multi-Directional Hybrid Energy Harvesting of a Two-Degree-of-Freedom Cantilever Beam
by
Minglei Han, Zhiqi Xing, Shuangbin Liu and Xu Yang
Sensors 2025, 25(13), 4033; https://doi.org/10.3390/s25134033 (registering DOI) - 28 Jun 2025
Abstract
Building on previous research on the directional self-adaptive piezoelectric energy harvester (DSPEH), a structural design scheme for a multi-directional hybrid energy harvester (MHEH) is put forward, and the working principle of the MHEH is experimentally studied. A prototype is designed and manufactured, and the output characteristics of the MHEH in the vibrational degree of freedom (DOF) and the rotational DOF are experimentally studied. Compared with the DSPEH, after adding the electromagnetic energy harvesting module, the MHEH effectively uses the rotational energy in the rotational DOF and achieves simultaneous energy harvesting from one excitation through two mechanisms, with the output power of the electromagnetic module reaching 61 μW. The total power of the system is increased by 10 times, the power density is increased by 500%, and the MHEH has high voltage output characteristics in multiple directions. Compared with traditional multi-directional and self-adaptive energy harvesters, the MHEH utilizes a reverse-thinking method to generate continuous rotational motion of the cantilever beam, thus eliminating the influence of the external excitation direction on the normal vibration of the cantilever beam. In addition, the MHEH achieves hybrid energy harvesting with a single cantilever beam and multiple mechanisms, providing new ideas for multi-directional energy harvesting.
Full article
(This article belongs to the Section Sensor Networks)
Open Access Article
Reducing Label Dependency in Human Activity Recognition with Wearables: From Supervised Learning to Novel Weakly Self-Supervised Approaches
by
Taoran Sheng and Manfred Huber
Sensors 2025, 25(13), 4032; https://doi.org/10.3390/s25134032 (registering DOI) - 28 Jun 2025
Abstract
Human activity recognition (HAR) using wearable sensors has advanced through various machine learning paradigms, each with inherent trade-offs between performance and labeling requirements. While fully supervised techniques achieve high accuracy, they demand extensive labeled datasets that are costly to obtain. Conversely, unsupervised methods eliminate labeling needs but often deliver suboptimal performance. This paper presents a comprehensive investigation across the supervision spectrum for wearable-based HAR, with particular focus on novel approaches that minimize labeling requirements while maintaining competitive accuracy. We develop and empirically compare: (1) traditional fully supervised learning, (2) basic unsupervised learning, (3) a weakly supervised learning approach with constraints, (4) a multi-task learning approach with knowledge sharing, (5) a self-supervised approach based on domain expertise, and (6) a novel weakly self-supervised learning framework that leverages domain knowledge and minimal labeled data. Experiments across benchmark datasets demonstrate that: (i) our weakly supervised methods achieve performance comparable to fully supervised approaches while significantly reducing supervision requirements; (ii) the proposed multi-task framework enhances performance through knowledge sharing between related tasks; (iii) our weakly self-supervised approach demonstrates remarkable efficiency with just 10% of labeled data. These results not only highlight the complementary strengths of different learning paradigms, offering insights into tailoring HAR solutions based on the availability of labeled data, but also establish that our novel weakly self-supervised framework offers a promising solution for practical HAR applications where labeled data are limited.
Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning: 2nd Edition)
Open Access Article
Optical Spectroscopic Detection of Mitochondrial Biomarkers (FMN and NADH) for Hypothermic Oxygenated Machine Perfusion: A Comparative Study in Different Perfusion Media
by
Lorenzo Agostino Cadinu, Keyue Sun, Chunbao Jiao, Rebecca Panconesi, Sangeeta Satish, Fatma Selin Yildirim, Omer Faruk Karakaya, Chase J. Wehrle, Geofia Shaina Crasta, Fernanda Walsh Fernandes, Nasim Eshraghi, Koki Takase, Hiroshi Horie, Pier Carlo Ricci, Davide Bagnoli, Mauricio Flores Carvalho, Andrea Schlegel and Massimo Barbaro
Sensors 2025, 25(13), 4031; https://doi.org/10.3390/s25134031 (registering DOI) - 28 Jun 2025
Abstract
Ex situ machine perfusion has emerged as a pivotal technique for organ preservation and pre-transplant viability assessment, where the real-time monitoring of mitochondrial biomarkers—flavin mononucleotide (FMN) and nicotinamide adenine dinucleotide (NADH)—could significantly mitigate ischemia-reperfusion injury risks. This study develops a non-invasive optical method combining fluorescence and UV-visible spectrophotometry to quantify FMN and NADH in hypothermic oxygenated perfusion media. Calibration curves revealed linear responses for both biomarkers in absorption and fluorescence (FMN: λex = 445 nm, λem = 530–540 nm; NADH: λex = 340 nm, λem = 465 nm) at concentrations < 100 μg mL−1. However, NADH exhibited nonlinear fluorescence above 100 μg mL−1, requiring shifted excitation to 365 nm for reliable detection. Spectroscopic analysis further demonstrated how perfusion solution composition alters FMN/NADH fluorescence properties, with consistent reproducibility across media. The method’s robustness was validated through comparative studies in clinically relevant solutions, proposing a strategy for precise biomarker quantification without invasive sampling. These findings establish a foundation for real-time, optical biosensor development to enhance organ perfusion monitoring. By bridging spectroscopic principles with clinical needs, this work advances translational sensor technologies for transplant medicine, offering a template for future device integration.
Full article
(This article belongs to the Special Issue Magnetic and Optical Sensors for Healthcare, Medical, and Bioscience Applications)
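A minimal sketch of the linear calibration step implied by the abstract, fitting fluorescence intensity against concentration in the sub-100 µg/mL region and inverting the fit; the intensity values are made up for demonstration.

```python
# Illustrative linear calibration for fluorescence intensity vs. concentration in the
# < 100 µg/mL region where the abstract reports a linear response. Values are invented.
import numpy as np

conc = np.array([0, 10, 20, 40, 60, 80, 100], dtype=float)        # µg/mL
intensity = np.array([0.02, 0.11, 0.21, 0.43, 0.62, 0.83, 1.02])  # arbitrary units

slope, intercept = np.polyfit(conc, intensity, 1)
r2 = np.corrcoef(conc, intensity)[0, 1] ** 2
print(f"I = {slope:.4f}*C + {intercept:.4f},  R² = {r2:.4f}")

# Inverting the calibration to recover an unknown FMN/NADH concentration from a reading.
unknown_intensity = 0.55
print("estimated concentration:", (unknown_intensity - intercept) / slope, "µg/mL")
```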
Open Access Article
Study on Height Measurement for Polyethylene Terephthalate (PET) Materials Based on Residual Networks
by
Chongwei Liao, Weixin Zhang, Yujie Peng and Changjun Liu
Sensors 2025, 25(13), 4030; https://doi.org/10.3390/s25134030 (registering DOI) - 28 Jun 2025
Abstract
In industrial production, high-power microwaves are commonly used for heating and drying processes; however, their application in measurement is relatively limited. This paper presents a power measurement system to enhance the use of microwave measurements in industry and improve the efficiency of microwave drying for PET particles. Operating at 2.45 GHz, the system integrates four-port power measurements based on a multilayer perceptron (MLP). By introducing residual connectivity, a residual network is constructed to detect the height of PET particles. Experimental results show that this system can perform rapid measurements without needing a vector network analyzer (VNA), significantly improving the efficiency of microwave energy utilization in the early drying stages. Furthermore, the system offers practical and cost-efficient predictions for low-loss particulate materials. This power measurement strategy holds promising application potential in future industrial production.
Full article
(This article belongs to the Section Electronic Sensors)
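A small residual MLP mapping four-port power readings to a height estimate, illustrating the "MLP with residual connectivity" idea; layer widths and the input format are assumptions, not the authors' network.

```python
# Minimal residual MLP over four-port power readings; widths and input format are
# illustrative assumptions, not the architecture used in the paper.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), nn.ReLU(), nn.Linear(width, width))

    def forward(self, x):
        return torch.relu(x + self.net(x))    # skip connection around the two linear layers

class HeightRegressor(nn.Module):
    def __init__(self, n_ports: int = 4, width: int = 64, n_blocks: int = 3):
        super().__init__()
        self.stem = nn.Linear(n_ports, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.out = nn.Linear(width, 1)         # predicted PET particle height

    def forward(self, x):
        return self.out(self.blocks(torch.relu(self.stem(x))))

powers = torch.rand(16, 4)                     # 16 samples of four-port power measurements
print(HeightRegressor()(powers).shape)         # -> torch.Size([16, 1])
```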
Open Access Review
Advances in Interface Circuits for Self-Powered Piezoelectric Energy Harvesting Systems: A Comprehensive Review
by
Abdallah Al Ghazi, Achour Ouslimani and Abed-Elhak Kasbari
Sensors 2025, 25(13), 4029; https://doi.org/10.3390/s25134029 (registering DOI) - 28 Jun 2025
Abstract
This paper presents a comprehensive summary of recent advances in circuit topologies for piezoelectric energy harvesting, leading to self-powered systems (SPSs), covering the full-bridge rectifier (FBR) and half-bridge rectifier (HBR), AC-DC converters, and maximum power point tracking (MPPT) techniques. These approaches are analyzed with respect to their advantages, limitations, and overall impact on energy harvesting efficiency. This work explores alternative methods that leverage phase shifting between voltage and current waveform components to enhance conversion performance. Additionally, it provides detailed insights into advanced design strategies, including adaptive power management algorithms, low-power control techniques, and complex impedance matching. The paper also addresses the fundamental principles and challenges of converting mechanical vibrations into electrical energy. Experimental results and performance metrics are reviewed, particularly in relation to hybrid approaches, load impedance, vibration frequency, and power conditioning requirements in energy harvesting systems. This review aims to provide researchers and engineers with a critical understanding of the current state of the art, key challenges, and emerging opportunities in piezoelectric energy harvesting. By examining recent developments, it offers valuable insights into optimizing interface circuit design for the development of efficient and self-sustaining piezoelectric energy harvesting systems.
Full article
(This article belongs to the Section Electronic Sensors)
Open Access Article
A Comprehensive Methodological Survey of Human Activity Recognition Across Diverse Data Modalities
by
Jungpil Shin, Najmul Hassan, Abu Saleh Musa Miah and Satoshi Nishimura
Sensors 2025, 25(13), 4028; https://doi.org/10.3390/s25134028 (registering DOI) - 27 Jun 2025
Abstract
Human Activity Recognition (HAR) systems aim to understand human behavior and assign a label to each action, attracting significant attention in computer vision due to their wide range of applications. HAR can leverage various data modalities, such as RGB images and video, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, and radar signals. Each modality provides unique and complementary information suited to different application scenarios. Consequently, numerous studies have investigated diverse approaches for HAR using these modalities. This survey includes only peer-reviewed research papers published in English to ensure linguistic consistency and academic integrity. This paper presents a comprehensive survey of the latest advancements in HAR from 2014 to 2025, focusing on Machine Learning (ML) and Deep Learning (DL) approaches categorized by input data modalities. We review both single-modality and multi-modality techniques, highlighting fusion-based and co-learning frameworks. Additionally, we cover advancements in hand-crafted action features, methods for recognizing human–object interactions, and activity detection. Our survey includes a detailed dataset description for each modality, as well as a summary of the latest HAR systems, accompanied by a mathematical derivation for evaluating the deep learning model for each modality, and it also provides comparative results on benchmark datasets. Finally, we provide insightful observations and propose effective future research directions in HAR.
Full article
(This article belongs to the Special Issue Computer Vision and Sensors-Based Application for Intelligent Systems)
Open Access Article
On-Site and Sensitive Pipeline Oxygen Detection Equipment Based on TDLAS
by
Yanfei Zhang, Kaiping Yuan, Zhaoan Yu, Yunhan Zhang, Xin Liu and Tieliang Lv
Sensors 2025, 25(13), 4027; https://doi.org/10.3390/s25134027 (registering DOI) - 27 Jun 2025
Abstract
The application of oxygen sensors based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) in the industrial field has received extensive attention. However, most of the existing studies construct detection systems using discrete devices, making it difficult to apply them in the industrial field. In this work, through the optimization of the sensor circuit, the size of the core components of the sensor is reduced to 7.8 × 7.8 × 11.8 cm³, integrating the laser, photodetector, and system control circuit. A novel integrated optical path design is proposed for the optical–mechanical structure, which enhances the structural integration and long-term optical path stability while reducing the system assembly complexity. The interlocking design of the laser-driving digital-to-analog converter (DAC) and the photocurrent acquisition analog-to-digital converter (ADC) relaxes the system hardware requirements for harmonic signal extraction. By adopting a high-precision ADC and high-resolution pulse-width modulation (PWM), the peak-to-peak value of the laser temperature control noise is reduced to 2 m°C, thereby reducing the detection noise of the sensor. This oxygen detection system has a minimum response time of 0.1 s. Under the condition of a 0.5 m detection optical path, the Allan variance shows that when the integration time is 5.6 s, the detection limit reaches 53.4 ppm, surpassing the detection accuracy of comparable equipment given the very small system size.
Full article
(This article belongs to the Section Optical Sensors)
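The detection limit is quoted from an Allan variance analysis; below is a minimal non-overlapping Allan deviation computation on a synthetic concentration series, with a placeholder sampling rate.

```python
# Simple non-overlapping Allan deviation of a concentration time series, the statistic
# behind the quoted detection limit at a 5.6 s integration time. Data are synthetic and
# the 10 Hz sampling rate is only inferred from the 0.1 s response time.
import numpy as np

def allan_deviation(y, fs, taus):
    """y: concentration samples, fs: sample rate (Hz), taus: averaging times (s)."""
    out = []
    for tau in taus:
        m = int(round(tau * fs))               # samples per averaging bin
        if m < 1 or len(y) // m < 2:
            out.append(np.nan)
            continue
        bins = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(bins) ** 2)))
    return np.array(out)

fs = 10.0                                              # readings per second
concentration = 1000 + 50 * np.random.randn(36000)     # one hour of synthetic O2 readings (ppm)
taus = np.array([0.1, 1.0, 5.6, 10.0, 30.0, 100.0])
print(dict(zip(taus, allan_deviation(concentration, fs, taus).round(1))))
```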
Open Access Article
Technology and Method Optimization for Foot–Ground Contact Force Detection in Wheel-Legged Robots
by
Chao Huang, Meng Hong, Yaodong Wang, Hui Chai, Zhuo Hu, Zheng Xiao, Sijia Guan and Min Guo
Sensors 2025, 25(13), 4026; https://doi.org/10.3390/s25134026 (registering DOI) - 27 Jun 2025
Abstract
Wheel-legged robots combine the advantages of both wheeled robots and traditional quadruped robots, enhancing terrain adaptability but posing higher demands on the perception of foot–ground contact forces. However, existing approaches still suffer from limited accuracy in estimating contact positions and three-dimensional contact forces when dealing with flexible tire–ground interactions. To address this challenge, this study proposes a foot–ground contact state detection technique and optimization method based on multi-sensor fusion and intelligent modeling for wheel-legged robots. First, finite element analysis (FEA) is used to simulate strain distribution under various contact conditions. Combined with global sensitivity analysis (GSA), the optimal placement of PVDF sensors is determined and experimentally validated. Subsequently, under dynamic gait conditions, data collected from the PVDF sensor array are used to predict three-dimensional contact forces through Gaussian process regression (GPR) and artificial neural network (ANN) models. A custom experimental platform is developed to replicate variable gait frequencies and collect dynamic contact data for validation. The results demonstrate that both GPR and ANN models achieve high accuracy in predicting dynamic 3D contact forces, with normalized root mean square error (NRMSE) as low as 8.04%. The models exhibit reliable repeatability and generalization to novel inputs, providing robust technical support for stable contact perception and motion decision-making in complex environments.
Full article
(This article belongs to the Section Sensors and Robotics)
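A sketch of fitting a Gaussian process regressor from sensor-array features to one contact-force component and reporting the normalized RMSE metric used above; the features, kernel, and data are synthetic assumptions rather than the authors' configuration.

```python
# Sketch of GPR-based contact-force regression from PVDF-array features, with the
# normalized RMSE metric. Feature layout, kernel, and data are synthetic assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 8))                            # 8-channel PVDF features (synthetic)
f_z = 20 * X[:, 0] - 5 * X[:, 3] + rng.normal(0, 1, 400)     # synthetic vertical contact force (N)

X_tr, X_te, y_tr, y_te = train_test_split(X, f_z, random_state=0)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_tr, y_tr)

pred = gpr.predict(X_te)
nrmse = np.sqrt(np.mean((pred - y_te) ** 2)) / (y_te.max() - y_te.min())
print(f"NRMSE = {100 * nrmse:.2f}%")
```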

Topics
Topic in
AI, Drones, Electronics, IoT, MAKE, Sensors
Machine Learning in Internet of Things II
Topic Editors: Dawid Połap, Robertas Damaševičius
Deadline: 30 June 2025
Topic in
Applied Sciences, Electronics, JSAN, Photonics, Sensors, Telecom
Machine Learning in Communication Systems and Networks, 2nd Edition
Topic Editors: Yichuang Sun, Haeyoung Lee, Oluyomi Simpson
Deadline: 20 July 2025
Topic in
Aerospace, Automation, Drones, Remote Sensing, Sensors
Target Tracking, Guidance, and Navigation for Autonomous Systems, 2nd Edition
Topic Editors: Won-Sang Ra, Shaoming He, Ivan Masmitja
Deadline: 20 August 2025
Topic in
Aerospace, Drones, Inventions, Materials, Sensors, Polymers, Applied Sciences, Energies
Innovation and Inventions in Aerospace and UAV Applications
Topic Editors: Andrzej Łukaszewicz, Mohamed Thariq Hameed Sultan, Quang Ha, Wojciech Giernacki, Leszek Ambroziak, Wojciech Tarasiuk, Andriy Holovatyy
Deadline: 31 August 2025

Special Issues
Special Issue in
Sensors
Applications of Wireless Communication Network Based on MIMO in Sensors
Guest Editors: Hongyuan Gao, Yumeng Su
Deadline: 30 June 2025
Special Issue in
Sensors
Recent Advances in Bioelectronics for Health Monitoring and Disease Diagnosis
Guest Editors: Massimo Mischi, Geert Langereis
Deadline: 30 June 2025
Special Issue in
Sensors
Sensors and Sensor Fusion Technology in Autonomous Vehicles
Guest Editor: Stefano Quer
Deadline: 30 June 2025
Special Issue in
Sensors
Future Horizons in Networking: Exploring the Potential of 6G
Guest Editors: Arun Kumar, Peter Han Joo Chong
Deadline: 30 June 2025
Topical Collections
Topical Collection in
Sensors
Robotics, Sensors and Industry 4.0
Collection Editors: Abir Hussain, Dhiya Al-Jumeily OBE, Hissam Tawfik, Panos Liatsis
Topical Collection in
Sensors
Cryptography and Security in IoT and Sensor Networks
Collection Editors: Ilsun You, Gaurav Choudhary, Karl Andersson
Topical Collection in
Sensors
Sensors for Gait, Posture, and Health Monitoring
Collection Editor: Thurmon Lockhart