Sensors, Volume 26, Issue 1 (January-1 2026) – 350 articles

Cover Story: Planetary journal bearings enable high torque density and reliability in wind turbine gearboxes due to their compact design and long service life. Although generally robust, abnormal conditions (e.g., contamination or insufficient lubrication) can cause damage. Condition monitoring systems (CMSs) can help detect critical states early, but no commercial solution currently exists for journal bearings in wind turbines. Existing studies are limited to small-scale tests, and their applicability to full-size systems remains unproven. This work presents results from a wind turbine gearbox system test with journal bearings and a novel CMS based on surface acoustic wave measurements. The bearing design, sensor setup, and test results are described. A machine learning algorithm is shown to predict bearing friction states from the CMS signals.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for published papers, which are made available in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
28 pages, 5948 KB  
Article
Probability-Based Forwarding Scheme with Boundary Optimization for C-V2X Multi-Hop Communication
by Zhonghui Pei, Long Xie, Jingbin Lu, Liyuan Zheng and Huiheng Liu
Sensors 2026, 26(1), 350; https://doi.org/10.3390/s26010350 - 5 Jan 2026
Viewed by 377
Abstract
The Internet of Vehicles (IoV) can transmit the status information of vehicles and roads through single-hop or multi-hop broadcast communication, which is a key technology for building intelligent transportation systems and enhancing road safety. However, in dense traffic environments, broadcasting Emergency messages via vehicles can easily trigger massive forwarding redundancy, leading to channel resource selection conflicts between vehicles and affecting the reliability of inter-vehicle communication. This paper analyzes the forwarding near the single-hop transmission radius boundary of the sending node in a probability-based inter-vehicle multi-hop forwarding scheme, pointing out the existence of the boundary forwarding redundancy problem. To address this problem, this paper proposes two probability-based schemes with boundary optimization: (1) By optimizing the forwarding probability distribution outside the transmission radius boundary of the sending node, the forwarding nodes outside the boundary can be effectively utilized while effectively reducing the forwarding redundancy they bring. (2) Additional forwarding backoff timers are allocated to nodes outside the transmission radius boundary of the sending node based on the distance to further reduce the forwarding redundancy outside the boundary. Experimental results show that, compared with the reference schemes without boundary forwarding probability optimization, the proposed schemes significantly reduce forwarding redundancy of Emergency messages while maintaining good single-hop and multi-hop transmission performance. When the reference transmission radius is 300 m and the vehicle density is 0.18 veh/m, compared with the probability-based forwarding scheme without boundary optimization, the proposed schemes (1) and (2) improve the single-hop packet delivery ratio by an average of about 5.41% and 11.83% and reduce the multi-hop forwarding ratio by about 18.07% and 36.07%, respectively. Full article
(This article belongs to the Special Issue Vehicle-to-Everything (V2X) Communication Networks 2024–2025)
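The two boundary optimizations described in the abstract can be illustrated with a toy sketch: a forwarding probability that rises with distance from the sender inside the transmission radius and decays past the boundary, plus an extra distance-based backoff for nodes beyond it. All function names and constants here are illustrative assumptions, not the paper's actual scheme.

```python
import math

def forward_probability(d, radius=300.0, decay=50.0):
    """Toy forwarding probability: grows with distance from the sender
    inside the transmission radius, then decays exponentially outside it,
    so boundary nodes do not all rebroadcast at once."""
    if d <= 0:
        return 0.0
    if d <= radius:
        return d / radius                    # farther relays cover more new area
    return math.exp(-(d - radius) / decay)   # damped probability past the boundary

def backoff_delay(d, radius=300.0, slot=0.001, max_slots=10):
    """Toy distance-based backoff: nodes beyond the boundary wait extra
    slots proportional to their overshoot, thinning duplicate forwards."""
    if d <= radius:
        return 0.0
    overshoot = min((d - radius) / radius, 1.0)
    return slot * round(overshoot * max_slots)
```

Together these give boundary nodes a lower rebroadcast probability and a later start, which is the qualitative effect schemes (1) and (2) target.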
26 pages, 26937 KB  
Article
Concurrent Incipient Fault Diagnosis in Three-Phase Induction Motors Using Discriminative Band Energy Analysis of AM-Demodulated Vibration Envelopes
by Matheus Boldarini de Godoy, Guilherme Beraldi Lucas and Andre Luiz Andreoli
Sensors 2026, 26(1), 349; https://doi.org/10.3390/s26010349 - 5 Jan 2026
Viewed by 480
Abstract
Three-phase induction motors (TIMs) are widely used in industrial applications, with bearings and rotors representing the most failure-prone components. Detecting incipient damage in these elements is particularly challenging. The associated signatures are weak and highly sensitive to variations, and their identification typically demands sophisticated filters, deep learning models, or high-cost sensors. In this context, the main goal of this work is to propose a new algorithm that reduces the dependence on such complex techniques while still enabling reliable detection of realistic faults using low-cost sensors. Therefore, the proposed Discriminative Band Energy Analysis (DBEA) algorithm operates on vibration signals acquired by low-cost accelerometers. The DBEA operates as a low-complexity filtering stage that is inherently robust to noise and variations in operating conditions, thereby enhancing discrimination among fault classes, without requiring neural networks or deep learning techniques. Moreover, the interaction of concurrent faults generates distinctive amplitude-modulated patterns in the vibration signal, making the AM demodulation-based algorithm particularly effective at separating overlapping fault signatures. The method was evaluated under a wide range of load and voltage conditions, demonstrating robustness to speed variations and measurement noise. The results show that the proposed DBEA framework enables non-invasive classification, making it suitable for implementation in compact and portable diagnostic systems. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
19 pages, 684 KB  
Article
Sensor Driven Resource Optimization Framework for Intelligent Fog Enabled IoHT Systems
by Salman Khan, Ibrar Ali Shah, Woong-Kee Loh, Javed Ali Khan, Alexios Mylonas and Nikolaos Pitropakis
Sensors 2026, 26(1), 348; https://doi.org/10.3390/s26010348 - 5 Jan 2026
Viewed by 381
Abstract
Fog computing provides services close to the user premises, reducing the communication latency of many real-time applications. This latency has been a major constraint in cloud computing, where slow response times cause user dissatisfaction. Applications such as smart transportation, smart healthcare systems, smart cities, smart farming, video surveillance, and virtual and augmented reality are delay-sensitive and require quick responses; in critical healthcare applications, a response delay can cause serious harm to patients. By leveraging fog computing, a substantial portion of healthcare-related computational tasks can be offloaded to nearby fog nodes. This localized processing significantly reduces latency and enhances system availability, making it particularly advantageous for time-sensitive and mission-critical healthcare applications. However, fog devices are resource-constrained and require proper resource management techniques for efficient resource utilization. This study presents an optimized resource allocation and scheduling framework for delay-sensitive healthcare applications using a Modified Particle Swarm Optimization (MPSO) algorithm. The proposed technique was evaluated through extensive simulations in the iFogSim toolkit, measuring system response time, execution cost, and execution time. Experimental results demonstrate that the MPSO-based method reduces makespan by up to 8% and execution cost by up to 3% compared to existing metaheuristic algorithms, highlighting its effectiveness in enhancing overall fog computing performance for healthcare systems. Full article
(This article belongs to the Section Sensor Networks)
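As a rough illustration of the particle-swarm idea behind MPSO (the abstract does not detail the authors' modifications), a minimal random-key PSO that maps tasks to fog nodes to minimize makespan might look like this; coefficients are conventional PSO defaults, not the paper's settings.

```python
import random

def makespan(assignment, task_len, node_mips):
    """Completion time of the busiest fog node for a task-to-node mapping."""
    load = [0.0] * len(node_mips)
    for task, node in enumerate(assignment):
        load[node] += task_len[task] / node_mips[node]
    return max(load)

def pso_schedule(task_len, node_mips, particles=20, iters=60, seed=1):
    """Minimal random-key PSO: each particle holds one real key per task,
    decoded to a node index; velocities pull keys toward the personal and
    global bests (standard update rule, toy coefficients)."""
    rng = random.Random(seed)
    n_tasks, n_nodes = len(task_len), len(node_mips)

    def decode(pos):
        return [min(int(p * n_nodes), n_nodes - 1) for p in pos]

    swarm = [[rng.random() for _ in range(n_tasks)] for _ in range(particles)]
    vel = [[0.0] * n_tasks for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    pbest_f = [makespan(decode(p), task_len, node_mips) for p in swarm]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for i, pos in enumerate(swarm):
            for d in range(n_tasks):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[d])
                             + 1.5 * rng.random() * (gbest[d] - pos[d]))
                pos[d] = min(max(pos[d] + vel[i][d], 0.0), 0.999)
            f = makespan(decode(pos), task_len, node_mips)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[:], f
    return decode(gbest), gbest_f
```

A real fog scheduler would add cost and response-time terms to the fitness, as the paper's evaluation metrics suggest.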
22 pages, 4277 KB  
Article
TGN-MCDS: A Temporal Graph Network-Based Algorithm for Cluster-Head Optimization in Large-Scale FANETs
by Xiangrui Fan, Yuxuan Yang, Shuo Zhang and Wenlong Cai
Sensors 2026, 26(1), 347; https://doi.org/10.3390/s26010347 - 5 Jan 2026
Viewed by 323
Abstract
With the growing deployment of Flying Ad hoc Networks (FANETs) in military and civilian applications, constructing a stable and efficient communication backbone has become a critical challenge. This paper tackles the Cluster Head (CH) optimization problem in large-scale and highly dynamic FANETs by formulating it as a Minimum Connected Dominating Set (MCDS) problem. However, since MCDS is NP-complete on general graphs, existing heuristic and exact algorithms suffer from limited coverage, poor connectivity, and high computational cost. To address these issues, we propose TGN-MCDS, a novel algorithm built upon the Temporal Graph Network (TGN) architecture, which leverages graph neural networks for cluster head selection and efficiently learns time-varying network topologies. The algorithm adopts a multi-objective loss function incorporating coverage, connectivity, size control, centrality, edge penalty, temporal smoothness, and information entropy to guide model training. Simulation results demonstrate that TGN-MCDS rapidly achieves near-optimal CH sets with full node coverage and strong connectivity. Compared with Greedy, Integer Linear Programming (ILP), and Branch-and-Bound (BnB) methods, TGN-MCDS produces fewer and more stable CHs, significantly improving cluster stability while maintaining high computational efficiency for real-time operations in large-scale FANETs. Full article
(This article belongs to the Section Sensor Networks)
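For context, a Guha–Khuller-style greedy heuristic for the Minimum Connected Dominating Set — representative of the heuristic baselines TGN-MCDS is compared against — can be sketched in a few lines (illustrative code, not the paper's implementation):

```python
def greedy_cds(adj):
    """Guha-Khuller-style greedy: grow a connected dominating set from a
    highest-degree node, repeatedly adding the already-dominated node that
    covers the most still-uncovered neighbors (which keeps the set connected)."""
    nodes = set(adj)
    start = max(nodes, key=lambda n: len(adj[n]))
    cds = {start}
    covered = {start} | set(adj[start])      # cds plus everything it dominates
    while covered != nodes:
        # candidates are dominated nodes not yet in the CDS, so adding one
        # always leaves the CDS connected
        best = max((n for n in covered - cds),
                   key=lambda n: len(set(adj[n]) - covered))
        cds.add(best)
        covered |= set(adj[best])
    return cds
```

On a static graph this gives a reasonable CH set; the paper's point is that such heuristics must be re-run as the FANET topology changes, which the temporal graph network learns to handle directly.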
22 pages, 1715 KB  
Article
A Semantic-Associated Factor Graph Model for LiDAR-Assisted Indoor Multipath Localization
by Bingxun Liu, Ke Han, Zhongliang Deng and Gan Guo
Sensors 2026, 26(1), 346; https://doi.org/10.3390/s26010346 - 5 Jan 2026
Viewed by 331
Abstract
In indoor environments where Global Navigation Satellite System (GNSS) signals are entirely blocked, wireless signals such as 5G and Ultra-Wideband (UWB) have become primary means for high-precision positioning. However, complex indoor structures lead to significant multipath effects, which severely constrain the improvement of positioning accuracy. Existing indoor positioning methods rarely link environmental semantic information (e.g., wall, column) to multipath error estimation, leading to inaccurate multipath correction—especially in complex scenes with multiple reflective objects. To address this issue, this paper proposes a LiDAR-assisted multipath estimation and positioning method. This method constructs a tightly coupled perception-positioning framework: first, a semantic-feature-based neural network for reflective surface detection is designed to accurately extract the geometric parameters of potential reflectors from LiDAR point clouds; subsequently, a unified factor graph model is established to multidimensionally associate and jointly infer terminal states, virtual anchor (VA) states, wireless signal measurements, and LiDAR-perceived reflector information, enabling dynamic discrimination and utilization of both line-of-sight (LOS) and non-line-of-sight (NLOS) paths. Experimental results demonstrate that the root mean square error (RMSE) of the proposed method is improved by 32.1% compared to traditional multipath compensation approaches. This research provides an effective solution for high-precision and robust positioning in complex indoor environments. Full article
(This article belongs to the Special Issue Advances in RFID-Based Indoor Positioning Systems)
21 pages, 2477 KB  
Article
Non-Invasive Blood Pressure Estimation Enhanced by Capillary Refill Time Modulation of PPG Signals
by Qianheng Yin, Yixiong Chen, Lan Lin, Dongdong Wang and Shen Sun
Sensors 2026, 26(1), 345; https://doi.org/10.3390/s26010345 - 5 Jan 2026
Viewed by 448
Abstract
This study evaluates the impact of capillary refill time (CRT) modulation on photoplethysmography (PPG) signals for improved non-invasive continuous blood pressure (CBP) estimation. Data from 21 healthy participants were collected, applying a standardized 9 N pressure for 15 s to induce CRT during 6-min sessions. PPG signals were segmented into 252 paired 30-s intervals (CRT-modulated and standard). Three machine learning models—ResNetCNN, LSTM, and Transformer—were validated using leave-one-subject-out (LOSO) and non-LOSO methods. CRT modulation significantly enhanced accuracy across all models. ResNetCNN showed substantial improvements, reducing mean absolute error (MAE) by up to 35.6% and mean absolute percentage error (MAPE) by up to 40.6%. LSTM and Transformer models also achieved notable accuracy gains. All models met the Association for the Advancement of Medical Instrumentation (AAMI) criteria (mean error < 5 mmHg; standard deviation < 8 mmHg). The findings suggest CRT modulation’s strong potential to improve wearable CBP monitoring, especially in resource-limited settings. Full article
(This article belongs to the Section Wearables)
25 pages, 4290 KB  
Article
State-Aware Resource Allocation for V2X Communications
by Ming Sun, Jinqing Xu and Jiaying Wang
Sensors 2026, 26(1), 344; https://doi.org/10.3390/s26010344 - 5 Jan 2026
Viewed by 410
Abstract
Vehicle-to-Everything (V2X) has become a key technology for addressing intelligent transportation challenges. Improving spectrum utilization and mitigating multi-user interference among V2X links are currently the primary focuses of research efforts. However, the time-varying nature of channel resources and the dynamic vehicular environment pose significant challenges to achieving high spectral efficiency and low interference. Numerous studies have demonstrated the effectiveness of deep reinforcement learning (DRL) in distributed resource allocation for vehicular networks. Nevertheless, in conventional distributed DRL frameworks, the independence of agent decisions often weakens cooperation among agents, thereby limiting the overall performance potential of the algorithms. To address this limitation, this paper proposes a state-aware communication resource allocation algorithm for vehicular networks. The proposed approach enhances the representation capability of observable data by expanding the state space, thus improving the utilization of available observations. Additionally, a conditional attention mechanism is introduced to strengthen the model’s perception of environmental dynamics. These innovative improvements significantly enhance each agent’s awareness of the environment and promote effective collaboration among agents. Simulation results verify that the proposed algorithm effectively improves agents’ environmental perception and inter-agent cooperation, leading to superior performance in complex and dynamic V2X scenarios. Full article
(This article belongs to the Section Communications)
25 pages, 9700 KB  
Article
A Natural Gas Energy Metering Method Based on Density-Sound Velocity Correlation
by Bin Zhang, Zhenwei Huang, Wenlin Wang, Junxian Wang, Dailiang Xie, Ying Cheng and Yi Yang
Sensors 2026, 26(1), 343; https://doi.org/10.3390/s26010343 - 5 Jan 2026
Viewed by 353
Abstract
This paper proposes a method for metering natural gas energy based on the correlation between density and sound velocity. The technique integrates physical property correlation models with measured parameters, such as temperature, pressure, sound velocity, and density, to accurately predict the compression factor and the ideal volume calorific value of natural gas under operating conditions. The volume flow is corrected using the compression factor, which enables precise metering of natural gas energy through the adjusted volume flow and calorific value. To develop a high-precision physical property correlation model, a natural gas dataset comprising 10,000 sample sets is first constructed for model training and testing. Multiple machine learning algorithms are then employed to build predictive models. Analysis of the experimental results led to the development of a model-switching strategy based on the ranges of input features, which substantially enhanced prediction accuracy. For the compression factor model, the mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and coefficient of determination (R2) were 0.00118, 0.0030, 0.14%, and 0.9987, respectively. The corresponding indicators for the calorific value model were 0.1583, 0.331, 0.44%, and 0.9736. The proposed method is finally validated using a natural gas real-flow test bench. The results demonstrated maximum prediction errors of 0.061% and 1.19% for the compression factor and calorific value, respectively, while the maximum relative energy error across four gas samples was 1.21%. These results indicate that the method can effectively achieve accurate natural gas energy metering in practical operating conditions. Full article
(This article belongs to the Section Industrial Sensors)
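The volume-correction step the abstract refers to follows the standard real-gas conversion to reference conditions; a sketch using the common 101.325 kPa / 288.15 K convention (reference values and function names here are assumptions, not the paper's):

```python
def corrected_volume_flow(q_line, p, t, z, p_ref=101.325, t_ref=288.15, z_ref=1.0):
    """Real-gas conversion of a line-condition volume flow to reference
    conditions: Q_ref = Q_line * (p/p_ref) * (T_ref/T) * (Z_ref/Z),
    with the compression factor Z accounting for non-ideal behavior."""
    return q_line * (p / p_ref) * (t_ref / t) * (z_ref / z)

def energy_flow(q_line, p, t, z, calorific_value):
    """Energy flow = reference-condition volume flow x volumetric calorific value."""
    return corrected_volume_flow(q_line, p, t, z) * calorific_value
```

In the paper's pipeline, Z and the calorific value are predicted by the learned property-correlation models from temperature, pressure, sound velocity, and density rather than supplied directly.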
30 pages, 12301 KB  
Article
Deep Learning 1D-CNN-Based Ground Contact Detection in Sprint Acceleration Using Inertial Measurement Units
by Felix Friedl, Thorben Menrad and Jürgen Edelmann-Nusser
Sensors 2026, 26(1), 342; https://doi.org/10.3390/s26010342 - 5 Jan 2026
Viewed by 381
Abstract
Background: Ground contact (GC) detection is essential for sprint performance analysis. Inertial measurement units (IMUs) enable field-based assessment, but their reliability during sprint acceleration remains limited when using heuristic and previously reported machine learning algorithms. This study introduces a deep learning one-dimensional convolutional neural network (1D-CNN) to improve the detection of GC events and GC times in sprint acceleration. Methods: Twelve sprint-trained athletes performed 60 m sprints while bilateral shank-mounted IMUs (1125 Hz) and synchronized high-speed video (250 Hz) captured the first 15 m. Video-derived GC events served as reference labels for model training, validation, and testing, using resultant acceleration and angular velocity as model inputs. Results: The optimized model (18 inception blocks, window = 100, stride = 15) achieved mean Hausdorff distances ≤ 6 ms and 100% precision and recall for both validation and test datasets (Rand Index ≥ 0.977). Agreement with video references was excellent (bias < 1 ms, limits of agreement ± 15 ms, r > 0.90, p < 0.001). Conclusions: The 1D-CNN surpassed heuristic and prior machine learning approaches in the sprint acceleration phase, offering robust, near-perfect GC detection. These findings highlight the promise of deep learning-based time-series models for reliable, real-world biomechanical monitoring in sprint acceleration tasks. Full article
(This article belongs to the Special Issue Inertial Sensing System for Motion Monitoring)
22 pages, 2531 KB  
Review
Recent Advances in Raman Spectral Classification with Machine Learning
by Yonghao Liu, Yizhan Wu, Junjie Wang, Jiantao Qi, Changjing Zhou and Yuhua Xue
Sensors 2026, 26(1), 341; https://doi.org/10.3390/s26010341 - 5 Jan 2026
Cited by 1 | Viewed by 852
Abstract
Raman spectroscopy is a non-destructive analytical technique based on molecular vibrational properties. However, its practical application is often challenged by weak scattering signals, complex spectra, and the high-dimensional nature of the data, which complicates accurate interpretation. Traditional chemometric methods are limited in handling complex, nonlinear Raman data and rely on tedious, expert-knowledge-based feature engineering. The fusion of data-driven Machine Learning (ML) and Deep Learning (DL) methods offers a robust solution, enabling the automatic learning of complex features from raw data and achieving high-accuracy classification and prediction. The present study employed a structured narrative review methodology to capture the research progress, current trends, and future directions in the field of ML-assisted Raman spectral classification. This review provides a comprehensive overview of the application of traditional ML models and advanced DL architectures in Raman spectral analysis. It highlights the latest applications of this technology across several key domains, including biomedical diagnostics, food safety and authentication, mineralogical classification, and plastic and microplastic identification. Despite recent progress, several challenges remain: limited training data, weak cross-dataset generalization, poor reproducibility, and limited interpretability of deep models. We also outline practical directions for future research. Full article
(This article belongs to the Special Issue Advanced Sensor Technologies for Corrosion Monitoring)
20 pages, 4924 KB  
Article
Learning-Augmented MPC for Autonomous Vehicle Path Tracking via Ensemble Residual Dynamics Learning
by Lu Xiong, Ming Liu, Zhihao Xie, Bo Leng and Yuanjian Zhang
Sensors 2026, 26(1), 340; https://doi.org/10.3390/s26010340 - 5 Jan 2026
Viewed by 431
Abstract
Accurate vehicle dynamics modeling is essential for path tracking control, especially under sharp-curvature or rapidly changing conditions where nonlinear and time-varying behaviors introduce significant discrepancies between the nominal model and real vehicle responses, ultimately degrading the performance of traditional Model Predictive Control (MPC). To address this challenge, this paper proposes a learning-augmented MPC framework that incorporates an ensemble learning-based Data-Driven Dynamics Refinement (DDR) Model to enhance predictive accuracy and control robustness. The DDR Model complements nominal vehicle dynamics by capturing complex behaviors that are difficult to represent analytically. An ensemble of independently trained neural predictors is employed to improve generalization performance and provide stable refinement across diverse driving conditions. Furthermore, a feature-driven activation mechanism is designed to selectively apply refinement only when pronounced nonlinear behaviors arise, thereby reducing unnecessary computational burden. High-fidelity simulation studies validate the effectiveness of the proposed method. In single- and double-lane-change scenarios, the refined dynamics reduce maximum lateral deviation by approximately 6 cm and 4 cm, and decrease the maximum vehicle heading error by 0.02 rad and 0.015 rad, respectively, demonstrating significant improvements in tracking accuracy and robustness. Full article
(This article belongs to the Section Vehicular Sensing)
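The residual-refinement idea with feature-driven activation can be abstracted as "nominal prediction plus ensemble-mean residual, applied only when activated." A toy sketch follows; the paper's actual models, activation features, and interfaces are not specified in the abstract, so everything here is an assumed shape:

```python
def ensemble_residual_predict(state, nominal_model, residual_models, activate):
    """Learning-augmented prediction: nominal dynamics plus the mean of an
    ensemble of residual predictors, applied only when the activation test
    flags pronounced nonlinear behavior (saving computation otherwise)."""
    base = nominal_model(state)
    if not activate(state):
        return base                      # nominal model alone is adequate
    residual = sum(m(state) for m in residual_models) / len(residual_models)
    return base + residual               # ensemble mean stabilizes the refinement
```

Averaging independently trained predictors is a standard way to reduce the variance of any single network's residual estimate, matching the generalization argument made in the abstract.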
26 pages, 8454 KB  
Article
Real-Time Fluorescence-Based COVID-19 Diagnosis Using a Lightweight Deep Learning System
by Hui-Jae Bae, Jongweon Kim and Daesik Jeong
Sensors 2026, 26(1), 339; https://doi.org/10.3390/s26010339 - 5 Jan 2026
Viewed by 361
Abstract
The coronavirus is highly contagious, making rapid early diagnosis essential. Although deep learning-based diagnostic methods using CT or X-ray images have advanced significantly, they still face limitations in cost, processing time, and radiation exposure. In addition, real-time COVID-19 diagnosis requires model lightweighting. This study proposes a lightweight deep learning model for COVID-19 diagnosis based on fluorescence images and demonstrates its applicability in embedded environments. To prevent data imbalance caused by noise and experimental variations, images were preprocessed using grayscale conversion, CLAHE, and Z-score normalization to equalize brightness values. Among the tested architectures—VGG, ResNet, DenseNet, and EfficientNet—ResNet152 and VGG13 achieved the highest accuracies of 97.25% and 93.58%, respectively, and were selected for lightweighting. Layer-wise importance was calculated using an imprinting-based method, and less important layers were pruned. The pruned VGG13 maintained its accuracy while reducing model size by 18.9 MB and parameters by 4.2 M. ResNet152 (Prune 39) improved accuracy by 1% while reducing size by 161.5 MB and parameters by 40.22 M. The optimized model achieved an inference time of 129.97 ms, corresponding to 7.69 frames per second (FPS) on an NPU (Furiosa AI Warboy), demonstrating that real-time COVID-19 diagnosis is feasible even on low-power edge devices. Full article
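The pruning step can be abstracted as ranking layers by an importance score (computed in the paper via an imprinting-based method) and keeping only the top fraction; a toy sketch with assumed names:

```python
def prune_layers(importance, keep_ratio=0.8):
    """Toy importance-based pruning: rank layers by a precomputed importance
    score and keep the top fraction, returning the indices of retained
    layers in their original network order."""
    k = max(1, round(len(importance) * keep_ratio))
    keep = sorted(range(len(importance)),
                  key=lambda i: importance[i], reverse=True)[:k]
    return sorted(keep)
```

In practice the retained indices would drive surgery on the actual network graph, followed by fine-tuning to recover accuracy.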
19 pages, 2298 KB  
Article
HFSA-Net: A 3D Object Detection Network with Structural Encoding and Attention Enhancement for LiDAR Point Clouds
by Xuehao Yin, Zhen Xiao, Jinju Shao, Zhimin Qiu and Lei Wang
Sensors 2026, 26(1), 338; https://doi.org/10.3390/s26010338 - 5 Jan 2026
Viewed by 398
Abstract
The inherent sparsity of LiDAR point cloud data presents a fundamental challenge for 3D object detection. During the feature encoding stage, especially in voxelization, existing methods find it difficult to effectively retain the critical geometric structural information contained in these sparse point clouds, resulting in decreased detection performance. To address this problem, this paper proposes an enhanced 3D object detection framework. It first designs a Structured Voxel Feature Encoder that significantly enhances the initial feature representation through intra-voxel feature refinement and multi-scale neighborhood context aggregation. Second, it constructs a Hybrid-Domain Attention-Guided Sparse Backbone, which introduces a decoupled hybrid attention mechanism and a hierarchical integration strategy to realize dynamic weighting and focusing on key semantic and geometric features. Finally, a Scale-Aggregation Head is proposed to improve the model’s perception and localization capabilities for different-sized objects via multi-level feature pyramid fusion and cross-layer information interaction. Experimental results on the KITTI dataset show that the proposed algorithm increases the mean Average Precision (mAP) by 3.34% compared to the baseline model. Moreover, experiments on a vehicle platform with a lower-resolution LiDAR verify the effectiveness of the proposed method in improving 3D detection accuracy and its generalization ability. Full article
(This article belongs to the Special Issue Recent Advances in LiDAR Sensing Technology for Autonomous Vehicles)
21 pages, 9995 KB  
Article
HCNet: Multi-Exposure High-Dynamic-Range Reconstruction Network for Coded Aperture Snapshot Spectral Imaging
by Hang Shi, Jingxia Chen, Yahui Li, Pengwei Zhang and Jinshou Tian
Sensors 2026, 26(1), 337; https://doi.org/10.3390/s26010337 - 5 Jan 2026
Viewed by 407
Abstract
Coded Aperture Snapshot Spectral Imaging (CASSI) is a rapid hyperspectral imaging technique with broad application prospects. Due to limitations in three-dimensional compressed data acquisition modes and hardware constraints, the compressed measurements output by actual CASSI systems have a finite dynamic range, leading to degraded hyperspectral reconstruction quality. To address this issue, a high-quality hyperspectral reconstruction method based on multi-exposure fusion is proposed. A multi-exposure data acquisition strategy is established to capture low-, medium-, and high-exposure low-dynamic-range (LDR) measurements. A multi-exposure fusion-based high-dynamic-range (HDR) CASSI measurement reconstruction network (HCNet) is designed to reconstruct physically consistent HDR measurement images. Unlike traditional HDR networks for visual enhancement, HCNet employs a multiscale feature fusion architecture and combines local–global convolutional joint attention with residual enhancement mechanisms to efficiently fuse complementary information from multiple exposures. This makes it more suitable for CASSI systems, ensuring high-fidelity reconstruction of hyperspectral data in both spatial and spectral dimensions. A multi-exposure fusion CASSI mathematical model is constructed, and a CASSI experimental system is established. Simulation and real-world experimental results demonstrate that the proposed method significantly improves hyperspectral image reconstruction quality compared to traditional single-exposure strategies, exhibiting high robustness against multi-exposure interval jitters and shot noise in practical systems. Leveraging the higher-dynamic-range target information acquired through multiple exposures, especially in HDR scenes, the method enables reconstruction with enhanced contrast in both bright and dark details and also demonstrates higher spectral correlation, validating the enhancement of CASSI reconstruction and effective measurement capability in HDR scenarios. Full article
(This article belongs to the Section Optical Sensors)
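The multi-exposure idea described above can be sketched without the learned network: divide each LDR pixel by its exposure time to estimate relative radiance, then average with weights that favour well-exposed values. This is a classical Debevec-style merge, not HCNet itself (which learns the fusion); pixel values and exposure times below are made up.

```python
def fuse_exposures(measurements, exposure_times, saturation=255.0):
    """Merge co-registered LDR measurements into one HDR estimate.

    Each pixel is divided by its exposure time to recover relative
    radiance, then averaged with a triangle weight that favours
    mid-range values and down-weights dark or saturated pixels.
    """
    def weight(v):
        return max(1e-6, 1.0 - abs(2.0 * v / saturation - 1.0))

    hdr = []
    for pixels in zip(*measurements):      # same pixel across exposures
        num = den = 0.0
        for v, t in zip(pixels, exposure_times):
            w = weight(v)
            num += w * (v / t)             # radiance estimate from this exposure
            den += w
        hdr.append(num / den)
    return hdr

# Hypothetical 3-pixel measurements at 1x, 4x, and 16x exposure
low  = [10.0,  40.0, 250.0]    # short exposure: dark regions underexposed
mid  = [40.0, 160.0, 255.0]
high = [160.0, 255.0, 255.0]   # long exposure: bright regions saturated
hdr = fuse_exposures([low, mid, high], exposure_times=[1.0, 4.0, 16.0])
```

Even though no single exposure covers the full range, a consistent radiance of roughly 10, 40, and 250 units is recovered for the three pixels.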
1 page, 125 KB  
Correction
Correction: Kaur, N.; Gupta, L. Securing the 6G–IoT Environment: A Framework for Enhancing Transparency in Artificial Intelligence Decision-Making Through Explainable Artificial Intelligence. Sensors 2025, 25, 854
by Navneet Kaur and Lav Gupta
Sensors 2026, 26(1), 336; https://doi.org/10.3390/s26010336 - 5 Jan 2026
Viewed by 315
Abstract
In the original publication [...] Full article
(This article belongs to the Special Issue Security and Privacy Challenges in IoT-Driven Smart Environments)
21 pages, 2824 KB  
Article
A 3D Microfluidic Paper-Based Analytical Device with Smartphone-Based Colorimetric Readout for Phosphate Sensing
by Jose Manuel Graña-Dosantos, Francisco Pena-Pereira, Carlos Bendicho and Inmaculada de la Calle
Sensors 2026, 26(1), 335; https://doi.org/10.3390/s26010335 - 4 Jan 2026
Viewed by 591
Abstract
In this work, a 3D microfluidic paper-based analytical device (3D-µPAD) was developed for the smartphone-based colorimetric determination of phosphate in environmental samples. The assay relied on the formation of a blue-colored product (molybdenum blue) in the detection area of the 3D-µPAD upon reduction of the heteropolyacid H3PMo12O40 formed in the presence of phosphate. A number of experimental parameters were optimized, including geometric aspects of 3D-µPADs, digitization and image processing conditions, the amount of chemicals deposited in specific areas of the 3D-µPAD, and the reaction time. In addition, the stability of the device was evaluated at three different storage temperatures. Under optimal conditions, the working range was found to be from 4 to 25 mg P/L (12–77 mg PO43−/L). The limits of detection (LOD) and quantification (LOQ) were 0.015 mg P/L and 0.05 mg P/L, respectively. The repeatability and intermediate precision of a 5 mg P/L standard were 4.8% and 7.1%, respectively. The proposed colorimetric assay has been successfully applied to phosphorus determination in various waters, soils, and sediments, obtaining recoveries in the range of 94 to 107%. The ready-to-use 3D-µPAD showed a greener profile than the standard method for phosphate determination, being affordable, easy to use, and suitable for citizen science applications. Full article
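The colorimetric readout described above amounts to a linear calibration: fit intensity against known concentrations, invert the line for unknown samples, and take LOD = 3·σ_blank/slope. A minimal sketch with made-up calibration data (the real assay's slope, blank noise, and channel choice are not stated here):

```python
def fit_line(x, y):
    """Ordinary least-squares calibration line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Made-up calibration: colour-channel signal vs. phosphate level (mg P/L)
conc   = [4, 8, 12, 16, 20, 25]
signal = [21, 41, 62, 81, 101, 126]
slope, intercept = fit_line(conc, signal)

def to_concentration(s):
    """Invert the calibration to read concentration from a sample signal."""
    return (s - intercept) / slope

lod = 3 * 0.025 / slope   # LOD = 3*sigma_blank/slope; sigma_blank assumed
```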
24 pages, 2236 KB  
Article
Radar HRRP Sequence Target Recognition Based on a Lightweight Spatiotemporal Fusion Network
by Xiang Li, Yitao Su, Xiaobin Zhao, Junjun Yin and Jian Yang
Sensors 2026, 26(1), 334; https://doi.org/10.3390/s26010334 - 4 Jan 2026
Viewed by 425
Abstract
High-resolution range profile (HRRP) sequence recognition in radar automatic target recognition faces several practical challenges, including severe category imbalance, degradation of robustness under complex and variable operating conditions, and strict requirements for lightweight models suitable for real-time deployment on resource-limited platforms. To address these problems, this paper proposes a lightweight spatiotemporal fusion-based (LSTF) HRRP sequence target recognition method. First, a lightweight Transformer encoder based on group linear transformations (TGLT) is designed to effectively model temporal dynamics while significantly reducing parameter size and computation, making it suitable for edge-device applications. Second, a transform-domain spatial feature extraction network is introduced, combining the fractional Fourier transform with an enhanced squeeze-and-excitation fully convolutional network (FSCN). This design fully exploits multi-domain spatial information and enhances class separability by leveraging discriminative scattering-energy distributions at specific fractional orders. Finally, an adaptive focal loss with label smoothing (AFL-LS) is constructed to dynamically adjust class weights for improved performance on long-tail classes, while label smoothing alleviates overfitting and enhances generalization. Experiments on the MSTAR and CVDomes datasets demonstrate that the proposed method consistently outperforms existing baseline approaches across three representative scenarios. Full article
(This article belongs to the Special Issue Radar Target Detection, Imaging and Recognition)
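The loss described above combines two standard ingredients. A sketch of the static core — focal weighting applied to label-smoothed targets — is below; the paper's AFL-LS additionally adapts class weights dynamically, which this sketch omits.

```python
import math

def focal_loss_ls(probs, target, gamma=2.0, eps=0.1):
    """Focal loss on label-smoothed targets (static core of an
    AFL-LS-style objective).

    probs  : predicted class probabilities (should sum to 1)
    target : index of the ground-truth class
    gamma  : focusing parameter that down-weights easy examples
    eps    : label-smoothing mass spread uniformly over all classes
    """
    k = len(probs)
    loss = 0.0
    for c, p in enumerate(probs):
        q = (1.0 - eps) + eps / k if c == target else eps / k  # smoothed label
        loss += -q * (1.0 - p) ** gamma * math.log(p)
    return loss
```

With gamma = 0 and eps = 0 this reduces to plain cross-entropy; increasing gamma shrinks the loss on confidently correct predictions, shifting gradient mass toward hard (often long-tail) examples.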
16 pages, 4363 KB  
Article
A Hybrid Multi-Scale Transformer-CNN UNet for Crowd Counting
by Kai Zhao, Chunhao He, Shufan Peng and Tianliang Lu
Sensors 2026, 26(1), 333; https://doi.org/10.3390/s26010333 - 4 Jan 2026
Viewed by 411
Abstract
Crowd counting is a critical computer vision task with significant applications in public security and smart city systems. While deep learning has markedly improved accuracy, persistent challenges include extreme scale variations, severe occlusion, and complex background clutter. To address these issues, we propose a novel Hybrid Multi-Scale Transformer-CNN U-shaped Network (HMSTUNet). Our key contributions are: a hybrid architecture integrating a Multi-Scale Vision Transformer (MSViT) for capturing long-range dependencies and a Dynamic Convolutional Attention Block (DCAB) for modeling local density patterns; and a U-shaped encoder–decoder with skip connections for effective multi-level feature fusion. Extensive evaluations on five public benchmarks show that HMSTUNet achieves the best Mean Absolute Error (MAE) on all five datasets and the best Mean Squared Error (MSE) on three. It sets new state-of-the-art records, attaining MAE/MSE of 49.1/77.8 on SHA, 6.2/10.3 on SHB, 142.1/192.7 on UCF_CC_50, 77.9/132.5 on UCF-QNRF, and 43.2/119.6 on NWPU-Crowd. These results demonstrate the model’s strong robustness and generalization capability. Full article
(This article belongs to the Section Sensing and Imaging)
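The MAE/MSE pairs reported above follow the crowd-counting convention, where "MSE" is the root of the mean squared count error (so both metrics share units). A minimal sketch with made-up counts:

```python
import math

def counting_errors(pred, gt):
    """MAE and (root) MSE as reported in crowd-counting benchmarks."""
    n = len(pred)
    mae = sum(abs(p - g) for p, g in zip(pred, gt)) / n
    mse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / n)
    return mae, mse

# Hypothetical predicted vs. ground-truth head counts on three images
mae, mse = counting_errors([105, 48, 210], [100, 50, 200])
```

Because of the square root, MSE is always at least MAE and penalizes large single-image errors more heavily.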
16 pages, 1914 KB  
Article
Analysis of Bonding Defects in Cementing Casing Using Attenuation Characteristic of Circumferential SH Guided Waves
by Jie Gao, Tianhao Chen, Yan Lyu, Guorong Song, Jian Peng and Cunfu He
Sensors 2026, 26(1), 332; https://doi.org/10.3390/s26010332 - 4 Jan 2026
Viewed by 339
Abstract
Circumferential guided wave detection technology can serve as an alternative method for detecting casing bond defects. Due to the presence of the cement cladding, the circumferential SH guided waves transmit shear waves into the cement cladding as they propagate in the cementing casing, which causes the circumferential SH guided waves to exhibit attenuation characteristics. In this study, the cementing casing structure was modeled as a steel substrate with a semi-infinite cement cladding, and the corresponding dispersion and attenuation characteristics of circumferential SH guided waves were numerically solved based on the state matrix and Legendre polynomial hybrid method. In addition, a finite element simulation model of the cementing casing was established to explore the interaction between SH guided waves and bonding defects. The relationship between the amplitude of the SH guided waves and the size of the bonding defects was established through the attenuation coefficient. Moreover, an experimental platform for cementing casing detection was constructed to detect bonding defects of different sizes and to enable acoustic analysis of cementing defects in the cementing casing, providing a research path for the non-destructive testing and evaluation of bonding defects in cementing casing. Full article
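The attenuation-based sizing idea above rests on exponential amplitude decay along the propagation path. A minimal sketch, assuming the usual model A1 = A0·exp(−α·x) and hypothetical amplitude readings:

```python
import math

def attenuation_coefficient(a0, a1, distance):
    """Per-length attenuation (Np per unit length) from two amplitude
    readings of the guided wave, assuming A1 = A0 * exp(-alpha * x)."""
    return math.log(a0 / a1) / distance

# Hypothetical readings: amplitude halves over 0.2 m of cemented casing
alpha = attenuation_coefficient(1.0, 0.5, 0.2)
```

A larger well-bonded (leaky) region raises α, while a debonded region leaks less energy into the cement, so the measured coefficient tracks defect size.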
23 pages, 10150 KB  
Article
Tip Discharge Evolution Characteristics and Mechanism Analysis via Optical–Electrical Sensors in Oil-Immersed Transformers
by Zehao Chen, Yong Qian, Gehao Sheng, Fenghua Wang, Bing Xue, Chunhui Zhang and Chengxiang Liu
Sensors 2026, 26(1), 331; https://doi.org/10.3390/s26010331 - 4 Jan 2026
Viewed by 383
Abstract
Tip discharge in oil-immersed transformers poses a significant threat to insulation integrity. Conventional detection methods, such as gas and electrical analysis, are limited by slow response times or susceptibility to interference. Additionally, the lack of systematic comparisons between aged and fresh oil using multi-modal signal correlations hinders the development of accurate diagnostic strategies. To address this, a multi-modal sensing platform employing optical, UHF, and HFCT sensors, complemented by visual observation, was developed to investigate the evolution characteristics and mechanisms of tip discharge and to compare the detection effectiveness of these methods. Experimental results reveal that aged oil undergoes a novel four-stage evolution, where discharge signals first rise to a local peak, then experience suppression, followed by a dramatic surge, and finally decline slightly before breakdown. This process is governed by an “Impurity-Assisted Cumulative Breakdown Mechanism,” driven by impurity bridge growth and space charge effects, with signal transitions from ‘decoupling’ to synchronization. The optical sensor demonstrated superior sensitivity in early discharge stages compared to electrical methods. In contrast, fresh oil exhibited a “High-Field-Driven Stochastic Breakdown Mechanism,” with isolated pulses from micro-bubble discharges maintaining a metastable state until a critical threshold triggers instantaneous failure. This study enhances the understanding of how oil condition alters discharge mechanisms and underscores the value of multi-modal sensing for insulation condition assessment. Full article
(This article belongs to the Section Optical Sensors)
18 pages, 6832 KB  
Article
Enhancing Efficiency in Coal-Fired Boilers Using a New Predictive Control Method for Key Parameters
by Qinwu Li, Libin Yu, Tingyu Liu, Lianming Li, Yangshu Lin, Tao Wang, Chao Yang, Lijie Wang, Weiguo Weng, Chenghang Zheng and Xiang Gao
Sensors 2026, 26(1), 330; https://doi.org/10.3390/s26010330 - 4 Jan 2026
Viewed by 460
Abstract
In the context of carbon neutrality, the large-scale integration of renewable energy sources has led to frequent load changes in coal-fired boilers. These fluctuations cause key operational parameters to deviate significantly from their design values, undermining combustion stability and reducing operational efficiency. To address this issue, we introduce a novel predictive control method, integrating a coupled predictive model and real-time optimization, to enhance the control precision of key parameters under complex variable-load conditions. The predictive model is based on a coupled Transformer-gated recurrent unit (GRU) architecture, which demonstrates strong adaptability to load fluctuations and achieves high prediction accuracy, with a mean absolute error of 0.095% and a coefficient of determination of 0.966 for oxygen content (OC); 0.0163 kPa and 0.987 for bed pressure (BP); and 0.300 °C and 0.927 for main steam temperature (MST). These results represent substantial improvements over standalone implementations of GRU, LSTM, and Transformer models. Based on these multi-step predictions, a WOA-based real-time optimization strategy determines coordinated adjustments of secondary fan frequency, slag discharger frequency, and desuperheating water valves before deviations occur. Field validation on a 300 t/h boiler over a representative 24 h load cycle shows that the method reduces fluctuations in OC, BP, and MST by 62.07%, 50.95%, and 40.43%, respectively, relative to the original control method. By suppressing parameter variability and maintaining key parameters near operational targets, the method enhances boiler thermal efficiency and steam quality. Based on the performance gain measured during the typical operating day, the corresponding annual gain is estimated at ~1.77%, with an associated CO2 reduction exceeding 6846 t. Full article
(This article belongs to the Section Industrial Sensors)
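The reported fluctuation reductions can be quantified as a relative drop in variability; the abstract does not state the exact metric, so the sketch below assumes standard deviation and uses made-up parameter traces:

```python
import math

def fluctuation_reduction(before, after):
    """Percent reduction in a parameter's standard deviation after a
    control change (one plausible reading of the reported percentages)."""
    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return 100.0 * (1.0 - sd(after) / sd(before))

# Hypothetical oxygen-content traces before/after predictive control
reduction = fluctuation_reduction([0.0, 2.0, 0.0, 2.0], [1.0, 1.5, 1.0, 1.5])
```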
22 pages, 1777 KB  
Article
DP2PNet: Diffusion-Based Point-to-Polygon Conversion for Single-Point Supervised Oriented Object Detection
by Peng Li, Limin Zhang and Tao Qu
Sensors 2026, 26(1), 329; https://doi.org/10.3390/s26010329 - 4 Jan 2026
Viewed by 318
Abstract
Rotated Bounding Boxes (RBBs) for oriented object detection are labor-intensive and time-consuming to annotate. Single-point supervision offers a cost-effective alternative but suffers from insufficient size and orientation information, leading existing methods to rely heavily on complex priors and fixed refinement stages. In this paper, we propose DP2PNet (Diffusion-Point-to-Polygon Network), the first diffusion model-based framework for single-point supervised oriented object detection. DP2PNet features three key innovations: (1) A multi-scale consistent noise generator that replaces manual or external model priors with Gaussian noise, reducing dependency on domain-specific information; (2) A Noise Cross-Constraint module based on multi-instance learning, which selects optimal noise point bags by fusing receptive field matching and object coverage; (3) A Semantic Key Point Aggregator that aggregates noise points via graph convolution to form semantic key points, from which pseudo-RBBs are generated using convex hulls. DP2PNet supports dynamic adjustment of refinement stages without retraining, enabling flexible accuracy optimization. Extensive experiments on DOTA-v1.0 and DIOR-R datasets demonstrate that DP2PNet achieves 53.82% and 53.61% mAP50, respectively, comparable to methods relying on complex priors. It also exhibits strong noise robustness and cross-dataset generalization. Full article
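The final step above — turning aggregated semantic key points into a polygon — can be illustrated with a standard convex-hull routine (Andrew's monotone chain; the pseudo-RBB would then be the minimum-area rotated box fitted around this hull, which this sketch does not compute):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates

# Interior key point (0.5, 0.5) is discarded; corners form the polygon
hull = convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
```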
26 pages, 3302 KB  
Article
An Autonomous Land Vehicle Navigation System Based on a Wheel-Mounted IMU
by Shuang Du, Wei Sun, Xin Wang, Yuyang Zhang, Yongxin Zhang and Qihang Li
Sensors 2026, 26(1), 328; https://doi.org/10.3390/s26010328 - 4 Jan 2026
Viewed by 448
Abstract
Navigation errors due to drifting in inertial systems using low-cost sensors are some of the main challenges for land vehicle navigation in Global Navigation Satellite System (GNSS)-denied environments. In this paper, we propose an autonomous navigation strategy with a wheel-mounted microelectromechanical system (MEMS) inertial measurement unit (IMU), referred to as the wheeled inertial navigation system (INS), to effectively suppress drift-induced navigation errors. The position, velocity, and attitude (PVA) of the vehicle are predicted through the inertial mechanization algorithm, while gyro outputs are utilized to derive the vehicle’s forward velocity, which is treated as an observation with non-holonomic constraints (NHCs) to estimate the inertial navigation error states. To establish a theoretical foundation for wheeled INS error characteristics, a comprehensive system observability analysis is conducted from an analytical point of view. The wheel rotation significantly improves the observability of gyro errors perpendicular to the rotation axis, which effectively suppresses azimuth errors, horizontal velocity, and position errors. This leads to the superior navigation performance of a wheeled INS over the traditional odometer (OD)/NHC/INS. Moreover, a hybrid extended particle filter (EPF), which fuses the extended Kalman filter (EKF) and the PF, is proposed to update the vehicle’s navigation states. It has the advantages of (1) dealing with the system’s non-linearity and non-Gaussian noises, and (2) simultaneously achieving high estimation accuracy and tolerable computational complexity. Kinematic field test results indicate that the proposed wheeled INS is able to provide an accurate navigation solution in GNSS-denied environments. When a total distance of over 26 km is traveled, the maximum position drift rate is only 0.47% and the root mean square (RMS) of the heading error is 1.13°. Full article
(This article belongs to the Section Navigation and Positioning)
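The two observations described above are simple to state: the wheel-aligned gyro gives forward speed via v = ω·r, and the non-holonomic constraints treat lateral and vertical body-frame velocities as zero. A minimal sketch of both (hypothetical values; the paper's full mechanization and EPF update are far more involved):

```python
import math

def forward_velocity(wheel_gyro_dps, wheel_radius_m):
    """Vehicle forward speed from a wheel-mounted gyro.

    The gyro axis aligned with the wheel axle measures the wheel's
    spin rate, so v = omega * r for a non-slipping wheel.
    """
    omega = math.radians(wheel_gyro_dps)   # deg/s -> rad/s
    return omega * wheel_radius_m

def nhc_innovation(v_body):
    """Non-holonomic constraint residuals: the lateral and vertical
    body-frame velocities of a land vehicle are nominally zero."""
    vx, vy, vz = v_body
    return [vy - 0.0, vz - 0.0]

v = forward_velocity(360.0, 0.3)   # one wheel turn per second, 0.3 m radius
```

The innovations from `nhc_innovation`, together with the gyro-derived speed, would feed the filter's measurement update to bound the free-inertial drift.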
15 pages, 3967 KB  
Article
Low-Light Image Segmentation on Edge Computing System
by Sung-Chan Choi and Sung-Yeon Kim
Sensors 2026, 26(1), 327; https://doi.org/10.3390/s26010327 - 4 Jan 2026
Viewed by 369
Abstract
Segmenting low-light images, such as images showing cracks on tunnel walls, is challenging due to limited visibility. Hence, image brightness enhancement must be combined with a segmentation algorithm. We introduce essential preliminaries, specifically highlighting deep learning-based low-light image enhancement methods and pixel-level image segmentation algorithms. We then present a three-step low-light image segmentation algorithm. The proposed algorithm begins with brightness and contrast enhancement of low-light images, followed by accurate segmentation using a U-Net model. Through various experiments, we report the performance metrics of the proposed low-light image segmentation algorithm and compare its performance against several baseline models. Furthermore, we demonstrate the implementation of the proposed low-light image segmentation pipeline on an edge computing platform. The implementation results show that the proposed algorithm is sufficiently fast for real-time processing. Full article
(This article belongs to the Special Issue Image Processing and Analysis for Object Detection: 3rd Edition)
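The brightness-enhancement stage described above can be illustrated with the simplest possible stand-in, fixed gamma correction; the paper itself uses deep learning-based enhancement, so this is only a sketch of the pre-segmentation step:

```python
def gamma_correct(pixels, gamma=0.5, peak=255.0):
    """Brighten a low-light frame with a fixed gamma curve (gamma < 1
    lifts dark values more than bright ones)."""
    return [peak * (p / peak) ** gamma for p in pixels]

# A dark pixel at 25% intensity is lifted to 50%; full white is unchanged
enhanced = gamma_correct([63.75, 255.0])
```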
19 pages, 3887 KB  
Article
RELoc: An Enhanced 3D WiFi Fingerprinting Indoor Localization Algorithm with RFECV Feature Selection
by Shehu Lukman Ayinla, Azrina Abd Aziz, Micheal Drieberg, Misfa Susanto and Anis Laouiti
Sensors 2026, 26(1), 326; https://doi.org/10.3390/s26010326 - 4 Jan 2026
Viewed by 411
Abstract
The use of Artificial Intelligence (AI) algorithms has enhanced WiFi fingerprinting-based indoor localization. However, most existing approaches are limited to 2D coordinate estimation, which leads to significant performance declines in multi-floor environments due to vertical ambiguity and inadequate spatial modeling. This limitation reduces reliability in real-world applications where accurate indoor localization is essential. This study proposes RELoc, a new 3D indoor localization framework that integrates Recursive Feature Elimination with Cross-Validation (RFECV) for optimal Access Point (AP) selection and Extremely Randomized Trees (ERT) for precise 2D and 3D coordinate regression. The ERT hyperparameters are optimized using Bayesian optimization with Optuna’s Tree-structured Parzen Estimator (TPE) to ensure robust, stable, and accurate localization. Extensive evaluation on the SODIndoorLoc and UTSIndoorLoc datasets demonstrates that RELoc delivers superior performance in both 2D and 3D indoor localization. Specifically, RELoc achieves Mean Absolute Errors (MAEs) of 1.84 m and 4.39 m for 2D coordinate prediction on SODIndoorLoc and UTSIndoorLoc, respectively. When floor information is incorporated, RELoc improves by 33.15% and 26.88% over the 2D version on these datasets. Furthermore, RELoc outperforms state-of-the-art methods by 7.52% over Graph Neural Network (GNN) and 12.77% over Deep Neural Network (DNN) on SODIndoorLoc and 40.22% over Extra Tree (ET) on UTSIndoorLoc, showing consistent improvements across various indoor environments. This enhancement emphasizes the critical role of 3D modeling in achieving robust and spatially discriminative indoor localization. Full article
(This article belongs to the Special Issue Indoor Localization Techniques Based on Wireless Communication)
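For readers new to WiFi fingerprinting, the classical baseline that RELoc improves on is weighted k-nearest-neighbour matching against a radio map. The sketch below shows that baseline in 3D with a made-up radio map; it is not the paper's RFECV + Extremely Randomized Trees pipeline.

```python
import math

def knn_locate(fingerprint, radio_map, k=3):
    """Weighted k-NN 3D position estimate from a WiFi RSSI fingerprint.

    radio_map holds (rssi_vector, (x, y, z)) reference points; closer
    fingerprints in signal space get larger weights.
    """
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    nearest = sorted(radio_map, key=lambda rp: dist(fingerprint, rp[0]))[:k]
    weights = [1.0 / (dist(fingerprint, rssi) + 1e-9) for rssi, _ in nearest]
    wsum = sum(weights)
    return tuple(
        sum(w * pos[i] for w, (_, pos) in zip(weights, nearest)) / wsum
        for i in range(3)
    )

# Tiny hypothetical radio map: (RSSI in dBm, position in metres)
radio_map = [([-40, -60], (0, 0, 0)),
             ([-60, -40], (5, 0, 3)),
             ([-50, -50], (2, 1, 1))]
est = knn_locate([-40, -60], radio_map, k=1)
```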
23 pages, 41532 KB  
Article
CW-DETR: An Efficient Detection Transformer for Traffic Signs in Complex Weather
by Tianpeng Wang, Qiaoshuang Teng, Shangyu Sun, Weidong Song, Jinhe Zhang and Yuxuan Li
Sensors 2026, 26(1), 325; https://doi.org/10.3390/s26010325 - 4 Jan 2026
Viewed by 354
Abstract
Traffic sign detection under adverse weather conditions remains challenging due to severe feature degradation caused by rain, fog, and snow, which significantly impairs the performance of existing detection systems. This study presents the CW-DETR (Complex Weather Detection Transformer), an end-to-end detection framework designed to address weather-induced feature deterioration in real-time applications. Building upon the RT-DETR, our approach integrates four key innovations: a multipath feature enhancement network (FPFENet) for preserving fine-grained textures, a Multiscale Edge Enhancement Module (MEEM) for combating boundary degradation, an adaptive dual-stream bidirectional feature pyramid network (ADBF-FPN) for cross-scale feature compensation, and a multiscale convolutional gating module (MCGM) for suppressing semantic–spatial confusion. Extensive experiments on the CCTSDB2021 dataset demonstrate that the CW-DETR achieves 69.0% AP and 94.4% AP50, outperforming state-of-the-art real-time detectors by 2.3–5.7 percentage points while maintaining computational efficiency (56.8 GFLOPs). A cross-dataset evaluation on TT100K, the TSRD, CNTSSS, and real-world snow conditions (LNTU-TSD) confirms the robust generalization capabilities of the proposed model. These results establish CW-DETR as an effective solution for all-weather traffic sign detection in intelligent transportation systems. Full article
(This article belongs to the Section Remote Sensors)
15 pages, 2133 KB  
Article
Impact of Helicopter Vibrations on In-Ear PPG Monitoring for Vital Signs—Mountain Rescue Technology Study (MoReTech)
by Aaron Benkert, Jakob Bludau, Lukas Boborzi, Stephan Prueckner and Roman Schniepp
Sensors 2026, 26(1), 324; https://doi.org/10.3390/s26010324 - 4 Jan 2026
Viewed by 484
Abstract
Pulse oximeters are widely used in the medical care of preclinical patients to evaluate the cardiorespiratory status and monitor basic vital signs, such as pulse rate (PR) and oxygen saturation (SpO2). In many preclinical situations, air transport of the patient by helicopter is necessary. Conventional pulse oximeters, mostly used on the patient’s finger, are prone to motion artifacts during transportation. Therefore, this study aims to determine whether simulated helicopter vibration has an impact on the photoplethysmogram (PPG) derived from an in-ear sensor at the external ear canal and whether the vibration influences the calculation of the vital signs PR and SpO2. The in-ear PPG signals of 17 participants were measured at rest and under exposure to vibration generated by a helicopter simulator. Several signal quality indicators (SQIs), including perfusion index, skewness, entropy, kurtosis, omega, quality index, and valid pulse detection, were extracted from the in-ear PPG recordings during rest and vibration. An intra-subject comparison was performed to evaluate signal quality changes under exposure to vibration. The analysis revealed no significant difference in any SQI between vibration and rest (all p > 0.05). Furthermore, the vital signs PR and SpO2 calculated using the in-ear PPG signal were compared to reference measurements by a clinical monitoring system (ECG and SpO2 finger sensor). The results for the PR showed substantial agreement (CCCrest = 0.96; CCCvibration = 0.96) and poor agreement for SpO2 (CCCrest = 0.41; CCCvibration = 0.19). The results of our study indicate that simulated helicopter vibration had no significant impact on the calculation of the SQIs, and the calculation of the vital signs PR and SpO2 did not differ between rest and vibration conditions. Full article
(This article belongs to the Special Issue Novel Optical Sensors for Biomedical Applications—2nd Edition)
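The agreement measure behind the CCC values reported above is Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic offset between a device and its reference. A minimal sketch:

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between two paired
    measurement series (e.g. in-ear PR vs. ECG-derived PR)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((xi - mx) ** 2 for xi in x) / n
    sy = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2.0 * sxy / (sx + sy + (mx - my) ** 2)
```

Perfectly matching series give CCC = 1; a constant bias lowers it even when the Pearson correlation is still 1, which is why CCC is preferred for device-agreement studies.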
24 pages, 14037 KB  
Article
Enhancing Surgical Planning with AI-Driven Segmentation and Classification of Oncological MRI Scans
by Alejandro Martinez Guillermo, Juan Francisco Zapata Pérez, Juan Martinez-Alajarin and Alicia Arévalo García
Sensors 2026, 26(1), 323; https://doi.org/10.3390/s26010323 - 4 Jan 2026
Viewed by 507
Abstract
This work presents the development of an Artificial Intelligence (AI)-based pipeline for patient-specific three-dimensional (3D) reconstruction from oncological magnetic resonance imaging (MRI), leveraging image-derived information to enhance the analysis process. These developments were carried out within the framework of Cella Medical Solutions, forming part of a broader initiative to improve and optimize the company’s medical-image processing pipeline. The system integrates automatic MRI sequence classification using a ResNet-based architecture and segmentation of anatomical structures with a modular nnU-Net v2 framework. The classification stage achieved over 90% accuracy and showed improved segmentation performance over prior state-of-the-art pipelines, particularly for contrast-sensitive anatomies such as the hepatic vasculature and pancreas, where dedicated vascular networks showed Dice score differences of approximately 20–22%, and for musculoskeletal structures, where the model outperformed specialized networks in several elements. In terms of computational efficiency, the complete processing of a full MRI case, including sequence classification and segmentation, required approximately four minutes on the target hardware. The integration of sequence-aware information allows for a more comprehensive understanding of MRI signals, leading to more accurate delineations than approaches without such differentiation. From a clinical perspective, the proposed method has the potential to be integrated into surgical planning workflows. The segmentation outputs were converted into a patient-specific 3D model, which was subsequently integrated into Cella’s surgical planner as a proof of concept. 
This process illustrates the transition from voxel-wise anatomical labels to a fully navigable 3D reconstruction, representing a step toward more robust and personalized AI-driven medical-image analysis workflows that leverage sequence-aware information for enhanced clinical utility. Full article
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
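The Dice-score differences quoted above compare overlaps of predicted and reference masks; the coefficient itself is simple to compute. A minimal sketch on masks represented as sets of voxel indices:

```python
def dice(a, b):
    """Dice similarity between two binary masks given as sets of
    voxel indices: 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0          # convention: two empty masks agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))

# Hypothetical 2-of-3 voxel overlap between prediction and ground truth
score = dice({(0, 0), (0, 1), (1, 0)}, {(0, 1), (1, 0), (1, 1)})
```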
22 pages, 4393 KB  
Article
An Open-Source, Low-Cost Solution for 3D Scanning
by Andrei Mateescu, Ioana Livia Stefan, Silviu Raileanu and Ioan Stefan Sacala
Sensors 2026, 26(1), 322; https://doi.org/10.3390/s26010322 - 4 Jan 2026
Viewed by 556
Abstract
With new applications continuously emerging in the fields of manufacturing, quality control and inspection, the need to develop three-dimensional (3D) scanning solutions suitable for industrial environments increases. 3D scanning is the process of analyzing one or more objects in order to convert and store the object’s features in a digital format. Due to the high cost of industrial 3D scanning solutions, this paper proposes an open-source, low-cost architecture for obtaining a 3D model that can be used in manufacturing, which involves a linear laser beam swept across the object via a rotating mirror and a camera that captures images, which are then used to extract the dimensions of the object through a technique inspired by laser triangulation. The 3D models for several objects are obtained, analyzed and compared to the dimensions of their respective real-world counterparts. For the tested objects, the proposed system yields a maximum mean height error of 2.56 mm, a maximum mean length error of 1.48 mm and a maximum mean width error of 1.30 mm on the raw point cloud and a scanning time of ∼4 s per laser line. Finally, a few observations and ways to improve the proposed solution are mentioned. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensing Technology in Smart Manufacturing)
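The height recovery described in this abstract (a laser line swept over the object, observed by a camera, with dimensions extracted via laser triangulation) can be illustrated with the standard sheet-of-light relation. The sketch below is a minimal illustration assuming an overhead camera, a calibrated mm-per-pixel scale, and a known laser inclination; it is not the authors' actual implementation, and the function name and parameters are hypothetical.

```python
import math

def height_from_line_shift(shift_px: float, mm_per_px: float,
                           laser_angle_deg: float) -> float:
    """Sheet-of-light triangulation with an overhead camera.

    A laser plane inclined at laser_angle_deg from the surface normal
    intersects a flat reference plane in a straight line; a feature of
    height h displaces that line sideways by h * tan(angle). Inverting
    this relation recovers the height from the observed pixel shift.
    """
    shift_mm = shift_px * mm_per_px  # convert pixel shift to metric shift
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# Example: a 100 px shift at 0.1 mm/px with a 45-degree laser is ~10 mm of height.
```

Sweeping the laser across the object and applying this relation column by column yields one height profile per captured frame; stacking the profiles produces the raw point cloud that the paper then compares against real-world dimensions.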

19 pages, 25889 KB  
Article
Current-Aware Temporal Fusion with Input-Adaptive Heterogeneous Mixture-of-Experts for Video Deblurring
by Yanwen Zhang, Zejing Zhao and Akio Namiki
Sensors 2026, 26(1), 321; https://doi.org/10.3390/s26010321 - 4 Jan 2026
Viewed by 348
Abstract
In image sensing, measurements such as an object’s position or contour are typically obtained by analyzing digitized images. This method is widely used due to its simplicity. However, relative motion or inaccurate focus can cause motion and defocus blur, reducing measurement accuracy. Thus, video deblurring is essential. However, existing deep learning-based video deblurring methods struggle to balance high-quality deblurring, fast inference, and wide applicability. First, we propose a Current-Aware Temporal Fusion (CATF) framework, which focuses on the current frame in terms of both network architecture and modules. This reduces interference from unrelated features of neighboring frames and fully exploits current-frame information, improving deblurring quality. Second, we introduce a Mixture-of-Experts module based on NAFBlocks (MoNAF), which adaptively selects expert structures according to the input features, reducing inference time. Third, we design a training strategy to support both sequential and temporally parallel inference. In sequential deblurring, we conduct experiments on the DVD, GoPro, and BSD datasets. Qualitative results show that our method effectively preserves image structures and fine details. Quantitative results further demonstrate that our method achieves clear advantages in terms of PSNR and SSIM. In particular, under the exposure setting of 3 ms–24 ms on the BSD dataset, our method achieves 33.09 dB PSNR and 0.9453 SSIM, indicating its effectiveness even in severely blurred scenarios. Meanwhile, our method achieves a good balance between deblurring quality and runtime efficiency. Moreover, the framework exhibits minimal error accumulation and performs effectively in temporally parallel computation. These results demonstrate that effective video deblurring serves as an important supporting technology for accurate image sensing. Full article
(This article belongs to the Special Issue Smart Remote Sensing Images Processing for Sensor-Based Applications)
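The PSNR figures quoted in this abstract (e.g., 33.09 dB on BSD) follow the standard definition over mean squared error. As a quick reference, a minimal NumPy sketch of that definition, not tied to the paper's code:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images: infinite PSNR
    return float(10.0 * np.log10(peak ** 2 / mse))
```

Higher values indicate a deblurred frame closer to the sharp reference; SSIM, the paper's second metric, instead compares local luminance, contrast, and structure statistics.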
