Sensors, Volume 26, Issue 2 (January-2 2026) – 402 articles

Cover Story (view full-size image): Large-element surface-micromachined optical ultrasound transducers (SMOUTs) enable highly sensitive ultrasound detection for acoustic imaging, but their performance stability is affected by ambient pressure and temperature variations. This work systematically investigates optical resonance wavelength (ORW) shifts in SMOUTs induced by these environmental changes. ORW sensitivities of SMOUTs are analyzed and quantified through finite-element simulations and experimental characterizations over clinically relevant pressure and temperature ranges. The strong agreement between simulation and experiment establishes a robust framework for ORW stabilization and compensation, enabling stable, high-sensitivity interrogation of SMOUT arrays using non-tunable high-power light sources for practical applications. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
19 pages, 17706 KB  
Article
From Simplified Markers to Muscle Function: A Deep Learning Approach for Personalized Cervical Biomechanics Assessment Powered by Massive Musculoskeletal Simulation
by Yuanyuan He, Siyu Liu and Miao Li
Sensors 2026, 26(2), 752; https://doi.org/10.3390/s26020752 - 22 Jan 2026
Viewed by 320
Abstract
Accurate, subject-specific estimation of cervical muscle forces is a critical prerequisite for advancing spinal biomechanics and clinical diagnostics. However, this task remains challenging due to substantial inter-individual anatomical variability and the invasiveness of direct measurement techniques. In this study, we propose a novel data-driven biomechanical framework that addresses these limitations by integrating massive-scale personalized musculoskeletal simulations with an efficient Feedforward Neural Network (FNN) model. We generated an unprecedented dataset comprising one million personalized OpenSim cervical models, systematically varying key anthropometric parameters (neck length, shoulder width, head mass) to robustly capture human morphological diversity. A random subset was selected for inverse dynamics simulations to establish a comprehensive, physics-based training dataset. Subsequently, an FNN was trained to learn a robust, nonlinear mapping from non-invasive kinematic and anthropometric inputs to the forces of 72 cervical muscles. The model’s accuracy was validated on a test set, achieving a coefficient of determination (R²) exceeding 0.95 for all 72 muscle forces. This approach effectively transforms a computationally intensive biomechanical problem into a rapid tool. Additionally, the framework incorporates a functional assessment module that evaluates motion deficits by comparing observed head trajectories against a simulated idealized motion envelope. Validation using data from a healthy subject and a patient with restricted mobility demonstrated the framework’s ability to accurately track muscle force trends and precisely identify regions of functional limitations. This methodology offers a scalable and clinically translatable solution for personalized cervical muscle evaluation, supporting targeted rehabilitation and injury risk assessment based on readily obtainable sensor data. Full article
(This article belongs to the Section Biomedical Sensors)
28 pages, 28148 KB  
Article
Wireless Local Area Network Link Sharing in Unmanned Surface Vehicle Control Scenarios
by Krzysztof Gierłowski, Michał Hoeft, Andrzej Bęben and Maciej Sosnowski
Sensors 2026, 26(2), 751; https://doi.org/10.3390/s26020751 - 22 Jan 2026
Viewed by 252
Abstract
The popularity of unmanned vehicles in numerous areas of employment, combined with the diversity and continuing evolution of their payloads, makes the communication solutions utilized by such vehicles an element of particular importance. In our previous publication, we confirmed the general applicability of wireless local area network (WLAN) technologies as solutions suitable for providing control-loop communication for unmanned surface vehicles (USVs). At the same time, our research indicated that WLAN technologies provide communication resources in excess of what is required for that task. In this paper, we aim to verify whether a WLAN-based USV communication solution can be reliably utilized for both time-sensitive control-loop and high-throughput payload communication simultaneously, which could provide significant advantages during USV construction and operation. For this purpose, we analyzed the traffic parameters of popular USV payloads, designed a test system to monitor the impact of such traffic sharing a WLAN link with USV control-loop communication, and conducted laboratory and field experiments. As initial results indicated a significant impact of payload traffic on the quality of control communication, we also proposed a method of employing Commercial Off-The-Shelf (COTS) hardware in a manner that allows the above-mentioned link sharing to operate reliably in changing real-world conditions. Subsequent verification, first in the laboratory and then during a real-world USV field deployment, confirmed the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Design, Communication, and Control of Autonomous Vehicle Systems)
34 pages, 17028 KB  
Article
Vibration Signal Denoising Method Based on ICFO-SVMD and Improved Wavelet Thresholding
by Yanping Cui, Xiaoxu He, Zhe Wu, Qiang Zhang and Yachao Cao
Sensors 2026, 26(2), 750; https://doi.org/10.3390/s26020750 - 22 Jan 2026
Viewed by 238
Abstract
Non-stationary, multi-component vibration signals in rotating machinery are easily contaminated by strong background noise, which masks weak fault features and degrades diagnostic reliability. This paper proposes a joint denoising method that combines an improved cordyceps fungus optimization algorithm (ICFO), successive variational mode decomposition (SVMD), and an improved wavelet thresholding scheme. ICFO, enhanced by Chebyshev chaotic initialization, a longitudinal–transverse crossover fusion mutation operator, and a thinking innovation strategy, is used to adaptively optimize the SVMD penalty factor and number of modes. The optimized SVMD decomposes the noisy signal into intrinsic mode functions, which are classified into effective and noise-dominated components via the Pearson correlation coefficient. An improved wavelet threshold function, whose threshold is modulated by the sub-band signal-to-noise ratio, is then applied to the effective components, and the denoised signal is reconstructed. Simulation experiments on nonlinear, non-stationary signals with different noise levels (SNR = 1–20 dB) show that the proposed method consistently achieves the highest SNR and lowest RMSE compared to VMD, SVMD, VMD–WTD, CFO–SVMD, and WTD. Tests on CWRU bearing data and gearbox vibration signals with added −2 dB Gaussian white noise further confirm that the method yields the lowest residual variance ratio and highest signal energy ratio while preserving key fault characteristic frequencies. Full article
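The mode-classification step described in the abstract, retaining only decomposed modes that correlate strongly with the raw signal, can be sketched in plain Python. This is an illustrative toy, not the authors' implementation; the 0.5 correlation cutoff and the demo signals are assumptions.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def classify_modes(signal, modes, rho_min=0.5):
    """Split decomposed modes into effective and noise-dominated sets
    according to their correlation with the original (noisy) signal."""
    effective, noise_dominated = [], []
    for mode in modes:
        (effective if abs(pearson(signal, mode)) >= rho_min
         else noise_dominated).append(mode)
    return effective, noise_dominated

# Toy demo: a clean tone plus a weak pseudo-random "noise mode".
random.seed(0)
tone = [math.sin(2 * math.pi * 5 * t / 200) for t in range(200)]
noise_mode = [random.uniform(-1, 1) for _ in range(200)]
signal = [a + 0.1 * b for a, b in zip(tone, noise_mode)]

effective, rejected = classify_modes(signal, [tone, noise_mode])
```

In the paper's pipeline the retained modes would then pass through the SNR-modulated wavelet threshold before reconstruction; here the fixed cutoff only illustrates the correlation test itself.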
(This article belongs to the Section Industrial Sensors)
20 pages, 17064 KB  
Article
PriorSAM-DBNet: A SAM-Prior-Enhanced Dual-Branch Network for Efficient Semantic Segmentation of High-Resolution Remote Sensing Images
by Qiwei Zhang, Yisong Wang, Ning Li, Quanwen Jiang and Yong He
Sensors 2026, 26(2), 749; https://doi.org/10.3390/s26020749 - 22 Jan 2026
Viewed by 299
Abstract
Semantic segmentation of high-resolution remote sensing imagery is a critical technology for the intelligent interpretation of sensor data, supporting automated environmental monitoring and urban sensing systems. However, processing data from dense urban scenarios remains challenging due to sensor signal occlusions (e.g., shadows) and the complexity of parsing multi-scale targets from optical sensors. Existing approaches often exhibit a trade-off between the accuracy of global semantic modeling and the precision of complex boundary recognition. While the Segment Anything Model (SAM) offers powerful zero-shot structural priors, its direct application to remote sensing is hindered by domain gaps and the lack of inherent semantic categorization. To address these limitations, we propose a dual-branch cooperative network, PriorSAM-DBNet. The main branch employs a Densely Connected Swin (DC-Swin) Transformer to capture cross-scale global features via a hierarchical shifted window attention mechanism. The auxiliary branch leverages SAM’s zero-shot capability to exploit structural universality, generating object-boundary masks as robust signal priors while bypassing semantic domain shifts. Crucially, we introduce a parameter-efficient Scaled Subsampling Projection (SSP) module that employs a weight-sharing mechanism to align cross-modal features, freezing the massive SAM backbone to ensure computational viability for practical sensor applications. Furthermore, a novel Attentive Cross-Modal Fusion (ACMF) module is designed to dynamically resolve semantic ambiguities by calibrating the global context with local structural priors. Extensive experiments on the ISPRS Vaihingen, Potsdam, and LoveDA-Urban datasets demonstrate that PriorSAM-DBNet outperforms state-of-the-art approaches. By fine-tuning only 0.91 million parameters in the auxiliary branch, our method achieves mIoU scores of 82.50%, 85.59%, and 53.36%, respectively. 
The proposed framework offers a scalable, high-precision solution for remote sensing semantic segmentation, particularly effective for disaster emergency response where rapid feature recognition from sensor streams is paramount. Full article
32 pages, 2129 KB  
Article
Artificial Intelligence-Based Depression Detection
by Gabor Kiss and Patrik Viktor
Sensors 2026, 26(2), 748; https://doi.org/10.3390/s26020748 - 22 Jan 2026
Viewed by 381
Abstract
Decisions made by pilots and drivers suffering from depression can endanger the lives of hundreds of people, as demonstrated by the tragedies of Germanwings flight 9525 and Air India flight 171. Since the detection of depression is currently based largely on subjective self-reporting, there is an urgent need for fast, objective, and reliable detection methods. In our study, we present an artificial intelligence-based system that combines iris-based identification with the analysis of pupillometric and eye movement biomarkers, enabling the real-time detection of physiological signs of depression before driving or flying. The two-module model was evaluated based on data from 242 participants: the iris identification module operated with an Equal Error Rate of less than 0.5%, while the depression-detecting CNN-LSTM network achieved 89% accuracy and an AUC value of 0.94. Compared to the neutral state, depressed individuals responded to negative news with significantly greater pupil dilation (+27.9% vs. +18.4%), while showing a reduced or minimal response to positive stimuli (−1.3% vs. +6.2%). This was complemented by slower saccadic movement and longer fixation time, which is consistent with the cognitive distortions characteristic of depression. Our results indicate that pupillometric deviations relative to individual baselines can be reliably detected and used with high accuracy for depression screening. The presented system offers a preventive safety solution that could reduce the number of accidents caused by human error related to depression in road and air traffic in the future. Full article
45 pages, 5287 KB  
Systematic Review
Cybersecurity in Radio Frequency Technologies: A Scientometric and Systematic Review with Implications for IoT and Wireless Applications
by Patrícia Rodrigues de Araújo, José Antônio Moreira de Rezende, Décio Rennó de Mendonça Faria and Otávio de Souza Martins Gomes
Sensors 2026, 26(2), 747; https://doi.org/10.3390/s26020747 - 22 Jan 2026
Viewed by 447
Abstract
Cybersecurity in radio frequency (RF) technologies has become a critical concern, driven by the expansion of connected systems in urban and industrial environments. Although research on wireless networks and the Internet of Things (IoT) has advanced, comprehensive studies that provide a global and integrated view of cybersecurity development in this field remain limited. This work presents a scientometric and systematic review of international publications from 2009 to 2025, integrating the PRISMA protocol with semantic screening supported by a Large Language Model to enhance classification accuracy and reproducibility. The analysis identified two interdependent axes: one focusing on signal integrity and authentication in GNSS systems and cellular networks; the other addressing the resilience of IoT networks, both strongly associated with spoofing and jamming, as well as replay, relay, eavesdropping, and man-in-the-middle (MitM) attacks. The results highlight the relevance of RF cybersecurity in securing communication infrastructures and expose gaps in widely adopted technologies such as RFID, NFC, BLE, ZigBee, LoRa, Wi-Fi, and unlicensed ISM bands, as well as in emerging areas like terahertz and 6G. These gaps directly affect the reliability and availability of IoT and wireless communication systems, increasing security risks in large-scale deployments such as smart cities and cyber–physical infrastructures. Full article
(This article belongs to the Special Issue Cyber Security and Privacy in Internet of Things (IoT))
18 pages, 3948 KB  
Article
Reliable Automated Displacement Monitoring Using Robotic Total Station Assisted by a Fixed-Length Track
by Yunhui Jiang, He Gao and Jianguo Zhou
Sensors 2026, 26(2), 746; https://doi.org/10.3390/s26020746 - 22 Jan 2026
Viewed by 230
Abstract
Robotic total stations are multi-sensor integrated instruments widely used in displacement monitoring. The principles of polar coordinates or forward intersection are usually utilized for calculating monitoring results. However, the polar coordinate method lacks redundant observations, sometimes leading to unreliable results, while forward intersection requires two instruments for automated monitoring, doubling the cost. In this regard, this paper proposes a novel automated displacement monitoring method using a robotic total station assisted by a fixed-length track. By setting up two station points at both ends of a fixed-length track, the robotic total station is driven to move back and forth on the track and obtain observations at both station points. Automated monitoring based on the principle of forward intersection with a single robotic total station is thus achieved. Simulation and feasibility tests show that the overall accuracy of forward intersection becomes better than that of the polar coordinate method as the monitoring distance increases. At the same time, regardless of whether it is tracking a prism, the robotic total station is able to automatically find and aim at the targets when moving between station points on the track. Further practical tests show that the reliability of the monitoring results of the proposed method is superior to that of the polar coordinate method, providing a new approach for ensuring result reliability while reducing cost in actual monitoring tasks. Full article
(This article belongs to the Section Sensors and Robotics)
17 pages, 1972 KB  
Article
Using Low-Cost Sensors for Fenceline Monitoring to Measure Emissions from Prescribed Fires
by Annamarie Guth, Marissa Dauner, Evan R. Coffey and Michael Hannigan
Sensors 2026, 26(2), 745; https://doi.org/10.3390/s26020745 - 22 Jan 2026
Viewed by 235
Abstract
Prescribed burning is a highly effective way to reduce wildfire risk; however, prescribed fires release harmful pollutants. Quantifying emissions from prescribed fires is valuable for atmospheric modeling and understanding impacts on nearby communities. Emissions are commonly reported as emission factors, which are traditionally calculated cumulatively over an entire combustion event. However, cumulative emission factors do not capture variability in emissions throughout a combustion event. Reliable emission factor calculations require knowledge of the state of the plume, which is unavailable when equipment is deployed for multiple days. In this study, we evaluated two different methods used to detect prescribed fire plumes: the event detection algorithm and a random forest model. Results show that the random forest model outperformed the event detection algorithm, with a detection accuracy of 61% and a 3% false positive rate, compared to 51% accuracy and a 31% false positive rate for the event detection algorithm. Overall, the random forest model provides more robust emission factor calculations and a promising framework for plume detection on future prescribed fires. This work provides a unique approach to fenceline monitoring, as it is one of the only projects to our knowledge using fenceline monitoring to measure emissions from prescribed fire plumes. Full article
(This article belongs to the Section Environmental Sensing)
17 pages, 2141 KB  
Article
Optimizing Surface Functionalization for Aptameric Graphene Nanosensors in Undiluted Physiological Media
by Wenting Dai, Ziran Wang, Shifeng Yu, Kechun Wen, Yucheng Yang and Qiao Lin
Sensors 2026, 26(2), 744; https://doi.org/10.3390/s26020744 - 22 Jan 2026
Viewed by 238
Abstract
This paper presents the optimization of surface modification for aptameric graphene nanosensors for the measurement of biomarkers in undiluted physiological media. In these sensors, graphene transduces the binding between an aptamer and the intended target biomarker into a measurable signal while being coated with a polyethylene glycol (PEG) nanolayer to minimize nonspecific adsorption of matrix molecules. We perform a systematic study of the aptamer and PEG attachment schemes and parameters (the serial or parallel PEG–aptamer attachment scheme, PEG molecular weight and surface density, and aptamer surface density) and their impact on sensor behavior, such as responsivity to biomarker concentration changes and, importantly, the ability to operate in physiological media while rejecting nonspecific binding of interfering molecules. We then use the understanding from this parametric study to identify graphene nanosensor designs that are optimally functionalized with PEG and aptamers to be strongly responsive to target biomarkers and effectively reduce nonspecific adsorption of interferents, thereby enabling sensitive and specific biomarker measurements in undiluted physiological media. The experimental results show that nanosensors optimized via serial modification with 5000 Da PEG at 15 mM and a 94 nt DNA aptamer at 500 nM allowed specific measurement of C-reactive protein (CRP) in undiluted human serum with a limit of detection (LOD) down to 27 pM, representing an up to 1000-fold improvement compared to previously reported CRP measurements. Full article
(This article belongs to the Section Chemical Sensors)
22 pages, 2756 KB  
Article
DACL-Net: A Dual-Branch Attention-Based CNN-LSTM Network for DOA Estimation
by Wenjie Xu and Shichao Yi
Sensors 2026, 26(2), 743; https://doi.org/10.3390/s26020743 - 22 Jan 2026
Viewed by 186
Abstract
While deep learning methods are increasingly applied in the field of DOA estimation, existing approaches generally feed the real and imaginary parts of the covariance matrix directly into neural networks without optimizing the input features, which prevents classical attention mechanisms from improving accuracy. This paper proposes a spatio-temporal fusion model named DACL-Net for DOA estimation. The spatial branch applies a two-dimensional Fourier transform (2D-FT) to the covariance matrix, causing angles to appear as peaks in the magnitude spectrum. This operation transforms the original covariance matrix into a dark image with bright spots, enabling the convolutional neural network (CNN) to focus on the bright-spot components via an attention module. Additionally, a spectrum attention mechanism (SAM) is introduced to enhance the extraction of temporal features in the time branch. The model learns simultaneously from two data branches and finally outputs DOA results through a linear layer. Simulation results demonstrate that DACL-Net outperforms existing algorithms in terms of accuracy, achieving an RMSE of 0.04° at an SNR of 0 dB. Full article
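The 2D Fourier step described above can be illustrated with a toy single-source example (a hedged sketch under stated assumptions, not the authors' input pipeline): for a half-wavelength uniform linear array, the noise-free rank-one covariance R[m][n] = exp(jπ(m−n)·sinθ) is a 2-D complex sinusoid, so the magnitude of its 2-D DFT is a single bright spot whose position encodes the angle θ. The array size M = 16 and θ = 30° are chosen so the spot lands exactly on a DFT bin.

```python
import cmath
import math

M = 16                     # number of array elements (covariance is M x M)
theta = math.radians(30)   # true direction of arrival; sin(theta) = 0.5

# Noise-free rank-one covariance of one source on a half-wavelength ULA.
R = [[cmath.exp(1j * math.pi * (m - n) * math.sin(theta)) for n in range(M)]
     for m in range(M)]

def dft2(A):
    """Naive 2-D DFT; O(M^4), fine for a small demo matrix."""
    size = len(A)
    return [[sum(A[m][n] * cmath.exp(-2j * math.pi * (p * m + q * n) / size)
                 for m in range(size) for n in range(size))
             for q in range(size)]
            for p in range(size)]

mag = [[abs(v) for v in row] for row in dft2(R)]
peak_val, peak_p, peak_q = max(
    (mag[p][q], p, q) for p in range(M) for q in range(M))
# With sin(theta) = 0.5 the bright spot lands exactly on bin
# p = M*sin(theta)/2 = 4 and q = M - 4 = 12 (the negative frequency wraps).
```

A CNN fed with this magnitude image can attend to the bright spot rather than the raw real/imaginary covariance entries, which matches the intuition behind the paper's spatial branch.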
(This article belongs to the Section Communications)
28 pages, 8287 KB  
Review
Recent Advances in Ultra-Weak Fiber Bragg Gratings Array for High-Performance Distributed Acoustic Sensing (Invited)
by Yihang Wang, Baijie Xu, Guanfeng Chen, Guixin Yin, Xizhen Xu, Zhiwei Lin, Cailing Fu, Yiping Wang and Jun He
Sensors 2026, 26(2), 742; https://doi.org/10.3390/s26020742 - 22 Jan 2026
Viewed by 334
Abstract
Distributed acoustic sensing (DAS) systems have been widely employed in oil and gas resource exploration, pipeline monitoring, traffic and transportation, structural health monitoring, hydrophone usage, and perimeter security due to their ability to perform large-scale distributed acoustic measurements. Conventional DAS relies on Rayleigh backscattering (RBS) from standard single-mode fibers (SMFs), which inherently limits the signal-to-noise ratio (SNR) and sensing robustness. Ultra-weak fiber Bragg grating (UWFBG) arrays can significantly enhance backscattering intensity and thereby improve DAS performance. This review provides a comprehensive overview of recent advances in UWFBG arrays for high-performance DAS. We introduce major inscription techniques for UWFBG arrays, including the drawing tower grating method, ultraviolet (UV) exposure through UV-transparent coating fiber technologies, and femtosecond laser direct writing methods. Furthermore, we summarize the applications of UWFBG arrays in DAS systems for the enhancement of RBS intensity, suppression of fading, improvement of frequency response, and phase noise compensation. Finally, the prospects of UWFBG-enhanced DAS technologies are discussed. Full article
(This article belongs to the Special Issue FBG and UWFBG Sensing Technology)
24 pages, 4209 KB  
Article
Stability-Oriented Deep Learning for Hyperspectral Soil Organic Matter Estimation
by Yun Deng and Yuxi Shi
Sensors 2026, 26(2), 741; https://doi.org/10.3390/s26020741 - 22 Jan 2026
Viewed by 168
Abstract
Soil organic matter (SOM) is a key indicator for evaluating soil fertility and ecological functions, and hyperspectral technology provides an effective means for its rapid and non-destructive estimation. However, in practical soil systems, the spectral response of SOM is often highly covariant with mineral composition, moisture conditions, and soil structural characteristics. Under small-sample conditions, hyperspectral SOM modeling results are usually highly sensitive to spectral preprocessing methods, sample perturbations, and model architecture and parameter configurations, leading to fluctuations in predictive performance across independent runs and thereby limiting model stability and practical applicability. To address these issues, this study proposes a multi-strategy collaborative deep learning modeling framework for small-sample conditions (SE-EDCNN-DA-LWGPSO). Under unified data partitioning and evaluation settings, the framework integrates spectral preprocessing, data augmentation based on sensor perturbation simulation, multi-scale dilated convolution feature extraction, an SE channel attention mechanism, and a linearly weighted generalized particle swarm optimization algorithm. Subtropical red soil samples from Guangxi were used as the study object. Samples were partitioned using the SPXY method, and multiple independent repeated experiments were conducted to evaluate the predictive performance and training consistency of the model under fixed validation conditions. The results indicate that the combination of Savitzky–Golay filtering and first-derivative transformation (SG–1DR) exhibits superior overall stability among various preprocessing schemes. 
In model structure comparison and ablation analysis, as dilated convolution, data augmentation, and channel attention mechanisms were progressively introduced, the fluctuations of prediction errors on the validation set gradually converged, and the performance dispersion among different independent runs was significantly reduced. Under ten independent repeated experiments, the final model achieved R² = 0.938 ± 0.010, RMSE = 2.256 ± 0.176 g·kg⁻¹, and RPD = 4.050 ± 0.305 on the validation set, demonstrating that the proposed framework has good modeling consistency and numerical stability under small-sample conditions. Full article
(This article belongs to the Section Environmental Sensing)
17 pages, 4725 KB  
Article
Hyperspectral Inversion of Soil Organic Carbon in Daylily Cultivation Areas of Yunzhou District
by Zelong Yao, Xiuping Ran, Chenbo Yang, Ping Li and Rutian Bi
Sensors 2026, 26(2), 740; https://doi.org/10.3390/s26020740 - 22 Jan 2026
Viewed by 193
Abstract
Accurate determination of Soil Organic Carbon (SOC), which is the foundation of soil health and safeguards ecological and food security, is crucial in local agricultural production. We aimed to investigate the influence of soil texture on hyperspectral models for predicting SOC content and to evaluate the role of different preprocessing methods and feature band selection algorithms in improving modeling efficiency. Laboratory-determined SOC content and hyperspectral reflectance data were obtained using soil samples from daylily cultivation areas in Yunzhou District, Datong City. Mathematical transformations, including Savitzky–Golay smoothing (SG), First Derivative (FD), Second Derivative (SD), Multiplicative Scatter Correction (MSC), and Standard Normal Variate (SNV), were applied to the spectral reflectance data. Feature bands extracted based on the successive projection algorithm (SPA) and Competitive Adaptive Reweighted Sampling (CARS) were used to establish SOC content inversion models employing four algorithms: partial least-squares regression (PLSR), Random Forest (RF), Backpropagation Neural Network (BP), and Convolutional Neural Network (CNN). The results indicate the following: (1) Preprocessing can effectively increase the correlation between soil spectral reflectance and SOC content. (2) SPA and CARS effectively screened the characteristic bands of SOC in daylily-cultivated soil from the spectral curves, selecting 4–11 and 9–122 bands, respectively, and both algorithms facilitated model construction. (3) Among all the constructed models, FD-CARS-PLSR performed best, with coefficients of determination (R²) for the training and validation sets reaching 0.93 and 0.83, respectively, demonstrating high model stability and reliability. (4) Incorporating soil texture as an auxiliary variable into the PLSR inversion model improved inversion accuracy, with gains ranging between 0.01 and 0.05.
Full article
(This article belongs to the Special Issue Spectroscopy and Sensing Technologies for Smart Agriculture)
42 pages, 43567 KB  
Article
DaRA Dataset: Combining Wearable Sensors, Location Tracking, and Process Knowledge for Enhanced Human Activity and Human Context Recognition in Warehousing
by Friedrich Niemann, Fernando Moya Rueda, Moh’d Khier Al Kfari, Nilah Ravi Nair, Dustin Schauten, Veronika Kretschmer, Stefan Lüdtke and Alice Kirchheim
Sensors 2026, 26(2), 739; https://doi.org/10.3390/s26020739 - 22 Jan 2026
Viewed by 307
Abstract
Understanding human movement in industrial environments requires more than simple step counts—it demands contextual information to interpret activities and enhance workflows. Key factors such as location and process context are essential. However, research on context-sensitive human activity recognition is limited by the lack of publicly available datasets that include both human movement and contextual labels. Our work introduces the DaRA dataset to address this research gap. DaRA comprises over 109 h of video footage, including 32 h from wearable first-person cameras and 77 h from fixed third-person cameras. In a laboratory environment replicating a realistic warehouse, scenarios such as order picking, packaging, unpacking, and storage were captured. The movements of 18 subjects were captured using inertial measurement units, Bluetooth devices for indoor localization, wearable first-person cameras, and fixed third-person cameras. DaRA offers detailed annotations with 12 class categories and 207 class labels covering human movements and contextual information such as process steps and locations. A total of 15 annotators and 8 revisers contributed over 1572 h in annotation and 361 h in revision. High label quality is reflected in Light’s Kappa values ranging from 78.27% to 99.88%. Therefore, DaRA provides a robust, multimodal foundation for human activity and context recognition in industrial settings. Full article
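The Light's Kappa values quoted for label quality are, by definition, the average pairwise Cohen's kappa across annotators; a self-contained sketch with three hypothetical annotators labelling six frames (not DaRA annotations):

```python
from itertools import combinations

def cohen_kappa(a, b):
    n = len(a)
    labels = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # Chance agreement from each rater's marginal label frequencies
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

def lights_kappa(annotations):
    # Light's kappa: mean Cohen's kappa over all annotator pairs
    pairs = list(combinations(annotations, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)

# Three hypothetical annotators, six frames each
ann = [
    ["walk", "pick", "pick", "stand", "walk", "pick"],
    ["walk", "pick", "stand", "stand", "walk", "pick"],
    ["walk", "pick", "pick", "stand", "walk", "walk"],
]
print(round(lights_kappa(ann), 3))  # → 0.663
```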
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)
25 pages, 4607 KB  
Article
CHARMS: A CNN-Transformer Hybrid with Attention Regularization for MRI Super-Resolution
by Xia Li, Haicheng Sun and Tie-Qiang Li
Sensors 2026, 26(2), 738; https://doi.org/10.3390/s26020738 - 22 Jan 2026
Viewed by 274
Abstract
Magnetic resonance imaging (MRI) super-resolution (SR) enables high-resolution reconstruction from low-resolution acquisitions, reducing scan time and easing hardware demands. However, most deep learning-based SR models are large and computationally heavy, limiting deployment in clinical workstations, real-time pipelines, and resource-restricted platforms such as low-field and portable MRI. We introduce CHARMS, a lightweight convolutional–Transformer hybrid with attention regularization optimized for MRI SR. CHARMS employs a Reverse Residual Attention Fusion backbone for hierarchical local feature extraction, Pixel–Channel and Enhanced Spatial Attention for fine-grained feature calibration, and a Multi-Depthwise Dilated Transformer Attention block for efficient long-range dependency modeling. Novel attention regularization suppresses redundant activations, stabilizes training, and enhances generalization across contrasts and field strengths. Across IXI, Human Connectome Project Young Adult, and paired 3T/7T datasets, CHARMS (~1.9M parameters; ~30 GFLOPs for 256 × 256) surpasses leading lightweight and hybrid baselines (EDSR, PAN, W2AMSN-S, and FMEN) by 0.1–0.6 dB PSNR and up to 1% SSIM at ×2/×4 upscaling, while reducing inference time by ~40%. Cross-field fine-tuning yields 7T-like reconstructions from 3T inputs with ~6 dB PSNR and 0.12 SSIM gains over native 3T. With near-real-time performance (~11 ms/slice, ~1.6–1.9 s per 3D volume on RTX 4090), CHARMS offers a compelling fidelity–efficiency balance for clinical workflows, accelerated protocols, and portable MRI. Full article
(This article belongs to the Special Issue Sensing Technologies in Digital Radiology and Image Analysis)
30 pages, 1726 KB  
Article
A Sensor-Oriented Multimodal Medical Data Acquisition and Modeling Framework for Tumor Grading and Treatment Response Analysis
by Linfeng Xie, Shanhe Xiao, Bihong Ming, Zhe Xiang, Zibo Rui, Xinyi Liu and Yan Zhan
Sensors 2026, 26(2), 737; https://doi.org/10.3390/s26020737 - 22 Jan 2026
Viewed by 233
Abstract
In precision oncology research, achieving joint modeling of tumor grading and treatment response, together with interpretable mechanism analysis, based on multimodal medical imaging and clinical data remains a challenging and critical problem. From a sensing perspective, these imaging and clinical data can be regarded as heterogeneous sensor-derived signals acquired by medical imaging sensors and clinical monitoring systems, providing continuous and structured observations of tumor characteristics and patient states. Existing approaches typically rely on invasive pathological grading, while grading prediction and treatment response modeling are often conducted independently. Moreover, multimodal fusion procedures generally lack explicit structural constraints, which limits their practical utility in clinical decision-making. To address these issues, a grade-guided multimodal collaborative modeling framework was proposed. Built upon mature deep learning models, including 3D ResNet-18, MLP, and CNN–Transformer, tumor grading was incorporated as a weakly supervised prior into the processes of multimodal feature fusion and treatment response modeling, thereby enabling an integrated solution for non-invasive grading prediction, treatment response subtype discovery, and intrinsic mechanism interpretation. Through a grade-guided feature fusion mechanism, discriminative information that is highly correlated with tumor malignancy and treatment sensitivity is emphasized in the multimodal joint representation, while irrelevant features are suppressed to prevent interference with model learning. Within a unified framework, grading prediction and grade-conditioned treatment response modeling are jointly realized. Experimental results on real-world clinical datasets demonstrate that the proposed method achieved an accuracy of 84.6% and a kappa coefficient of 0.81 in the tumor-grading prediction task, indicating a high level of consistency with pathological grading. 
In the treatment response prediction task, the proposed model attained an AUC of 0.85, a precision of 0.81, and a recall of 0.79, significantly outperforming single-modality models, conventional early-fusion models, and multimodal CNN–Transformer models without grading constraints. In addition, treatment-sensitive and treatment-resistant subtypes identified under grading conditions exhibited stable and significant stratification differences in clustering consistency and survival analysis, validating the potential value of the proposed approach for clinical risk assessment and individualized treatment decision-making. Full article
(This article belongs to the Special Issue Application of Optical Imaging in Medical and Biomedical Research)
26 pages, 2875 KB  
Article
Noise Reduction for Water Supply Pipeline Leakage Signals Based on the Black-Winged Kite Algorithm
by Zhu Jiang, Jiale Li, Haiyan Ning, Xiang Zhang and Yao Yang
Sensors 2026, 26(2), 736; https://doi.org/10.3390/s26020736 - 22 Jan 2026
Viewed by 180
Abstract
In order to solve the problem of false alarms and missed alarms in pipeline monitoring caused by a large amount of noise in the negative pressure wave signal collected by pressure sensors, a new pressure signal denoising method based on the black-winged kite algorithm (BWK) is proposed. First, the variational mode decomposition (VMD) parameters are optimized through BWK. Next, the effective modal components are screened by sample entropy, and a second stage of noise reduction is carried out using wavelet thresholding (WT). Finally, the signal is reconstructed to achieve noise reduction. Simulation experiments show that, compared with WT and empirical mode decomposition (EMD), the proposed method achieves the best noise reduction effect under both high and low signal-to-noise ratio (SNR) conditions, reaching an SNR of 14.2280 dB, compared with 12.6458 dB for WT and 5.5292 dB for EMD. To further validate the performance of the algorithm, an experimental platform simulating pipeline leaks was built. Compared with WT and EMD, the proposed method again shows the best noise reduction effect. This method provides a high-precision and adaptive solution for leak detection in urban water supply pipelines and has strong engineering application value. Full article
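The SNR figures above compare signal power to residual-noise power on a decibel scale; a minimal sketch of the metric (illustrative numbers, not the paper's signals):

```python
import math

def snr_db(clean, denoised):
    # SNR (dB) = 10 * log10( signal power / residual-noise power )
    signal_power = sum(c * c for c in clean)
    noise_power = sum((c - d) ** 2 for c, d in zip(clean, denoised))
    return 10.0 * math.log10(signal_power / noise_power)

# Illustrative samples: small residual left after denoising
print(round(snr_db([1.0, 2.0, 3.0], [1.1, 2.1, 3.1]), 2))  # → 26.69
```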
(This article belongs to the Section Physical Sensors)
20 pages, 908 KB  
Article
Wearable ECG-PPG Deep Learning Model for Cardiac Index-Based Noninvasive Cardiac Output Estimation in Cardiac Surgery Patients
by Minwoo Kim, Min Dong Sung, Jimyeoung Jung, Sung Pil Cho, Junghwan Park, Sarah Soh, Hyun Chel Joo and Kyung Soo Chung
Sensors 2026, 26(2), 735; https://doi.org/10.3390/s26020735 - 22 Jan 2026
Viewed by 376
Abstract
Accurate cardiac output (CO) measurement is vital for hemodynamic management; however, it usually requires invasive monitoring, which limits its continuous and out-of-hospital use. Wearable sensors integrated with deep learning offer a noninvasive alternative. This study developed and validated a lightweight deep learning model using wearable electrocardiography (ECG) and photoplethysmography (PPG) signals to predict CO and examined whether cardiac index-based normalization (Cardiac Index (CI) = CO/body surface area) improves performance. Twenty-seven patients who underwent cardiac surgery and had pulmonary artery catheters were prospectively enrolled. Single-lead ECG (HiCardi+ chest patch) and finger PPG (WristOx2 3150) were recorded simultaneously and processed through an ECG–PPG fusion network with cross-modal interaction. Three models were trained: (1) CI prediction, (2) direct CO prediction, and (3) indirect CO prediction, where indirect CO = predicted CI × body surface area. Reference values were derived from thermodilution. The CI model achieved the best performance, and the indirect CO model showed significant reductions in error/agreement metrics (MAE/RMSE/bias; p < 0.0001), while correlation-based metrics are reported descriptively without implying statistical significance. The indirect CO estimates achieved a Pearson correlation coefficient (PCC) of 0.904 and a percentage error (PE) of 23.75%, meeting the predefined PE < 30% agreement benchmark for method-comparison studies, although this is not a universal clinical standard. These results demonstrate that wearable ECG–PPG fusion deep learning can achieve accurate, noninvasive CO estimation and that CI-based normalization enhances model agreement with pulmonary artery catheter measurements, supporting continuous catheter-free hemodynamic monitoring. Full article
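The indirect CO computation (predicted CI × body surface area) can be sketched as follows; the BSA formula is an assumption, since the abstract does not state which one was used (DuBois & DuBois is a common choice), and the patient values are illustrative, not study data:

```python
def bsa_dubois(height_cm, weight_kg):
    # DuBois & DuBois formula (assumed; the paper does not specify):
    # BSA [m^2] = 0.007184 * H^0.725 * W^0.425
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

def indirect_co(predicted_ci, height_cm, weight_kg):
    # Indirect CO [L/min] = predicted CI [L/min/m^2] * BSA [m^2]
    return predicted_ci * bsa_dubois(height_cm, weight_kg)

# Illustrative patient: 170 cm, 70 kg, model-predicted CI of 2.8
print(round(indirect_co(2.8, 170.0, 70.0), 2))
```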
23 pages, 1759 KB  
Systematic Review
Redefining Prosthetic Needs: Insights from Individuals with Upper Limb Loss—A Systematic Review
by Andreia Caldas, Demétrio Matos, Adam de Eyto and Nuno Martins
Sensors 2026, 26(2), 734; https://doi.org/10.3390/s26020734 - 22 Jan 2026
Viewed by 564
Abstract
Background: Upper limb loss has a profound impact on individuals’ daily activities, self-image, and social interactions. Despite continuous technological advances in upper-limb prosthetics, high rates of device abandonment persist, highlighting the need to better understand users’ functional and psychosocial needs. Methods: This study was conducted to gain a deeper understanding of the perspectives of upper-limb amputees and to synthesize their needs across ergonomic, functional, and psychological dimensions. A systematic review following PRISMA guidelines was performed on user-reported evidence about upper-limb prosthesis use. Articles indexed in the Web of Science database between 2016 and December 2023 were screened using predefined search terms related to upper-limb amputation, prostheses, social impact, and user needs. Studies were included if they reported direct perspectives of upper-limb prosthesis users regarding usability, functionality, and lived experience. Results: Of the 239 papers identified, 31 were included and analyzed. The findings reveal that functional performance, comfort, weight, intuitive control, and reliability are strongly interconnected with psychosocial factors such as confidence, embodiment, social participation, and acceptance. Technological advances have not consistently translated into improved alignment between prosthetic solutions and user needs, which is reflected in continued dissatisfaction and abandonment. Conclusions: This review provides a structured synthesis of user-reported needs across functional, ergonomic, and psychosocial dimensions, translating these insights into design-relevant guidelines.
Emphasizing a user-centered and interdisciplinary perspective, the findings aim to support the development of upper-limb prosthetic devices that are more usable, acceptable, and aligned with users’ expectations, ultimately bridging the gap between user expectations and technological capabilities and promoting long-term adoption and quality of life. Full article
21 pages, 5838 KB  
Article
SRCT: Structure-Preserving Method for Sub-Meter Remote Sensing Image Super-Resolution
by Tianxiong Gao, Shuyan Zhang, Wutao Yao, Erping Shang, Jin Yang, Yong Ma and Yan Ma
Sensors 2026, 26(2), 733; https://doi.org/10.3390/s26020733 - 22 Jan 2026
Viewed by 163
Abstract
To address the scarcity of sub-meter remote sensing samples and structural inconsistencies such as edge blur and contour distortion in super-resolution reconstruction, this paper proposes SRCT, a super-resolution method tailored for sub-meter remote sensing imagery. The method consists of two parts: external structure guidance and internal structure optimization. External structure guidance is jointly realized by the structure encoder (SE) and structure guidance module (SGM): the SE extracts key structural features from high-resolution images, and the SGM injects these structural features into the super-resolution network layer by layer, achieving structural transfer from external priors to the reconstruction network. Internal structure optimization is handled by the backbone network SGCT, which introduces a dual-branch residual dense group (DBRDG): one branch uses window-based multi-head self-attention to model global geometric structures, and the other branch uses lightweight convolutions to model local texture features, enabling the network to adaptively balance structure and texture reconstruction internally. Experimental results show that SRCT clearly outperforms existing methods on structure-related metrics, with DISTS reduced by 8.7% and LPIPS reduced by 7.2%, and significantly improves reconstruction quality in structure-sensitive regions such as building contours and road continuity, providing a new technical route for sub-meter remote sensing image super-resolution reconstruction. Full article
(This article belongs to the Section Remote Sensors)
15 pages, 6862 KB  
Article
SLR-Net: Lightweight and Accurate Detection of Weak Small Objects in Satellite Laser Ranging Imagery
by Wei Zhu, Jinlong Hu, Weiming Gong, Yong Wang and Yi Zhang
Sensors 2026, 26(2), 732; https://doi.org/10.3390/s26020732 - 22 Jan 2026
Viewed by 189
Abstract
To address the challenges of insufficient efficiency and accuracy in traditional detection models caused by minute target sizes, low signal-to-noise ratios (SNRs), and feature volatility in Satellite Laser Ranging (SLR) images, this paper proposes an efficient, lightweight, and high-precision detection model. The core motivation of this study is to fundamentally enhance the model’s capabilities in feature extraction, fusion, and localization for minute and blurred targets through a specifically designed network architecture and loss function, without significantly increasing the computational burden. To achieve this goal, we first design a DMS-Conv module. By employing dense sampling and channel function separation strategies, this module effectively expands the receptive field while avoiding the high computational overhead and sampling artifacts associated with traditional multi-scale methods, thereby significantly improving feature representation for faint targets. Secondly, to optimize information flow within the feature pyramid, we propose a Lightweight Upsampling Module (LUM). Integrating depthwise separable convolutions with a channel reshuffling mechanism, this module replaces traditional transposed convolutions at a minimal computational cost, facilitating more efficient multi-scale feature fusion. Finally, addressing the stringent requirements for small target localization accuracy, we introduce the MPD-IoU Loss. By incorporating the diagonal distance of bounding boxes as a geometric penalty term, this loss function provides finer and more direct spatial alignment constraints for model training, effectively boosting localization precision. Experimental results on a self-constructed real-world SLR observation dataset demonstrate that the proposed model achieves an mAP50:95 of 47.13% and an F1-score of 88.24%, with only 2.57 M parameters and 6.7 GFLOPs. 
The model outperforms various mainstream lightweight detectors in the combined performance of precision and recall; these results validate that our method effectively resolves the small-target detection challenges in SLR scenarios while maintaining a lightweight design, exhibiting superior performance and practical value. Full article
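The MPD-IoU loss augments plain IoU with squared corner-distance penalties normalized by the image diagonal; a sketch following the published MPDIoU formulation (the paper's exact variant may differ, and the boxes below are illustrative):

```python
def mpd_iou(pred, gt, img_w, img_h):
    # Boxes as (x1, y1, x2, y2). Standard IoU term first:
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter)
    # Penalty: squared distances between matching corners,
    # normalized by the squared image diagonal (w^2 + h^2)
    d1 = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2  # top-left
    d2 = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2  # bottom-right
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm

# Training would minimize 1 - mpd_iou(pred, gt, W, H).
```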
(This article belongs to the Section Remote Sensors)
16 pages, 3906 KB  
Article
S3PM: Entropy-Regularized Path Planning for Autonomous Mobile Robots in Dense 3D Point Clouds of Unstructured Environments
by Artem Sazonov, Oleksii Kuchkin, Irina Cherepanska and Arūnas Lipnickas
Sensors 2026, 26(2), 731; https://doi.org/10.3390/s26020731 - 21 Jan 2026
Viewed by 290
Abstract
Autonomous navigation in cluttered and dynamic industrial environments remains a major challenge for mobile robots. Traditional occupancy-grid and geometric planning approaches often struggle in such unstructured settings due to partial observability, sensor noise, and the frequent presence of moving agents (machinery, vehicles, humans). These limitations seriously undermine long-term reliability and safety compliance—both essential for Industry 4.0 applications. This paper introduces S3PM, a lightweight entropy-regularized framework for simultaneous mapping and path planning that operates directly on dense 3D point clouds. Its key innovation is a dynamics-aware entropy field that fuses per-voxel occupancy probabilities with motion cues derived from residual optical flow. Each voxel is assigned a risk-weighted entropy score that accounts for both geometric uncertainty and predicted object dynamics. This representation enables (i) robust differentiation between reliable free space and ambiguous/hazardous regions, (ii) proactive collision avoidance, and (iii) real-time trajectory replanning. The resulting multi-objective cost function effectively balances path length, smoothness, safety margins, and expected information gain, while maintaining high computational efficiency through voxel hashing and incremental distance transforms. Extensive experiments in both real-world and simulated settings, conducted on a Raspberry Pi 5 (with and without the Hailo-8 NPU), show that S3PM achieves 18–27% higher IoU in static/dynamic segmentation, 0.94–0.97 AUC in motion detection, and 30–45% fewer collisions compared to OctoMap + RRT* and standard probabilistic baselines. The full pipeline runs at 12–15 Hz on the bare Pi 5 and 25–30 Hz with NPU acceleration, making S3PM highly suitable for deployment on resource-constrained embedded platforms. Full article
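The per-voxel risk-weighted entropy score can be pictured as binary occupancy entropy scaled by a motion cue; this is a hypothetical sketch of the idea (the abstract does not give S3PM's exact fusion rule, and `alpha` is an assumed weighting parameter):

```python
import math

def binary_entropy(p):
    # Shannon entropy (bits) of a Bernoulli occupancy probability
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def risk_weighted_entropy(p_occ, motion_score, alpha=1.0):
    # Geometric uncertainty (entropy) amplified by a per-voxel
    # motion cue in [0, 1] derived from residual optical flow
    return binary_entropy(p_occ) * (1.0 + alpha * motion_score)

# A half-certain, fast-moving voxel scores higher than a static one
print(risk_weighted_entropy(0.5, 1.0), risk_weighted_entropy(0.5, 0.0))
```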
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing—2nd Edition)
33 pages, 3714 KB  
Article
SADQN-Based Residual Energy-Aware Beamforming for LoRa-Enabled RF Energy Harvesting for Disaster-Tolerant Underground Mining Networks
by Hilary Kelechi Anabi, Samuel Frimpong and Sanjay Madria
Sensors 2026, 26(2), 730; https://doi.org/10.3390/s26020730 - 21 Jan 2026
Viewed by 187
Abstract
The end-to-end efficiency of radio-frequency (RF)-powered wireless communication networks (WPCNs) in post-disaster underground mine environments can be enhanced through adaptive beamforming. The primary challenges in such scenarios include (i) identifying the most energy-constrained nodes, i.e., nodes with the lowest residual energy to prevent the loss of tracking and localization functionality; (ii) avoiding reliance on the computationally intensive channel state information (CSI) acquisition process; and (iii) ensuring long-range RF wireless power transfer (LoRa-RFWPT). To address these issues, this paper introduces an adaptive and safety-aware deep reinforcement learning (DRL) framework for energy beamforming in LoRa-enabled underground disaster networks. Specifically, we develop a Safe Adaptive Deep Q-Network (SADQN) that incorporates residual energy awareness to enhance energy harvesting under mobility, while also formulating a SADQN approach with dual-variable updates to mitigate constraint violations associated with fairness, minimum energy thresholds, duty cycle, and uplink utilization. A mathematical model is proposed to capture the dynamics of post-disaster underground mine environments, and the problem is formulated as a constrained Markov decision process (CMDP). To address the inherent NP hardness of this constrained reinforcement learning (CRL) formulation, we employ a Lagrangian relaxation technique to reduce complexity and derive near-optimal solutions. Comprehensive simulation results demonstrate that SADQN significantly outperforms all baseline algorithms: increasing cumulative harvested energy by approximately 11% versus DQN, 15% versus Safe-DQN, and 40% versus PSO, and achieving substantial gains over random beamforming and non-beamforming approaches. 
The proposed SADQN framework maintains fairness indices above 0.90, converges 27% faster than Safe-DQN and 43% faster than standard DQN in terms of episodes, and demonstrates superior stability, with 33% lower performance variance than Safe-DQN and 66% lower than DQN after convergence, making it particularly suitable for safety-critical underground mining disaster scenarios where reliable energy delivery and operational stability are paramount. Full article
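The dual-variable updates in the Lagrangian-relaxed CMDP amount to projected subgradient ascent on each constraint's multiplier; a hypothetical one-constraint sketch (step size and names are illustrative, not taken from the paper):

```python
def dual_update(lmbda, constraint_value, threshold, step=0.01):
    # Increase lambda when the constraint is violated (e.g., average
    # harvested energy below its minimum threshold), decay it otherwise;
    # the max(0, .) projection keeps the multiplier nonnegative.
    violation = threshold - constraint_value
    return max(0.0, lmbda + step * violation)

# Violated constraint pushes lambda up; satisfied constraint relaxes it
print(dual_update(0.5, 0.8, 1.0), dual_update(0.5, 1.2, 1.0))
```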
41 pages, 14069 KB  
Article
Quantitative Evaluation and Optimization of Museum Fatigue Using Computer Vision Human Pose Estimation
by Zhongsu Cheng, Yuxiao Zhang and Lin Zhang
Sensors 2026, 26(2), 729; https://doi.org/10.3390/s26020729 - 21 Jan 2026
Viewed by 263
Abstract
Museums are key institutions for cultural communication and public education, and their operating concept is shifting from exhibit-centered to experience-centered. As expectations for exhibition experience rise, museum fatigue has become a major constraint on the visitor experience. Existing studies rely on questionnaires and other subjective measures, which makes it difficult to locate fatigue in specific spaces. At the same time, body pose detection and fatigue recognition techniques remain hard to apply in museums because of complex spatial configurations and dense visitor flows. Effective methods for quantifying and mitigating museum fatigue are still lacking. This study proposes a contact-free sensing scheme based on computer vision and builds a coupled analytical framework with three stages: Human Pose Estimation (HPE) for visitor posture detection, fatigue assessment, and fatigue mitigation. A Fatigue Index (FI) quantifies bodily fatigue. Applying this index to the exhibition space in both the baseline and adjusted configurations guides the formulation of mitigation strategies and shows a consistent reduction in FI, indicating that the adopted measures are effective. The proposed approach establishes a complete framework from fatigue quantification to fatigue mitigation, supports evaluation of exhibition space design, and provides theoretical and methodological support for future improvements to the museum experience. Full article
(This article belongs to the Section Intelligent Sensors)
14 pages, 9818 KB  
Article
REHEARSE-3D: A Multi-Modal Emulated Rain Dataset for 3D Point Cloud De-Raining
by Abu Mohammed Raisuddin, Jesper Holmblad, Hamed Haghighi, Yuri Poledna, Maikol Funk Drechsler, Valentina Donzella and Eren Erdal Aksoy
Sensors 2026, 26(2), 728; https://doi.org/10.3390/s26020728 - 21 Jan 2026
Viewed by 228
Abstract
Sensor degradation poses a significant challenge in autonomous driving. During heavy rainfall, interference from raindrops can adversely affect the quality of LiDAR point clouds, resulting in, for instance, inaccurate point measurements. This, in turn, can potentially lead to safety concerns if autonomous driving systems are not weather-aware, i.e., if they are unable to discern such changes. In this study, we release a new, large-scale, multi-modal emulated rain dataset, REHEARSE-3D, to promote research advancements in 3D point cloud de-raining. Distinct from the most relevant competitors, our dataset is unique in several respects. First, it is the largest point-wise annotated dataset (9.2 billion annotated points), and second, it is the only one with high-resolution LiDAR data (LiDAR-256) enriched with 4D RADAR point clouds logged in both daytime and nighttime conditions in a controlled weather environment. Furthermore, REHEARSE-3D involves rain-characteristic information, which is of significant value not only for sensor noise modeling but also for analyzing the impact of weather at the point level. Leveraging REHEARSE-3D, we benchmark raindrop detection and removal in fused LiDAR and 4D RADAR point clouds. Our comprehensive study further evaluates the performance of various statistical and deep learning models, where SalsaNext and 3D-OutDet achieve above 94% IoU for raindrop detection. Full article
17 pages, 1554 KB  
Article
Fusing EEG Features Extracted by Microstate Analysis and Empirical Mode Decomposition for Diagnosis of Schizophrenia
by Shirui Song, Lingyan Du, Jie Yin and Shihai Ling
Sensors 2026, 26(2), 727; https://doi.org/10.3390/s26020727 - 21 Jan 2026
Viewed by 232
Abstract
Accurate early diagnosis and precise assessment of disease severity are imperative for the treatment and rehabilitation of schizophrenia patients. To achieve this, we propose a computer-aided diagnostic method for schizophrenia that utilizes fusion features derived from microstate analysis and empirical mode decomposition (EMD) of Electroencephalography (EEG) signals. The fusion features obtained from microstate analysis and EMD are input into the Least Absolute Shrinkage and Selection Operator (LASSO) feature selection algorithm to reduce the dimensionality of the feature vectors. The reduced feature vector is then fed to a Logistic Regression classifier to distinguish schizophrenia (SCH) from healthy EEG signals. In addition, the ability of the fused features to distinguish the severity of schizophrenia symptoms was evaluated, and the Shapley Additive Explanations (SHAP) algorithm was used to analyze the importance of the classification features that differentiate schizophrenia symptoms. Experimental results from both public and private datasets demonstrate the efficacy of EMD features in identifying healthy controls, while microstate features excel in classifying the severity of symptoms among schizophrenia patients. The classification evaluation metrics of the fused features significantly outperform those obtained using EMD or microstate analysis features independently. The proposed fusion feature method achieved accuracies of 100% and 90.7% for schizophrenia classification on the public and private datasets, respectively, and an accuracy of 93.6% for classifying schizophrenia symptom severity on the private dataset. Full article
(This article belongs to the Section Biomedical Sensors)
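The pipeline described in the abstract (feature fusion, LASSO selection, logistic regression) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the feature counts, the `alpha` value, and the random labels are all placeholder assumptions.

```python
# Minimal sketch of a fusion + LASSO + logistic-regression pipeline on
# synthetic stand-in data (hypothetical shapes and labels).
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)
n_subjects = 60
microstate_feats = rng.normal(size=(n_subjects, 20))   # e.g. durations, coverage
emd_feats = rng.normal(size=(n_subjects, 30))          # e.g. IMF band powers
y = rng.integers(0, 2, size=n_subjects)                # 0 = healthy, 1 = SCH

# Feature-level fusion: concatenate the two feature families.
X = np.hstack([microstate_feats, emd_feats])

# LASSO-based feature selection: keep columns with non-zero coefficients.
selector = Lasso(alpha=0.01).fit(X, y)
selected = np.flatnonzero(selector.coef_)
if selected.size == 0:                                 # guard against over-shrinkage
    selected = np.arange(X.shape[1])

# Classify SCH vs. healthy on the reduced feature vector.
clf = LogisticRegression(max_iter=1000).fit(X[:, selected], y)
print(f"training accuracy: {clf.score(X[:, selected], y):.2f}")
```

In practice the real microstate and EMD features would replace the random arrays, and accuracy would be measured with held-out data rather than on the training set.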
5 pages, 176 KB  
Editorial
Sensors Based on Optical and Photonic Devices
by Francesco De Leonardis
Sensors 2026, 26(2), 726; https://doi.org/10.3390/s26020726 - 21 Jan 2026
Viewed by 262
Abstract
Programmable photonics is an emerging technology that merges photonics and electronics, enabling innovative light-based information processing with high speed and low power consumption [...] Full article
(This article belongs to the Special Issue Sensors Based on Optical and Photonic Devices)
17 pages, 4767 KB  
Article
Adaptive Low-Resolution Combination Search for Reference-Independent Image Super-Resolution
by Ye Tian
Sensors 2026, 26(2), 725; https://doi.org/10.3390/s26020725 - 21 Jan 2026
Viewed by 165
Abstract
Accurately reconstructing high-resolution (HR) images remains challenging in scenarios where HR observations cannot be captured due to optical, hardware, or cost constraints. To address this limitation, we introduce an image super-resolution (SR) framework that reconstructs HR content solely from multiple low-resolution (LR) measurements, without relying on any HR reference images. The proposed method formulates a unified degradation model that describes how HR pixels contribute to LR observations under subpixel shifts and anisotropic downsampling. Based on this model, we develop an adaptive search algorithm capable of identifying the minimal and most informative combination of LR images required to equivalently represent the latent HR image. The selected LR images are then used to construct a solvable linear system whose solution directly yields the HR pixel values. Experiments conducted on the USAF 1951 resolution target demonstrate that the proposed approach improves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) by 27.33% and 44.64%, respectively, achieving a resolvable spatial frequency of 228 line pairs per millimeter. In semiconductor chip inspection, PSNR and SSIM increase by 22.36% and 40.38%. These results verify that the proposed LR-combination-based strategy provides a physically interpretable and highly practical alternative for applications in which HR reference images cannot be obtained. Full article
(This article belongs to the Section Sensing and Imaging)
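The core idea of solving for HR pixels from a set of shifted LR observations can be shown with a toy linear system. This is our own 1D simplification, not the paper's degradation model: the signal length, the anisotropic blur weights, and the circular boundary are illustrative assumptions.

```python
# Toy sketch: each LR sample is an anisotropic weighted average of two
# adjacent HR pixels (circular boundary). Two LR observations at subpixel
# shifts 0 and 1 together give a solvable linear system for the HR signal.
import numpy as np

hr = np.array([1.0, 2.0, 3.0, 4.0])      # latent HR signal (ground truth)
n, w0, w1 = len(hr), 0.75, 0.25          # anisotropic degradation weights

def degradation_rows(shift):
    """Rows of the system matrix for one LR observation at a given shift."""
    rows = []
    for p in range(shift, n, 2):          # downsample by 2 from this shift
        r = np.zeros(n)
        r[p] = w0
        r[(p + 1) % n] = w1               # circular boundary
        rows.append(r)
    return rows

# Stack the equations from both shifted LR observations into one system.
A = np.array(degradation_rows(0) + degradation_rows(1))
b = A @ hr                                # simulated LR measurements
hr_est = np.linalg.solve(A, b)            # direct solve: 4 equations, 4 unknowns
print(hr_est)                             # recovers [1. 2. 3. 4.]
```

The paper's adaptive search addresses the harder question this toy skips: which subset of available LR images yields a minimal, well-conditioned system in 2D.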

14 pages, 8570 KB  
Article
Enhancing Robotic Grasping Detection Using Visual–Tactile Fusion Perception
by Dongyuan Zheng and Yahong Chen
Sensors 2026, 26(2), 724; https://doi.org/10.3390/s26020724 - 21 Jan 2026
Viewed by 373
Abstract
With the advancement of tactile sensors, researchers increasingly integrate tactile perception into robotics, but typically only for tasks such as object reconstruction, classification, recognition, and grasp state assessment. In this paper, we rethink the relationship between visual and tactile perception and propose a novel robotic grasping detection method based on visual–tactile perception. First, we construct a visual–tactile dataset containing the grasp stability for each potential grasping position. Next, we introduce a Grasp Stability Prediction Module (GSPM) that generates a grasp stability probability map, providing the grasp detection network with prior knowledge of grasp stability at each possible grasp position. Finally, the map is multiplied element-wise with the corresponding color image and input into the grasp detection network. Experimental results demonstrate that our visual–tactile fusion method significantly enhances robotic grasping detection accuracy. Full article
(This article belongs to the Section Sensors and Robotics)
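The fusion step the abstract describes, weighting the input image element-wise by the stability map before detection, reduces to a broadcast multiply. The shapes and value ranges below are hypothetical placeholders, not taken from the paper.

```python
# Minimal sketch of the element-wise fusion step: a single-channel grasp
# stability probability map modulates a color image before it enters the
# grasp detection network (hypothetical 224x224 input).
import numpy as np

rng = np.random.default_rng(0)
h, w = 224, 224
image = rng.random((h, w, 3), dtype=np.float32)          # color image in [0, 1]
stability_map = rng.random((h, w), dtype=np.float32)     # GSPM output in [0, 1]

# Broadcast the map across the three color channels.
fused = image * stability_map[..., None]

print(fused.shape)  # (224, 224, 3)
```

Because the map lies in [0, 1], the multiply attenuates image regions the GSPM deems unstable while leaving high-stability regions nearly unchanged, which is what gives the detector its spatial prior.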

36 pages, 4575 KB  
Article
A PI-Dual-STGCN Fault Diagnosis Model Based on the SHAP-LLM Joint Explanation Framework
by Zheng Zhao, Shuxia Ye, Liang Qi, Hao Ni, Siyu Fei and Zhe Tong
Sensors 2026, 26(2), 723; https://doi.org/10.3390/s26020723 - 21 Jan 2026
Viewed by 289
Abstract
This paper proposes a PI-Dual-STGCN fault diagnosis model based on a SHAP-LLM joint explanation framework to address the lack of transparency in the diagnostic process of deep learning models and the weak interpretability of their results. First, PI-Dual-STGCN enhances the interpretability of graph data by introducing physical constraints and constructs a dual-graph architecture built on physical topology graphs and signal similarity graphs; experimental results show that this dual-graph complementary architecture raises diagnostic accuracy to 99.22%. Second, a general-purpose SHAP-LLM explanation framework is designed: Explainable AI (XAI) techniques analyze the decision logic of the diagnostic model and generate visual explanations, establishing a hierarchical knowledge base that covers performance metrics, explanation reliability, and fault experience. Retrieval-Augmented Generation (RAG) is combined with this knowledge base so that the main report prompt integrates model performance and Shapley Additive Explanations (SHAP) reliability assessments, while the sub-report prompt produces detailed fault analyses and repair decisions. Finally, experiments demonstrate that this approach avoids the uncertainty of using large models directly for fault diagnosis: all fault diagnosis and core explainability tasks are delegated to mature deep learning algorithms and XAI techniques, and the large model's textual reasoning is applied only to pre-quantified, fact-based information (e.g., model performance metrics and SHAP explanation results). This design enhances diagnostic transparency through XAI-generated visual and quantitative explanations of model decision logic, reduces the risk of large-model hallucinations by restricting the model to reasoning over grounded, fact-based text rather than performing diagnosis itself, and provides verifiable intelligent decision support for industrial fault diagnosis. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
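The framework's key design choice, handing the LLM only pre-quantified, fact-based text rather than raw signals, can be illustrated with a small sketch. This is our own simplification, not the paper's SHAP-LLM framework: it substitutes permutation importance for SHAP values, and the sensor names, labels, and prompt template are invented placeholders.

```python
# Sketch of the "quantify first, reason second" pattern: compute feature
# attributions (stand-in for SHAP) from a trained diagnostic model, then
# format them into a fact-based report prompt for a downstream LLM.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # synthetic fault labels
names = ["vibration", "temperature", "current", "pressure"]

clf = RandomForestClassifier(random_state=0).fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=5, random_state=0)

# The LLM never sees raw signals: only quantified, verifiable facts.
facts = ", ".join(f"{n}={m:.3f}" for n, m in zip(names, imp.importances_mean))
prompt = (f"Diagnosis accuracy: {clf.score(X, y):.2%}. "
          f"Feature attributions: {facts}. "
          "Summarize the likely fault cause and suggest a repair action.")
print(prompt)
```

Constraining the model to grounded text like this prompt, instead of asking it to diagnose from raw data, is what the abstract credits with reducing hallucination risk.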
