Search Results (1,991)

Search Parameters:
Keywords = dual sensor

13 pages, 2095 KB  
Article
Low-Cost Non-Invasive Microwave Glucose Sensor Based on Dual Complementary Split-Ring Resonator
by Guodi Xu, Zhiliang Kang, Xing Feng and Minqiang Li
Sensors 2026, 26(7), 2056; https://doi.org/10.3390/s26072056 - 25 Mar 2026
Abstract
Rapid and real-time monitoring of blood glucose concentration is critical for the diagnosis and management of diabetes, while conventional invasive detection methods suffer from inconvenience and discomfort, making non-invasive detection a research hotspot. In this study, a dual complementary split-ring resonator (DS-CSRR) operating at 3.3 GHz was designed and fabricated for non-invasive glucose concentration detection, aiming to address the problems of low sensitivity and large size of existing microwave glucose sensors. The sensor was fabricated on a low-cost FR4 dielectric substrate with dimensions of 20 × 30 × 0.8 mm3, and two U-shaped slots were incorporated into the traditional DS-CSRR structure to realize cross-polarization excitation. This design not only enhances the interaction between the electric field and glucose solution but also optimizes the quality factor (Q) and electric field distribution of the resonator without changing the overall size. Compared with the traditional DS-CSRR, the Q factor of the modified structure is increased to 130 under no-load conditions. The transmission coefficient Signal Port 2 to Port 1 (S21) of the sensor loaded with glucose solutions of different concentrations was measured using a vector network analyzer (VNA). The experimental results show a good linear frequency shift with the increase in glucose concentration, with a measured sensitivity of 1.95 kHz/(mg·dL−1). In addition, the sensor is characterized by miniaturization, low cost and easy fabrication due to the adoption of standard PCB fabrication processes. This study successfully demonstrates a non-invasive microwave sensor with high sensitivity for glucose concentration detection, which has promising application potential in personal continuous glucose monitoring, and also provides a useful design strategy for the development of miniaturized high-sensitivity microwave biosensors. Full article
(This article belongs to the Section Wearables)
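The reported linear response can be inverted to turn a measured S21 resonance shift into a concentration estimate. A minimal sketch using the abstract's 1.95 kHz/(mg·dL−1) sensitivity; the function name and the zero-shift calibration point are illustrative assumptions, not from the paper:

```python
def glucose_from_shift(freq_shift_khz, sensitivity_khz_per_mgdl=1.95):
    """Estimate glucose concentration (mg/dL) from the measured resonance
    shift (kHz), assuming the linear response reported in the abstract and
    an assumed calibration where zero shift maps to zero concentration."""
    return freq_shift_khz / sensitivity_khz_per_mgdl
```

Under these assumptions, a 195 kHz shift would map to about 100 mg/dL.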
33 pages, 1789 KB  
Article
An AI-Driven Dual-Spectral Vision–Language Sensing Framework for Intelligent Agricultural Phenotyping
by Lei Shi, Zhiyuan Chen, Chengze Li, Yang Hu, Xintong Wang, Haibo Wang and Yihong Song
Sensors 2026, 26(7), 2045; https://doi.org/10.3390/s26072045 - 25 Mar 2026
Abstract
Seed varietal purity and physiological viability are critical determinants of crop yield and quality. However, non-destructive assessment faces significant challenges in fine-grained variety discrimination and the perception of internal defects. This study proposes S3-Net, an AI-driven multimodal sensing framework that integrates vision–language alignment with dual-spectral sensor fusion for autonomous seed quality evaluation. We introduce a Knowledge–Vision Alignment (KVA) module that incorporates encyclopedic morphological descriptions to guide feature learning, significantly enhancing few-shot generalization. Complementarily, a Dual-Spectral Fusion (DSF) module combines high-resolution RGB textures with penetrative Short-Wave Infrared (SWIR) sensing to jointly characterize external and internal traits. Experimental results on a custom multimodal dataset of 6000 samples across 12 crop categories demonstrate that S3-Net achieves 96.9% accuracy for species identification and 95.8% for viability detection. Notably, S3-Net outperforms ResNet-50 by 40.3% in extreme 1-shot scenarios. With a stable inference throughput of 95 fps, the system meets the high-throughput demands of industrial-scale applications, providing a robust and efficient solution for intelligent agricultural phenotyping. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Sensing)
14 pages, 3201 KB  
Article
The Effect of Cage Symmetry on the Magnetic and Thermodynamic Behavior of C60 Fullerene
by Numan Şarlı, Gökçen Dikici Yıldız and Yasin Göktürk Yıldız
Crystals 2026, 16(4), 218; https://doi.org/10.3390/cryst16040218 - 25 Mar 2026
Abstract
This study employs effective field theory to investigate the magnetic properties of the Carbon-60 fullerene cage (C60). The analysis shows that the magnetic behavior of the C60 molecule mirrors that of its sixty constituent carbon atoms, a phenomenon attributed to the molecule’s unique cage geometry and defined herein as the “identic magnetic effect” (IME). Furthermore, thermodynamic quantities, including magnetic susceptibility, specific heat, and internal energy, exhibit dual peaks at the coercive field points when the temperature is below the critical threshold (T < Tc). As the temperature exceeds this threshold (T > Tc), these peaks coalesce into a single maximum. These findings show good quantitative agreement with experimental phase transition characteristics, reflecting the magnetic behavior induced by the C60 cage geometry. IME behavior can open the door to modeling and produce a new class of IME sensors (IMESs). Full article
(This article belongs to the Section Inorganic Crystalline Materials)
29 pages, 7114 KB  
Article
Modeling and Experimental Study of Fuzzy Control System for Operating Parameters of Grain Combine Harvester Cleaning Device
by Jing Pang, Yahao Tian, Zhanchao Dai, Zhe Du, Fengkui Dang, Xinqi Chen and Xinping Li
Appl. Sci. 2026, 16(7), 3137; https://doi.org/10.3390/app16073137 - 24 Mar 2026
Abstract
The cleaning unit is a key functional component of grain combine harvesters, yet its operating parameters are still predominantly adjusted according to operator experience, resulting in limited adaptability to fluctuating working conditions. To enhance the intelligence and stability of the cleaning process, this study develops a fuzzy control approach supported by data-driven performance modeling. Based on multi-condition bench experiments, feeding rate, fan speed, cleaning sieve vibration frequency, and sieve opening were selected as input variables. Gaussian Process Regression (GPR) models were established to describe the nonlinear relationships between operating parameters and cleaning loss rate and impurity rate, and impurity rate was inferred online to compensate for the absence of a reliable sensor. Taking feeding rate variation as the primary disturbance, a dual-input fuzzy control strategy was designed using loss rate monitoring and model-predicted impurity rate as feedback signals. Simulation and bench test results show that, under small and moderate load disturbances (±20% and ±35%), the proposed method reduces either impurity rate or cleaning loss rate through coordinated parameter adjustment. Under large disturbances (±50%), performance deterioration cannot be fully eliminated, but its extent is alleviated compared with open-loop conditions. Full article
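The dual-input fuzzy strategy described above can be illustrated with a toy rule-evaluation step. This is a hypothetical sketch, not the authors' controller: the membership breakpoints, the single output (a fan-speed offset), and the two rules are invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to b and falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fan_speed_adjustment(loss_rate, impurity_rate):
    """Weighted-average defuzzification of two illustrative rules:
    'high loss -> lower fan speed', 'high impurity -> raise fan speed'."""
    loss_high = tri(loss_rate, 1.0, 2.5, 4.0)     # % loss rate, assumed breakpoints
    imp_high = tri(impurity_rate, 1.0, 2.0, 3.0)  # % impurity rate, assumed breakpoints
    num = loss_high * -50.0 + imp_high * 50.0     # rpm offsets, assumed magnitudes
    den = loss_high + imp_high
    return num / den if den > 0 else 0.0
```

When both rules fire equally the offsets cancel; when only the loss rule fires the controller lowers the fan speed.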
31 pages, 16969 KB  
Article
Research on Cooperative Vehicle–Infrastructure Perception Integrating Enhanced Point-Cloud Features and Spatial Attention
by Shiyang Yan, Yanfeng Wu, Zhennan Liu and Chengwei Xie
World Electr. Veh. J. 2026, 17(4), 164; https://doi.org/10.3390/wevj17040164 - 24 Mar 2026
Abstract
Vehicle–infrastructure cooperative perception (VICP) extends the sensing capability of single-vehicle systems by integrating multi-source information from onboard and roadside sensors, thereby alleviating limitations in sensing range and field-of-view coverage. However, in complex urban environments, the robustness of such systems—particularly in terms of blind-spot coverage and feature representation—is severely affected by both static and dynamic occlusions, as well as distance-induced sparsity in point cloud data. To address these challenges, a 3D object detection framework incorporating point cloud feature enhancement and spatially adaptive fusion is proposed. First, to mitigate feature degradation under sparse and occluded conditions, a Redefined Squeeze-and-Excitation Network (R-SENet) attention module is integrated into the feature encoding stage. This module employs a dual-dimensional squeeze-and-excitation mechanism operating across pillars and intra-pillar points, enabling adaptive recalibration of critical geometric features. In addition, a Feature Pyramid Backbone Network (FPB-Net) is designed to improve target representation across varying distances through multi-scale feature extraction and cross-layer aggregation. Second, to address feature heterogeneity and spatial misalignment between heterogeneous sensing agents, a Spatial Adaptive Feature Fusion (SAFF) module is introduced. By explicitly encoding the origin of features and leveraging spatial attention mechanisms, the SAFF module enables dynamic weighting and complementary fusion between fine-grained vehicle-side features and globally informative roadside semantics. Extensive experiments conducted on the DAIR-V2X benchmark and a custom dataset demonstrate that the proposed approach outperforms several state-of-the-art methods. 
Specifically, Average Precision (AP) scores of 0.762 and 0.694 are achieved at an IoU threshold of 0.5, while AP scores of 0.617 and 0.563 are obtained at an IoU threshold of 0.7 on the two datasets, respectively. Furthermore, the proposed framework maintains real-time inference performance, highlighting its effectiveness and practical potential for real-world deployment. Full article
(This article belongs to the Section Automated and Connected Vehicles)
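For context, the IoU thresholds behind the reported AP scores measure box overlap: a detection counts as correct only if its IoU with a ground-truth box reaches the threshold. A minimal sketch of the standard computation, with boxes as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```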
25 pages, 10489 KB  
Article
An Unsupervised Machine Learning-Based Approach for Combining Sentinel 1 and 2 to Assess the Severity of Fires over Large Areas Using a Google Earth Engine
by Ciro Giuseppe Riccardi, Nicodemo Abate and Rosa Lasaponara
Remote Sens. 2026, 18(6), 956; https://doi.org/10.3390/rs18060956 - 23 Mar 2026
Abstract
Wildfires represent a significant global environmental challenge, necessitating advanced monitoring and assessment techniques. This study explores the integration of Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 optical data within a Google Earth Engine (GEE) framework to enhance wildfire detection, burned area estimation, and severity assessment. By leveraging SAR’s capability to penetrate atmospheric obstructions and optical data’s spectral sensitivity to vegetation changes, the proposed methodology addresses limitations of single-sensor approaches. The results demonstrate strong correlations between SAR-based indices, such as the Radar Vegetation Index (RVI) and Dual-Polarized SAR Vegetation Index (DPSVI), and traditional optical indices, including the Normalized Burn Ratio (NBR) and differenced NBR (ΔNBR). Despite challenges related to terrain influence, sensor resolution differences, and computational demands, the integration of multi-sensor data in a cloud-based environment offers a scalable and efficient solution for wildfire monitoring. During the peak of the fire events, significant atmospheric obstruction was technically verified using Sentinel-2 metadata and the QA60 cloud mask band, which confirmed persistent cloud cover and thick smoke plumes over the study areas. This interference limited the reliability of purely optical monitoring, further justifying the integration of SAR data. Future research should focus on refining data fusion techniques, incorporating additional datasets such as thermal infrared imagery and meteorological variables, and enhancing automation through artificial intelligence (AI). This study underscores the potential of remote sensing advancements in improving fire management strategies and global wildfire mitigation efforts. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Burned Area Mapping)
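The optical indices named above follow standard definitions. A minimal sketch of NBR and ΔNBR (reflectances assumed in [0, 1] with a nonzero band sum), where larger ΔNBR indicates higher burn severity:

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance (sum must be nonzero)."""
    return (nir - swir) / (nir + swir)

def delta_nbr(nir_pre, swir_pre, nir_post, swir_post):
    """dNBR: pre-fire NBR minus post-fire NBR; larger values = higher severity."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
```

Healthy vegetation (high NIR, low SWIR) gives a large pre-fire NBR; burning reverses the relationship, so the difference grows with severity.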
22 pages, 14276 KB  
Article
DualFOD: A Dual-Modality Deep Learning Framework for UAS-Based Foreign Object Debris Detection Using Thermal and RGB Imagery
by Owais Ahmed, Caleb S. Caldwell and Adeel Khalid
Drones 2026, 10(3), 225; https://doi.org/10.3390/drones10030225 - 23 Mar 2026
Abstract
Foreign Object Debris (FOD) poses critical risks to aircraft during takeoff and landing, resulting in billions of dollars in losses annually due to infrastructure damage and flight delays. Advancements in automated inspection technologies have enabled the use of Unmanned Aerial Systems (UAS) combined with Artificial Intelligence (AI) for rapid FOD identification. While prior research has extensively evaluated optical sensors such as RGB imaging and radar, limited work has investigated the potential of thermal imaging for improved FOD visibility under challenging environmental conditions. This study proposes DualFOD, a dual-modality detection framework that integrates a supervised YOLO12-based RGB detector with an unsupervised thermal anomaly extraction pipeline for identifying debris on runway surfaces. A decision-level fusion algorithm combines detections from both branches using spatial proximity matching to produce a unified FOD inventory. The RGB branch achieves a precision of 0.954 and mAP@0.5 of 0.890 on the held-out test set. Cross-site validation at the Cobb County Sport Aviation Complex demonstrates that thermal detection recovers debris missed by RGB at higher altitudes, with the fused output consistently outperforming either single-modality branch. This research contributes toward scalable autonomous FOD monitoring that enhances operational safety in aviation environments. Full article
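The decision-level fusion step can be sketched as a nearest-neighbour duplicate check: a thermal detection enters the unified inventory only if no RGB detection lies within a matching radius. The function name, point-based detections, and the 25-pixel radius are illustrative assumptions, not the paper's exact algorithm:

```python
import math

def fuse_detections(rgb_dets, thermal_dets, max_dist=25.0):
    """Decision-level fusion by spatial proximity: keep all RGB detections and
    add only those thermal detections with no RGB match within max_dist pixels."""
    fused = list(rgb_dets)
    for tx, ty in thermal_dets:
        if all(math.hypot(tx - rx, ty - ry) > max_dist for rx, ry in rgb_dets):
            fused.append((tx, ty))
    return fused
```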
29 pages, 10740 KB  
Article
Enhancing Monthly Flood Monitoring in Wetlands Through Spatiotemporal Fusion of Multi-Sensor SAR Data: A Case Study of Chen Lake Wetland (2020–2024)
by Chengyu Geng, Cheng Shang, Shan Jiang, Yankun Wang, Ningsheng Chen, Chenxi Zeng, Yadong Zhou and Yun Du
Sustainability 2026, 18(6), 3054; https://doi.org/10.3390/su18063054 - 20 Mar 2026
Abstract
Accurate and continuous monitoring of flood dynamics is fundamental to understanding wetland hydrological processes and their ecological implications, yet it remains challenging due to the inherent trade-off between spatial and temporal resolution in remote sensing observations. This study advances flood monitoring methodology by developing and validating a spatiotemporal fusion framework specifically designed for multi-source Synthetic Aperture Radar (SAR) data—an approach that has remained underdeveloped despite its critical importance for all-weather wetland observation. We propose the Fusion SAR Operational Monitoring (FSOM) framework, which integrates three established components—the Flexible Spatiotemporal Data Fusion (FSDAF) model, the Sentinel-1 Dual-Polarized Water Index (SDWI), and automated thresholding classification—into a coherent processing chain that generates consistent high-resolution flood extent time series from multi-sensor SAR data (Sentinel-1 and GF-3). The FSOM was applied to the Chen Lake Wetland from 2020 to 2024, producing a monthly flood map dataset at 5 m spatial resolution. Quantitative validation demonstrated the superiority of the FSOM-derived products. Compared to water classifications using original Sentinel-1 data, the FSOM results achieved a significantly higher overall accuracy (exceeding 90%) and Kappa coefficient (>0.90) than the Sentinel-1 results, which had overall accuracy (exceeding 86%) and Kappa coefficient (>0.75). Critically, the producer’s accuracy for water bodies consistently surpassed 91%, indicating a substantial reduction in omission errors and markedly improved detection of small water bodies. These results confirm the effectiveness of the proposed FSOM framework in mitigating the spatiotemporal resolution trade-off, thereby providing a reliable high-fidelity data foundation to support precise wetland conservation and flood disaster emergency response. 
The framework thus offers a practical tool for scientists and water resource managers seeking to enhance monitoring capabilities in the world’s most dynamic and ecologically significant wetland ecosystems. Full article
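The abstract does not specify the automated thresholding method; one common choice for separating water from non-water in an index image is Otsu's method, which picks the cut maximizing between-class variance of the histogram. A pure-Python sketch under that assumption:

```python
def otsu_threshold(values, nbins=64):
    """Otsu's method: return the threshold maximizing between-class variance."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / nbins or 1.0
    hist = [0] * nbins
    for v in values:
        hist[min(int((v - lo) / width), nbins - 1)] += 1
    n = len(values)
    centers = [lo + (i + 0.5) * width for i in range(nbins)]
    best_t, best_var = lo, -1.0
    for k in range(1, nbins):
        n0 = sum(hist[:k])
        n1 = n - n0
        if n0 == 0 or n1 == 0:
            continue
        m0 = sum(h * c for h, c in zip(hist[:k], centers[:k])) / n0
        m1 = sum(h * c for h, c in zip(hist[k:], centers[k:])) / n1
        var = (n0 / n) * (n1 / n) * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]      # left edge of upper class
    return best_t
```

Pixels whose water-index value exceeds the returned threshold would then be classified as water.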
31 pages, 3479 KB  
Article
MV-S2CD: A Modality-Bridged Vision Foundation Model-Based Framework for Unsupervised Optical–SAR Change Detection
by Yongqi Shi, Ruopeng Yang, Changsheng Yin, Yiwei Lu, Bo Huang, Yongqi Wen, Yihao Zhong and Zhaoyang Gu
Remote Sens. 2026, 18(6), 931; https://doi.org/10.3390/rs18060931 - 19 Mar 2026
Abstract
Unsupervised change detection (UCD) from heterogeneous bitemporal optical–SAR imagery is challenging due to modality discrepancy, speckle/illumination variations, and the absence of change annotations. We propose MV-S2CD, a vision foundation model (VFM)-based framework that learns a modality-bridged latent space and produces dense change maps in a fully unsupervised manner. To robustly adapt pretrained VFM priors to heterogeneous inputs with minimal task-specific parameters, MV-S2CD incorporates lightweight modality-specific adapters and parameter-efficient low-rank adaptation (LoRA) in high-level layers. A shared projector embeds the two observations into a common geometry, enabling consistent cross-modal comparison and reducing sensor-induced domain shift. Building on the bridged representation, we design a dual-branch change reasoning module that decouples structure-sensitive cues from semantic-consistency cues: a structure pathway preserves fine boundaries and local variations, while a semantic-consistency pathway employs reliability gating and multi-scale context aggregation to suppress pseudo-changes caused by modality-specific nuisances and residual misregistration. For label-free optimization, we develop a difference-centric self-supervision scheme with two perturbation views and reliability-guided pseudo-partitioning, jointly enforcing pseudo-unchanged invariance, pseudo-changed/unchanged separability, and sparsity and edge-preserving regularization. Experiments on three heterogeneous optical–SAR benchmarks demonstrate that MV-S2CD consistently improves the Precision–Recall trade-off and achieves state-of-the-art performance among unsupervised baselines, while remaining backbone-flexible and efficient. Full article
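Of the adaptation pieces above, LoRA is the most standard: a frozen weight W is augmented with a trainable low-rank update B·A, so only r·(d_in + d_out) parameters train per layer. A dependency-free sketch for one linear layer (matrices as lists of rows; names are illustrative):

```python
def lora_forward(x, W, A, B, scale=1.0):
    """y = x*W + scale * (x*B)*A, applying the low-rank update without ever
    materializing the full d_in x d_out matrix B*A.
    x: length-d_in vector; W: d_in x d_out; B: d_in x r; A: r x d_out."""
    def vecmat(v, M):  # row vector times matrix (M as a list of rows)
        return [sum(vi * row[j] for vi, row in zip(v, M)) for j in range(len(M[0]))]
    base = vecmat(x, W)
    low_rank = vecmat(vecmat(x, B), A)
    return [b + scale * l for b, l in zip(base, low_rank)]
```

With B initialized to zeros (the usual LoRA initialization), the layer starts exactly at the pretrained behaviour and learns only the low-rank deviation.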
21 pages, 2561 KB  
Article
A Range-Aware Attention Framework for Meteorological Visibility Estimation
by Wai Lun Lo, Kwok Wai Wong, Richard Tai Chiu Hsung, Henry Shu Hung Chung, Hong Fu, Harris Sik Ho Tsang and Tony Yulin Zhu
Sensors 2026, 26(6), 1893; https://doi.org/10.3390/s26061893 - 17 Mar 2026
Abstract
Accurate meteorological visibility estimation is critical to the safety and reliability of transportation and environmental monitoring systems. Despite the prevalence of deep learning, models often struggle with the non-linear visual degradation caused by varying atmospheric conditions and a scarcity of instrument-calibrated datasets. This study makes two primary contributions. First, we introduce the Hong Kong Chu Hai College Visibility Dataset (HKCHC-VD) comprising 11,148 high-resolution images paired with precise visibility measurements from a Biral SWS-100 sensor. Second, we propose a Range-Aware Attention Framework (RAT-Attn), an adaptive attention mechanism that translates classical range-specific atmospheric modeling into differentiable deep learning operations. This is a domain-specific architectural optimization that integrates a dual-backbone architecture (CNN and Vision Transformer) with a learnable threshold mechanism. This design enables the model to dynamically prioritize spatial and channel-wise features based on estimated visibility intervals, specifically targeting the non-linear visual degradation unique to fog and haze. Experimental results demonstrate that our proposed approach outperforms existing baselines, including VisNet and landmark ANN-based methods. The ResNet + ViT (spatial-threshold) variant achieves the most balanced performance, recording a Mean Squared Error (MSE) of 5.87 km2, a Mean Absolute Error (MAE) of 1.65 km, and a classification accuracy of 87.07%. In critical low-visibility conditions (0 to 10 km), the framework reduces regression error by over 75% compared to the baselines. These results confirm that range-aware adaptive feature fusion is essential for robust meteorological estimation in real-world environments. Full article
(This article belongs to the Section Intelligent Sensors)
23 pages, 5079 KB  
Article
Dual-Stream Transformer with Kalman-Based Sensor Fusion for Wearable Fall Detection
by Abheek Pradhan, Sana Alamgeer, Rakesh Suvvari, Syed Tousiful Haque and Anne H. H. Ngu
Big Data Cogn. Comput. 2026, 10(3), 90; https://doi.org/10.3390/bdcc10030090 - 17 Mar 2026
Abstract
Wearable fall detection systems face a fundamental challenge: while gyroscope data provide valuable orientation cues, naively combining raw gyroscope and accelerometer signals can degrade performance due to noise contamination. To overcome this challenge, we present a dual-stream transformer architecture that incorporates (i) Kalman-based sensor fusion to convert noisy gyroscope angular velocities into stable orientation estimates (roll, pitch, yaw), maintaining an internal state of body pose, and (ii) processing accelerometer and orientation streams in separate encoder pathways before fusion to prevent cross-modal interference. Our architecture further integrates Squeeze-and-Excitation channel attention and Temporal Attention Pooling to focus on fall-critical temporal patterns. Evaluated on the SmartFallMM dataset using 21-fold leave-one-subject-out cross-validation, the dual-stream Kalman transformer achieves 91.10% F1, outperforming single-stream Kalman transformers (89.80% F1) by 1.30% and single-stream baseline transformers (88.96% F1) by 2.14%. We further evaluate the model in real time using a watch-based SmartFall App on five participants, maintaining an average F1 score of 83% and an accuracy of 90%. These results indicate robust performance in both offline and real-world deployment settings, establishing a new state-of-the-art for inertial-measurement-unit-based fall detection on commodity smartwatch devices. Full article
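A full Kalman filter is beyond a snippet, but the core idea of fusing drift-free-yet-noisy accelerometer tilt with smooth-yet-drifting integrated gyroscope rate can be shown with a one-axis complementary filter, used here as a simplified stand-in for the paper's Kalman fusion; the blend factor and axis convention are assumptions:

```python
import math

def roll_update(acc, gyro_rate, prev_roll, dt, alpha=0.98):
    """One fusion step for the roll angle (radians).
    acc: (ax, ay, az) accelerometer reading in g; gyro_rate: roll rate in rad/s.
    Blends the dead-reckoned gyro angle with the gravity-derived tilt angle."""
    acc_roll = math.atan2(acc[1], acc[2])     # drift-free but noisy
    gyro_roll = prev_roll + gyro_rate * dt    # smooth but drifts over time
    return alpha * gyro_roll + (1 - alpha) * acc_roll
```

The high-pass weight on the gyro path suppresses accelerometer noise spikes during impacts, while the small accelerometer weight continually corrects gyro drift.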
20 pages, 3407 KB  
Article
HT-NRC: A High-Throughput and Noise-Resilient Lossless Image Compression Architecture for Deep-Space CMOS Cameras
by Haoyu Wu, Yonglin Bai and Jiarui Gao
Appl. Sci. 2026, 16(6), 2873; https://doi.org/10.3390/app16062873 - 17 Mar 2026
Abstract
Lossless image compression is pivotal for deep-space exploration. Given the deep-space requirements for a high compression ratio and real-time processing, traditional image compression algorithms have garnered significant attention. However, existing algorithms struggle with real-time processing speed and suffer compression degradation in high-noise regions, failing to meet the throughput demands of next-generation sensors. To address these challenges, this paper proposes a high-throughput, noise-resilient lossless image compression architecture, named HT-NRC, for deep-space CMOS cameras. First, to overcome the throughput bottleneck, we introduce a parallel processing method built on an index-based dispatch and Reorder mechanism: pixel streams are dynamically distributed into parallel cores, and a Reorder Buffer restores the original sequence. Second, to mitigate low compression efficiency in noisy backgrounds, we present a Heterogeneous Dual-Path Coding scheme that adaptively separates structural information for predictive coding from stochastic noise for raw packing with a Bit-Plane Slicing (BPS) strategy. The proposed architecture was implemented on a Xilinx Virtex-7 FPGA (Xilinx, Inc., San Jose, CA, USA). Operating at 100 MHz, the system achieves a processing throughput of 414.7 Mpixel/s and a high average compression ratio on deep-space image datasets, while consuming an estimated total on-chip power of only 2.1 W. Experimental results show that our proposed method substantially outperforms existing baseline methods. Specifically, compared to an optimized serial JPEG-LS implementation processing one pixel per clock cycle, our parallel architecture achieves an approximately 314.7% increase in processing throughput. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
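The dispatch-and-reorder idea can be modeled behaviourally in a few lines: tag each pixel with its sequence index, let each "core" compress its lane independently, then drain a reorder buffer by index to restore stream order. A sketch of the concept only; the names and round-robin policy are assumptions, not the paper's hardware design:

```python
import heapq

def dispatch_and_reorder(pixels, n_cores, compress):
    """Index-based dispatch with a reorder buffer, modeled sequentially."""
    lanes = [[] for _ in range(n_cores)]
    for idx, px in enumerate(pixels):      # dispatch: tag with sequence index
        lanes[idx % n_cores].append((idx, px))
    done = []                              # reorder buffer (min-heap on index)
    for lane in lanes:                     # each core processes its own lane
        for idx, px in lane:
            heapq.heappush(done, (idx, compress(px)))
    # Drain in index order to restore the original pixel-stream ordering.
    return [heapq.heappop(done)[1] for _ in range(len(done))]
```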
11 pages, 383 KB  
Article
Manual Dexterity Shows Greater Discretionary Value than Sensor-Based Gait and Balance Measures in Identifying Early Functional Impairment in Multiple Sclerosis
by Mousa Hujirat and Alon Kalron
Sensors 2026, 26(6), 1866; https://doi.org/10.3390/s26061866 - 16 Mar 2026
Abstract
Objective: To determine which physical clinical test best differentiates minimally impaired people with MS (pwMS) from healthy controls and to compare the discriminatory value of upper limb clinical assessments with sensor-based gait and postural control measures.
Methods: Forty-one participants (21 pwMS, 20 matched healthy controls) completed a single testing session comprising upper limb clinical assessments (Nine-Hole Peg Test [9HPT], grip strength), gait tests (Timed 25-Foot Walk, Six-Minute Walk Test, and a cognitive–walking dual task), and static balance assessments using wearable inertial sensors (APDM Mobility Lab system). Dual-task costs (DTCs) were calculated for gait parameters. Between-group comparisons were performed using independent t-tests, and Pearson correlation analyses examined interrelationships among gait variables. A parsimonious binary logistic regression model was constructed, including non-dominant 9HPT and dual-task walking speed, and receiver operating characteristic (ROC) analyses evaluated discriminative performance and determined the optimal 9HPT cutoff.
Results: PwMS were significantly slower on the 9HPT with both hands (p ≤ 0.006) and showed reduced walking performance and higher gait DTCs (p ≤ 0.041) compared with controls. No significant group differences were observed in grip strength or sensor-based postural control. In the multivariable analysis, the overall model was significant (p < 0.001; Nagelkerke R2 = 0.49), and the non-dominant 9HPT remained the only independent predictor of group status (OR = 1.75, 95% CI [1.17–2.61]), whereas dual-task walking speed was not significant after adjustment. ROC analysis demonstrated good discriminative ability for the non-dominant 9HPT (AUC = 0.84, 95% CI [0.71–0.97]) and acceptable discrimination for dual-task walking speed (AUC = 0.75, 95% CI [0.60–0.90]). The optimal 9HPT cutoff was ≥21.4 s, yielding 71% sensitivity and 100% specificity in this cohort.
Conclusions: Manual dexterity of the non-dominant hand may serve as a sensitive screening marker of early functional impairment in MS, demonstrating greater discriminatory value than sensor-based gait and balance measures. These findings support the inclusion of upper limb dexterity testing in the routine assessment of minimally impaired pwMS. Validation in larger, longitudinal cohorts is warranted. Full article
(This article belongs to the Special Issue Sensor-Based Rehabilitation in Neurological Diseases)
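The dual-task cost and the cutoff-based sensitivity/specificity reported above follow standard formulations; the sketch below illustrates both under the usual convention that DTC is the relative performance decline from single- to dual-task walking (function names and example values are illustrative, not taken from the study):

```python
def dual_task_cost(single_task, dual_task):
    """Relative performance decline (%) from single- to dual-task conditions.

    Positive values indicate worse performance under the dual task,
    e.g. slower walking speed.
    """
    return (single_task - dual_task) / single_task * 100.0


def sensitivity_specificity(times, labels, cutoff):
    """Classify 9HPT completion times against a cutoff (time >= cutoff ->
    predicted pwMS) and return (sensitivity, specificity) versus the true
    labels (1 = pwMS, 0 = control)."""
    tp = sum(1 for t, y in zip(times, labels) if t >= cutoff and y == 1)
    fn = sum(1 for t, y in zip(times, labels) if t < cutoff and y == 1)
    tn = sum(1 for t, y in zip(times, labels) if t < cutoff and y == 0)
    fp = sum(1 for t, y in zip(times, labels) if t >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)


# Example: walking speed drops from 1.20 m/s to 1.02 m/s under the dual task
print(round(dual_task_cost(1.20, 1.02), 1))  # -> 15.0 (% cost)
```

A positive DTC for speed means the cognitive task slowed walking, matching the higher gait DTCs reported for pwMS.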

30 pages, 2135 KB  
Article
SBM–Attention U-Net: A Hybrid Transformer Network for Liver Tumor Segmentation in Medical Images
by Yiru Chen, Xuefeng Li, Yang Du, Hui Jiang, Xiaohui Liu, Nan Ma and Xuemei Wang
Sensors 2026, 26(6), 1851; https://doi.org/10.3390/s26061851 - 15 Mar 2026
Abstract
This study proposes a novel liver and liver tumor segmentation model. The architecture integrates BiFormer into the bottom two layers of the Attention U-Net encoder to enhance global semantic context modeling and establish long-range pixel-wise dependencies. The proposed spatial-channel dual attention (SCDA) mechanism is incorporated into the first three encoder layers to refine fine-grained feature processing, particularly for precise delineation of liver and tumor boundaries. Finally, a Mix Structure Block (MSB) is implemented within the decoder to optimize the fusion of deep semantic and shallow spatial features, thereby improving segmentation accuracy. Ablation experiments were conducted on three publicly available datasets: the model achieved a mean Dice coefficient of 0.9377 and a mean IoU of 0.8889 on 3Dircadb, 0.9257 and 0.8704 on LiTS, and 0.9611 and 0.9259 on CHAOS. These results validate the effectiveness of the proposed network. By enabling precise, automated segmentation directly from raw sensor-acquired medical images, the proposed method enhances the diagnostic value of these imaging sensors and facilitates more accurate clinical decision-making. Full article
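The Dice coefficient and IoU reported above are standard overlap measures between a predicted segmentation mask and the ground truth; a minimal sketch on flattened binary masks (not the paper's implementation) is:

```python
def dice_and_iou(pred, truth):
    """Overlap metrics for equal-length sequences of 0/1 pixel labels.

    Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B|.
    Empty prediction and ground truth count as a perfect match.
    """
    inter = sum(p and t for p, t in zip(pred, truth))  # pixels in both masks
    a, b = sum(pred), sum(truth)
    union = a + b - inter
    dice = 2 * inter / (a + b) if (a + b) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou


# Toy 1D masks: 3 overlapping pixels, 4 predicted, 5 true
pred = [1, 1, 1, 1, 0, 0, 0, 0]
truth = [0, 1, 1, 1, 1, 1, 0, 0]
d, j = dice_and_iou(pred, truth)  # Dice = 6/9 ≈ 0.667, IoU = 3/6 = 0.5
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why the reported Dice values sit consistently above the corresponding IoU values.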

23 pages, 6668 KB  
Article
Development of a Visual SLAM-Based Autonomous UAV System for Greenhouse Plant Monitoring
by Jing-Heng Lin and Ta-Te Lin
Drones 2026, 10(3), 205; https://doi.org/10.3390/drones10030205 - 15 Mar 2026
Abstract
Autonomous monitoring is essential for precision agriculture in greenhouses, yet deploying unmanned aerial vehicles (UAVs) in confined, GPS-denied environments remains limited by payload, power, and cost constraints. This study developed and validated an autonomous UAV system for reliable, low-cost operation in such conditions. The proposed system employs a dual-link edge-computing architecture: a lightweight onboard controller handles flight control and sensor acquisition, while visual simultaneous localization and mapping (V-SLAM) is offloaded to an edge computer via the FPV video link. Phenotyping (flower detection and tracking/counting) is performed offline from the side-view RGB stream and does not participate in the flight control loop. Using muskmelon (Cucumis melo L.) flower development as a case study, the UAV autonomously executed daily missions for 27 days in a commercial greenhouse, performing flower detection and tracking to monitor phenological dynamics. Localization and control accuracy were evaluated against a validated UWB reference system, achieving 5.4–8.0 cm 2D RMSE for trajectory tracking and 12.7 cm translation RMSE for greenhouse mapping. This work demonstrates a practical architecture for autonomous monitoring in GPS-denied agricultural environments, with operational boundaries characterized through the sustained field deployment. The system’s design principles may extend to other indoor or communication-limited scenarios requiring lightweight, intelligent robotic operation. Full article
(This article belongs to the Section Drones in Agriculture and Forestry)
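The 2D trajectory RMSE used for the UWB comparison is, in the usual formulation, the root-mean-square of per-sample planar position errors between the estimated and reference trajectories; a sketch under that assumption (the study's exact alignment procedure is not specified here):

```python
import math


def rmse_2d(traj, ref):
    """Root-mean-square 2D position error between an estimated trajectory
    and a time-aligned reference, each a sequence of (x, y) points in metres."""
    if len(traj) != len(ref):
        raise ValueError("trajectories must be time-aligned and equal length")
    sq_err = [(x - xr) ** 2 + (y - yr) ** 2
              for (x, y), (xr, yr) in zip(traj, ref)]
    return math.sqrt(sum(sq_err) / len(sq_err))


# Toy example: a constant 5 cm offset in x gives an RMSE of 0.05 m
est = [(0.05, 0.0), (1.05, 0.0), (2.05, 0.0)]
ref = [(0.00, 0.0), (1.00, 0.0), (2.00, 0.0)]
print(round(rmse_2d(est, ref), 3))  # -> 0.05
```

Because errors are squared before averaging, occasional large deviations (e.g. during aggressive maneuvers) dominate the metric, which makes the reported 5.4–8.0 cm range a fairly strict bound on tracking quality.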
