Search Results (660)

Search Parameters:
Keywords = multi-sensor information integration

26 pages, 12579 KB  
Article
Detecting Ship-to-Ship Transfer by MOSA: Multi-Source Observation Framework with SAR and AIS
by Peixin Cai, Bingxin Liu, Xiaoyang Li, Xinhao Li, Siqi Wang, Peng Liu, Peng Chen and Ying Li
Remote Sens. 2026, 18(3), 473; https://doi.org/10.3390/rs18030473 - 2 Feb 2026
Abstract
Ship-to-ship (STS) transfer has become a major concern for maritime security and regulatory authorities, as it is frequently exploited for smuggling and other illicit activities. Accurate and timely identification of STS events is therefore essential for effective maritime supervision. Existing monitoring approaches, however, suffer from two inherent limitations: AIS-based surveillance is vulnerable to intentional signal shutdown or manipulation, and remote-sensing-based ship detection alone lacks digital identity information and cannot assess the legitimacy of transfer activities. To address these challenges, we propose a Multi-source Observation framework with SAR and AIS (MOSA), which integrates SAR imagery with AIS data. The framework consists of two key components: STS-YOLO, a high-precision fine-grained ship detection model in which a dynamic adaptive feature extraction (DAFE) module and a multi-attention mechanism (MAM) enhance feature representation and robustness in complex maritime SAR scenes, and the SAR-AIS Consistency Analysis Workflow (SACA-Workflow), designed to identify suspected abnormal STS behaviors by analyzing inconsistencies between physical and digital ship identities. Experimental results on the SDFSD-v1.5 dataset demonstrate the quantitative gains and improved fine-grained detection performance of STS-YOLO in terms of standard detection metrics. Generalization experiments on large-scene SAR imagery from the waters near Panama and Singapore, together with multi-satellite SAR data (Capella Space and Umbra) from the Gibraltar region, further validate the cross-regional and cross-sensor robustness of the proposed framework. The effectiveness of the SACA-Workflow is evaluated qualitatively through representative case studies; in all evaluated scenarios, it effectively assists in identifying suspected abnormal STS events and revealing potential AIS inconsistency indicators. Overall, MOSA provides a robust and practical solution for multi-scenario maritime monitoring and supports reliable detection of suspected abnormal STS activities.
(This article belongs to the Special Issue Remote Sensing in Maritime Navigation and Transportation)
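The SACA-Workflow rests on cross-checking the physical picture from SAR detections against the digital picture from AIS reports. As a toy illustration of that idea (not the authors' implementation; the function name, local coordinate frame, and 200 m threshold are all assumptions), one ingredient is flagging SAR detections that have no nearby AIS report:

```python
from math import hypot

def flag_dark_ships(sar_detections, ais_positions, max_dist=200.0):
    """Return indices of SAR-detected ships with no AIS report within
    max_dist metres: candidate 'dark' vessels worth screening for
    suspected abnormal STS activity.

    Both inputs are lists of (x, y) positions in a local metric frame;
    the threshold is an illustrative assumption, not a value from the
    paper.
    """
    flagged = []
    for i, (sx, sy) in enumerate(sar_detections):
        if not any(hypot(sx - ax, sy - ay) <= max_dist
                   for ax, ay in ais_positions):
            flagged.append(i)
    return flagged
```

A real workflow would additionally compare the reported ship type and size against the SAR-derived fine-grained class, which is where a detector such as STS-YOLO comes in.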
Show Figures

Figure 1

27 pages, 496 KB  
Article
An Intelligent Sensing Framework for Early Ransomware Detection Using MHSA-LSTM Machine Learning
by Abdullah Alqahtani, Mordecai Opoku Ohemeng and Frederick T. Sheldon
Sensors 2026, 26(3), 952; https://doi.org/10.3390/s26030952 - 2 Feb 2026
Abstract
Ransomware represents a critical and evolving cybersecurity threat that often evades traditional defenses during its early stages. We present a novel intelligent sensing framework (ISF) designed for proactive, early-stage ransomware detection, centered on a Multi-Head Self-Attention Long Short-Term Memory (MHSA-LSTM) sensor model. The core innovation of this sensor is its self-attention mechanism, which is augmented to autonomously prioritize the most discriminative behavioral features by incorporating a relevance coefficient derived from information gain (μ), thereby filtering out noise and overcoming data scarcity inherent in initial attack phases. The framework was validated using a comprehensive dataset derived from the dynamic analysis of 39,378 ransomware samples and 9732 benign applications. The MHSA-LSTM sensor achieved superior performance, recording a peak accuracy of 98.4%, a low False Positive Rate (FPR) of 0.089, and an F1 score of 0.972 using an optimized 25-feature set. This performance consistently surpassed established sequence models, including CNN-LSTM and Stacked LSTM, confirming the significant potential of the ISF as a robust and scalable solution for enhancing defenses against modern, stealthy threats. Most significantly, integration of μ as a statistical anchor resulted in a 49% reduction in False Positive Rates (FPRs) compared to standard attention-based models. This addresses the main operational barrier to deploying deep learning sensors in live environments. Full article
(This article belongs to the Special Issue Intelligent Sensors for Security and Attack Detection)
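The relevance coefficient μ described above is derived from information gain over behavioral features. A minimal sketch of that derivation for discrete features (function names and the simple sum-to-one normalisation are assumptions; the paper's exact formulation is not reproduced here):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Information gain of one discrete feature w.r.t. the labels."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [y for f, y in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

def relevance_coefficients(features, labels):
    """Normalise per-feature information gain into weights summing to 1,
    a stand-in for the relevance coefficient used to bias attention."""
    gains = [information_gain(f, labels) for f in features]
    total = sum(gains) or 1.0
    return [g / total for g in gains]
```

Such weights can then scale attention scores so that discriminative behavioral features dominate and noisy ones are suppressed.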

22 pages, 4725 KB  
Article
Design of Multi-Source Fusion Wireless Acquisition System for Grid-Forming SVG Device Valve Hall
by Liqian Liao, Yuanwei Zhou, Guangyu Tang, Jiayi Ding, Ping Wang, Bo Yin, Liangbo Xie, Jie Zhang and Hongxin Zhong
Electronics 2026, 15(3), 641; https://doi.org/10.3390/electronics15030641 - 2 Feb 2026
Abstract
With the increasing deployment of grid-forming static var generators (GFM-SVG) in modern power systems, the reliability of the valve hall that houses the core power modules has become a critical concern. To overcome the limitations of conventional wired monitoring systems—complex cabling, poor scalability, and incomplete state perception—this paper proposes and implements a multi-source fusion wireless data acquisition system specifically designed for GFM-SVG valve halls. The system integrates acoustic, visual, and infrared sensing nodes into a wireless sensor network (WSN) to cooperatively capture thermoacoustic visual multi-physics information of key components. A dual-mode communication scheme, using Wireless Fidelity (Wi-Fi) as the primary link and Fourth-Generation Mobile Communication Network (4G) as a backup channel, is adopted together with data encryption, automatic reconnection, and retransmission-checking mechanisms to ensure reliable operation in strong electromagnetic interference environments. The main innovation lies in a multi-source information fusion algorithm based on an improved Dempster–Shafer (D–S) evidence theory, which is combined with the object detection capability of the You Only Look Once, Version 8 (YOLOv8) model to effectively handle the uncertainty and conflict of heterogeneous data sources. This enables accurate identification and early warning of multiple types of faults, including local overheating, abnormal acoustic signatures, and coolant leakage. Experimental results demonstrate that the proposed system achieves a fault-diagnosis accuracy of 98.5%, significantly outperforming single-sensor approaches, and thus provides an efficient and intelligent operation-and-maintenance solution for ensuring the safe and stable operation of GFM-SVG equipment. Full article
(This article belongs to the Section Industrial Electronics)
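The improved Dempster-Shafer fusion itself is not specified in the abstract, but the classical combination rule it builds on is compact enough to sketch. The version below (hypothetical names; singleton hypotheses only, no composite focal sets, and none of the paper's improvements) shows how conflicting evidence from, say, an infrared node and an acoustic node is combined and normalised:

```python
def ds_combine(m1, m2):
    """Combine two basic-probability-assignment dicts with Dempster's
    rule, restricted to singleton hypotheses.

    m1, m2 map a hypothesis (e.g. 'overheat', 'leak', 'normal') to a
    mass; each dict's masses must sum to 1. Conflicting mass (the two
    sources backing different hypotheses) is discarded and the rest is
    renormalised.
    """
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 - conflict
    return {a: m1[a] * m2.get(a, 0.0) / k for a in m1}
```

Fusing a mildly confident thermal reading with a mildly confident acoustic one yields a sharper joint belief than either source alone, which is the intuition behind multi-source fault diagnosis.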

30 pages, 14668 KB  
Article
RAPT-Net: Reliability-Aware Precision-Preserving Tolerance-Enhanced Network for Tiny Target Detection in Wide-Area Coverage Aerial Remote Sensing
by Peida Zhou, Xiaojun Guo, Xiaoyong Sun, Bei Sun, Shaojing Su, Wei Jiang, Runze Guo, Zhaoyang Dang and Siyang Huang
Remote Sens. 2026, 18(3), 449; https://doi.org/10.3390/rs18030449 - 1 Feb 2026
Abstract
Multi-platform aerial remote sensing supports critical applications including wide-area surveillance, traffic monitoring, maritime security, and search and rescue. However, constrained by observation altitude and sensor resolution, targets inherently exhibit small-scale characteristics, making small object detection a fundamental bottleneck. Aerial remote sensing faces three unique challenges: (1) spatial heterogeneity of modality reliability due to scene diversity and illumination dynamics; (2) conflict between precise localization requirements and progressive spatial information degradation; (3) annotation ambiguity from imaging physics conflicting with IoU-based training. This paper proposes RAPT-Net with three core modules: MRAAF achieves scene-adaptive modality integration through two-stage progressive fusion; CMFE-SRP employs hierarchy-specific processing to balance spatial details and semantic enhancement; DS-STD increases positive sample coverage to 4× through spatial tolerance expansion. Experiments on VEDAI (satellite) and RGBT-Tiny (UAV) demonstrate mAP values of 62.22% and 18.52%, improving over the state of the art by 4.3% and 10.3%, with a 17.3% improvement on extremely tiny targets. Full article
(This article belongs to the Special Issue Small Target Detection, Recognition, and Tracking in Remote Sensing)
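DS-STD's spatial tolerance expansion can be illustrated with a toy label assigner (purely illustrative; the 4x positive-sample figure in the abstract comes from the paper's specific design, not from this sketch). Widening the set of grid cells counted as positive around a tiny target's centre increases the supervision signal available during training:

```python
def positive_cells(center, grid_size, tol=0):
    """Grid cells treated as positive samples for a target centre.

    With tol=0 only the cell containing the centre is positive; a
    spatial tolerance of tol extra cells in each direction expands the
    positive set, easing supervision for extremely tiny targets.
    """
    cx, cy = int(center[0]), int(center[1])
    cells = set()
    for dx in range(-tol, tol + 1):
        for dy in range(-tol, tol + 1):
            x, y = cx + dx, cy + dy
            if 0 <= x < grid_size and 0 <= y < grid_size:
                cells.add((x, y))
    return cells
```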

21 pages, 2013 KB  
Article
Machine Learning Models for Reliable Gait Phase Detection Using Lower-Limb Wearable Sensor Data
by Muhammad Fiaz, Rosita Guido and Domenico Conforti
Appl. Sci. 2026, 16(3), 1397; https://doi.org/10.3390/app16031397 - 29 Jan 2026
Abstract
Accurate gait-phase detection is essential for rehabilitation monitoring, prosthetic control, and human–robot interaction. Artificial intelligence supports continuous, personalized mobility assessment by extracting clinically meaningful patterns from wearable sensors. A richer view of gait dynamics can be achieved by integrating additional signals, including inertial, plantar flex, footswitch, and EMG data, leading to more accurate and informative gait analysis. Motivated by these needs, this study investigates discrete gait-phase recognition for the right leg using a multi-subject IMU dataset collected from lower-limb sensors. IMU recordings were segmented into 128-sample windows across 23 channels, and each window was flattened into a 2944-dimensional feature vector. To ensure reliable ground-truth labels, we developed an automatic relabeling pipeline incorporating heel-strike and toe-off detection, adaptive threshold tuning, and sensor fusion across sensor modalities. These windowed vectors were then used to train a comprehensive suite of machine learning models, including Random Forests, Extra Trees, k-Nearest Neighbors, XGBoost, and LightGBM. All models underwent systematic hyperparameter tuning, and their performance was assessed through k-fold cross-validation. The results demonstrate that tree-based ensemble models provide accurate and stable gait-phase classification with accuracy exceeding 97% across both test sets, underscoring their potential for future real-time gait analysis and lower-limb assistive technologies. Full article
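The windowing step described above (128-sample windows over 23 channels, flattened to 2944-dimensional vectors) can be sketched as follows; the function name and the 50% overlap stride are assumptions, since the abstract does not state how windows overlap:

```python
def window_features(signal, win=128, step=64):
    """Segment a multi-channel recording into fixed-length windows and
    flatten each window into one feature vector.

    signal: list of per-sample lists, each with the same channel count
    (23 channels in the paper's setup, giving 128 * 23 = 2944 features
    per window).
    """
    vectors = []
    for start in range(0, len(signal) - win + 1, step):
        window = signal[start:start + win]
        vectors.append([v for sample in window for v in sample])
    return vectors
```

The resulting vectors are what tree-based ensembles such as Random Forests or LightGBM would consume directly.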

17 pages, 3304 KB  
Article
High-Resolution Azimuth Estimation Method Based on a Pressure-Gradient MEMS Vector Hydrophone
by Xiao Chen, Ying Zhang and Yujie Chen
Micromachines 2026, 17(2), 167; https://doi.org/10.3390/mi17020167 - 27 Jan 2026
Abstract
The pressure-gradient Micro-Electro-Mechanical Systems (MEMS) vector hydrophone is a novel type of sensor capable of simultaneously acquiring both scalar and vectorial information within an acoustic field. Conventional azimuth estimation methods struggle to achieve high-resolution localization using a single pressure-gradient MEMS vector hydrophone. In practical marine environments, the multiple signal classification (MUSIC) algorithm is hampered by significant resolution performance loss. Similarly, the complex acoustic intensity (CAI) method is constrained by a high-resolution threshold for multiple targets, often resulting in inaccurate azimuth estimates. Therefore, a cross-spectral model between the acoustic pressure and the particle velocity for the pressure-gradient MEMS vector hydrophone was established. Integrated with an improved particle swarm optimization (IPSO) algorithm, a high-resolution azimuth estimation method utilizing this hydrophone is proposed. Furthermore, the corresponding Cramér-Rao Bound is derived. Simulation results demonstrate that the proposed algorithm accurately resolves two targets separated by only 5° at a low signal-to-noise ratio (SNR) of 5 dB, boasting a root mean square error of approximately 0.35° and a 100% success rate. Compared with the CAI method and the MUSIC algorithm, the proposed method achieves a lower resolution threshold and higher estimation accuracy, alongside low computational complexity that enables efficient real-time processing. Field tests in an actual seawater environment validate the algorithm’s high-resolution performance as predicted by simulations, thus confirming its practical efficacy. The proposed algorithm addresses key limitations in underwater detection by enhancing system robustness and offering high-resolution azimuth estimation. This capability holds promise for extending to multi-target scenarios in complex marine settings. Full article
(This article belongs to the Special Issue Micro Sensors and Devices for Ocean Engineering)
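The abstract does not detail the IPSO improvements, but a bare-bones particle swarm over a single azimuth parameter conveys the underlying search idea. All names, coefficients, and the toy quadratic cost below are assumptions; in the paper the cost would come from the pressure/particle-velocity cross-spectral model:

```python
import random

def pso_minimise(cost, lo, hi, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimiser over one scalar parameter
    (standing in here for the azimuth angle, in degrees)."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # per-particle best positions
    gbest = min(pos, key=cost)          # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
            if cost(pos[i]) < cost(gbest):
                gbest = pos[i]
    return gbest
```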

23 pages, 3420 KB  
Article
Design of a Wireless Monitoring System for Cooling Efficiency of Grid-Forming SVG
by Liqian Liao, Jiayi Ding, Guangyu Tang, Yuanwei Zhou, Jie Zhang, Hongxin Zhong, Ping Wang, Bo Yin and Liangbo Xie
Electronics 2026, 15(3), 520; https://doi.org/10.3390/electronics15030520 - 26 Jan 2026
Abstract
The grid-forming static var generator (SVG) is a key device that supports the stable operation of power grids with a high penetration of renewable energy. The cooling efficiency of its forced water-cooling system directly determines the reliability of the entire unit. However, existing wired monitoring methods suffer from complex cabling and limited capacity to provide a full perception of the water-cooling condition. To address these limitations, this study develops a wireless monitoring system based on multi-source information fusion for real-time evaluation of cooling efficiency and early fault warning. A heterogeneous wireless sensor network was designed and implemented by deploying liquid-level, vibration, sound, and infrared sensors at critical locations of the SVG water-cooling system. These nodes work collaboratively to collect multi-physical field data—thermal, acoustic, vibrational, and visual information—in an integrated manner. The system adopts a hybrid Wireless Fidelity/Bluetooth (Wi-Fi/Bluetooth) networking scheme with electromagnetic interference-resistant design to ensure reliable data transmission in the complex environment of converter valve halls. To achieve precise and robust diagnosis, a three-layer hierarchical weighted fusion framework was established, consisting of individual sensor feature extraction and preliminary analysis, feature-level weighted fusion, and final fault classification. Experimental validation indicates that the proposed system achieves highly reliable data transmission with a packet loss rate below 1.5%. Compared with single-sensor monitoring, the multi-source fusion approach improves the diagnostic accuracy for pump bearing wear, pipeline micro-leakage, and radiator blockage to 98.2% and effectively distinguishes fault causes and degradation tendencies of cooling efficiency. 
Overall, the developed wireless monitoring system overcomes the limitations of traditional wired approaches and, by leveraging multi-source fusion technology, enables a comprehensive assessment of cooling efficiency and intelligent fault diagnosis. This advancement significantly enhances the precision and reliability of SVG operation and maintenance, providing an effective solution to ensure the safe and stable operation of both grid-forming SVG units and the broader power grid. Full article
(This article belongs to the Section Industrial Electronics)

38 pages, 9532 KB  
Article
Methods for GIS-Driven Airspace Management: Integrating Unmanned Aircraft Systems (UASs), Advanced Air Mobility (AAM), and Crewed Aircraft in the NAS
by Ryan P. Case and Joseph P. Hupy
Drones 2026, 10(2), 82; https://doi.org/10.3390/drones10020082 - 24 Jan 2026
Abstract
The rapid growth of Unmanned Aircraft Systems (UASs) and Advanced Air Mobility (AAM) presents significant integration and safety challenges for the National Airspace System (NAS), often relying on disconnected Air Traffic Management (ATM) and Unmanned Aircraft System Traffic Management (UTM) practices that contribute to airspace incidents. This study evaluates Geographic Information Systems (GISs) as a unified, data-driven framework to enhance shared airspace safety and efficiency. A comprehensive, multi-phase methodology was developed using GIS (specifically Esri ArcGIS Pro) to integrate heterogeneous aviation data, including FAA aeronautical data, Automatic Dependent Surveillance–Broadcast (ADS-B) for crewed aircraft, and UAS Flight Records, necessitating detailed spatial–temporal data preprocessing for harmonization. The effectiveness of this GIS-based approach was demonstrated through a case study analyzing a critical interaction between a University UAS (Da-Jiang Innovations (DJI) M300) and a crewed Piper PA-28-181 near Purdue University Airport (KLAF). The resulting two-dimensional (2D) and three-dimensional (3D) models successfully enabled the visualization, quantitative measurement, and analysis of aircraft trajectories, confirming a minimum separation of approximately 459 feet laterally and 339 feet vertically. The findings confirm that a GIS offers a centralized, scalable platform for collating, analyzing, modeling, and visualizing air traffic operations, directly addressing ATM/UTM integration deficiencies. This GIS framework, especially when combined with advancements in sensor technologies and Artificial Intelligence (AI) for anomaly detection, is critical for modernizing NAS oversight, improving situational awareness, and establishing a foundation for real-time risk prediction and dynamic airspace management. Full article
(This article belongs to the Special Issue Urban Air Mobility Solutions: UAVs for Smarter Cities)
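The separation figures quoted above come from GIS measurement of time-aligned trajectories. A minimal version of that computation (hypothetical function; it assumes both tracks are already resampled to common timestamps and projected into a common coordinate frame) is:

```python
from math import hypot

def min_separation(track_a, track_b):
    """Minimum lateral and vertical separation between two
    time-aligned trajectories.

    track_a, track_b: lists of (x, y, z) fixes sampled at the same
    instants, in whatever units the projected coordinates use.
    Returns (min_lateral, min_vertical); the two minima may occur at
    different instants.
    """
    lateral = min(hypot(ax - bx, ay - by)
                  for (ax, ay, _), (bx, by, _) in zip(track_a, track_b))
    vertical = min(abs(az - bz)
                   for (_, _, az), (_, _, bz) in zip(track_a, track_b))
    return lateral, vertical
```

The real analysis adds the spatial-temporal harmonisation of ADS-B and UAS flight records that precedes this step, which is where most of the preprocessing effort lies.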

26 pages, 7633 KB  
Review
Compound Meta-Optics for Advanced Optical Engineering
by Hak-Ryeol Lee, Dohyeon Kim and Sun-Je Kim
Sensors 2026, 26(3), 792; https://doi.org/10.3390/s26030792 - 24 Jan 2026
Abstract
Compound meta-optics, characterized by complex optical architectures containing one or more meta-optics elements, has emerged as a powerful paradigm for overcoming the physical limitations of single-layer metasurfaces. This review systematically examines recent progress in this burgeoning field, focusing primarily on the development of high-performance optical systems for imaging, display, sensing, and computing. We first focus on the design of compound metalens architectures that integrate metalenses with additional elements such as an iris, refractive optics, or other meta-optics elements. These configurations improve multiple image-quality metrics simultaneously by correcting monochromatic and chromatic aberrations, expanding the field of view, and enhancing overall efficiency, thereby enabling practical applications in next-generation cameras and sensors. Furthermore, we explore the advancement of cascaded metasurfaces in the realm of wave optics, specifically for advanced meta-holography and optical computing. These multi-layered systems facilitate complex wavefront engineering, leading to significant increases in information capacity and functionality for security and analog optical computing applications. By providing a comprehensive overview of fundamental principles, design strategies, and emerging applications, this review aims to offer optics engineers from a variety of professional backgrounds a clear perspective on the pivotal role of compound meta-optics in devising and optimizing compact, multifunctional optical systems.
(This article belongs to the Section Optical Sensors)

26 pages, 4329 KB  
Review
Advanced Sensor Technologies in Cutting Applications: A Review
by Motaz Hassan, Roan Kirwin, Chandra Sekhar Rakurty and Ajay Mahajan
Sensors 2026, 26(3), 762; https://doi.org/10.3390/s26030762 - 23 Jan 2026
Abstract
Advances in sensing technologies are increasingly transforming cutting operations by enabling data-driven condition monitoring, predictive maintenance, and process optimization. This review surveys recent developments in sensing modalities for cutting systems, including vibration sensors, acoustic emission sensors, optical and vision-based systems, eddy-current sensors, force sensors, and emerging hybrid/multi-modal sensing frameworks. Each sensing approach offers unique advantages in capturing mechanical, acoustic, geometric, or electromagnetic signatures related to tool wear, process instability, and fault development, while also showing modality-specific limitations such as noise sensitivity, environmental robustness, and integration complexity. Recent trends show a growing shift toward hybrid and multi-modal sensor fusion, where data from multiple sensors are combined using advanced data analytics and machine learning to improve diagnostic accuracy and reliability under changing cutting conditions. The review also discusses how artificial intelligence, Internet of Things connectivity, and edge computing enable scalable, real-time monitoring solutions, along with the challenges related to data needs, computational costs, and system integration. Future directions highlight the importance of robust fusion architectures, physics-informed and explainable models, digital twin integration, and cost-effective sensor deployment to accelerate adoption across various manufacturing environments. Overall, these advancements position advanced sensing and hybrid monitoring strategies as key drivers of intelligent, Industry 4.0-oriented cutting processes. Full article

54 pages, 3083 KB  
Review
A Survey on Green Wireless Sensing: Energy-Efficient Sensing via WiFi CSI and Lightweight Learning
by Rod Koo, Xihao Liang, Deepak Mishra and Aruna Seneviratne
Energies 2026, 19(2), 573; https://doi.org/10.3390/en19020573 - 22 Jan 2026
Abstract
Conventional sensing expends energy at three stages: powering dedicated sensors, transmitting measurements, and executing computationally intensive inference. Wireless sensing re-purposes WiFi channel state information (CSI) inherent in every packet, eliminating extra sensors and uplink traffic, though reliance on deep neural networks (DNNs) often trained and run on graphics processing units (GPUs) can negate these gains. This review highlights two core energy efficiency levers in CSI-based wireless sensing. First, ambient CSI harvesting cuts power use by an order of magnitude compared to radar and active Internet of Things (IoT) sensors. Second, integrated sensing and communication (ISAC) embeds sensing functionality into existing WiFi links, thereby reducing device count, battery waste, and carbon impact. We review conventional handcrafted and accuracy-first methods to set the stage for surveying green learning strategies and lightweight learning techniques, including compact hybrid neural architectures, pruning, knowledge distillation, quantisation, and semi-supervised training that preserve accuracy while reducing model size and memory footprint. We also discuss hardware co-design, from low-power microcontrollers to edge application-specific integrated circuits (ASICs) and WiFi firmware extensions, that aligns computation with platform constraints. Finally, we identify open challenges in domain-robust compression, multi-antenna calibration, energy-proportionate model scaling, and standardised joules-per-inference metrics. Our aim is a practical, battery-friendly wireless sensing stack ready for smart-home and 6G-era deployments.
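Of the lightweight-learning levers surveyed, quantisation is the easiest to show concretely. The sketch below is illustrative only (function names are assumptions, and real deployments typically use per-channel scales and calibrated clipping): it applies uniform symmetric post-training quantisation to a list of weights, trading a little precision for a roughly 4x smaller footprint when moving from 32-bit floats to 8-bit integers.

```python
def quantise(weights, bits=8):
    """Uniform symmetric post-training quantisation of a weight list.

    Floats are mapped to integers in [-(2**(bits-1) - 1),
    2**(bits-1) - 1] via a single scale derived from the largest
    magnitude, then mapped back for a dequantised view.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]         # stored as ints
    return [qi * scale for qi in q], q, scale       # dequantised view
```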

35 pages, 5497 KB  
Article
Robust Localization of Flange Interface for LNG Tanker Loading and Unloading Under Variable Illumination: A Fusion Approach of Monocular Vision and LiDAR
by Mingqin Liu, Han Zhang, Jingquan Zhu, Yuming Zhang and Kun Zhu
Appl. Sci. 2026, 16(2), 1128; https://doi.org/10.3390/app16021128 - 22 Jan 2026
Abstract
The automated localization of the flange interface in LNG tanker loading and unloading imposes stringent requirements for accuracy and illumination robustness. Traditional monocular vision methods are prone to localization failure under extreme illumination conditions, such as intense glare or low light, while LiDAR, despite being unaffected by illumination, suffers from limitations such as a lack of texture information. This paper proposes an illumination-robust localization method for LNG tanker flange interfaces by fusing monocular vision and LiDAR, with three scenario-specific innovations beyond generic multi-sensor fusion frameworks. First, an illumination-adaptive fusion framework is designed to dynamically adjust detection parameters via grayscale mean evaluation, addressing extreme illumination (e.g., glare, low light with water film). Second, a multi-constraint flange detection strategy is developed by integrating physical dimension constraints, K-means clustering, and weighted fitting to eliminate background interference and distinguish dual flanges. Third, a customized fusion pipeline (ROI extraction, plane fitting, and 3D circle-center solving) is established to compensate for monocular depth errors and sparse LiDAR point cloud limitations using a flange-radius prior. High-precision localization is achieved via four key steps: multi-modal data preprocessing, LiDAR-camera spatial projection, fusion-based flange circle detection, and 3D circle center fitting. While basic techniques such as LiDAR-camera spatiotemporal synchronization and K-means clustering are adapted from prior works, their integration with flange-specific constraints and illumination-adaptive design forms the core novelty of this study.
Comparative experiments between the proposed fusion method and the monocular vision-only localization method are conducted under four typical illumination scenarios: uniform illumination, local strong illumination, uniform low illumination, and low illumination with water film. The experimental results based on 20 samples per illumination scenario (80 valid data sets in total) show that, compared with the monocular vision method, the proposed fusion method reduces the Mean Absolute Error (MAE) of localization accuracy by 33.08%, 30.57%, and 75.91% in the X, Y, and Z dimensions, respectively, with the overall 3D MAE reduced by 61.69%. Meanwhile, the Root Mean Square Error (RMSE) in the X, Y, and Z dimensions is decreased by 33.65%, 32.71%, and 79.88%, respectively, and the overall 3D RMSE is reduced by 64.79%. The expanded sample size verifies the statistical reliability of the proposed method, which exhibits significantly superior robustness to extreme illumination conditions. Full article
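The MAE and RMSE reductions reported above are per-axis error statistics over the 80 samples. For reference, the sketch below (hypothetical function name) computes both metrics from paired estimates and ground truths:

```python
from math import sqrt

def axis_errors(estimates, truths):
    """Per-axis MAE and RMSE for 3D localization results.

    estimates, truths: lists of (x, y, z) tuples of equal length.
    Returns two tuples, (mae_x, mae_y, mae_z) and
    (rmse_x, rmse_y, rmse_z).
    """
    n = len(estimates)
    mae, rmse = [], []
    for axis in range(3):
        errs = [e[axis] - t[axis] for e, t in zip(estimates, truths)]
        mae.append(sum(abs(x) for x in errs) / n)
        rmse.append(sqrt(sum(x * x for x in errs) / n))
    return tuple(mae), tuple(rmse)
```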

22 pages, 3491 KB  
Article
Synergistic Effects and Differential Roles of Dual-Frequency and Multi-Dimensional SAR Features in Forest Aboveground Biomass and Component Estimation
by Yifan Hu, Yonghui Nie, Haoyuan Du and Wenyi Fan
Remote Sens. 2026, 18(2), 366; https://doi.org/10.3390/rs18020366 - 21 Jan 2026
Abstract
Accurate quantification of forest aboveground biomass (AGB) is essential for monitoring terrestrial carbon stocks. While total AGB estimation is widely practiced, resolving component biomass such as canopy, branches, leaves, and trunks enhances the precision of carbon sink assessments and provides critical structural parameters for ecosystem modeling. Most studies rely on a single SAR sensor or a limited range of SAR features, which restricts their ability to represent vegetation structural complexity and reduces biomass estimation accuracy. Here, we propose a phased fusion strategy that integrates backscatter intensity, interferometric coherence, texture measures, and polarimetric decomposition parameters derived from dual-frequency ALOS-2, GF-3, and Sentinel-1A SAR data. These complementary multi-dimensional SAR features are incorporated into a Random Forest model optimized with an Adaptive Genetic Algorithm (RF-AGA) to estimate total and component forest biomass. The results show that the progressive incorporation of coherence and texture features markedly improved model performance, increasing the accuracy of total AGB to R2 = 0.88 and canopy biomass to R2 = 0.78 under leave-one-out cross-validation. Feature contribution analysis indicates strong complementarity among SAR parameters. Polarimetric decomposition yielded the largest overall contribution, while L-band volume scattering was the primary driver of trunk and canopy estimation. Incorporating coherence increased trunk-prediction R2 by 13 percent, and texture improved canopy representation by capturing structural heterogeneity and reducing saturation effects. This study confirms that integrating coherence and texture information within the RF-AGA framework enhances AGB estimation, and that the differential contributions of multi-dimensional SAR parameters across total and component biomass estimation originate from their distinct structural characteristics.
The proposed framework provides a robust foundation for regional carbon monitoring and highlights the value of integrating complementary SAR features with ensemble learning to achieve high-precision forest carbon assessment.
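As a rough illustration of the Adaptive Genetic Algorithm component, the sketch below maximizes a two-parameter toy objective with a GA whose mutation strength adapts to population diversity. It is a stand-in only: in the paper the fitness would be Random Forest cross-validation accuracy over hyperparameters, and every name and setting here is hypothetical.

```python
import numpy as np

def adaptive_ga(fitness, bounds, pop_size=30, gens=60, seed=0):
    """Toy adaptive GA (maximization). Mutation strength shrinks as the
    population converges, loosely mirroring adaptive-GA hyperparameter tuning."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        fit = np.array([fitness(p) for p in pop])
        best = pop[np.argmax(fit)].copy()
        # binary tournament selection
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[a] >= fit[b], a, b)]
        # uniform crossover with a randomly permuted mate
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, mates)
        # adaptive mutation: scale tracks the current population spread
        sigma = 0.5 * pop.std(axis=0) + 1e-9
        children += rng.normal(0.0, sigma, children.shape)
        children[0] = best                      # elitism
        pop = np.clip(children, lo, hi)
    return pop[np.argmax([fitness(p) for p in pop])]

# toy objective with its maximum at (3, 3) on [0, 10]^2
best_params = adaptive_ga(lambda x: -((x - 3.0) ** 2).sum(),
                          [(0, 10), (0, 10)])
print(best_params)
```

Tying the mutation scale to the population spread is one simple way to get "adaptive" behavior: exploration is wide early on and narrows automatically as selection concentrates the population.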
(This article belongs to the Special Issue Advances in Multi-Sensor Remote Sensing for Vegetation Monitoring)

35 pages, 3598 KB  
Article
PlanetScope Imagery and Hybrid AI Framework for Freshwater Lake Phosphorus Monitoring and Water Quality Management
by Ying Deng, Daiwei Pan, Simon X. Yang and Bahram Gharabaghi
Water 2026, 18(2), 261; https://doi.org/10.3390/w18020261 - 19 Jan 2026
Abstract
Accurate estimation of Total Phosphorus, referred to as “Phosphorus, Total” (PPUT; µg/L) in the sourced monitoring data, is essential for understanding eutrophication dynamics and guiding water-quality management in inland lakes. However, lake-wide PPUT mapping at high resolution is challenging to achieve using conventional in-situ sampling, and nearshore gradients are often poorly resolved by medium- or low-resolution satellite sensors. This study exploits multi-generation PlanetScope imagery (Dove Classic, Dove-R, and SuperDove; 3–5 m, near-daily revisit) to develop a hybrid AI framework for PPUT retrieval in Lake Simcoe, Ontario, Canada. PlanetScope surface reflectance, short-term meteorological descriptors (3 to 7-day aggregates of air temperature, wind speed, precipitation, and sea-level pressure), and in-situ Secchi depth (SSD) were used to train five ensemble-learning models (HistGradientBoosting, CatBoost, RandomForest, ExtraTrees, and GradientBoosting) across eight feature-group regimes that progressively extend from bands-only, to combinations with spectral indices and day-of-year (DOY), and finally to SSD-inclusive full-feature configurations. The inclusion of SSD led to a strong and systematic performance gain, with mean R2 increasing from about 0.67 (SSD-free) to 0.94 (SSD-aware), confirming that vertically integrated optical clarity is the dominant constraint on PPUT retrieval and cannot be reconstructed from surface reflectance alone. To enable scalable SSD-free monitoring, a knowledge-distillation strategy was implemented in which an SSD-aware teacher transfers its learned representation to a student using only satellite and meteorological inputs. The optimal student model, based on a compact subset of 40 predictors, achieved R2 = 0.83, RMSE = 9.82 µg/L, and MAE = 5.41 µg/L, retaining approximately 88% of the teacher’s explanatory power. 
Application of the student model to PlanetScope scenes from 2020 to 2025 produces meter-scale PPUT maps; a 26 July 2024 case study shows that >97% of the lake surface remains below 10 µg/L, while rare (<1%) but coherent hotspots above 20 µg/L align with tributary mouths and narrow channels. The results demonstrate that combining commercial high-resolution imagery with physics-informed feature engineering and knowledge transfer enables scalable and operationally relevant monitoring of lake phosphorus dynamics. These high-resolution PPUT maps enable lake managers to identify nearshore nutrient hotspots and tributary plume structures. In doing so, the proposed framework supports targeted field sampling, early warning for eutrophication events, and more robust, lake-wide nutrient budgeting.
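The teacher-student distillation step can be sketched in a few lines. Below, plain least-squares regressors stand in for the paper's ensemble learners, and the synthetic data only assumes that clarity (SSD) is partially predictable from the other inputs; all names and numbers are illustrative, not the study's.

```python
import numpy as np

def lstsq_fit(X, y):
    """Ordinary least squares with an intercept; returns a predict function."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Xn: np.column_stack([Xn, np.ones(len(Xn))]) @ w

def r2(y, yhat):
    return 1.0 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                  # reflectance + meteorology stand-ins
ssd = X[:, 0] + rng.normal(size=500)           # clarity: only partly predictable from X
y = 3.0 * ssd + X[:, 1] + 0.2 * rng.normal(size=500)   # synthetic "PPUT"

# teacher sees SSD; the student is fit to the teacher's soft targets using X only
teacher = lstsq_fit(np.column_stack([X, ssd]), y)
soft_targets = teacher(np.column_stack([X, ssd]))
student = lstsq_fit(X, soft_targets)

r2_teacher = r2(y, teacher(np.column_stack([X, ssd])))
r2_student = r2(y, student(X))
print(round(r2_teacher, 3), round(r2_student, 3))
```

The student recovers only the part of the teacher's skill that is expressible from satellite and meteorological inputs, which is exactly the trade-off the abstract quantifies (roughly 88% of the teacher's explanatory power in the paper's setting).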
(This article belongs to the Section Water Quality and Contamination)

17 pages, 5869 KB  
Article
Research on Tool Wear Prediction Method Based on CNN-ResNet-CBAM-BiGRU
by Bo Sun, Hao Wang, Jian Zhang, Lixin Zhang and Xiangqin Wu
Sensors 2026, 26(2), 661; https://doi.org/10.3390/s26020661 - 19 Jan 2026
Abstract
Aiming to address insufficient feature extraction, vanishing gradients, and low prediction accuracy in tool wear prediction, this paper proposes a hybrid deep neural network based on a Convolutional Neural Network (CNN), Residual Network (ResNet) residual connections, the Convolutional Block Attention Module (CBAM), and a Bidirectional Gated Recurrent Unit (BiGRU). First, a 34-dimensional multi-domain feature set covering the time domain, frequency domain, and time–frequency domain is constructed, and multi-sensor signals are standardized using z-score normalization. A CNN–BiGRU backbone is then established, where ResNet-style residual connections are introduced to alleviate training degradation and mitigate vanishing-gradient issues in deep networks. Meanwhile, CBAM is integrated into the feature extraction module to adaptively reweight informative features in both channel and spatial dimensions. In addition, a BiGRU layer is embedded for temporal modeling to capture bidirectional dependencies throughout the wear evolution process. Finally, a fully connected layer is used as a regressor to map high-dimensional representations to tool wear values. Experiments on the PHM2010 dataset demonstrate that the proposed hybrid architecture is more stable and achieves better predictive performance than several mainstream deep learning baselines. Systematic ablation studies further quantify the contribution of each component: compared with the baseline CNN model, the mean absolute error (MAE) is reduced by 47.5%, the root mean square error (RMSE) is reduced by 68.5%, and the coefficient of determination (R2) increases by 14.5%, enabling accurate tool wear prediction.
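The z-score standardization step applied to the multi-sensor feature set is straightforward; a minimal sketch with hypothetical names, where the per-feature statistics are computed on the training split only and reused at inference, as is standard practice:

```python
import numpy as np

def zscore_fit(train):
    """Per-feature mean/std from the training split of a 34-D feature set."""
    mu = train.mean(axis=0)
    sd = train.std(axis=0)
    sd[sd == 0] = 1.0              # guard against constant features
    return mu, sd

def zscore_apply(data, mu, sd):
    return (data - mu) / sd

rng = np.random.default_rng(42)
train = rng.normal(loc=5.0, scale=2.0, size=(200, 34))   # stand-in sensor features
test = rng.normal(loc=5.0, scale=2.0, size=(50, 34))
mu, sd = zscore_fit(train)
train_z = zscore_apply(train, mu, sd)
test_z = zscore_apply(test, mu, sd)    # test data uses the training statistics
print(train_z.mean().round(6), train_z.std().round(6))
```

Reusing the training statistics on held-out data avoids leaking test-set information into the normalization, which matters when comparing models on benchmarks such as PHM2010.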
(This article belongs to the Section Sensor Networks)
