Search Results (9,003)

Search Parameters:
Keywords = sensor imaging

22 pages, 1258 KiB  
Article
Spatiotemporal Anomaly Detection in Distributed Acoustic Sensing Using a GraphDiffusion Model
by Seunghun Jeong, Huioon Kim, Young Ho Kim, Chang-Soo Park, Hyoyoung Jung and Hong Kook Kim
Sensors 2025, 25(16), 5157; https://doi.org/10.3390/s25165157 - 19 Aug 2025
Abstract
Distributed acoustic sensing (DAS), which can provide dense spatial and temporal measurements using optical fibers, is quickly becoming critical for large-scale infrastructure monitoring. However, anomaly detection in DAS data is still challenging owing to the spatial correlations between sensing channels and nonlinear temporal dynamics. Recent approaches often disregard the explicit sensor layout and instead handle DAS data as two-dimensional images or flattened sequences, eliminating the spatial topology. This work proposes GraphDiffusion, a novel generative anomaly-detection model that combines a conditional denoising diffusion probabilistic model (DDPM) and a graph neural network (GNN) to overcome these limitations. By treating each channel as a graph node and building edges based on Euclidean proximity, the GNN explicitly models the spatial arrangement of DAS sensors, allowing the network to capture local interchannel dependencies. The conditional DDPM uses iterative denoising to model the temporal dynamics of normal signals, enabling the system to detect deviations without requiring anomalous examples during training. Performance evaluations on real-world DAS datasets reveal that GraphDiffusion achieves 98.2% for the area under the curve (AUC) of the F1-score at K different levels (F1K-AUC) and 98.0% for the AUC of the receiver operating characteristic (ROC) at K different levels (ROCK-AUC), outperforming other comparative models. Full article
(This article belongs to the Section Intelligent Sensors)
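The proximity-based graph construction described in this abstract (each DAS channel a node, edges between channels that are close in Euclidean distance) can be sketched in a few lines; the channel coordinates, distance threshold, and normalization below are hypothetical choices, not values from the paper.

```python
import numpy as np

def proximity_adjacency(positions, radius):
    """Connect DAS channels whose Euclidean distance is below `radius` and apply
    the symmetric normalization commonly used for GNN inputs. Coordinates and the
    radius are illustrative placeholders."""
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    adj = (dist < radius).astype(float)
    np.fill_diagonal(adj, 0.0)                       # no self-loops
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    return d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]   # D^-1/2 A D^-1/2

# Eight channels spaced 1 m apart along a straight fiber
positions = np.stack([np.arange(8.0), np.zeros(8)], axis=1)
print(proximity_adjacency(positions, radius=1.5).shape)      # (8, 8)
```
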
22 pages, 5916 KiB  
Article
Research on Displacement Tracking Device Inside Hybrid Materials Based on Electromagnetic Induction Principle
by Xiansheng Sun, Yixuan Wang, Yu Chen, Mingyue Cao and Changhong Zhou
Sensors 2025, 25(16), 5143; https://doi.org/10.3390/s25165143 - 19 Aug 2025
Abstract
Magnetic induction imaging technology, as a non-invasive detection method based on the principle of electromagnetic induction, has a wide range of applications in materials science and engineering, with the advantages of being radiation-free and offering fast imaging. However, it has not yet been adapted to address the problems of high contact-measurement interference and low spatial resolution that affect traditional strain detection methods in bulk materials engineering. For this reason, this study proposes a magnetic induction detection technique incorporating metal particle assistance and designs a hardware detection system based on an eight-coil sensor to improve the sensitivity and accuracy of strain detection. The conductivity distribution was reconstructed through finite element simulation and an image reconstruction algorithm. Taking asphalt concrete as the research object, particle-reinforced composite specimens with added metal particles were prepared. On this basis, the eight-coil hardware detection system was designed and constructed, and its functionality and stability were verified. Using finite element analysis, two-dimensional and three-dimensional simulation models were established to analyze the effects of different coil turns and excitation parameters on the induced voltage signal. The method proposed in this study provides a new technical approach for non-contact strain detection in road engineering and can also be applied to other composite materials. Full article
14 pages, 1456 KiB  
Technical Note
A Study on Urban Built-Up Area Extraction Methods and Consistency Evaluation Based on Multi-Source Nighttime Light Remote Sensing Data: A Case Study of Wuhan City
by Shiqi Tu, Qingming Zhan, Ruihan Qiu, Jiashan Yu and Agamo Qubi
Remote Sens. 2025, 17(16), 2879; https://doi.org/10.3390/rs17162879 - 18 Aug 2025
Abstract
Accurate delineation of urban built-up areas is critical for urban monitoring and planning. We evaluated the performance and consistency of three widely used methods—thresholding, multi-temporal image fusion, and support vector machine (SVM)—across three major nighttime light (NTL) datasets (DMSP/OLS, SNPP/VIIRS, and Luojia-1). We developed a unified methodological framework and applied it to Wuhan, China, encompassing data preprocessing, feature construction, classification, and cross-dataset validation. The results show that SNPP/VIIRS combined with thresholding or SVM achieved the highest accuracy (kappa coefficient = 0.70 and 0.61, respectively) and spatial consistency (intersection over union, IoU = 0.76), attributable to its high radiometric sensitivity and temporal stability. DMSP/OLS exhibited robust performance with SVM (kappa = 0.73), likely benefiting from its long historical coverage, while Luojia-1 was constrained by limited temporal availability, hindering its suitability for temporal fusion methods. This study highlights the critical influence of sensor characteristics and method–dataset compatibility on extraction outcomes. While traditional methods provide interpretability and computational efficiency, the findings suggest a need for integrating deep learning models and hybrid strategies in future work. These advancements could further improve accuracy, robustness, and transferability across diverse urban contexts. Full article
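A minimal sketch of the thresholding approach evaluated above, assuming built-up pixels are those whose night-time-light value exceeds a threshold chosen to maximize kappa against a reference map; the toy data and candidate thresholds are invented for illustration.

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa for two binary masks given as flat 0/1 arrays."""
    po = np.mean(y_true == y_pred)
    p1t, p1p = y_true.mean(), y_pred.mean()
    pe = p1t * p1p + (1.0 - p1t) * (1.0 - p1p)
    return (po - pe) / (1.0 - pe)

def best_ntl_threshold(ntl, reference, candidates):
    """Pick the NTL threshold whose built-up mask agrees best with a reference map."""
    scores = [cohen_kappa(reference.ravel(), (ntl >= t).ravel().astype(int))
              for t in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

# Toy example: brighter pixels are built-up in the reference map
rng = np.random.default_rng(0)
ntl = rng.uniform(0, 63, size=(50, 50))
reference = (ntl > 30).astype(int)
print(best_ntl_threshold(ntl, reference, candidates=np.arange(5, 60, 5)))
```
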
22 pages, 1906 KiB  
Article
A Style Transfer-Based Fast Image Quality Assessment Method for Image Sensors
by Weizhi Xian, Bin Chen, Jielu Yan, Xuekai Wei, Kunyin Guo, Bin Fang and Mingliang Zhou
Sensors 2025, 25(16), 5121; https://doi.org/10.3390/s25165121 - 18 Aug 2025
Abstract
Accurate image quality evaluation is essential for optimizing sensor performance and enhancing the fidelity of visual data. The concept of “image style” encompasses the overall visual characteristics of an image, including elements such as colors, textures, shapes, lines, strokes, and other visual components. In this paper, we propose a novel full-reference image quality assessment (FR-IQA) method that leverages the principles of style transfer, which we call style- and content-based IQA (SCIQA). Our approach consists of three main steps. First, we employ a deep convolutional neural network (CNN) to decompose and represent images in the deep domain, capturing both low-level and high-level features. Second, we define a comprehensive deep perceptual distance metric between two images, taking into account both image content and style. This metric combines traditional content-based measures with style-based measures inspired by recent advances in neural style transfer. Finally, we formulate a perceptual optimization problem to determine the optimal parameters for the SCIQA model, which we solve via a convex optimization approach. Experimental results across multiple benchmark datasets (LIVE, CSIQ, TID2013, KADID-10k, and PIPAL) demonstrate that SCIQA outperforms state-of-the-art FR-IQA methods. Specifically, SCIQA achieves Pearson linear correlation coefficients (PLCC) of 0.956, 0.941, and 0.895 on the LIVE, CSIQ, and TID2013 datasets, respectively, outperforming traditional methods such as SSIM (PLCC: 0.847, 0.852, 0.665) and deep learning-based methods such as DISTS (PLCC: 0.924, 0.919, 0.855). The proposed method also demonstrates robust generalizability on the large-scale PIPAL dataset, achieving an SROCC of 0.702. Furthermore, SCIQA exhibits strong interpretability, exceptional prediction accuracy, and low computational complexity, making it a practical tool for real-world applications. Full article
(This article belongs to the Special Issue Deep Learning Technology and Image Sensing: 2nd Edition)
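The content-plus-style distance at the heart of SCIQA builds on Gram-matrix statistics from neural style transfer; below is a rough NumPy sketch with random stand-in activations and an arbitrary 50/50 weighting, not the optimized parameters reported in the paper.

```python
import numpy as np

def gram(feat):
    """Gram matrix of a feature map, shape (C, H, W) -> (C, C), as in style transfer."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (h * w)

def deep_perceptual_distance(feats_ref, feats_dist, alpha=0.5):
    """Combine a content term (feature MSE) with a style term (Gram-matrix MSE)
    over a list of layer activations. The weighting is a placeholder."""
    content = np.mean([np.mean((a - b) ** 2) for a, b in zip(feats_ref, feats_dist)])
    style = np.mean([np.mean((gram(a) - gram(b)) ** 2) for a, b in zip(feats_ref, feats_dist)])
    return alpha * content + (1 - alpha) * style

# Toy example with random "CNN activations" from two layers
rng = np.random.default_rng(0)
ref = [rng.normal(size=(8, 16, 16)), rng.normal(size=(16, 8, 8))]
dist = [f + 0.1 * rng.normal(size=f.shape) for f in ref]
print(deep_perceptual_distance(ref, dist))
```
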
24 pages, 8188 KiB  
Article
Top of the Atmosphere Reflected Shortwave Radiative Fluxes from ABI on GOES-18
by Yingtao Ma, Rachel T. Pinker, Wen Chen, Istvan Laszlo, Hye-Yun Kim, Hongqing Liu and Jaime Daniels
Atmosphere 2025, 16(8), 979; https://doi.org/10.3390/atmos16080979 - 17 Aug 2025
Abstract
In this study, we describe the derivation and evaluation of Top of the Atmosphere (TOA) Shortwave Radiative (SWR) fluxes from the Advanced Baseline Imager (ABI) sensor on the GOES-18 satellite. The TOA estimates use narrowband observations from ABI that are transformed to broadband with narrowband-to-broadband (NTB) relationships based on simulations and adjusted to total fluxes using Angular Distribution Models (ADMs). Subsequently, the GOES-18 estimates are evaluated against the Clouds and the Earth's Radiant Energy System (CERES) data, the only observed broadband SWR flux dataset. Agreement at the TOA matters because most methodologies for deriving surface SWR start with the satellite observation at the TOA. Moreover, information needed to compute radiative fluxes at both boundaries (TOA and surface) is needed for estimating the energy absorbed by the atmosphere. The methodology described was comprehensively evaluated, and possible sources of error were identified. The results of the evaluation for the four seasonal months indicate that, using the best available auxiliary data, the accuracy achieved in estimating TOA SWR at the instantaneous scale ranges between 0.55 and 17.14 W m−2 for the bias and between 22.21 and 30.64 W m−2 for the standard deviation of the biases (differences are ABI minus CERES). It is believed that the high bias of 17.14 W m−2 for July is related to the predominantly cloudless sky conditions, for which the ADMs used do not perform as well as for cloudy conditions. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
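The NTB-then-ADM pipeline summarized above can be illustrated with two small helper functions; the regression coefficients and anisotropic factor are placeholders, and the radiance-to-flux step uses the usual relation of flux = pi times radiance divided by the ADM factor, not values specific to this work.

```python
import numpy as np

def narrowband_to_broadband(narrowband_radiances, coeffs, intercept=0.0):
    """Narrowband-to-broadband (NTB) conversion as a linear combination of band
    radiances; real coefficients would come from radiative-transfer simulations."""
    return intercept + np.dot(coeffs, narrowband_radiances)

def toa_shortwave_flux(broadband_radiance, anisotropic_factor):
    """Convert a broadband radiance (W m-2 sr-1) to a TOA flux (W m-2) using an
    Angular Distribution Model anisotropic factor R for the sun/view geometry."""
    return np.pi * broadband_radiance / anisotropic_factor

# Hypothetical example: three reflective bands and an ADM factor of 1.05
bb = narrowband_to_broadband([120.0, 95.0, 60.0], coeffs=[0.4, 0.35, 0.25])
print(toa_shortwave_flux(bb, 1.05))
```
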
22 pages, 1904 KiB  
Article
FPGA–STM32-Embedded Vision and Control Platform for ADAS Development on a 1:5 Scale Vehicle
by Karen Roa-Tort, Diego A. Fabila-Bustos, Macaria Hernández-Chávez, Daniel León-Martínez, Adrián Apolonio-Vera, Elizama B. Ortega-Gutiérrez, Luis Cadena-Martínez, Carlos D. Hernández-Lozano, César Torres-Pérez, David A. Cano-Ibarra, J. Alejandro Aguirre-Anaya and Josué D. Rivera-Fernández
Vehicles 2025, 7(3), 84; https://doi.org/10.3390/vehicles7030084 - 17 Aug 2025
Abstract
This paper presents the design, development, and experimental validation of a low-cost, modular, and scalable Advanced Driver Assistance System (ADAS) platform intended for research and educational purposes. The system integrates embedded computer vision and electronic control using an FPGA for accelerated real-time image processing and an STM32 microcontroller for sensor data acquisition and actuator management. The YOLOv3-Tiny model is implemented to enable efficient pedestrian and vehicle detection under hardware constraints, while additional vision algorithms are used for lane line detection, ensuring a favorable trade-off between accuracy and processing speed. The platform is deployed on a 1:5 scale gasoline-powered vehicle, offering a safe and cost-effective testbed for validating ADAS functionalities, such as lane tracking, pedestrian and vehicle identification, and semi-autonomous navigation. The methodology includes the integration of a CMOS camera, an FPGA development board, and various sensors (LiDAR, ultrasonic, and Hall-effect), along with synchronized communication protocols to ensure real-time data exchange between vision and control modules. A wireless graphical user interface (GUI) enables remote monitoring and teleoperation. Experimental results show competitive detection accuracy—exceeding 94% in structured environments—and processing latencies below 70 ms per frame, demonstrating the platform’s effectiveness for rapid prototyping and applied training. Its modularity and affordability position it as a powerful tool for advancing ADAS research and education, with high potential for future expansion to full-scale autonomous vehicle applications. Full article
(This article belongs to the Special Issue Design and Control of Autonomous Driving Systems)
19 pages, 2101 KiB  
Article
A Novel Shape-Prior-Guided Automatic Calibration Method for Free-Hand Three-Dimensional Ultrasonography
by Xing-Yang Liu, Jia-Xu Zhao, Hui Tang and Guang-Quan Zhou
Sensors 2025, 25(16), 5104; https://doi.org/10.3390/s25165104 - 17 Aug 2025
Abstract
Ultrasound probe calibration is crucial for precise spatial mapping in ultrasound-guided surgical navigation and free-hand 3D ultrasound imaging as it establishes the rigid-body transformation between the ultrasound image plane and an external tracking sensor. However, the existing methods often rely on manual feature point selection and exhibit limited robustness to outliers, resulting in reduced accuracy, reproducibility, and efficiency. To address these limitations, we propose a fully automated calibration framework that leverages the geometric priors of an N-wire phantom to achieve reliable recognition. The method incorporates a robust feature point extraction algorithm and integrates a hybrid outlier rejection strategy based on the Random Sample Consensus (RANSAC) algorithm. The experimental evaluations demonstrate sub-millimeter accuracy (<0.6 mm) across varying imaging depths, with the calibration process completed in under two minutes and exhibiting high repeatability. These results suggest that the proposed framework provides a robust, accurate, and time-efficient solution for ultrasound probe calibration, with strong potential for clinical integration. Full article
(This article belongs to the Section Biomedical Sensors)
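The outlier-rejection idea behind the hybrid RANSAC strategy mentioned above can be illustrated with a toy 2D line-fitting example; the tolerances, iteration count, and data are invented and unrelated to the N-wire phantom geometry.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.5, seed=None):
    """Toy RANSAC: fit a 2D line to feature points while rejecting outliers,
    returning a boolean inlier mask. Parameters are arbitrary for the example."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        n = np.array([-d[1], d[0]])
        norm = np.linalg.norm(n)
        if norm == 0:
            continue
        dist = np.abs((points - p) @ (n / norm))   # point-to-line distances
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
pts = np.stack([x, 0.5 * x + rng.normal(scale=0.1, size=40)], axis=1)
pts[::8] += rng.normal(scale=5.0, size=(5, 2))     # inject a few outliers
print(ransac_line(pts).sum(), "inliers kept")
```
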
23 pages, 1657 KiB  
Article
High-Precision Pest Management Based on Multimodal Fusion and Attention-Guided Lightweight Networks
by Ziye Liu, Siqi Li, Yingqiu Yang, Xinlu Jiang, Mingtian Wang, Dongjiao Chen, Tianming Jiang and Min Dong
Insects 2025, 16(8), 850; https://doi.org/10.3390/insects16080850 - 16 Aug 2025
Abstract
In the context of global food security and sustainable agricultural development, the efficient recognition and precise management of agricultural insect pests and their predators have become critical challenges in the domain of smart agriculture. To address the limitations of traditional models that overly rely on single-modal inputs and suffer from poor recognition stability under complex field conditions, a multimodal recognition framework has been proposed. This framework integrates RGB imagery, thermal infrared imaging, and environmental sensor data. A cross-modal attention mechanism, environment-guided modality weighting strategy, and decoupled recognition heads are incorporated to enhance the model’s robustness against small targets, intermodal variations, and environmental disturbances. Evaluated on a high-complexity multimodal field dataset, the proposed model significantly outperforms mainstream methods across four key metrics, precision, recall, F1-score, and mAP@50, achieving 91.5% precision, 89.2% recall, 90.3% F1-score, and 88.0% mAP@50. These results represent an improvement of over 6% compared to representative models such as YOLOv8 and DETR. Additional ablation studies confirm the critical contributions of key modules, particularly under challenging scenarios such as low light, strong reflections, and sensor data noise. Moreover, deployment tests conducted on the Jetson Xavier edge device demonstrate the feasibility of real-world application, with the model achieving a 25.7 FPS inference speed and a compact size of 48.3 MB, thus balancing accuracy and lightweight design. This study provides an efficient, intelligent, and scalable AI solution for pest surveillance and biological control, contributing to precision pest management in agricultural ecosystems. Full article
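The environment-guided modality weighting mentioned above can be sketched as a learned mapping from environmental sensor readings to softmax weights over the RGB, thermal, and environmental branches; the matrices below are random placeholders for learned parameters.

```python
import numpy as np

def environment_guided_weights(env_reading, W, b):
    """Map environmental readings (e.g. illuminance, temperature, humidity) to one
    softmax weight per modality. W and b stand in for learned parameters."""
    logits = W @ env_reading + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(2)
weights = environment_guided_weights(np.array([0.2, 25.0, 0.6]),
                                     rng.normal(size=(3, 3)), np.zeros(3))
print(weights, weights.sum())   # three modality weights that sum to 1
```
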
27 pages, 2985 KiB  
Article
FPGA Chip Design of Sensors for Emotion Detection Based on Consecutive Facial Images by Combining CNN and LSTM
by Shing-Tai Pan and Han-Jui Wu
Electronics 2025, 14(16), 3250; https://doi.org/10.3390/electronics14163250 - 15 Aug 2025
Abstract
This paper proposes emotion recognition methods for consecutive facial images and implements the inference of a neural network model on a field-programmable gate array (FPGA) for real-time sensing of human emotion. The proposed emotion recognition methods are based on a neural network architecture called Convolutional Long Short-Term Memory Fully Connected Deep Neural Network (CLDNN), which combines convolutional neural networks (CNNs) for spatial feature extraction, long short-term memory (LSTM) for temporal modeling, and fully connected neural networks (FCNNs) for final classification. This architecture can analyze the local feature sequences obtained through convolution of the data, making it suitable for processing time-series data such as consecutive facial images. The method achieves an average recognition rate of 99.51% on the RAVDESS database, 87.80% on the BAUM-1s database, and 96.82% on the eNTERFACE'05 database, using 10-fold cross-validation on a personal computer (PC). The comparisons in this paper show that our methods outperform existing related works in recognition accuracy. The same model is implemented on an FPGA chip, where it achieves identical accuracy to that on a PC, confirming both its effectiveness and hardware compatibility. Full article
(This article belongs to the Special Issue Lab-on-Chip Biosensors)
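A minimal CLDNN-style stack (time-distributed CNN, LSTM, fully connected classifier) can be written in Keras as below; the frame count, layer sizes, and seven-class output are illustrative assumptions, not the configuration used in the paper.

```python
from tensorflow.keras import layers, models

def build_cldnn(frames=16, height=48, width=48, channels=1, num_emotions=7):
    """CNN per frame for spatial features, LSTM across frames for temporal
    modeling, and dense layers for classification (placeholder sizes)."""
    return models.Sequential([
        layers.Input(shape=(frames, height, width, channels)),
        layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu", padding="same")),
        layers.TimeDistributed(layers.MaxPooling2D(2)),
        layers.TimeDistributed(layers.Flatten()),
        layers.LSTM(64),                               # temporal modeling across frames
        layers.Dense(64, activation="relu"),
        layers.Dense(num_emotions, activation="softmax"),
    ])

model = build_cldnn()
model.summary()
```
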
30 pages, 1292 KiB  
Review
Advances in UAV Remote Sensing for Monitoring Crop Water and Nutrient Status: Modeling Methods, Influencing Factors, and Challenges
by Xiaofei Yang, Junying Chen, Xiaohan Lu, Hao Liu, Yanfu Liu, Xuqian Bai, Long Qian and Zhitao Zhang
Plants 2025, 14(16), 2544; https://doi.org/10.3390/plants14162544 - 15 Aug 2025
Abstract
With the advancement of precision agriculture, Unmanned Aerial Vehicle (UAV)-based remote sensing has been increasingly employed for monitoring crop water and nutrient status due to its high flexibility, fine spatial resolution, and rapid data acquisition capabilities. This review systematically examines recent research progress and key technological pathways in UAV-based remote sensing for crop water and nutrient monitoring. It provides an in-depth analysis of UAV platforms, sensor configurations, and their suitability across diverse agricultural applications. The review also highlights critical data processing steps—including radiometric correction, image stitching, segmentation, and data fusion—and compares three major modeling approaches for parameter inversion: vegetation index-based, data-driven, and physically based methods. Representative application cases across various crops and spatiotemporal scales are summarized. Furthermore, the review explores factors affecting monitoring performance, such as crop growth stages, spatial resolution, illumination and meteorological conditions, and model generalization. Despite significant advancements, current limitations include insufficient sensor versatility, labor-intensive data processing chains, and limited model scalability. Finally, the review outlines future directions, including the integration of edge intelligence, hybrid physical–data modeling, and multi-source, three-dimensional collaborative sensing. This work aims to provide theoretical insights and technical support for advancing UAV-based remote sensing in precision agriculture. Full article
20 pages, 7578 KiB  
Article
Cross Attention Based Dual-Modality Collaboration for Hyperspectral Image and LiDAR Data Classification
by Khanzada Muzammil Hussain, Keyun Zhao, Yang Zhou, Aamir Ali and Ying Li
Remote Sens. 2025, 17(16), 2836; https://doi.org/10.3390/rs17162836 - 15 Aug 2025
Abstract
Advancements in satellite sensor technology have enabled access to diverse remote sensing (RS) data from multiple platforms. Hyperspectral Image (HSI) data offers rich spectral detail for material identification, while LiDAR captures high-resolution 3D structural information, making the two modalities naturally complementary. By fusing HSI and LiDAR, we can mitigate the limitations of each and improve tasks like land cover classification, vegetation analysis, and terrain mapping through more robust spectral–spatial feature representation. However, traditional multi-scale feature fusion models often struggle with aligning features effectively, which can lead to redundant outputs and diminished spatial clarity. To address these issues, we propose the Cross Attention Bridge for HSI and LiDAR (CAB-HL), a novel dual-path framework that employs a multi-stage cross-attention mechanism to guide the interaction between spectral and spatial features. In CAB-HL, features from each modality are refined across three progressive stages using cross-attention modules, which enhance contextual alignment while preserving the distinctive characteristics of each modality. These fused representations are subsequently integrated and passed through a lightweight classification head. Extensive experiments on three benchmark RS datasets demonstrate that CAB-HL consistently outperforms existing state-of-the-art models, confirming its effectiveness in learning deep joint representations for multimodal classification tasks. Full article
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)
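A bare-bones cross-attention step of the kind described above, in which tokens from one modality (here HSI) attend to tokens from the other (LiDAR); the projection matrices are random placeholders for learned weights, and the token shapes are invented.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, Wq, Wk, Wv):
    """Scaled dot-product cross-attention: queries come from one modality,
    keys/values from the other."""
    q = query_feats @ Wq                       # (Nq, d)
    k = context_feats @ Wk                     # (Nc, d)
    v = context_feats @ Wv                     # (Nc, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return attn @ v                            # (Nq, d) context-informed query tokens

rng = np.random.default_rng(1)
hsi, lidar, d = rng.normal(size=(25, 32)), rng.normal(size=(25, 8)), 16
out = cross_attention(hsi, lidar, rng.normal(size=(32, d)),
                      rng.normal(size=(8, d)), rng.normal(size=(8, d)))
print(out.shape)   # (25, 16)
```
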
18 pages, 5623 KiB  
Article
Rapid and Quantitative Prediction of Tea Pigments Content During the Rolling of Black Tea by Multi-Source Information Fusion and System Analysis Methods
by Hanting Zou, Ranyang Li, Xuan Xuan, Yongwen Jiang, Haibo Yuan and Ting An
Foods 2025, 14(16), 2829; https://doi.org/10.3390/foods14162829 - 15 Aug 2025
Abstract
Efficient and convenient intelligent online detection methods can provide important technical support for standardizing processing flows in the tea industry. Hence, this study focuses on tea pigments, the key chemical indicators during the rolling of black tea, and uses multi-source information fusion methods to predict changes in tea pigment content. Firstly, the tea pigment content of samples at different rolling time points is determined by system analysis methods. Secondly, the spectra and images of the corresponding samples are simultaneously obtained with a portable near-infrared spectrometer and a machine vision system. Then, by extracting the principal components of the image feature information and screening characteristic wavelengths from the spectral information, low-level and middle-level data fusion strategies are used to integrate sensor data from different sources. Finally, linear (PLSR) and nonlinear (SVR and LSSVR) models are established based on the different feature sets. The results show that the LSSVR based on the middle-level data fusion strategy performs best. In the prediction of theaflavins, thearubigins, and theabrownins, the correlation coefficients of the testing sets are all greater than 0.98, and the relative percentage deviations are all greater than 5. The complementary fusion of spectral and image information effectively compensates for the information redundancy and missing features that arise when tea pigment content is quantified from single-modal data. Full article
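Middle-level data fusion as described above (principal components of the image features concatenated with selected characteristic wavelengths) can be sketched as follows; the component count and wavelength indices are illustrative choices, not those used in the study.

```python
import numpy as np

def mid_level_fusion(image_features, spectra, n_image_pcs=5, wavelength_idx=None):
    """Concatenate the leading principal-component scores of the image features
    with selected characteristic wavelengths from the spectra, per sample."""
    x = image_features - image_features.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    image_pcs = x @ vt[:n_image_pcs].T                    # PCA scores of image features
    selected = spectra if wavelength_idx is None else spectra[:, wavelength_idx]
    return np.hstack([image_pcs, selected])               # fused feature block

rng = np.random.default_rng(3)
img_feats = rng.normal(size=(30, 12))      # e.g. color/texture features per sample
spectra = rng.normal(size=(30, 200))       # near-infrared spectra per sample
fused = mid_level_fusion(img_feats, spectra, n_image_pcs=5, wavelength_idx=[10, 55, 120])
print(fused.shape)                          # (30, 8)
```
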
22 pages, 4524 KiB  
Article
RAEM-SLAM: A Robust Adaptive End-to-End Monocular SLAM Framework for AUVs in Underwater Environments
by Yekai Wu, Yongjie Li, Wenda Luo and Xin Ding
Drones 2025, 9(8), 579; https://doi.org/10.3390/drones9080579 - 15 Aug 2025
Abstract
Autonomous Underwater Vehicles (AUVs) play a critical role in ocean exploration. However, due to the inherent limitations of most sensors in underwater environments, achieving accurate navigation and localization in complex underwater scenarios remains a significant challenge. While vision-based Simultaneous Localization and Mapping (SLAM) provides a cost-effective alternative for AUV navigation, existing methods are primarily designed for terrestrial applications and struggle to address underwater-specific issues, such as poor illumination, dynamic interference, and sparse features. To tackle these challenges, we propose RAEM-SLAM, a robust adaptive end-to-end monocular SLAM framework for AUVs in underwater environments. Specifically, we propose a Physics-guided Underwater Adaptive Augmentation (PUAA) method that dynamically converts terrestrial scene datasets into physically realistic pseudo-underwater images for the augmentation training of RAEM-SLAM, improving the system’s generalization and adaptability in complex underwater scenes. We also introduce a Residual Semantic–Spatial Attention Module (RSSA), which utilizes a dual-branch attention mechanism to effectively fuse semantic and spatial information. This design enables adaptive enhancement of key feature regions and suppression of noise interference, resulting in more discriminative feature representations. Furthermore, we incorporate a Local–Global Perception Block (LGP), which integrates multi-scale local details with global contextual dependencies to significantly improve AUV pose estimation accuracy in dynamic underwater scenes. Experimental results on real-world underwater datasets demonstrate that RAEM-SLAM outperforms state-of-the-art SLAM approaches in enabling precise and robust navigation for AUVs. Full article
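The physics-guided conversion of terrestrial images into pseudo-underwater ones can be approximated with the simplified underwater image-formation model (attenuated scene radiance plus wavelength-dependent backscatter); the coefficients below are illustrative and are not the PUAA parameters from the paper.

```python
import numpy as np

def pseudo_underwater(rgb, depth, beta=(0.6, 0.25, 0.1), backscatter=(0.05, 0.25, 0.35)):
    """Apply J * exp(-beta * d) + B * (1 - exp(-beta * d)) per channel, with stronger
    attenuation in the red channel, to mimic an underwater appearance."""
    rgb = rgb.astype(float) / 255.0
    t = np.exp(-np.asarray(beta)[None, None, :] * depth[..., None])   # transmission
    out = rgb * t + np.asarray(backscatter)[None, None, :] * (1.0 - t)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

rng = np.random.default_rng(4)
rgb = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
depth = np.full((4, 4), 3.0)                  # hypothetical metres to the scene
print(pseudo_underwater(rgb, depth).shape)    # (4, 4, 3)
```
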
17 pages, 52501 KiB  
Article
Single Shot High-Accuracy Diameter at Breast Height Measurement with Smartphone Embedded Sensors
by Wang Xiang, Songlin Fei and Song Zhang
Sensors 2025, 25(16), 5060; https://doi.org/10.3390/s25165060 - 14 Aug 2025
Abstract
Tree diameter at breast height (DBH) is a fundamental metric in forest inventory and management. This paper presents a novel method for DBH estimation using the built-in light detection and ranging (LiDAR) and red, green and blue (RGB) sensors of an iPhone 13 Pro, aiming to improve measurement accuracy and field usability. A single snapshot of a tree, capturing both depth and RGB images, is used to reconstruct a 3D point cloud. The trunk orientation is estimated based on the point cloud to locate the breast height, enabling robust DBH estimation independent of the capture angle. The DBH is initially estimated by the geometrical relationship between trunk size on the image and the depth of the trunk. Finally, a pre-computed lookup table (LUT) is employed to improve the initial DBH estimates into accurate values. Experimental evaluation on 294 trees within a capture range of 0.25 m to 5 m demonstrates a mean absolute error of 0.53 cm and a root mean square error of 0.63 cm. Full article
(This article belongs to the Collection 3D Imaging and Sensing System)
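The initial geometric DBH estimate described above follows from the pinhole relation between trunk width in pixels, depth, and focal length; the numbers below are hypothetical, and the lookup-table refinement reported in the paper is omitted.

```python
def initial_dbh_estimate(trunk_pixel_width, depth_m, focal_length_px):
    """First-order DBH estimate from a single shot: scale the trunk's width in
    pixels by depth over focal length (pinhole camera model). A pre-computed
    lookup table would refine this initial value."""
    return trunk_pixel_width * depth_m / focal_length_px

# Hypothetical numbers: a trunk 300 px wide, 2.0 m away, focal length 1500 px
print(initial_dbh_estimate(300, 2.0, 1500.0))   # 0.4 m, i.e. 40 cm
```
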
31 pages, 8383 KiB  
Article
Quantifying Emissivity Uncertainty in Multi-Angle Long-Wave Infrared Hyperspectral Data
by Nikolay Golosov, Guido Cervone and Mark Salvador
Remote Sens. 2025, 17(16), 2823; https://doi.org/10.3390/rs17162823 - 14 Aug 2025
Abstract
This study quantifies emissivity uncertainty using a new, specifically collected multi-angle thermal hyperspectral dataset, Nittany Radiance. Unlike previous research that primarily relied on model-based simulations, multispectral satellite imagery, or laboratory measurements, we use airborne hyperspectral long-wave infrared (LWIR) data captured from multiple viewing angles. The data were collected using the Blue Heron LWIR hyperspectral imaging sensor, flown on a light aircraft in a circular orbit centered on the Penn State University campus. This sensor, with 256 spectral bands (7.56–13.52 μm), captures multiple overlapping images with varying ranges and angles. We analyzed nine different natural and man-made targets across varying viewing geometries. We present a multi-angle atmospheric correction method, similar to FLAASH-IR, modified for multi-angle scenarios. Our results show that emissivity remains relatively stable at viewing zenith angles between 40 and 50° but decreases as angles exceed 50°. We found that emissivity uncertainty varies across the spectral range, with the 10.14–11.05 μm region showing the greatest stability (standard deviations typically below 0.005), while uncertainty increases significantly in regions with strong atmospheric absorption features, particularly around 12.6 μm. These results show how reliable multi-angle hyperspectral measurements are and why angle-specific atmospheric correction matters for non-nadir imaging applications. Full article