Search Results (1,107)

Search Parameters:
Keywords = Lidar technique

35 pages, 5497 KB  
Article
Robust Localization of Flange Interface for LNG Tanker Loading and Unloading Under Variable Illumination: A Fusion Approach of Monocular Vision and LiDAR
by Mingqin Liu, Han Zhang, Jingquan Zhu, Yuming Zhang and Kun Zhu
Appl. Sci. 2026, 16(2), 1128; https://doi.org/10.3390/app16021128 - 22 Jan 2026
Abstract
The automated localization of the flange interface in LNG tanker loading and unloading imposes stringent requirements for accuracy and illumination robustness. Traditional monocular vision methods are prone to localization failure under extreme illumination conditions, such as intense glare or low light, while LiDAR, despite being unaffected by illumination, suffers from limitations such as a lack of texture information. This paper proposes an illumination-robust localization method for LNG tanker flange interfaces by fusing monocular vision and LiDAR, with three scenario-specific innovations beyond generic multi-sensor fusion frameworks. First, an illumination-adaptive fusion framework is designed to dynamically adjust detection parameters via grayscale mean evaluation, addressing extreme illumination (e.g., glare, or low light with a water film). Second, a multi-constraint flange detection strategy is developed by integrating physical dimension constraints, K-means clustering, and weighted fitting to eliminate background interference and distinguish the dual flanges. Third, a customized fusion pipeline (ROI extraction, plane fitting, and 3D circle center solving) is established to compensate for monocular depth errors and sparse LiDAR point clouds using a flange radius prior. High-precision localization is achieved via four key steps: multi-modal data preprocessing, LiDAR-camera spatial projection, fusion-based flange circle detection, and 3D circle center fitting. While basic techniques such as LiDAR-camera spatiotemporal synchronization and K-means clustering are adapted from prior work, their integration with flange-specific constraints and the illumination-adaptive design forms the core novelty of this study.
Comparative experiments between the proposed fusion method and the monocular vision-only localization method are conducted under four typical illumination scenarios: uniform illumination, local strong illumination, uniform low illumination, and low illumination with water film. The experimental results, based on 20 samples per illumination scenario (80 valid data sets in total), show that, compared with the monocular vision method, the proposed fusion method reduces the Mean Absolute Error (MAE) of localization by 33.08%, 30.57%, and 75.91% in the X, Y, and Z dimensions, respectively, with the overall 3D MAE reduced by 61.69%. Meanwhile, the Root Mean Square Error (RMSE) in the X, Y, and Z dimensions is decreased by 33.65%, 32.71%, and 79.88%, respectively, and the overall 3D RMSE is reduced by 64.79%. The expanded sample size verifies the statistical reliability of the proposed method, which exhibits significantly superior robustness to extreme illumination conditions.
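The pipeline's final stage described above, recovering a 3D circle center from rim points on a fitted plane with a radius prior, can be sketched roughly as follows. This is a minimal illustration under assumed inputs, not the authors' implementation: the function name is hypothetical, the plane is fit by SVD, and an algebraic (Kåsa) least-squares circle fit stands in for the paper's weighted fitting, with the radius prior used only as a plausibility check.

```python
import numpy as np

def fit_flange_center(points, radius_prior):
    """Estimate the 3D center of a circular flange rim from LiDAR points.

    points: (N, 3) array of rim points; radius_prior: known flange radius.
    Returns (center_3d, estimated_radius, radius_check_passed).
    """
    # 1) Plane fit: SVD of centered points; the normal is the singular
    #    vector with the smallest singular value.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    u, v = vt[0], vt[1]                       # in-plane basis vectors

    # 2) Project rim points into the plane's 2D basis (u, v).
    rel = points - centroid
    xy = np.stack([rel @ u, rel @ v], axis=1)

    # 3) Algebraic (Kasa) circle fit: solve [2x 2y 1][a b c]^T = x^2 + y^2.
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b, _c = sol                            # (a, b) = 2D circle center

    # 4) Lift the 2D center back to 3D and sanity-check against the prior.
    center_3d = centroid + a * u + b * v
    r_est = np.sqrt(((xy - [a, b]) ** 2).sum(axis=1)).mean()
    ok = abs(r_est - radius_prior) < 0.1 * radius_prior
    return center_3d, r_est, ok
```

On an ideal rim the recovered center matches the true flange center; with sparse, noisy returns the radius check flags fits that drift from the known flange geometry.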

23 pages, 9975 KB  
Article
Leveraging LiDAR Data and Machine Learning to Predict Pavement Marking Retroreflectivity
by Hakam Bataineh, Dmitry Manasreh, Munir Nazzal and Ala Abbas
Vehicles 2026, 8(1), 23; https://doi.org/10.3390/vehicles8010023 - 20 Jan 2026
Abstract
This study focused on developing and validating machine learning models to predict pavement marking retroreflectivity using Light Detection and Ranging (LiDAR) intensity data. The retroreflectivity data were collected using a Mobile Retroreflectometer Unit (MRU) due to its increasing acceptance among states as a compliant measurement device. A comprehensive dataset was assembled spanning more than 1000 miles of roadways, capturing diverse marking materials, colors, installation methods, pavement types, and vehicle speeds. The final dataset used for model development focused on dry-condition measurements and roadway segments most relevant to state transportation agencies. A detailed synchronization process was implemented to ensure the accurate pairing of retroreflectivity and LiDAR intensity values. Using these data, several machine learning techniques were evaluated, and an ensemble of gradient boosting-based models emerged as the top performer, predicting pavement retroreflectivity with an R2 of 0.94 on previously unseen data. The repeatability of the predicted retroreflectivity was tested and showed consistency comparable to that of the MRU. The model’s accuracy was confirmed against independent field segments, demonstrating the potential for LiDAR to serve as a practical, low-cost alternative to MRU measurements in routine roadway inspection and maintenance. The approach presented in this study enhances roadway safety by enabling more frequent, network-level assessments of pavement marking performance at lower cost, allowing agencies to detect and correct visibility problems sooner and helping to prevent nighttime and adverse-weather crashes.
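The modeling step above, gradient boosting on paired LiDAR intensity and MRU retroreflectivity, can be sketched with scikit-learn. The data here are synthetic (a noisy linear intensity-retroreflectivity relation), so only the workflow is reproduced, not the paper's R2 of 0.94; the study's ensemble and feature set are richer than this single model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: retroreflectivity rises roughly linearly with
# LiDAR return intensity, plus measurement noise.
intensity = rng.uniform(0, 255, size=(2000, 1))
retro = 1.5 * intensity[:, 0] + rng.normal(0, 15, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    intensity, retro, test_size=0.25, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_train, y_train)

# Evaluate on held-out data, mirroring the paper's unseen-data protocol.
r2 = r2_score(y_test, model.predict(X_test))
```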

32 pages, 8079 KB  
Article
Daytime Sea Fog Detection in the South China Sea Based on Machine Learning and Physical Mechanism Using Fengyun-4B Meteorological Satellite
by Jie Zheng, Gang Wang, Wenping He, Qiang Yu, Zijing Liu, Huijiao Lin, Shuwen Li and Bin Wen
Remote Sens. 2026, 18(2), 336; https://doi.org/10.3390/rs18020336 - 19 Jan 2026
Abstract
Sea fog is a major meteorological hazard that severely disrupts maritime transportation and economic activities in the South China Sea. As China’s next-generation geostationary meteorological satellite, Fengyun-4B (FY-4B) supplies continuous observations that are well suited for sea fog monitoring, yet a satellite-specific recognition method has been lacking. A key obstacle is the radiometric inconsistency between the Advanced Geostationary Radiation Imager (AGRI) sensors on FY-4A and FY-4B, compounded by the cessation of Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) observations, which prevents direct transfer of fog labels. To address these challenges and fill this research gap, we propose a machine learning framework that integrates cross-satellite radiometric recalibration and physical mechanism constraints for robust daytime sea fog detection. First, we apply a radiometric recalibration transfer technique based on a radiative transfer model to normalize FY-4A/B radiances and, together with Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) cloud/fog classification products and ERA5 reanalysis, construct a highly consistent joint FY-4A/B training set for the winter–spring seasons since 2019. Second, to enhance the model’s physical consistency, we incorporate key physical parameters related to sea fog formation (such as temperature inversion, near-surface humidity, and wind field characteristics) as physical constraints, and combine them with multispectral channel sensitivities and the brightness temperature (BT) standard deviation that characterizes texture smoothness, yielding an optimized 13-dimensional feature matrix. Using this matrix, we tune the sea fog recognition parameters of decision tree (DT), random forest (RF), and support vector machine (SVM) models with grid search and particle swarm optimization (PSO).
The validation results show that the RF model outperforms the others, with the highest overall classification accuracy (0.91) and a probability of detection (POD, 0.81) that surpasses prior FY-4A-based work for the South China Sea (POD 0.71–0.76). More importantly, this study demonstrates that the proposed FY-4B framework provides reliable technical support for operational, continuous sea fog monitoring over the South China Sea.
(This article belongs to the Section Atmospheric Remote Sensing)
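The model-selection step described above, tuning a random forest on a 13-dimensional feature matrix via grid search and scoring it with overall accuracy and POD, can be sketched as follows. The features and labels are synthetic stand-ins, and the grid, sample sizes, and linear decision rule are assumptions for illustration; the paper's PSO stage is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(1)

# Synthetic 13-dimensional feature matrix standing in for the spectral,
# texture (BT std), and physical-constraint features described above.
X = rng.normal(size=(800, 13))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(0, 0.5, 800)) > 0          # fog / no-fog labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Grid search over a small illustrative hyperparameter grid.
grid = GridSearchCV(
    RandomForestClassifier(random_state=1),
    param_grid={"n_estimators": [50, 150], "max_depth": [None, 8]},
    cv=3, scoring="accuracy")
grid.fit(X_tr, y_tr)

pred = grid.predict(X_te)
acc = accuracy_score(y_te, pred)
pod = recall_score(y_te, pred)   # probability of detection = recall on fog class
```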

27 pages, 1127 KB  
Review
Evolution and Emerging Frontiers in Point Cloud Technology
by Wenjuan Wang, Haleema Ehsan, Shi Qiu, Tariq Ur Rahman, Jin Wang and Qasim Zaheer
Electronics 2026, 15(2), 341; https://doi.org/10.3390/electronics15020341 - 13 Jan 2026
Abstract
Point cloud intelligence integrates advanced technologies such as Light Detection and Ranging (LiDAR), photogrammetry, and Artificial Intelligence (AI) to transform transportation infrastructure management. This review highlights state-of-the-art advancements in denoising, registration, segmentation, and surface reconstruction. A detailed case study on three-dimensional (3D) mesh generation for railway fastener monitoring showcases how these techniques address challenges like noise and computational complexity while enabling precise and efficient infrastructure maintenance. By demonstrating practical applications and identifying future research directions, this work underscores the transformative potential of point cloud intelligence in supporting predictive maintenance, digital twins, and sustainable transportation systems.

8 pages, 2719 KB  
Data Descriptor
Spatial Dataset for Comparing 3D Measurement Techniques on Lunar Regolith Simulant Cones
by Piotr Kędziorski, Janusz Kobaka, Jacek Katzer, Paweł Tysiąc, Marcin Jagoda and Machi Zawidzki
Data 2026, 11(1), 10; https://doi.org/10.3390/data11010010 - 6 Jan 2026
Abstract
The presented dataset contains spatial models of cones formed from lunar soil simulants. The cones were formed in a laboratory by allowing the soil to fall freely through a funnel. The cones were then measured using three methods: a high-precision handheld laser scanner (HLS), photogrammetry, and a low-cost LiDAR system integrated into an iPad Pro. The dataset consists of two groups. The first group contains raw measurement data, and the second group contains the geometry of the cones themselves, excluding their surroundings. This second group was prepared to support the calculation of the cones’ volume. All data are provided in the standard 3D file format (STL). The dataset enables direct comparison of resolution and geometric reconstruction performance across the three techniques and can be reused for benchmarking 3D processing workflows, segmentation algorithms, and shape reconstruction methods. It provides complete geometric information suitable for validating automated extraction procedures for parameters such as cone height, base diameter, and angle of repose, as well as for further research into planetary soil and granular material morphology.
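The volume calculation that the second data group supports can be sketched for any closed, consistently oriented triangle mesh (such as one read from the STL files) using the divergence theorem, with the angle of repose recovered from cone height and base diameter. The function names are hypothetical and STL parsing is not shown.

```python
import numpy as np

def mesh_volume(tris):
    """Volume of a closed triangle mesh via the divergence theorem.

    tris: (N, 3, 3) array of triangles.  Each face contributes the signed
    volume of the tetrahedron it forms with the origin; for a consistently
    oriented watertight mesh the absolute sum is the enclosed volume.
    """
    v0, v1, v2 = tris[:, 0], tris[:, 1], tris[:, 2]
    return np.abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

def angle_of_repose(height, base_diameter):
    """Angle of repose (degrees) of an ideal cone from height and base diameter."""
    return np.degrees(np.arctan2(height, base_diameter / 2.0))
```

For example, a cone whose height equals its base radius has a 45° angle of repose.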

37 pages, 1846 KB  
Review
Visualization Techniques for Spray Monitoring in Unmanned Aerial Spraying Systems: A Review
by Jungang Ma, Hua Zhuo, Peng Wang, Pengchao Chen, Xiang Li, Mei Tao and Zongyin Cui
Agronomy 2026, 16(1), 123; https://doi.org/10.3390/agronomy16010123 - 4 Jan 2026
Abstract
Unmanned Aerial Spraying Systems (UASSs) have rapidly advanced precision crop protection. However, the spray performance of UASSs is influenced by nozzle atomization, rotor-induced airflow, and external environmental conditions. These factors cause strong spatiotemporal coupling and high uncertainty. As a result, visualization-based monitoring techniques are now essential for understanding these dynamics and supporting spray modeling and drift-mitigation design. This review highlights developments in spray visualization technologies along the “droplet–airflow–target” chain in UASS spraying. We first outline the physical fundamentals of droplet formation, liquid-sheet breakup, droplet size distribution, and transport mechanisms in rotor-induced flow. Dominant processes are identified across near-field, mid-field, and far-field scales. Next, we summarize the major visualization methods, including optical imaging (PDPA/PDIA, HSI, DIH), laser-based scattering and ranging (LD, LiDAR), and flow-field visualization (PIV), and compare their spatial resolution, measurement range, 3D reconstruction capabilities, and possible sources of error. We then review wind-tunnel trials, field experiments, and point-cloud reconstruction studies, which show how downwash flow and tip vortices affect plume structure, canopy disturbance, and deposition patterns. Finally, we discuss emerging intelligent analysis for large-scale monitoring, such as image-based droplet recognition, multimodal data fusion, and data-driven modeling, and outline future directions, including unified feature systems, vortex-coupled models, and embedded closed-loop spray control. This review serves as a comprehensive reference for advancing UASS analysis, drift assessment, spray optimization, and smart support systems.
(This article belongs to the Special Issue New Trends in Agricultural UAV Application—2nd Edition)

34 pages, 9678 KB  
Article
Comparative Assessment of Vegetation Removal for DTM Generation and Earthwork Volume Estimation Using RTK-UAV Photogrammetry and LiDAR Mapping
by Hyeongseok Kang, Kourosh Khoshelham, Hyeongil Shin, Kirim Lee and Wonhee Lee
Drones 2026, 10(1), 30; https://doi.org/10.3390/drones10010030 - 4 Jan 2026
Abstract
Earthwork volume calculation is a fundamental process in civil engineering and construction, requiring high-precision terrain data to assess ground stability (encompassing load-bearing capacity, susceptibility to settlement, and slope stability) and to ensure accurate cost estimation. However, seasonal and environmental constraints pose significant challenges to surveying. This study employed unmanned aerial vehicle (UAV) photogrammetry and light detection and ranging (LiDAR) mapping to evaluate the accuracy of digital terrain model (DTM) generation and earthwork volume estimation in densely vegetated areas. For ground extraction, color-based indices (excess green minus red (ExGR), visible atmospherically resistant index (VARI), green-red vegetation index (GRVI)), a geometry-based algorithm (Lasground (new)), and their combinations were compared and analyzed. The results indicated that combining a color index with Lasground (new) outperformed the use of single techniques in both photogrammetric and LiDAR-based surveying. Specifically, the ExGR–Lasground (new) combination produced the most accurate DTM and achieved the highest precision in earthwork volume estimation. The LiDAR-based results exhibited an error of only 0.3% compared with the reference value, while the photogrammetric results also showed only a slight deviation, suggesting their potential as a practical alternative even under dense summer vegetation. Therefore, although prioritizing LiDAR in practice is advisable, this study demonstrates that UAV photogrammetry can serve as an efficient supplementary tool when cost or operational constraints are present.
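The color-based ground-extraction step can be illustrated with the ExGR index named above: vegetation pixels are flagged from chromatic coordinates before a geometric filter runs on the remaining points. The threshold of 0 is a common default for ExGR, not necessarily the value used in the study.

```python
import numpy as np

def exgr_mask(rgb, threshold=0.0):
    """Vegetation mask from the Excess Green minus Excess Red (ExGR) index.

    rgb: (H, W, 3) float array in [0, 1].  Pixels with ExGR > threshold are
    flagged as vegetation and can be excluded before geometric ground
    filtering (e.g., Lasground).
    """
    s = rgb.sum(axis=2) + 1e-9                         # avoid divide-by-zero
    r, g, b = (rgb[..., i] / s for i in range(3))      # chromatic coordinates
    exg = 2 * g - r - b                                # excess green
    exr = 1.4 * r - g                                  # excess red
    return (exg - exr) > threshold
```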

22 pages, 1366 KB  
Systematic Review
Inspection and Evaluation of Urban Pavement Deterioration Using Drones: Review of Methods, Challenges, and Future Trends
by Pablo Julián López-González, David Reyes-González, Oscar Moreno-Vázquez, Rodrigo Vivar-Ocampo, Sergio Aurelio Zamora-Castro, Lorena del Carmen Santos Cortés, Brenda Suemy Trujillo-García and Joaquín Sangabriel-Lomelí
Future Transp. 2026, 6(1), 10; https://doi.org/10.3390/futuretransp6010010 - 4 Jan 2026
Abstract
The rapid growth of urban areas has increased the need for more efficient methods of pavement inspection and maintenance. However, conventional techniques remain slow, labor-intensive, and limited in spatial coverage, and their performance is strongly affected by traffic, weather conditions, and operational constraints. In response to these challenges, it is essential to synthesize the technological advances that improve inspection efficiency, coverage, and data quality compared to traditional approaches. Herein, we present a systematic review of the state of the art on the use of unmanned aerial vehicles (UAVs) for monitoring and assessing pavement deterioration, highlighting as a key contribution the comparative integration of sensors (photogrammetry, LiDAR, and thermography) with recent automatic damage-detection algorithms. A structured review methodology was applied, including the search, selection, and critical analysis of specialized studies on UAV-based pavement evaluation. The results indicate that UAV photogrammetry can achieve sub-centimeter accuracy (<1 cm) in 3D reconstructions, LiDAR systems can improve deformation detection by up to 35%, and AI-based algorithms can increase crack-identification accuracy by 10% to 25% compared with manual methods. Finally, the synthesis shows that multi-sensor integration and digital twins offer strong potential to enhance predictive maintenance and support the transition towards smarter and more sustainable urban infrastructure management strategies.

23 pages, 52765 KB  
Article
GNSS NRTK, UAS-Based SfM Photogrammetry, TLS and HMLS Data for a 3D Survey of Sand Dunes in the Area of Caleri (Po River Delta, Italy)
by Massimo Fabris and Michele Monego
Land 2026, 15(1), 95; https://doi.org/10.3390/land15010095 - 3 Jan 2026
Abstract
Coastal environments are fragile ecosystems threatened by various natural and anthropogenic factors. The preservation and protection of these environments, and in particular of the sand dune systems, which contribute significantly to defending the inland from flooding, require continuous monitoring. To this end, high-resolution, high-precision multitemporal data acquired with various techniques can be used, such as the global navigation satellite system (GNSS) with the network real-time kinematic (NRTK) approach to acquire 3D points, UAS-based structure-from-motion (SfM) photogrammetry, terrestrial laser scanning (TLS), and handheld mobile laser scanning (HMLS)-based light detection and ranging (LiDAR). These techniques were used in this work for the 3D survey of a portion of vegetated sand dunes in the Caleri area (Po River Delta, northern Italy) to assess their applicability in complex environments such as coastal vegetated dune systems. Aerial and ground-based acquisitions allowed us to produce point clouds, georeferenced using common ground control points (GCPs) measured with both the GNSS NRTK method and the total station technique. The 3D data were compared with each other to evaluate the accuracy and performance of the different techniques. The results showed good agreement between the different point clouds, with standard deviations of the differences lower than 9.3 cm. The GNSS NRTK technique, used with the kinematic approach, allowed the bare-ground surface to be acquired, but at the cost of lower resolution. On the other hand, the HMLS showed the poorest vegetation penetration, providing 3D points with the highest elevation values. UAS-based and TLS-based point clouds provided similar average values, with significant differences only in dense vegetation, caused by the very different acquisition platforms and points of view.
(This article belongs to the Special Issue Digital Earth and Remote Sensing for Land Management, 2nd Edition)
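The point-cloud comparison described above, differencing co-registered surveys to obtain statistics such as the standard deviation of the differences, can be sketched with a nearest-neighbor match. This is a simple plan-view (x, y) matching scheme for illustration; the study's actual comparison workflow may differ (e.g., cloud-to-mesh distances).

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_elevation_differences(reference, test):
    """Per-point elevation differences between two georeferenced clouds.

    For every point in `test`, find the horizontally nearest `reference`
    point and take the vertical (z) difference, then summarize with the
    mean and standard deviation of the differences.
    Both inputs are (N, 3) arrays of (x, y, z) coordinates.
    """
    tree = cKDTree(reference[:, :2])          # match in plan (x, y) only
    _, idx = tree.query(test[:, :2])
    dz = test[:, 2] - reference[idx, 2]
    return dz.mean(), dz.std()
```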

21 pages, 14110 KB  
Article
Estimating Cloud Base Height via Shadow-Based Remote Sensing
by Lipi Mukherjee and Dong L. Wu
Remote Sens. 2026, 18(1), 147; https://doi.org/10.3390/rs18010147 - 1 Jan 2026
Abstract
Low clouds significantly impact weather, climate, and multiple environmental and economic sectors such as agriculture, fire risk management, aviation, and renewable energy. Accurate knowledge of cloud base height (CBH) is critical for optimizing crop yields, improving fire danger forecasts, enhancing flight safety, and increasing solar energy efficiency. This study evaluates a shadow-based CBH retrieval method using Moderate Resolution Imaging Spectroradiometer (MODIS) satellite visible imagery and compares the results against collocated lidar measurements from the Micro-Pulse Lidar Network (MPLNET) ground stations. The shadow method leverages sun–sensor geometry to estimate CBH from the displacement of cloud shadows on the surface, offering a practical and high-resolution passive remote sensing technique, especially useful where active sensors are unavailable. The validation results show strong agreement, with a correlation coefficient (R) of 0.96 between shadow-based and lidar-derived CBH estimates, confirming the robustness of the approach for shallow, isolated cumulus clouds. The method’s advantages include direct physical height estimation without reliance on cloud top heights or stereo imaging, applicability across archived datasets, and suitability for diurnal studies. This work highlights the potential of shadow-based retrievals as a reliable, cost-effective tool for global low cloud monitoring, with important implications for atmospheric research and operational forecasting.
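The core shadow-geometry relation can be stated in a few lines: for a sun elevation angle e, a cloud base at height h displaces its shadow a horizontal distance d = h / tan(e), so h = d · tan(e). The sketch below ignores the sensor viewing-geometry and parallax corrections a real MODIS retrieval needs.

```python
import math

def cloud_base_height(shadow_offset_m, solar_elevation_deg):
    """Cloud base height from the horizontal cloud-to-shadow displacement.

    With the sun at elevation angle e, a cloud edge at height h casts its
    shadow a horizontal distance d = h / tan(e) away, so h = d * tan(e).
    """
    return shadow_offset_m * math.tan(math.radians(solar_elevation_deg))
```

For a 45° sun, the cloud base height equals the shadow offset; for a lower sun the same offset implies a lower cloud.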

19 pages, 9699 KB  
Article
Evaluation of Digital Elevation Models (DEM) Generated from the InSAR Technique in a Sector of the Central Andes of Chile, Using Sentinel-1 and TerraSAR-X Images
by Francisco Flores, Paulina Vidal-Páez, Francisco Mena, Waldo Pérez-Martínez and Patricia Oliva
Appl. Sci. 2026, 16(1), 392; https://doi.org/10.3390/app16010392 - 30 Dec 2025
Abstract
The Synthetic Aperture Radar Interferometry (InSAR) technique enables the generation of Digital Elevation Models (DEMs) from SAR data, which are widely applied in multi-temporal analyses, including ground deformation monitoring, susceptibility mapping, and analysis of spatial changes in erosion basins. In this study, we generated two interferometric DEMs from Sentinel-1 (S1, VV polarization) and TerraSAR-X (TSX, HH polarization, ascending orbit) data, processed in SNAP, over a mountainous sector of the central Andes in Chile. We assessed the accuracy of the DEMs against two reference datasets: the SRTM DEM and a high-resolution LiDAR-derived DEM. We selected 150 randomly distributed points across different slope classes to compute statistical metrics, including RMSE and MedAE. Relative to the LiDAR DEM, both sensors yielded RMSE values of approximately 20 m, increasing to 23–24 m when compared with the SRTM DEM. The MedAE, a metric less sensitive to outliers, was 3.97 m for S1 and 3.26 m for TSX with respect to LiDAR, and 7.07 m for S1 and 7.49 m for TSX relative to SRTM. We observed a clear positive correlation between elevation error and terrain slope. In areas with slopes greater than 45°, the MedAE exceeded 14 m relative to the LiDAR DEM and reached ~15 m relative to SRTM for both S1 and TSX.
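The two accuracy metrics used above can be computed directly from paired elevation samples; MedAE is reported alongside RMSE precisely because it downweights the large slope-induced outliers that dominate a squared-error statistic.

```python
import numpy as np

def dem_error_metrics(dem, reference):
    """RMSE and median absolute error (MedAE) between sampled DEM elevations.

    Both inputs are 1-D sequences of elevations at the same sample points.
    """
    err = np.asarray(dem, dtype=float) - np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean(err ** 2))        # sensitive to outliers
    medae = np.median(np.abs(err))           # robust to outliers
    return rmse, medae
```

With errors of 1, 2, 3, and 13 m, the single outlier inflates the RMSE while the MedAE stays at 2.5 m.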

23 pages, 6012 KB  
Article
A Pseudo-Point-Based Adaptive Fusion Network for Multi-Modal 3D Detection
by Chenghong Zhang, Wei Wang, Bo Yu and Hanting Wei
Electronics 2026, 15(1), 59; https://doi.org/10.3390/electronics15010059 - 23 Dec 2025
Abstract
3D multi-modal detection using a monocular camera and LiDAR has drawn much attention due to its low cost and strong applicability, making it highly valuable for autonomous driving and unmanned aerial vehicles (UAVs). However, conventional fusion approaches relying on static arithmetic operations often fail to adapt to dynamic, complex scenarios. Furthermore, existing ROI alignment techniques, such as local projection and cross-attention, are inadequate for mitigating the feature misalignment triggered by depth estimation noise in pseudo-point clouds. To address these issues, this paper proposes a pseudo-point-based 3D object detection method that achieves biased fusion of multi-modal data. First, a meta-weight fusion module dynamically generates fusion weights based on global context, adaptively balancing the contributions of point clouds and images. Second, a module combining bidirectional cross-attention and a gating filter mechanism is introduced to eliminate the ROI feature misalignment caused by depth completion noise. Finally, a class-agnostic box fusion strategy is introduced to aggregate highly overlapping detection boxes at the decision level, improving localization accuracy. Experiments on the KITTI dataset show that the proposed method achieves APs of 92.22%, 85.03%, and 82.25% on the Easy, Moderate, and Hard difficulty levels, respectively, demonstrating leading performance. Ablation studies further validate the effectiveness and computational efficiency of each module.
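The meta-weight fusion idea, deriving branch weights from global context rather than a fixed arithmetic combination, can be caricatured in a few lines of numpy. Everything here is a toy stand-in: the real module is a learned network over deep feature maps, while this sketch maps a scalar context to two softmax weights.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def meta_weight_fusion(lidar_feat, image_feat, w_ctx):
    """Toy context-dependent fusion weighting (all names hypothetical).

    A small "meta" head (here the linear map w_ctx, shape (2, 1)) turns a
    global context scalar (the mean of the concatenated features) into two
    logits; their softmax balances the LiDAR and image branches instead of
    a fixed sum or concatenation.
    """
    ctx = np.concatenate([lidar_feat, image_feat]).mean(keepdims=True)  # (1,)
    weights = softmax(w_ctx @ ctx)            # (2,) fusion weights, sum to 1
    return weights[0] * lidar_feat + weights[1] * image_feat, weights
```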

31 pages, 3838 KB  
Article
Automated Morphological Characterization of Mediterranean Dehesa Using a Low-Density Airborne LiDAR Technique: A DBSCAN–Concaveman Approach for Segmentation and Delineation of Tree Vegetation Units
by Adrián J. Montero-Calvo, Miguel A. Martín-Tardío and Ángel M. Felicísimo
Forests 2026, 17(1), 16; https://doi.org/10.3390/f17010016 - 22 Dec 2025
Abstract
Mediterranean dehesa ecosystems are highly valuable agroforestry systems from ecological, social and economic perspectives. Their structural characterization has traditionally relied on resource-intensive field inventories. This study assesses the applicability of low-density airborne LiDAR data from the Spanish National Aerial Orthophotography Plan (PNOA) for the automated morphological characterization of Quercus ilex dehesas. This novel workflow integrates the DBSCAN clustering algorithm for unsupervised segmentation of tree vegetation units and Concaveman for crown perimeter delineation and slicing using concave hulls. The technique was applied over 116 hectares in Santibáñez el Bajo (Cáceres), identifying 1254 vegetation units with 99.8% precision, 97.3% recall and an F-score of 98.5%. A field validation on 35 trees revealed strong agreement with the LiDAR-derived metrics, including crown diameter (R2 = 0.985; bias = −0.96 m) and total height (R2 = 0.955; bias = −0.34 m). Crown base height was overestimated (+0.77 m), leading to a 20.9% underestimation of crown volume, which was corrected using a regression model (R2 = 0.952). This methodology allows us to produce scalable, fully automated forest inventories across extensive Iberian dehesas with similar structural characteristics using publicly available LiDAR data, even with a six-year acquisition gap.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
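The segmentation-plus-delineation workflow named in the title can be sketched with scikit-learn's DBSCAN; a convex hull from SciPy stands in for the Concaveman concave hull, which the paper prefers because it hugs irregular crowns more tightly. Parameter values here are illustrative, not the study's.

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN

def segment_crowns(xy, eps=2.0, min_samples=10):
    """Cluster above-ground LiDAR returns into tree units and outline each.

    xy: (N, 2) planimetric coordinates of vegetation returns.
    Returns the per-point cluster labels and, per cluster, the perimeter
    vertices of its (convex) hull.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy)
    crowns = {}
    for lbl in set(labels) - {-1}:            # label -1 marks noise returns
        pts = xy[labels == lbl]
        hull = ConvexHull(pts)
        crowns[lbl] = pts[hull.vertices]      # perimeter vertices, CCW order
    return labels, crowns
```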

21 pages, 3988 KB  
Article
Self-Supervised LiDAR Desnowing with 3D-KNN Blind-Spot Networks
by Junyi Li and Wangmeng Zuo
Remote Sens. 2026, 18(1), 17; https://doi.org/10.3390/rs18010017 - 20 Dec 2025
Abstract
Light Detection and Ranging (LiDAR) is fundamental to autonomous driving and robotics, as it provides reliable 3D geometric information. However, snowfall introduces numerous spurious reflections that corrupt range measurements and severely degrade downstream perception. Existing desnowing techniques either rely on handcrafted filtering rules that fail under varying snow densities, or require paired snowy–clean scans, which are nearly impossible to collect in real-world scenarios. Self-supervised LiDAR desnowing approaches address these challenges by projecting raw 3D point clouds into 2D range images and jointly training a point reconstruction network (PR-Net) and a reconstruction difficulty network (RD-Net). Nevertheless, these methods remain limited by their reliance on the outdated Noise2Void training paradigm, which restricts reconstruction quality. In this paper, we redesign PR-Net with a blind-spot architecture to overcome this limitation. Specifically, we introduce a 3D-KNN encoder that aggregates neighborhood features directly in Euclidean 3D space, ensuring geometrically consistent representations. Additionally, we integrate residual state-space blocks (RSSB) to capture long-range contextual dependencies with linear computational complexity. Extensive experiments on both synthetic and real-world datasets, including SnowyKITTI and WADS, demonstrate that our method outperforms state-of-the-art self-supervised desnowing approaches by up to 0.06 IoU while maintaining high computational efficiency.
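The core idea of combining a blind spot with 3D-KNN neighborhood aggregation can be sketched in miniature. This toy function (a hypothetical name, not the paper's PR-Net) averages the features of each point's k nearest neighbors in Euclidean 3D space while excluding the point itself, so the prediction for a point never sees that point's own, possibly snow-corrupted, value:

```python
import math

def knn_blind_spot_mean(points, feats, k):
    """For each 3D point, average the scalar features of its k nearest
    neighbors (Euclidean distance), excluding the point itself (blind spot)."""
    out = []
    for i, p in enumerate(points):
        # Rank all other points by distance; brute force for clarity.
        ranked = sorted((math.dist(p, q), j)
                        for j, q in enumerate(points) if j != i)
        nbrs = [j for _, j in ranked[:k]]
        out.append(sum(feats[j] for j in nbrs) / k)
    return out

points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]
feats = [1.0, 2.0, 3.0, 100.0]
print(knn_blind_spot_mean(points, feats, 2))  # point 0 -> (2.0 + 3.0) / 2 = 2.5
```

A real blind-spot network learns this aggregation with multiple feature channels and layers; the sketch only shows why excluding the center point forces reconstruction from geometric context.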
18 pages, 8006 KB  
Article
Optimal Low-Cost MEMS INS/GNSS Integrated Georeferencing Solution for LiDAR Mobile Mapping Applications
by Nasir Al-Shereiqi, Mohammed El-Diasty and Ghazi Al-Rawas
Sensors 2025, 25(24), 7683; https://doi.org/10.3390/s25247683 - 18 Dec 2025
Abstract
Mobile mapping systems using LiDAR technology are becoming a reliable surveying technique to generate accurate point clouds. Mobile mapping systems integrate several advanced surveying technologies. This research investigated the development of a low-cost, accurate Microelectromechanical System (MEMS)-based INS/GNSS georeferencing system for LiDAR mobile mapping applications, enabling the generation of accurate point clouds. The challenge of using MEMS IMUs is that their measurements are contaminated by high levels of noise and bias instability. To overcome this issue, new denoising and filtering methods were developed using a wavelet neural network (WNN) and an optimal maximum likelihood estimator (MLE) method to achieve an accurate MEMS-based INS/GNSS integrated navigation solution for LiDAR mobile mapping applications. Moreover, the final accuracy of the MEMS-based INS/GNSS navigation solution was compared with the ASPRS standards for geospatial data production. It was found that the proposed WNN denoising method improved the MEMS-based INS/GNSS integration accuracy by approximately 11%, and that the optimal MLE method achieved approximately 12% higher accuracy than the forward-only navigation solution without GNSS outages. The proposed WNN denoising also outperforms the current state-of-the-art Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) denoising model. Additionally, it was found that, depending on the sensor–object distance, the accuracy of the optimal MLE-based MEMS INS/GNSS navigation solution with WNN denoising ranged from 1 to 3 cm for ground mapping and from 1 to 9 cm for building mapping, which can fulfill the ASPRS standards of classes 1 to 3 and classes 1 to 9 for ground and building mapping cases, respectively.
(This article belongs to the Section Industrial Sensors)
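The wavelet side of the denoising idea can be illustrated with a minimal sketch. This is a generic one-level Haar transform with soft thresholding of the detail coefficients, not the paper's wavelet neural network; the threshold value in the usage below is an assumption for illustration:

```python
import math

def haar_denoise(signal, thresh):
    """One-level Haar decomposition, soft-threshold the detail (noise)
    coefficients, then reconstruct. Signal length must be even."""
    s2 = math.sqrt(2.0)
    half = len(signal) // 2
    approx = [(signal[2*i] + signal[2*i + 1]) / s2 for i in range(half)]
    detail = [(signal[2*i] - signal[2*i + 1]) / s2 for i in range(half)]
    # Soft thresholding: shrink small details (mostly noise) toward zero.
    soft = [math.copysign(max(abs(d) - thresh, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, soft):      # inverse Haar transform
        out.append((a + d) / s2)
        out.append((a - d) / s2)
    return out

# Small pairwise jitter below the threshold is smoothed away:
print(haar_denoise([1.0, 1.2], 0.2))    # both samples pulled to the mean 1.1
```

A WNN replaces the fixed threshold with learned parameters, but the shrink-the-details principle is the same.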