Search Results (55)

Search Parameters:
Keywords = drone Lidar point cloud

25 pages, 921 KB  
Systematic Review
Steel and Concrete Segmentation in Construction Sites Using Data Fusion: A Literature Review
by Enrique Martín Luna Gutiérrez, Osslan Osiris Vergara Villegas, Vianey Guadalupe Cruz Sánchez, Humberto de Jesús Ochoa Domínguez and Juan Humberto Sossa Azuela
Buildings 2026, 16(1), 140; https://doi.org/10.3390/buildings16010140 - 27 Dec 2025
Viewed by 301
Abstract
Construction progress monitoring remains predominantly manual, labor-intensive, and reliant on subjective human interpretation. Human dependence often leads to redundant or unreliable information, resulting in scheduling delays and increased costs. Advances in drones, point cloud generation, and multisensor data acquisition have expanded access to high-resolution as-built data. However, transforming data into reliable automated indicators of progress poses a challenge. A limitation is the lack of robust material-level segmentation, particularly for structural materials such as concrete and steel. Concrete and steel are crucial for verifying progress, ensuring quality, and facilitating construction management. Most studies in point cloud segmentation focus on object- or scene-level classification and primarily use geometric features, which limit their ability to distinguish materials with similar geometries but differing physical properties. A consolidated and systematic understanding of the performance of multispectral and multimodal segmentation methods for material-specific classification in construction environments remains unavailable. The systematic review addresses the existing gap by synthesizing and analyzing literature published from 2020 to 2025. The review focuses on segmentation methodologies, multispectral and multimodal data sources, performance metrics, dataset limitations, and documented challenges. Additionally, the review identifies research directions to facilitate automated progress monitoring of construction and to enhance digital twin frameworks. The review indicates strong quantitative performance, with multispectral and multimodal segmentation approaches achieving accuracies of 93–97% when integrating spectral information into point cloud or image-based pipelines. 
Large-scale environments benefit from combined LiDAR and high-resolution imagery approaches, which achieve classification quality metrics of 85–90%, thereby demonstrating robustness under complex acquisition conditions. Automated inspection workflows reduce inspection time from 24 h to less than 2 h and yield cost reductions of more than 50% compared to conventional methods. Additionally, deep-learning-based defect detection achieves inference times of 5–6 s per structural element, with reported accuracies of around 97%. The findings confirm productivity gains for construction monitoring. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

17 pages, 1903 KB  
Article
GMAFNet: Gated Mechanism Adaptive Fusion Network for 3D Semantic Segmentation of LiDAR Point Clouds
by Xiangbin Kong, Weijun Wu, Minghu Wu, Zhihang Gui, Zhe Luo and Chuyu Miao
Electronics 2025, 14(24), 4917; https://doi.org/10.3390/electronics14244917 - 15 Dec 2025
Viewed by 324
Abstract
Three-dimensional semantic segmentation plays a crucial role in advancing scene understanding in fields such as autonomous driving, drones, and robotic applications. Existing studies usually improve prediction accuracy by fusing data from vehicle-mounted cameras and vehicle-mounted LiDAR. However, current semantic segmentation methods face two main challenges: first, they often fuse 2D and 3D features directly, introducing information redundancy into the fusion process; second, image features and point cloud geometric information are often lost during feature extraction. From the perspective of multimodal fusion, this paper proposes a point cloud semantic segmentation method based on a multimodal gated attention mechanism. The method comprises a feature extraction network and a gated attention fusion and segmentation network. The feature extraction network utilizes a 2D image feature extraction structure and a 3D point cloud feature extraction structure to extract RGB image features and point cloud features, respectively. Through feature extraction and global feature supplementation, it effectively mitigates the loss of fine-grained image features and point cloud geometric structure. The gated attention fusion and segmentation network increases the network's attention to important categories such as vehicles and pedestrians through an attention mechanism, and then uses a dynamic gated attention mechanism to control the respective weights of the 2D and 3D features during fusion, resolving the problem of information redundancy in feature fusion. Finally, a 3D decoder performs point cloud semantic segmentation. The method is evaluated on the SemanticKITTI and nuScenes large-scale point cloud datasets. Full article
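The dynamic gating idea in this abstract can be illustrated with a minimal sketch: a learned sigmoid gate weights the per-point 2D image features against the 3D point features, so redundant modalities are down-weighted rather than summed blindly. Everything below (shapes, the NumPy stand-in for the network, the random weights) is illustrative, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f2d, f3d, w_gate, b_gate):
    """Fuse per-point 2D and 3D features with a learned scalar gate.

    The gate g in (0, 1) weights the image branch; (1 - g) weights the
    point cloud branch, suppressing redundant information.
    """
    joint = np.concatenate([f2d, f3d], axis=-1)   # (N, C2 + C3)
    g = sigmoid(joint @ w_gate + b_gate)          # (N, 1), one gate per point
    return g * f2d + (1.0 - g) * f3d              # (N, C), requires C2 == C3

rng = np.random.default_rng(0)
n, c = 4, 8
f2d, f3d = rng.normal(size=(n, c)), rng.normal(size=(n, c))
w = rng.normal(size=(2 * c, 1)) * 0.1
fused = gated_fusion(f2d, f3d, w, 0.0)
print(fused.shape)  # (4, 8)
```

Because the gate is a per-point convex weight, every fused value lies between the corresponding 2D and 3D feature values; in the paper's network the gate is learned end-to-end rather than random.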

24 pages, 13118 KB  
Article
A Workflow for Urban Heritage Digitization: From UAV Photogrammetry to Immersive VR Interaction with Multi-Layer Evaluation
by Chengyun Zhang, Guiye Lin, Yuyang Peng and Yingwen Yu
Drones 2025, 9(10), 716; https://doi.org/10.3390/drones9100716 - 16 Oct 2025
Cited by 1 | Viewed by 1397
Abstract
Urban heritage documentation often separates 3D data acquisition from immersive interaction, limiting both accuracy and user impact. This study develops and validates an end-to-end workflow that integrates UAV photogrammetry with terrestrial LiDAR and deploys the fused model in a VR environment. Applied to Piazza Vittorio Emanuele II in Rovigo, Italy, the approach achieves centimetre-level registration, completes roofs and upper façades that ground scanning alone cannot capture, and produces stable, high-fidelity assets suitable for real-time interaction. Effectiveness is assessed through a three-layer evaluation framework encompassing vision, behavior, and cognition. Eye-tracking heatmaps and scanpaths show that attention shifts from dispersed viewing to concentrated focus on landmarks and panels. Locomotion traces reveal a transition from diffuse roaming to edge-anchored strategies, with stronger reliance on low-visibility zones for spatial judgment. Post-VR interviews confirm improved spatial comprehension, stronger recognition of cultural values, and enhanced conservation intentions. The results demonstrate that UAV-enabled completeness directly influences how users perceive, navigate, and interpret heritage spaces in VR. The workflow is cost-effective, replicable, and transferable, offering a practical model for under-resourced heritage sites. More broadly, it provides a methodological template for linking drone-based data acquisition to measurable cognitive and cultural outcomes in immersive heritage applications. Full article
(This article belongs to the Special Issue Implementation of UAV Systems for Cultural Heritage)

23 pages, 8993 KB  
Article
Automatic Rooftop Solar Panel Recognition from UAV LiDAR Data Using Deep Learning and Geometric Feature Analysis
by Joel Coglan, Zahra Gharineiat and Fayez Tarsha Kurdi
Remote Sens. 2025, 17(19), 3389; https://doi.org/10.3390/rs17193389 - 9 Oct 2025
Cited by 2 | Viewed by 1378
Abstract
As drone-based Light Detection and Ranging (LiDAR) becomes more accessible, it presents new opportunities for automated, geometry-driven classification. This study investigates the use of LiDAR point cloud data and Machine Learning (ML) to classify rooftop solar panels from building surfaces. While rooftop solar detection has been explored using satellite and aerial imagery, LiDAR offers geometric and reflectance-based attributes for classification. Two datasets were used: the University of Southern Queensland (UniSQ) campus, with commercial-sized panels, both elevated and flat, and a suburban area in Newcastle, Australia, with residential-sized panels sitting flush with the roof surface. UniSQ was classified using RANSAC (Random Sample Consensus), while Newcastle’s dataset was processed based on reflectance values. Geometric features were selected based on histogram overlap and Kullback–Leibler (KL) divergence, and models were trained using a Multilayer Perceptron (MLP) classifier implemented in both PyTorch and Scikit-learn libraries. Classification achieved F1 scores of 99% for UniSQ and 95–96% for the Newcastle dataset. These findings support the potential for ML-based classification to be applied to unlabelled datasets for rooftop solar analysis. Future work could expand the model to detect additional rooftop features and estimate panel counts across urban areas. Full article
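The feature-selection step this abstract mentions (ranking geometric features by histogram overlap and Kullback–Leibler divergence) can be sketched as follows; the feature names and synthetic class-conditional distributions are hypothetical, not the study's data.

```python
import numpy as np

def kl_divergence(p_samples, q_samples, bins=32, eps=1e-9):
    """Approximate KL(P || Q) from two feature-value samples via shared-bin
    histograms. High KL (low histogram overlap) marks a discriminative feature."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    p, q = p / p.sum(), q / q.sum()   # renormalize after smoothing
    return float(np.sum(p * np.log(p / q)))

# hypothetical class-conditional samples of one geometric feature (height)
rng = np.random.default_rng(1)
panel_pts = rng.normal(3.0, 0.2, 500)   # heights over solar-panel points
roof_pts = rng.normal(2.5, 0.2, 500)    # heights over bare-roof points
noise = rng.normal(0.0, 1.0, 500)       # an uninformative feature
print(kl_divergence(panel_pts, roof_pts) > kl_divergence(noise, noise))  # True
```

Features whose panel-vs-roof histograms diverge most would be kept as inputs to the MLP classifier; identical distributions give KL near zero and would be dropped.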

18 pages, 14975 KB  
Article
Precision Carbon Stock Estimation in Urban Campuses Using Fused Backpack and UAV LiDAR Data
by Shijun Zhang, Nan Li, Longwei Li, Yuchan Liu, Hong Wang, Tingting Xue, Jing Ma and Mengyi Hu
Forests 2025, 16(10), 1550; https://doi.org/10.3390/f16101550 - 8 Oct 2025
Viewed by 645
Abstract
Accurate quantification of campus vegetation carbon stocks is essential for advancing carbon neutrality goals and refining urban carbon management strategies. This study pioneers the integration of drone and backpack LiDAR data to overcome limitations in conventional carbon estimation approaches. The Comparative Shortest-Path (CSP) algorithm was originally developed to segment tree crowns from point cloud data, with its design informed by metabolic ecology theory: specifically, that vascular plants tend to minimize the transport distance to their roots. In this study, we deployed the CSP algorithm for individual tree recognition across 897 campus trees, achieving 88.52% recall, 72.45% precision, and a 79.68% F-score, with 100% accuracy for eight dominant species. Diameter at breast height (DBH) was extracted via least-squares circle fitting, attaining >95% accuracy for key species such as Magnolia grandiflora and Triadica sebifera. Carbon storage was calculated through species-specific allometric models integrated with field inventory data, revealing a total stock of 163,601 kg (mean 182.4 kg/tree). Four dominant species (Cinnamomum camphora, Liriodendron chinense, Salix babylonica, and Metasequoia glyptostroboides) collectively contributed 84.3% of total storage. As the first integrated application of multi-platform LiDAR for campus-scale carbon mapping, this work establishes a replicable framework for precision urban carbon sink assessment, supporting data-driven campus greening strategies and climate action planning. Full article
(This article belongs to the Special Issue Urban Forests and Greening for Sustainable Cities)
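Least-squares circle fitting for DBH, as used in this study, is commonly done with the algebraic (Kåsa) fit; this sketch runs on synthetic data and assumes a clean horizontal slice of stem points at breast height.

```python
import numpy as np

def fit_circle(xy):
    """Kasa least-squares circle fit: x^2 + y^2 = a*x + b*y + c is linear in
    (a, b, c); centre = (a/2, b/2), radius = sqrt(c + cx^2 + cy^2)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# synthetic stem slice at 1.3 m: true radius 0.15 m (DBH = 30 cm), mm-level noise
rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 200)
slice_pts = np.column_stack([1.0 + 0.15 * np.cos(t), 2.0 + 0.15 * np.sin(t)])
slice_pts += rng.normal(0, 0.002, slice_pts.shape)
cx, cy, r = fit_circle(slice_pts)
print(round(2 * r, 3))  # DBH estimate, ~0.30 m
```

The fit is linear, so it needs no initial guess; in practice partial arcs (occluded stems) make it less stable, which is one reason the paper reports species-dependent accuracy.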

16 pages, 11231 KB  
Article
Aerial Vehicle Detection Using Ground-Based LiDAR
by John Kirschler and Jay Wilhelm
Aerospace 2025, 12(9), 756; https://doi.org/10.3390/aerospace12090756 - 22 Aug 2025
Viewed by 1240
Abstract
Ground-based LiDAR sensing offers a promising approach for delivering short-range landing feedback to aerial vehicles operating near vertiports and in GNSS-degraded environments. This work introduces a detection system capable of classifying aerial vehicles and estimating their 3D positions with sub-meter accuracy. Using a simulated Gazebo environment, multiple LiDAR sensors and five vehicle classes, ranging from hobbyist drones to air taxis, were modeled to evaluate detection performance. RGB-encoded point clouds were processed using a modified YOLOv6 neural network with Slicing-Aided Hyper Inference (SAHI) to preserve high-resolution object features. Classification accuracy and position error were analyzed using mean Average Precision (mAP) and Mean Absolute Error (MAE) across varied sensor parameters, vehicle sizes, and distances. Within 40 m, the system consistently achieved over 95% classification accuracy and average position errors below 0.5 m. Results support the viability of high-density LiDAR as a complementary method for precision landing guidance in advanced air mobility applications. Full article
(This article belongs to the Section Aeronautics)
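The Slicing-Aided Hyper Inference (SAHI) step mentioned above runs detection on overlapping tiles of the full-resolution input so small, distant vehicles keep enough pixels. A minimal tiling sketch (tile size and overlap are illustrative, not the paper's settings):

```python
def slice_windows(width, height, tile=640, overlap=0.2):
    """Return (x0, y0, x1, y1) origins of overlapping tiles covering the image,
    SAHI-style: a regular grid plus edge-aligned tiles so nothing is cropped."""
    step = int(tile * (1 - overlap))
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    if xs[-1] + tile < width:
        xs.append(width - tile)   # extra column flush with the right edge
    if ys[-1] + tile < height:
        ys.append(height - tile)  # extra row flush with the bottom edge
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]

tiles = slice_windows(1920, 1080, tile=640, overlap=0.2)
print(len(tiles))  # 8
```

Each tile is passed to the detector independently and the per-tile boxes are merged back in full-image coordinates (typically with NMS), which is what preserves the high-resolution object features the abstract refers to.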

15 pages, 2538 KB  
Article
Dynamic Obstacle Perception Technology for UAVs Based on LiDAR
by Wei Xia, Feifei Song and Zimeng Peng
Drones 2025, 9(8), 540; https://doi.org/10.3390/drones9080540 - 31 Jul 2025
Cited by 2 | Viewed by 2007
Abstract
With the widespread application of small quadcopter drones in the military and civilian fields, the security challenges they face are gradually becoming apparent. Especially in dynamic environments, the rapidly changing conditions make the flight of drones more complex. To address the computational limitations of small quadcopter drones and meet the demands of obstacle perception in dynamic environments, a LiDAR-based obstacle perception algorithm is proposed. First, accumulation, filtering, and clustering processes are carried out on the LiDAR point cloud data to complete the segmentation and extraction of point cloud obstacles. Then, an obstacle motion/static discrimination algorithm based on three-dimensional point motion attributes is developed to classify dynamic and static point clouds. Finally, oriented bounding box (OBB) detection is employed to simplify the representation of the spatial position and shape of dynamic point cloud obstacles, and motion estimation is achieved by tracking the OBB parameters using a Kalman filter. Simulation experiments demonstrate that this method can ensure a dynamic obstacle detection frequency of 10 Hz and successfully detect multiple dynamic obstacles in the environment. Full article
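Tracking OBB parameters with a Kalman filter, as in the final step above, can be sketched with a constant-velocity model over the box centroid; the noise settings and the 10 Hz rate below are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=0.5, r=0.05):
    """One constant-velocity Kalman predict+update for an OBB centroid.

    State x = [px, py, pz, vx, vy, vz]; measurement z = detected OBB centre.
    """
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
    Q, R = q * np.eye(6), r * np.eye(3)
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with innovation
    P = (np.eye(6) - K @ H) @ P
    return x, P

# track a box moving at 1 m/s along x, observed at 10 Hz
x, P = np.zeros(6), np.eye(6)
for k in range(1, 50):
    z = np.array([0.1 * k, 0.0, 0.0])
    x, P = kalman_step(x, P, z)
print(abs(x[3] - 1.0) < 0.1)   # velocity estimate converges near 1 m/s
```

The velocity component of the state is what gives the motion estimate the abstract describes; extending the state with the OBB extents and yaw follows the same predict/update pattern.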

26 pages, 11912 KB  
Article
Multi-Dimensional Estimation of Leaf Loss Rate from Larch Caterpillar Under Insect Pest Stress Using UAV-Based Multi-Source Remote Sensing
by He-Ya Sa, Xiaojun Huang, Li Ling, Debao Zhou, Junsheng Zhang, Gang Bao, Siqin Tong, Yuhai Bao, Dashzebeg Ganbat, Mungunkhuyag Ariunaa, Dorjsuren Altanchimeg and Davaadorj Enkhnasan
Drones 2025, 9(8), 529; https://doi.org/10.3390/drones9080529 - 28 Jul 2025
Cited by 1 | Viewed by 844
Abstract
Leaf loss caused by pest infestations poses a serious threat to forest health. The leaf loss rate (LLR) refers to the percentage of the overall tree-crown leaf loss per unit area and is an important indicator for evaluating forest health. Therefore, rapid and accurate acquisition of the LLR via remote sensing monitoring is crucial. This study is based on drone hyperspectral and LiDAR data as well as ground survey data, calculating hyperspectral indices (HSI), multispectral indices (MSI), and LiDAR indices (LI). It employs Savitzky–Golay (S–G) smoothing with different window sizes (W) and polynomial orders (P) combined with recursive feature elimination (RFE) to select sensitive features. Random Forest Regression (RFR) and Convolutional Neural Network Regression (CNNR) were used to construct a multidimensional (horizontal and vertical) estimation model for the LLR, which, combined with LiDAR point cloud data, achieved a three-dimensional visualization of tree leaf loss. The results showed the following: (1) The optimal smoothing combination for the HSI and MSI was W11P3, and for the LI it was W5P2. (2) The optimal numbers of sensitive features extracted by the RFE algorithm were 13 HSI, 16 MSI, and hierarchical LI (2 in layer I, 9 in layer II, and 11 in layer III). (3) For horizontal estimation of the defoliation rate, the model performance index of the CNNR-HSI model (MPI = 0.9383) was significantly better than that of the RFR-MSI model (MPI = 0.8817), indicating that the continuous bands of hyperspectral data better capture subtle changes in the LLR. (4) The I-CNNR-HSI+LI, II-CNNR-HSI+LI, and III-CNNR-HSI+LI vertical estimation models, constructed by combining the most accurate CNNR-HSI model with the LI sensitive to each vertical layer, all reached MPIs above 0.8, indicating high accuracy for LLR estimation at the different vertical levels.
According to the model, the pixel-level LLR of the sample tree was estimated, and the three-dimensional display of the LLR for forest trees under the pest stress of larch caterpillars was generated, providing a high-precision research scheme for LLR estimation under pest stress. Full article
(This article belongs to the Section Drones in Agriculture and Forestry)
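The S–G preprocessing the abstract parameterizes as W11P3 (window 11, polynomial order 3) can be sketched from first principles: each output sample is a local least-squares polynomial fit evaluated at the window centre. The spectrum below is synthetic, not the study's data.

```python
import numpy as np

def savgol_coeffs(window, order):
    """Savitzky-Golay smoothing coefficients via a local polynomial fit."""
    half = window // 2
    A = np.vander(np.arange(-half, half + 1), order + 1, increasing=True)
    return np.linalg.pinv(A)[0]   # row 0 evaluates the fit at the window centre

def savgol_smooth(y, window=11, order=3):
    c = savgol_coeffs(window, order)
    pad = window // 2
    ypad = np.pad(y, pad, mode="edge")            # extend edges for a full window
    return np.convolve(ypad, c[::-1], mode="valid")

# synthetic noisy reflectance band; W11P3 = window 11, polynomial order 3
rng = np.random.default_rng(3)
wl = np.linspace(400, 1000, 121)
clean = np.exp(-((wl - 680) / 60.0) ** 2)
noisy = clean + rng.normal(0, 0.05, clean.shape)
smooth = savgol_smooth(noisy, window=11, order=3)
print(np.abs(smooth - clean).mean() < np.abs(noisy - clean).mean())  # True
```

In practice `scipy.signal.savgol_filter(noisy, 11, 3)` performs the same operation; the W/P trade-off the paper tunes is exactly this window length versus polynomial order.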

21 pages, 12122 KB  
Article
RA3T: An Innovative Region-Aligned 3D Transformer for Self-Supervised Sim-to-Real Adaptation in Low-Altitude UAV Vision
by Xingrao Ma, Jie Xie, Di Shao, Aiting Yao and Chengzu Dong
Electronics 2025, 14(14), 2797; https://doi.org/10.3390/electronics14142797 - 11 Jul 2025
Viewed by 887
Abstract
Low-altitude unmanned aerial vehicle (UAV) vision is critically hindered by the Sim-to-Real Gap, where models trained exclusively on simulation data degrade under real-world variations in lighting, texture, and weather. To address this problem, we propose RA3T (Region-Aligned 3D Transformer), a novel self-supervised framework that enables robust Sim-to-Real adaptation. Specifically, we first develop a dual-branch strategy for self-supervised feature learning, integrating Masked Autoencoders and contrastive learning. This approach extracts domain-invariant representations from unlabeled simulated imagery to enhance robustness against occlusion while reducing annotation dependency. Leveraging these learned features, we then introduce a 3D Transformer fusion module that unifies multi-view RGB and LiDAR point clouds through cross-modal attention. By explicitly modeling spatial layouts and height differentials, this component significantly improves recognition of small and occluded targets in complex low-altitude environments. To address persistent fine-grained domain shifts, we finally design region-level adversarial calibration that deploys local discriminators on partitioned feature maps. This mechanism directly aligns texture, shadow, and illumination discrepancies which challenge conventional global alignment methods. Extensive experiments on UAV benchmarks VisDrone and DOTA demonstrate the effectiveness of RA3T. The framework achieves +5.1% mAP on VisDrone and +7.4% mAP on DOTA over the 2D adversarial baseline, particularly on small objects and sparse occlusions, while maintaining real-time performance of 17 FPS at 1024 × 1024 resolution on an RTX 4080 GPU. Visual analysis confirms that the synergistic integration of 3D geometric encoding and local adversarial alignment effectively mitigates domain gaps caused by uneven illumination and perspective variations, establishing an efficient pathway for simulation-to-reality UAV perception. Full article
(This article belongs to the Special Issue Innovative Technologies and Services for Unmanned Aerial Vehicles)

18 pages, 13123 KB  
Article
Field Study of UAV Variable-Rate Spraying Method for Orchards Based on Canopy Volume
by Pengchao Chen, Haoran Ma, Zongyin Cui, Zhihong Li, Jiapei Wu, Jianhong Liao, Hanbing Liu, Ying Wang and Yubin Lan
Agriculture 2025, 15(13), 1374; https://doi.org/10.3390/agriculture15131374 - 27 Jun 2025
Cited by 3 | Viewed by 3015
Abstract
The use of unmanned aerial vehicle (UAV) pesticide spraying technology in precision agriculture is becoming increasingly important. However, traditional spraying methods struggle to meet the need for precision application created by canopy differences among fruit trees in orchards. This study proposes a UAV orchard variable-rate spraying method based on canopy volume. A DJI M300 drone equipped with LiDAR was used to capture high-precision 3D point cloud data of tree canopies. An improved progressive TIN densification (IPTD) filtering algorithm and a region-growing algorithm were applied to segment the point cloud of fruit trees, construct a canopy volume-based classification model, and generate a differentiated prescription map for spraying. A distributed multi-point spraying strategy was employed to optimize droplet deposition performance. Field experiments were conducted in a citrus (Citrus reticulata Blanco) orchard (73 trees) and a litchi (Litchi chinensis Sonn.) orchard (82 trees). Data analysis showed that variable-rate treatment in the litchi area achieved a maximum canopy coverage of 14.47% for large canopies, reducing ground deposition by 90.4% compared to the continuous spraying treatment; variable-rate treatment in the citrus area reached a maximum coverage of 9.68%, with ground deposition reduced by approximately 64.1% compared to the continuous spraying treatment. By matching spray volume to canopy demand, variable-rate spraying significantly improved droplet deposition targeting, validating the feasibility of the proposed method in reducing pesticide waste and environmental pollution and providing a scalable technical path for precision plant protection in orchards. Full article
(This article belongs to the Special Issue Smart Spraying Technology in Orchards: Innovation and Application)
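The canopy-volume classification that drives the prescription map can be approximated by voxelizing each segmented tree's point cloud; the voxel size and spray-class thresholds below are illustrative, not the paper's calibration.

```python
import numpy as np

def canopy_volume(points, voxel=0.25):
    """Approximate canopy volume as (occupied voxels) * voxel^3 from the
    segmented point cloud of a single tree crown."""
    occupied = np.unique(np.floor(points / voxel).astype(np.int64), axis=0)
    return len(occupied) * voxel ** 3

# densely sampled synthetic "crown" filling a 2 m cube -> volume ~ 8 m^3
rng = np.random.default_rng(4)
pts = rng.uniform(0.0, 2.0, size=(200_000, 3))
v = canopy_volume(pts)
print(round(v, 1))  # 8.0

# map volume to a spray class for the prescription map (thresholds invented)
rate = "high" if v > 6 else ("medium" if v > 3 else "low")
print(rate)  # high
```

Voxel counting is robust to non-uniform point density, which matters because LiDAR returns thin out inside dense crowns; the class boundaries would be tuned per orchard and chemical.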

27 pages, 358 KB  
Review
LiDAR Technology for UAV Detection: From Fundamentals and Operational Principles to Advanced Detection and Classification Techniques
by Ulzhalgas Seidaliyeva, Lyazzat Ilipbayeva, Dana Utebayeva, Nurzhigit Smailov, Eric T. Matson, Yerlan Tashtay, Mukhit Turumbetov and Akezhan Sabibolda
Sensors 2025, 25(9), 2757; https://doi.org/10.3390/s25092757 - 27 Apr 2025
Cited by 17 | Viewed by 14234
Abstract
As unmanned aerial vehicles (UAVs) are increasingly employed across various industries, the demand for robust and accurate detection has become crucial. Light detection and ranging (LiDAR) has emerged as a vital sensor technology due to its ability to provide rich 3D spatial information, particularly in applications such as security and airspace monitoring. This review systematically explores recent innovations in LiDAR-based drone detection, focusing in depth on the principles and components of LiDAR sensors, their classifications based on different parameters and scanning mechanisms, and the approaches for processing LiDAR data. The review briefly compares recent research on LiDAR-only detection and its fusion with other sensor modalities, real-world applications of LiDAR with deep learning, and the major challenges in sensor-fusion-based UAV detection. Full article
(This article belongs to the Section Intelligent Sensors)

28 pages, 5568 KB  
Article
Research on Low-Altitude Aircraft Point Cloud Generation Method Using Single Photon Counting Lidar
by Zhigang Su, Shaorui Liang, Jingtang Hao and Bing Han
Photonics 2025, 12(3), 205; https://doi.org/10.3390/photonics12030205 - 27 Feb 2025
Viewed by 803
Abstract
To address the deficiency of aircraft point cloud training data for low-altitude environment perception systems, a method termed APCG (aircraft point cloud generation) is proposed. APCG can generate aircraft point cloud data in the single photon counting Lidar (SPC-Lidar) system based on information such as aircraft type, position, and attitude. The core of APCG is the aircraft depth image generator, which is obtained through adversarial training of an improved conditional generative adversarial network (cGAN). The training data of the improved cGAN are composed of aircraft depth images formed by spatial sampling and transformation of fine point clouds of 76 types of aircraft and 4 types of drone. The experimental results demonstrate that APCG is capable of efficiently generating diverse aircraft point clouds that reflect the acquisition characteristics of the SPC-Lidar system. The generated point clouds exhibit high similarity to the standard point clouds. Furthermore, APCG shows robust adaptability and stability in response to the variation in aircraft slant range. Full article
(This article belongs to the Special Issue Recent Progress in Single-Photon Generation and Detection)

29 pages, 15780 KB  
Article
Assessing Lightweight Folding UAV Reliability Through a Photogrammetric Case Study: Extracting Urban Village’s Buildings Using Object-Based Image Analysis (OBIA) Method
by Junyu Kuang, Yingbiao Chen, Zhenxiang Ling, Xianxin Meng, Wentao Chen and Zihao Zheng
Drones 2025, 9(2), 101; https://doi.org/10.3390/drones9020101 - 29 Jan 2025
Cited by 1 | Viewed by 1953
Abstract
With the rapid advancement of drone technology, modern drones have achieved high levels of functional integration, alongside structural improvements that include lightweight, compact designs with foldable features, greatly enhancing their flexibility and applicability in photogrammetric applications. Nevertheless, limited research currently explores data collected by such compact UAVs, and whether they can balance a small form factor with high data quality remains uncertain. To address this challenge, this study acquired the remote sensing data of a peri-urban area using the DJI Mavic 3 Enterprise and applied Object-Based Image Analysis (OBIA) to extract high-density buildings. It was found that this drone offers high portability, a low operational threshold, and minimal regulatory constraints in practical applications, while its captured imagery provides rich textural details that clearly depict the complex surface features in urban villages. To assess the accuracy of the extraction results, the visual comparison between the segmentation outputs and airborne LiDAR point clouds captured by the DJI M300 RTK was performed, and classification performance was evaluated based on confusion matrix metrics. The results indicate that the boundaries of the segmented objects align well with the building edges in the LiDAR point cloud. The classification accuracy of the three selected algorithms exceeded 80%, with the KNN classifier achieving an accuracy of 91% and a Kappa coefficient of 0.87, which robustly demonstrate the reliability of the UAV data and validate the feasibility of the proposed approach in complex cases. As a practical case reference, this study is expected to promote the wider application of lightweight UAVs across various fields. Full article
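The confusion-matrix metrics cited above (overall accuracy and the Kappa coefficient) are computed as follows; the 2-class matrix is hypothetical, not the study's data.

```python
import numpy as np

def kappa(cm):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: predicted):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                           # observed agreement (accuracy)
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2   # agreement expected by chance
    return (po - pe) / (1.0 - pe)

# hypothetical building / non-building matrix, 100 validation samples
cm = [[45, 5],
      [4, 46]]
print(round(kappa(cm), 2))  # 0.82 (accuracy here is 0.91)
```

Kappa discounts the agreement a random classifier would achieve given the class proportions, which is why it sits below raw accuracy and is the more conservative of the two figures the abstract reports.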

20 pages, 7483 KB  
Article
An Enhanced LiDAR-Based SLAM Framework: Improving NDT Odometry with Efficient Feature Extraction and Loop Closure Detection
by Yan Ren, Zhendong Shen, Wanquan Liu and Xinyu Chen
Processes 2025, 13(1), 272; https://doi.org/10.3390/pr13010272 - 19 Jan 2025
Cited by 3 | Viewed by 2913
Abstract
Simultaneous localization and mapping (SLAM) is crucial for autonomous driving, drone navigation, and robot localization, relying on efficient point cloud registration and loop closure detection. Traditional Normal Distributions Transform (NDT) odometry frameworks provide robust solutions but struggle with real-time performance due to the high computational complexity of processing large-scale point clouds. This paper introduces an improved NDT-based LiDAR odometry framework to address these challenges. The proposed method enhances computational efficiency and registration accuracy by introducing a unified feature point cloud framework that integrates planar and edge features, enabling more accurate and efficient inter-frame matching. To further improve loop closure detection, a parallel hybrid approach combining Radius Search and Scan Context is developed, which significantly enhances robustness and accuracy. Additionally, feature-based point cloud registration is seamlessly integrated with full cloud mapping in global optimization, ensuring high-precision pose estimation and detailed environmental reconstruction. Experiments on both public datasets and real-world environments validate the effectiveness of the proposed framework. Compared with traditional NDT, our method achieves trajectory estimation accuracy increases of 35.59% and over 35%, respectively, with and without loop detection. The average registration time is reduced by 66.7%, memory usage is decreased by 23.16%, and CPU usage drops by 19.25%. These results surpass those of existing SLAM systems, such as LOAM. The proposed method demonstrates superior robustness, enabling reliable pose estimation and map construction in dynamic, complex settings. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
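The Normal Distributions Transform at the core of this framework models each voxel of the reference cloud as a Gaussian and scores candidate poses against those distributions. Below is a minimal sketch of that representation only (the minimum-point threshold, regularization, and toy terrain are assumptions, not the paper's optimized odometry).

```python
import numpy as np

def ndt_build(points, voxel=1.0, eps=1e-3):
    """Per-voxel Gaussian statistics (mean, inverse covariance) of a cloud."""
    cells = {}
    keys = np.floor(points / voxel).astype(np.int64)
    for k, p in zip(map(tuple, keys), points):
        cells.setdefault(k, []).append(p)
    stats = {}
    for k, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 5:               # too few points for a stable covariance
            continue
        cov = np.cov(pts.T) + eps * np.eye(3)   # regularize thin distributions
        stats[k] = (pts.mean(axis=0), np.linalg.inv(cov))
    return stats

def ndt_score(points, stats, voxel=1.0):
    """Negative sum of per-point Gaussian likelihoods: lower = better aligned."""
    s = 0.0
    for p in points:
        k = tuple(np.floor(p / voxel).astype(np.int64))
        if k in stats:
            mu, icov = stats[k]
            d = p - mu
            s -= np.exp(-0.5 * d @ icov @ d)
    return s

# gently sloped synthetic terrain; a vertically offset copy scores worse
rng = np.random.default_rng(5)
x = rng.uniform(0, 8, size=(8000, 2))
z = 0.2 * x[:, 0] + rng.normal(0, 0.05, 8000)
ref = np.column_stack([x, z])
stats = ndt_build(ref, voxel=1.0)
query = ref[::8]
shifted = query + np.array([0.0, 0.0, 0.5])
print(ndt_score(query, stats) < ndt_score(shifted, stats))
```

NDT registration searches over the rigid transform that minimizes this score; because the score is smooth in the pose, gradient-based optimization works, but evaluating it over large clouds is the computational bottleneck the paper's feature-based pipeline targets.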

26 pages, 12469 KB  
Article
UAV Data Collection Co-Registration: LiDAR and Photogrammetric Surveys for Coastal Monitoring
by Carmen Maria Giordano, Valentina Alena Girelli, Alessandro Lambertini, Maria Alessandra Tini and Antonio Zanutta
Drones 2025, 9(1), 49; https://doi.org/10.3390/drones9010049 - 11 Jan 2025
Cited by 4 | Viewed by 2839
Abstract
When georeferencing is a key point of coastal monitoring, it is crucial to understand how the type of data and object characteristics can affect the result of the registration procedure, and, above all, how to assess the reconstruction accuracy. For this reason, the goal of this work is to evaluate the performance of the iterative closest point (ICP) method for registering point clouds in coastal environments, using a single-epoch and multi-sensor survey of a coastal area (near the Bevano river mouth, Ravenna, Italy). The combination of multiple drone datasets (LiDAR and photogrammetric clouds) is performed via indirect georeferencing, using different executions of the ICP procedure. The ICP algorithm is affected by the differences in the vegetation reconstruction by the two sensors, which may lead to a rotation of the slave cloud. While the dissimilarities between the two clouds can be minimized, reducing their impact, the lack of object distinctiveness, typical of environmental objects, remains a problem that cannot be overcome. This work addresses the use of the ICP method for registering point clouds representative of coastal environments, with some limitations related to the required presence of stable areas between the clouds and the potential errors associated with featureless surfaces. Full article
(This article belongs to the Special Issue UAVs for Coastal Surveying)
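The ICP procedure evaluated in this work alternates nearest-neighbour matching with a closed-form (Kabsch/SVD) rigid-transform solve. A minimal point-to-point sketch on synthetic clouds follows; the cloud sizes, perturbation, and tolerance are illustrative, and a real pipeline would use a k-d tree instead of brute-force matching.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation + translation mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Basic point-to-point ICP: match each source point to its nearest
    target point, solve for the rigid transform, apply, repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small demo clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(6)
master = rng.uniform(0, 5, size=(300, 3))           # "master" cloud
theta = np.deg2rad(2.0)                             # small residual rotation
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
slave = master @ Rz.T + np.array([0.1, -0.05, 0.05])
aligned = icp(slave, master)
print(np.abs(aligned - master).max() < 0.01)
```

This toy case converges because the two clouds are identical up to a small transform; the paper's point is precisely that real coastal clouds are not (vegetation reconstructed differently by each sensor, featureless sand), which biases the matched pairs and can rotate the slave cloud.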
