Search Results (157)

Search Parameters:
Keywords = rotating lidar

20 pages, 5843 KiB  
Article
Accurate and Robust Train Localization: Fusing Degeneracy-Aware LiDAR-Inertial Odometry and Visual Landmark Correction
by Lin Yue, Peng Wang, Jinchao Mu, Chen Cai, Dingyi Wang and Hao Ren
Sensors 2025, 25(15), 4637; https://doi.org/10.3390/s25154637 - 26 Jul 2025
Viewed by 331
Abstract
To overcome the limitations of current train positioning systems, including low positioning accuracy and heavy reliance on track transponders or GNSS signals, this paper proposes a novel LiDAR-inertial and visual landmark fusion framework. First, an IMU preintegration factor that accounts for the Earth's rotation and a LiDAR-inertial odometry factor that accounts for degenerate states are constructed to suit railway operating environments. A lightweight network based on an improved YOLO then recognizes reflective kilometer posts, while PaddleOCR extracts their numerical codes. High-precision vertex coordinates of the kilometer posts are obtained by jointly using the LiDAR point cloud and the image detection box. Next, a kilometer post factor is constructed, and the multi-source information is optimized within a factor graph framework. Finally, onboard experiments on real railway vehicles demonstrate high-precision landmark detection at 35 FPS with 94.8% average precision. The proposed method delivers robust positioning within 5 m RMSE for high-speed, long-distance train travel, establishing a novel framework for intelligent railway development. Full article
(This article belongs to the Section Navigation and Positioning)

15 pages, 2993 KiB  
Article
A Joint LiDAR and Camera Calibration Algorithm Based on an Original 3D Calibration Plate
by Ziyang Cui, Yi Wang, Xiaodong Chen and Huaiyu Cai
Sensors 2025, 25(15), 4558; https://doi.org/10.3390/s25154558 - 23 Jul 2025
Viewed by 265
Abstract
An accurate extrinsic calibration between LiDAR and cameras is essential for effective sensor fusion, directly impacting the perception capabilities of autonomous driving systems. Although prior calibration approaches using planar and point features have yielded some success, they suffer from inherent limitations. Specifically, methods that rely on fitting planar contours using depth-discontinuous points are prone to systematic errors, which hinder the precise extraction of the 3D positions of feature points. This, in turn, compromises the accuracy and robustness of the calibration. To overcome these challenges, this paper introduces a novel 3D calibration plate incorporating the gradient depth, localization markers, and corner features. At the point cloud level, the gradient depth enables the accurate estimation of the 3D coordinates of feature points. At the image level, corner features and localization markers facilitate the rapid and precise acquisition of 2D pixel coordinates, with minimal interference from environmental noise. This method establishes a rigorous and systematic framework to enhance the accuracy of LiDAR–camera extrinsic calibrations. In a simulated environment, experimental results demonstrate that the proposed algorithm achieves a rotation error below 0.002 radians and a translation error below 0.005 m. Full article
(This article belongs to the Section Sensing and Imaging)
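Extrinsic calibration quality of the kind reported above is typically judged by projecting LiDAR points into the image through the estimated rotation, translation, and camera intrinsics and measuring pixel error. A minimal sketch of that projection step, assuming a standard pinhole model (the name `project_points` and the example R, t, K values are illustrative, not from the paper):

```python
import numpy as np

def project_points(X_lidar, R, t, K):
    """Project 3-D LiDAR points into the image plane: x ~ K (R X + t).
    X_lidar: (n, 3) points in the LiDAR frame, R: 3x3 extrinsic rotation,
    t: (3,) extrinsic translation, K: 3x3 camera intrinsic matrix.
    Returns (n, 2) pixel coordinates."""
    Xc = X_lidar @ R.T + t          # transform into the camera frame
    uv = Xc @ K.T                   # apply intrinsics (homogeneous pixels)
    return uv[:, :2] / uv[:, 2:3]   # perspective divide by depth
```

With a well-calibrated R and t, projected LiDAR returns from the calibration plate should land within a fraction of a pixel of the detected corner features.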

21 pages, 2469 KiB  
Article
Robust Low-Overlap Point Cloud Registration via Displacement-Corrected Geometric Consistency for Enhanced 3D Sensing
by Xin Wang and Qingguang Li
Sensors 2025, 25(14), 4332; https://doi.org/10.3390/s25144332 - 11 Jul 2025
Viewed by 363
Abstract
Accurate alignment of 3D point clouds, achieved by ubiquitous sensors such as LiDAR and depth cameras, is critical for enhancing perception capabilities in robotics, autonomous navigation, and environmental reconstruction. However, low-overlap scenarios—common due to limited sensor field-of-view or occlusions—severely degrade registration robustness and sensing reliability. To address this challenge, this paper proposes a novel geometric consistency optimization and rectification deep learning network named GeoCORNet. By synergistically designing a geometric consistency enhancement module, a bidirectional cross-attention mechanism, a predictive displacement rectification strategy, and joint optimization of overlap loss with displacement loss, GeoCORNet significantly improves registration accuracy and robustness in complex scenarios. The Attentive Cross-Consistency module of GeoCORNet integrates distance and angular consistency constraints with bidirectional cross-attention to significantly suppress noise from non-overlapping regions while reinforcing geometric coherence in overlapping areas. The predictive displacement rectification strategy dynamically rectifies erroneous correspondences through predicted 3D displacements instead of discarding them, maximizing the utility of sparse sensor data. Furthermore, a novel displacement loss function was developed to effectively constrain the geometric distribution of corrected point-pairs. Experimental results demonstrate that our method outperformed existing approaches in the aspects of registration recall, rotation error, and algorithm robustness under low-overlap conditions. These advances establish a new paradigm for robust 3D sensing in real-world applications where partial sensor data is prevalent. Full article
(This article belongs to the Section Sensing and Imaging)

12 pages, 3214 KiB  
Article
Singular Value Decomposition (SVD) Method for LiDAR and Camera Sensor Fusion and Pattern Matching Algorithm
by Kaiqiao Tian, Meiqi Song, Ka C. Cheok, Micho Radovnikovich, Kazuyuki Kobayashi and Changqing Cai
Sensors 2025, 25(13), 3876; https://doi.org/10.3390/s25133876 - 21 Jun 2025
Viewed by 725
Abstract
LiDAR and camera sensors are widely utilized in autonomous vehicles (AVs) and robotics due to their complementary sensing capabilities—LiDAR provides precise depth information, while cameras capture rich visual context. However, effective multi-sensor fusion remains challenging due to discrepancies in resolution, data format, and viewpoint. In this paper, we propose a robust pattern matching algorithm that leverages singular value decomposition (SVD) and gradient descent (GD) to align geometric features—such as object contours and convex hulls—across LiDAR and camera modalities. Unlike traditional calibration methods that require manual targets, our approach is targetless, extracting matched patterns from projected LiDAR point clouds and 2D image segments. The algorithm computes the optimal transformation matrix between sensors, correcting misalignments in rotation, translation, and scale. Experimental results on a vehicle-mounted sensing platform demonstrate an alignment accuracy improvement of up to 85%, with the final projection error reduced to less than 1 pixel. This pattern-based SVD-GD framework offers a practical solution for maintaining reliable cross-sensor alignment under calibration drift, enabling real-time perception systems to operate robustly without recalibration. This method provides a practical solution for maintaining reliable sensor fusion in autonomous driving applications subject to long-term calibration drift. Full article
(This article belongs to the Special Issue Recent Advances in LiDAR Sensor)
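The SVD step in this kind of targetless alignment is typically the closed-form rigid-body (Kabsch/Procrustes) solution for matched feature points: center both point sets, take the SVD of their cross-covariance, and read off the rotation. A minimal sketch under that assumption — `rigid_align` and its interface are illustrative, not the authors' implementation:

```python
import numpy as np

def rigid_align(P, Q):
    """Estimate rotation R and translation t such that R @ p_i + t ~ q_i
    for matched point sets P, Q of shape (n, d), via the Kabsch/SVD method."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)      # d x d cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

In the paper's setting, P and Q would be matched contour or convex-hull features from the projected LiDAR cloud and the image segments, with gradient descent refining the result.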

25 pages, 40577 KiB  
Article
Laser SLAM Matching Localization Method for Subway Tunnel Point Clouds
by Yi Zhang, Feiyang Dong, Qihao Sun and Weiwei Song
Sensors 2025, 25(12), 3681; https://doi.org/10.3390/s25123681 - 12 Jun 2025
Cited by 1 | Viewed by 450
Abstract
When facing geometrically similar environments such as subway tunnels, Scan-Map registration is highly dependent on the correct initial value of the pose, otherwise mismatching is prone to occur, which limits the application of SLAM (Simultaneous Localization and Mapping) in tunnels. We propose a novel coarse-to-fine registration strategy that includes geometric feature extraction and a keyframe-based pose optimization model. The method involves initial feature point set acquisition through point distance calculations, followed by the extraction of line and plane features, and convex hull features based on the normal vector’s change rate. Coarse registration is achieved through rotation and translation using three types of feature sets, with the resulting pose serving as the initial value for fine registration via Point-Plane ICP. The algorithm’s accuracy and efficiency are validated using Innovusion lidar scans of a subway tunnel, achieving a single-frame point cloud registration accuracy of 3 cm within 0.7 s, significantly improving upon traditional registration algorithms. The study concludes that the proposed method effectively enhances SLAM’s applicability in challenging tunnel environments, ensuring high registration accuracy and efficiency. Full article
(This article belongs to the Section Navigation and Positioning)
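The Point-Plane ICP refinement mentioned above minimizes point-to-plane distances; under the usual small-angle linearization, each update reduces to a 6-DoF linear least-squares problem in a rotation vector and a translation. A hedged sketch of one such update (the function name and data layout are assumptions, not the paper's code):

```python
import numpy as np

def point_to_plane_step(P, Q, N):
    """One linearized point-to-plane ICP update (small-angle assumption).
    P: source points, Q: matched target points, N: unit target normals,
    all of shape (n, 3). Returns (omega, t) minimizing
    sum_i (n_i . (p_i + omega x p_i + t - q_i))^2."""
    A = np.hstack([np.cross(P, N), N])        # n x 6 Jacobian: [(p x n)^T, n^T]
    b = -np.einsum("ij,ij->i", N, P - Q)      # residuals: -n_i . (p_i - q_i)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]
```

In a real pipeline this step would be iterated, re-establishing correspondences each time, starting from the coarse pose supplied by the feature-based registration.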

13 pages, 4604 KiB  
Article
Research on the Detection of Middle Atmosphere Temperature by Pure Rotating Raman–Rayleigh Scattering LiDAR at Daytime and Nighttime
by Bangxin Wang, Cheng Li, Qian Deng, Decheng Wu, Zhenzhu Wang, Hao Yang, Kunming Xing and Yingjian Wang
Photonics 2025, 12(6), 590; https://doi.org/10.3390/photonics12060590 - 9 Jun 2025
Viewed by 562
Abstract
The temperature of the middle atmosphere is of great significance in the coupled study of the upper and lower layers. A pure rotational Raman–Rayleigh scattering LiDAR system was developed for profiling the middle atmospheric temperature at daytime and nighttime continuously by employing an ultra-narrow band interferometer. The comparisons between LiDAR detections and radiosonde data show that the LiDAR system has temperature detection capabilities of 80 km and 60 km at night and during the day, respectively. The results demonstrate that our method can reliably detect the atmospheric temperature in the middle atmosphere. The significant non-uniformity in the horizontal distribution of temperature in the middle atmosphere and the vertical gradient of atmospheric temperature could be observed by using the developed LiDAR. Full article
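Pure rotational Raman thermometry commonly inverts the ratio of two Raman channel signals through a calibration of the form ln R = a/T + b, with a and b fitted against radiosonde profiles. A minimal sketch of that inversion — this simple two-parameter form and the example coefficients are assumptions, not necessarily the paper's exact calibration:

```python
import numpy as np

def temperature_from_ratio(R_signal, a, b):
    """Invert a two-parameter rotational-Raman calibration
    ln R = a/T + b  =>  T = a / (ln R - b),
    where R_signal is the ratio of the two Raman channel signals and
    a, b are coefficients fitted against radiosonde temperature profiles."""
    return a / (np.log(R_signal) - b)
```

A round trip with assumed coefficients (a = -465 K, b = 0.5) recovers the input temperature exactly, which is a quick sanity check on the algebra.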

11 pages, 2032 KiB  
Communication
Super-Resolution Reconstruction of LiDAR Images Based on an Adaptive Contour Closure Algorithm over 10 km
by Liang Shi, Xinyuan Zhang, Fei Han, Yicheng Wang, Shilong Xu, Xing Yang and Yihua Hu
Photonics 2025, 12(6), 569; https://doi.org/10.3390/photonics12060569 - 5 Jun 2025
Viewed by 416
Abstract
Reflective Tomography LiDAR (RTL) imaging, an innovative LiDAR technology, offers the significant advantage of an imaging resolution independent of detection distance and receiving optical aperture, evolving from Computed Tomography (CT) principles. However, distinct from transmissive imaging, RTL requires precise alignment of multi-angle echo data around the target’s rotation center before image reconstruction. This paper presents an adaptive contour closure algorithm for automated multi-angle echo data registration in RTL. A 10.38 km remote RTL imaging experiment validates the algorithm’s efficacy, showing that it improves the quality factor of reconstructed images by over 23% and effectively suppresses interference from target/detector jitter, laser pulse transmission/reception fluctuations, and atmospheric turbulence. These results support the development of advanced space target perception capabilities and drive the transition of space-based LiDAR from “point” measurements to “volumetric” perception, marking a crucial advancement in space exploration and surveillance. Full article
(This article belongs to the Special Issue Technologies and Applications of Optical Imaging)

20 pages, 9870 KiB  
Article
Analysis, Simulation, and Scanning Geometry Calibration of Palmer Scanning Units for Airborne Hyperspectral Light Detection and Ranging
by Shuo Shi, Qian Xu, Chengyu Gong, Wei Gong, Xingtao Tang and Bowei Zhou
Remote Sens. 2025, 17(8), 1450; https://doi.org/10.3390/rs17081450 - 18 Apr 2025
Viewed by 427
Abstract
Airborne hyperspectral LiDAR (AHSL) is a technology that integrates the spectral content collected using hyperspectral imaging with the precise 3D descriptions of observed objects obtained using LiDAR (light detection and ranging). AHSL captures both the spectral and three-dimensional (3D) information of an object using laser measurements alone. However, the richness of spectral properties also introduces a new issue in the scan unit: a mechanical–optical trade-off. Specifically, the abundant spectral information requires a larger optical aperture, which limits the mechanical load the scan unit can accept at the demanding rotation speeds and flight heights. Simulation and analysis of scan models show that the Palmer scan best accommodates the large optical aperture required by AHSL. Furthermore, based on the simulation of the Palmer scan model, a ratio of overlap (ROP) of 45.23% is found to minimize the variation in point density, reducing the coefficient of variation (CV) from 0.47 to 0.19. A second issue is that the complex optical path makes it difficult to calibrate the scanning geometry with external devices. A self-calibration strategy integrating indoor laser vector retrieval and airborne orientation correction is proposed to tackle this problem. The strategy comprises three improvements: (1) a self-determined laser vector retrieval strategy that exploits the self-ranging feature of AHSL itself, retrieving the initial scanning laser vectors with a precision of 0.874 mrad; (2) a linear residual estimated interpolation (LREI) method that improves interpolation precision, reducing the RMSE from 1.517 mrad to 0.977 mrad while, unlike plain linear interpolation, preserving the geometric features of Palmer scanning traces; and (3) a least-deviated flatness restricted optimization (LDFO) algorithm that calibrates the angle offset in aerial scanning point cloud data, reducing the standard deviation of the flatness of the scanning plane from 1.389 m to 0.241 m and reducing the distortion of the scanning strip. This study provides a practical scanning method and a corresponding calibration strategy for AHSL. Full article

25 pages, 16833 KiB  
Article
R2SCAT-LPR: Rotation-Robust Network with Self- and Cross-Attention Transformers for LiDAR-Based Place Recognition
by Weizhong Jiang, Hanzhang Xue, Shubin Si, Liang Xiao, Dawei Zhao, Qi Zhu, Yiming Nie and Bin Dai
Remote Sens. 2025, 17(6), 1057; https://doi.org/10.3390/rs17061057 - 17 Mar 2025
Cited by 1 | Viewed by 682
Abstract
LiDAR-based place recognition (LPR) is crucial for the navigation and localization of autonomous vehicles and mobile robots in large-scale outdoor environments and plays a critical role in loop closure detection for simultaneous localization and mapping (SLAM). Existing LPR methods, which utilize 2D bird’s-eye view (BEV) projections of 3D point clouds, achieve competitive performance in efficiency and recognition accuracy. However, these methods often struggle with capturing global contextual information and maintaining robustness to viewpoint variations. To address these challenges, we propose R2SCAT-LPR, a novel, transformer-based model that leverages self-attention and cross-attention mechanisms to extract rotation-robust place feature descriptors from BEV images. R2SCAT-LPR consists of three core modules: (1) R2MPFE, which employs weight-shared cascaded multi-head self-attention (MHSA) to extract multi-level spatial contextual patch features from both the original BEV image and its randomly rotated counterpart; (2) DSCA, which integrates dual-branch self-attention and multi-head cross-attention (MHCA) to capture intrinsic correspondences between multi-level patch features before and after rotation, enhancing the extraction of rotation-robust local features; and (3) a combined NetVLAD module, which aggregates patch features from both the original feature space and the rotated interaction space into a compact and viewpoint-robust global descriptor. Extensive experiments conducted on the KITTI and NCLT datasets validate the effectiveness of the proposed model, demonstrating its robustness to rotation variations and its generalization ability across diverse scenes and LiDAR sensors types. Furthermore, we evaluate the generalization performance and computational efficiency of R2SCAT-LPR on our self-constructed OffRoad-LPR dataset for off-road autonomous driving, verifying its deployability on resource-constrained platforms. Full article

16 pages, 14380 KiB  
Article
Online Calibration Method of LiDAR and Camera Based on Fusion of Multi-Scale Cost Volume
by Xiaobo Han, Jie Luo, Xiaoxu Wei and Yongsheng Wang
Information 2025, 16(3), 223; https://doi.org/10.3390/info16030223 - 13 Mar 2025
Cited by 1 | Viewed by 1712
Abstract
The online calibration algorithm for camera and LiDAR helps solve the problem of multi-sensor fusion and is of great significance in autonomous driving perception. Existing online calibration algorithms fail to balance real-time performance and accuracy: high-precision algorithms place heavy demands on hardware, while lightweight algorithms struggle to meet accuracy requirements. In addition, sensor noise, vibration, and changes in environmental conditions can reduce calibration accuracy, and because of large domain differences between public datasets, existing online calibration algorithms behave unstably across datasets and lack robustness. To solve these problems, we propose an online calibration algorithm based on multi-scale cost volume fusion. First, a multi-layer convolutional network downsamples and concatenates the camera RGB data and the LiDAR point cloud data to obtain feature maps at three scales. These are then subjected to feature concatenation and group-wise correlation to generate three sets of cost volumes at different scales. All the cost volumes are spliced and fed to the pose estimation module; after post-processing, the translation and rotation matrix between the camera and LiDAR coordinate systems is obtained. We tested and verified this method on the KITTI odometry dataset, measuring an average translation error of 0.278 cm and an average rotation error of 0.020°, with a single frame taking 23 ms, which is competitive with the state of the art. Full article

26 pages, 7338 KiB  
Article
Research on Fitting and Denoising Subway Shield-Tunnel Cross-Section Point-Cloud Data Based on the Huber Loss Function
by Yan Bao, Sixuan Li, Chao Tang, Zhe Sun, Kun Yang and Yong Wang
Appl. Sci. 2025, 15(4), 2249; https://doi.org/10.3390/app15042249 - 19 Feb 2025
Viewed by 1019
Abstract
The expansion of tunnel scale has led to a massive demand for inspections. Light Detection And Ranging (LiDAR) has been widely applied in tunnel structure inspections due to its fast data acquisition speed and strong environmental adaptability. However, raw tunnel point-cloud data contain noise point clouds, such as non-structural facilities, which affect the detection of tunnel lining structures. Methods such as point-cloud filtering and machine learning have been applied to tunnel point-cloud denoising, but these methods usually require a lot of manual data preprocessing. In order to directly denoise the tunnel point cloud without preprocessing, this study proposes a comprehensive processing method for cross-section fitting and point-cloud denoising in subway shield tunnels based on the Huber loss function. The proposed method is compared with classical fitting denoising methods such as the least-squares method and random sample consensus (RANSAC). This study is experimentally verified with 40 m long shield-tunnel point-cloud data. Experimental results show that the method proposed in this study can more accurately fit the geometric parameters of the tunnel lining structure and denoise the point-cloud data, achieving a better denoising effect. Meanwhile, since coordinate system transformations are required during the point-cloud denoising process to handle the data, manual rotations of the coordinate system can introduce errors. This study simultaneously combines the Huber loss function with principal component analysis (PCA) and proposes a three-dimensional spatial coordinate system transformation method for tunnel point-cloud data based on the characteristics of data distribution. Full article
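The Huber loss behind the proposed fitting is quadratic for small residuals and linear for large ones, which is what lets noise points (e.g., non-structural facilities) influence the fit far less than they would under plain least squares. A minimal one-dimensional illustration via iteratively reweighted least squares — the function names, the IRLS scheme, and the parameter values are illustrative, not the paper's implementation:

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: 0.5 r^2 for |r| <= delta, linear growth beyond delta."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def robust_mean(x, delta=1.0, iters=50):
    """1-D location estimate under the Huber loss via iteratively
    reweighted least squares; weights are 1 inside the quadratic zone
    and delta/|r| outside, so outliers are down-weighted, not discarded."""
    mu = np.median(x)
    for _ in range(iters):
        a = np.abs(x - mu)
        w = np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

The same reweighting idea generalizes from a 1-D location to the circle/ellipse parameters of a tunnel cross-section: residuals are point-to-curve distances, and the Huber weights keep cables, pipes, and other fixtures from dragging the fit off the lining.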

19 pages, 6549 KiB  
Article
Research on the Tunable Optical Alignment Technology of Lidar Under Complex Working Conditions
by Jianfeng Chen, Jie Ji, Chenbo Xie and Yingjian Wang
Remote Sens. 2025, 17(3), 532; https://doi.org/10.3390/rs17030532 - 5 Feb 2025
Cited by 1 | Viewed by 779
Abstract
Lidar technology is pivotal for detecting and monitoring the atmospheric environment. However, maintaining optical path stability in complex environments poses significant challenges, especially regarding adaptability and cost efficiency. This study proposes a tunable optical alignment method that is applied to the Rotating Rayleigh Doppler Wind Lidar (RRDWL) to enable precise detection of mid-to-upper atmospheric wind fields. Building on the conventional echo signal strength method, this approach calibrates the signal strength using cloud information and the signal-to-noise ratio (SNR), enabling stratified and tunable optical alignment. Experimental results indicate that the optimized RRDWL achieves a maximum detection height increase from 42 km to nearly 51 km. Additionally, the average horizontal wind speed error at 30 km decreases from 11.3 m/s to 4.4 m/s, with a minimum error of approximately 1 m/s. These findings confirm that the proposed method enhances the effectiveness and reliability of the Lidar system under complex operational and diverse weather conditions. Furthermore, it improves detection performance and provides robust support for applications in related fields. Full article

25 pages, 7531 KiB  
Article
Lidar Doppler Tomography Focusing Error Analysis and Focusing Method for Targets with Unknown Rotational Speed
by Yutang Li, Chen Xu, Dengfeng Liu, Anpeng Song, Jian Li, Dongzhe Han, Kai Jin, Youming Guo and Kai Wei
Remote Sens. 2025, 17(3), 506; https://doi.org/10.3390/rs17030506 - 31 Jan 2025
Viewed by 814
Abstract
Lidar Doppler tomography (LDT) is a significant method for imaging rotating targets in long-distance air and space applications. Typically, these targets are non-cooperative and exhibit unknown rotational speeds. Inferring the rotational speed from observational data is essential for effective imaging. However, existing research predominantly emphasizes the development of imaging algorithms and interference suppression, often neglecting the analysis of rotational speed estimation. This paper examines the impact of errors in rotational speed estimation on imaging quality and proposes a robust method for accurate speed estimation that yields focused imaging results. We developed a specialized measurement matrix to characterize the imaging process, which effectively captures the variations in measurement matrices resulting from different rotational speed estimates. We refer to this variation as the law of spatiotemporal propagation of errors, indicating that both the imaging accumulation time and the spatial distribution of the target influence the error distribution of the measurement matrix. Furthermore, we validated this principle through imaging simulations of point and spatial targets. Additionally, we present a method for estimating rotational speed, which includes a coarse estimation phase, image filtering, and a fine estimation phase utilizing Rényi entropy minimization. The initial rough estimate is derived from the periodicity observed in the echo time-frequency distribution. The image filtering process leverages the spatial regularity of the measurement matrix's error distribution. The precise estimation of rotational speed employs Rényi entropy to assess image quality, thereby enhancing estimation accuracy. We constructed a Lidar Doppler tomography system and validated the effectiveness of the proposed method through close-range experiments. The system achieved a rotational speed estimation accuracy of 97.81%, enabling well-focused imaging with a spatial resolution better than 1 mm. Full article
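The fine estimation phase scores candidate rotational speeds by the Rényi entropy of the reconstructed image: a well-focused reconstruction concentrates its intensity into few pixels and therefore has low entropy, so the best speed estimate is the one minimizing this measure. A minimal sketch of the image-quality metric (the function name and the choice α = 3 are assumptions):

```python
import numpy as np

def renyi_entropy(image, alpha=3.0):
    """Rényi entropy of an image treated as a probability distribution
    over pixel intensities; lower values indicate a sharper, better-focused
    reconstruction, so it can serve as an autofocus criterion."""
    p = np.abs(image).ravel()
    p = p / p.sum()                 # normalize intensities to probabilities
    p = p[p > 0]                    # drop zero-probability pixels
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)
```

In a speed search, one would reconstruct an image per candidate speed and keep the candidate whose reconstruction scores lowest.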

26 pages, 191820 KiB  
Article
Research on Automatic Tracking and Size Estimation Algorithm of “Low, Slow and Small” Targets Based on Gm-APD Single-Photon LIDAR
by Dongfang Guo, Yanchen Qu, Xin Zhou, Jianfeng Sun, Shengwen Yin, Jie Lu and Feng Liu
Drones 2025, 9(2), 85; https://doi.org/10.3390/drones9020085 - 22 Jan 2025
Cited by 2 | Viewed by 1066
Abstract
To address the problem of detecting, tracking, and estimating the size of "low, slow and small" targets (such as UAVs) in the air, this paper designs a single-photon LiDAR imaging system based on a Geiger-mode Avalanche Photodiode (Gm-APD). It improves the Mean-Shift algorithm, proposing an automatic tracking method that combines it with weighted-centroid target extraction, and fits the target's flight attitude with a principal component analysis (PCA)-based adaptive rotated rectangle. The method uses the target intensity and distance information provided by the Gm-APD LiDAR and addresses automatic calibration and size estimation across multiple flight attitudes. Experimental results show that the improved algorithm can automatically track targets in different flight attitudes in real time and accurately calculate their sizes. The improved algorithm remains stable over a 1250-frame tracking experiment of a DJI Phantom 4 UAV flying at 5 m/s at a distance of 100 m; the fitting error of the target stays below 2 pixels, while the size calculation error stays below 2.5 cm. This shows the remarkable advantages of Gm-APD LiDAR in detecting "low, slow and small" targets and is of practical significance for improving UAV detection and C-UAS systems. However, applying this technology against complex backgrounds, especially under occlusion or in multi-target tracking, still faces challenges, and further optimizing the field of view of the Gm-APD single-photon LiDAR for long-distance detection remains a direction for future research. Full article
(This article belongs to the Special Issue Detection, Identification and Tracking of UAVs and Drones)

26 pages, 12469 KiB  
Article
UAV Data Collection Co-Registration: LiDAR and Photogrammetric Surveys for Coastal Monitoring
by Carmen Maria Giordano, Valentina Alena Girelli, Alessandro Lambertini, Maria Alessandra Tini and Antonio Zanutta
Drones 2025, 9(1), 49; https://doi.org/10.3390/drones9010049 - 11 Jan 2025
Cited by 2 | Viewed by 1567
Abstract
When georeferencing is a key point of coastal monitoring, it is crucial to understand how the type of data and object characteristics can affect the result of the registration procedure, and, above all, how to assess the reconstruction accuracy. For this reason, the goal of this work is to evaluate the performance of the iterative closest point (ICP) method for registering point clouds in coastal environments, using a single-epoch and multi-sensor survey of a coastal area (near the Bevano river mouth, Ravenna, Italy). The combination of multiple drone datasets (LiDAR and photogrammetric clouds) is performed via indirect georeferencing, using different executions of the ICP procedure. The ICP algorithm is affected by the differences in the vegetation reconstruction by the two sensors, which may lead to a rotation of the slave cloud. While the dissimilarities between the two clouds can be minimized, reducing their impact, the lack of object distinctiveness, typical of environmental objects, remains a problem that cannot be overcome. This work addresses the use of the ICP method for registering point clouds representative of coastal environments, with some limitations related to the required presence of stable areas between the clouds and the potential errors associated with featureless surfaces. Full article
(This article belongs to the Special Issue UAVs for Coastal Surveying)
