Article

Analysis of Factors Affecting Random Measurement Error in LiDAR Point Cloud Feature Matching Positioning

1 School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
2 Key Laboratory of Micro-Inertial Instrument and Advanced Navigation Technology, Southeast University, Nanjing 210096, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(8), 1457; https://doi.org/10.3390/rs17081457
Submission received: 15 February 2025 / Revised: 2 April 2025 / Accepted: 15 April 2025 / Published: 18 April 2025

Abstract

Light detection and ranging (LiDAR) offers simultaneous localization and mapping with high precision, making it one of the key sensors for intelligent robotics navigation, positioning, and perception. The random measurement error of global navigation satellite system (GNSS) observations is widely considered to be closely related to the elevation angle factor. In the LiDAR point cloud feature matching positioning model, however, the analysis of the factors affecting the random measurement error of observations remains unsophisticated, which limits the ability of LiDAR sensors to estimate pose parameters. This work therefore draws on the random measurement error analysis method of GNSS observations to study the impact of factors such as distance, angle, and feature accuracy on the random measurement error of LiDAR. The experimental results show that, compared with the distance and angle factors, feature accuracy is the main factor affecting the measurement error in the LiDAR point cloud feature matching positioning model, even under different sensor specifications, point cloud densities, prior maps, and urban road scenes. Furthermore, a simple random measurement error model based on the feature accuracy factor is used to verify the effect on parameter estimation; the results show that this random error model can effectively reduce the error of pose parameter estimation, with an improvement of about 50%.

1. Introduction

With the vigorous development of emerging high-tech industries driven by location information services, intelligent robotics has become an important player in current technological changes and innovations [1,2]. Various intelligent mobile devices such as autonomous vehicles, drones, and intelligent ships can use precise pose information to complete various tasks and make control decisions on behalf of humans [3,4]. In particular, automated guided vehicles and autonomous vehicles have not only changed the paradigm of the workforce but also provided great convenience for human social activities [5,6]. Precise pose information is foundational for intelligent robotics to perceive the environment, plan paths, and execute decisions [7]. LiDAR can not only perceive the 3D coordinate information of objects in urban scenes with high precision but also estimate their pose information, which makes it one of the important sensors for intelligent robotics in navigation, localization, and perception [8,9,10]. For example, LiDAR can use the geometric structural feature information of a building to achieve continuous pose estimation, making up for the deficiency in GNSS positioning caused by signal obstruction in densely populated building environments [11,12].
According to the data collection method, LiDAR can usually be divided into the mechanical rotation scanning type and the solid-state scanning type [13]. Solid-state scanning LiDAR can obtain extremely dense point cloud information in a local field of view and is often used to detect obstacles in the planned path to ensure the safety of intelligent robotics movement [14,15]. Mechanical rotating LiDAR can obtain point cloud information with a horizontal 360° field of view, which gives the parameter estimation model good geometry [16]. Therefore, it is usually used as the main sensor to estimate pose information [17,18]. In addition, the primary observation of LiDAR is the 3D coordinate information of each laser beam reflection point, that is, the discrete point cloud coordinates of the surface of each object in the scene [19]. As shown in Figure 1, the measurement information is collected in the spherical coordinates of the LiDAR. It uses the laser beam (black arrow) to scan the surface F of the object (blue circle), thereby measuring the coordinates of points on the object surface, expressed in terms of the distance D_i and the angle θ.
Among LiDAR point cloud matching positioning algorithms, feature-based matching methods classify multiple points with the same characteristics into a certain feature, such as a planar or edge feature, and then match the point cloud using the features expressed by multiple points as the matching units [20,21,22,23]. Because they use multiple points as units for point cloud registration, these methods are more robust than the Iterative Closest Point (ICP) algorithm, which uses single points as matching units [24,25]. In addition, feature-based matching methods do not force the segmentation of the point cloud, can flexibly adapt to the extended surfaces of objects compared with the normal distributions transform (NDT) algorithm [26], and also offer excellent robustness and computational efficiency. Therefore, feature-based matching methods have been widely used [23,27] and are the focus of this paper.
Regarding the modeling of random measurement errors in LiDAR observations, current matching positioning algorithms differ significantly. Some algorithms regard each observation as a measurement of equal accuracy and thus directly use the equal-weight method to establish the random measurement error model [24,28]. The classic LIO-SAM algorithm establishes the LiDAR covariance model based on the measured distance of each point [29]. There are also methods that jointly determine weights based on distance and angle factors [30]. Similarly to GNSS sensors, the specifications of sensors produced by different vendors are usually inconsistent [31,32]: LiDAR sensors mainly come in 16-, 32-, 64-, and 128-channel variants, and different sensors usually produce point clouds with different scanning densities [33]. Different prior maps have different point cloud accuracies [34]. Different scenes are usually filled with different objects, and different object surfaces also have differentiated characteristics. All of these measurement conditions are likely to affect the random measurement errors of the matching positioning model. The random measurement error of GNSS observations is widely considered to be closely related to the elevation angle factor [35,36]. However, there is still no clear consensus on the main factors influencing the random measurement errors of observations in LiDAR point cloud feature matching methods. A possible reason is that existing research has not analyzed the relationship between the actual measurement error and the possible influencing factors from the perspective of the parameter estimation model being used. In fact, in the LiDAR point cloud feature matching positioning model, the observation vector is not the raw 3D coordinates of the LiDAR point cloud, but the distance between features. As shown in Figure 1, the coincidence of object surface features represents a correct match; that is, the distance d (red arrow) between the planar features F and F′ is zero. As long as the point lies on the planar feature, the actual observation d is insensitive to local changes in the coordinates of that point but may be sensitive to the accuracy of the extracted planar feature.
In the adjustment parameter estimation model, the functional model and the random error model must correspond to each other and be accurate, so that the subsequent adjustment calculation can obtain the correct parameter estimates [37]. Therefore, we draw on the random measurement error analysis method of GNSS to compare the influence of factors such as distance, angle, and feature accuracy on the actual measurement error of LiDAR static scanning. The influence of these factors on the actual measurement errors is then verified under different sensor specifications, scanning point cloud densities, prior maps of different qualities, and urban road scenes. Finally, a simple and practical random measurement error model is established to verify the effect on parameter estimation. The experimental results show that under different sensor specifications, point cloud densities, prior maps, and urban road scenes, the actual measurement errors in the LiDAR point cloud feature matching positioning model are all significantly affected by the feature accuracy factor, while the influence of the distance and angle factors is not significant. The random measurement error model established using the feature accuracy factor can effectively reduce the error of parameter estimation.
This paper is organized as follows: Section 2 introduces the LiDAR point cloud feature matching positioning model and the main influencing factors in detail. Section 3 presents the experimental research results under various measurement conditions. Section 4 provides a discussion of the experimental conclusions and plans for future work.

2. Methods

In LiDAR point cloud matching positioning, pose parameter estimation is achieved by extracting the feature information of point clouds. To show, from the perspective of the model, how the actual measurement errors can be obtained when the pose parameters are accurately known, this section first introduces the feature matching positioning model. The factors affecting random measurement errors are then introduced, and these factors in LiDAR point cloud feature matching positioning are analyzed using the method developed for studying random measurement errors in the GNSS field.

2.1. Feature Matching Positioning Model

The inter-frame point cloud feature matching positioning formulas using plane features and edge features are as follows [29]:
$$d(\mathbf{x}) = \mathbf{n}_i^{n} \cdot \left(\mathbf{R}_b^{n}\mathbf{P}_i^{b} + \mathbf{t}_b^{n}\right) + \varepsilon_i, \qquad h(\mathbf{x}) = \left\|\mathbf{e}_k^{n} \times \left(\mathbf{R}_b^{n}\mathbf{P}_k^{b} + \mathbf{t}_b^{n} - \mathbf{P}_0^{n}\right)\right\| + \varepsilon_k \tag{1}$$
where the estimated pose parameter x = [R_b^n  t_b^n]^T consists of the rotation R_b^n and translation t_b^n, and the superscript n and subscript b denote the global navigation frame and the LiDAR body frame, respectively. n_i^n denotes the normal vector of the associated planar feature of the neighbors around the i-th point P_i^b, and e_k^n denotes the unit vector of the associated edge feature of the neighbors around the k-th point P_k^b. "×" denotes the cross product between vectors. P_0^n denotes a point on the line of the associated edge feature. ε_i and ε_k represent the measurement errors of d(x) and h(x), respectively. d(x) and h(x) denote the distances from point P_i^b to the associated planar feature and from point P_k^b to the associated edge feature, respectively. Both are functions of the pose parameter x, and thus the estimation model of the pose parameter x can be established [37]:
$$D(\mathbf{l}) = \sigma_0^{2}\mathbf{Q}, \qquad \mathbf{v} = \mathbf{B}\hat{\mathbf{x}} - \mathbf{l} \tag{2}$$
where D(l) is the variance–covariance matrix of the measurements, and l is the measurement vector formed by the distances from the points to their associated features. σ_0^2 is the a priori variance factor, Q is the cofactor matrix, and B is the design matrix obtained by taking the partial derivatives of (1). v is the residual vector of the estimated pose parameter x̂. If the pose parameter x is accurately known, the actual measurement error can be obtained:
$$\Delta\mathbf{l} = \mathbf{B}\tilde{\mathbf{x}} - \mathbf{l} \tag{3}$$
where x̃ denotes the accurately known pose parameter, and Δl is the vector of actual measurement errors. Therefore, when the pose parameter is accurately known, the actual measurement errors of the point cloud feature matching positioning model can be obtained, and the measurement error analysis can be carried out effectively.
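To make this error-tracing procedure concrete, the following minimal Python sketch (our illustration, not code from the original experiments; all names are hypothetical) computes the point-to-plane and point-to-line distances of (1) under an accurately known pose. Because a correct match implies zero feature distance, these residuals are exactly the actual measurement errors Δl of (3).

```python
import numpy as np

def planar_error(R, t, p_b, n, D):
    """Signed distance d(x) from the transformed point to its associated
    planar feature n . p + D = 0; with an accurately known pose this
    residual is the actual measurement error of the observation."""
    p_n = R @ p_b + t                 # LiDAR body frame -> navigation frame
    return float(n @ p_n + D)

def edge_error(R, t, p_b, e, p0):
    """Distance h(x) from the transformed point to its associated edge
    feature (point-to-line distance via the cross product with unit e)."""
    p_n = R @ p_b + t
    return float(np.linalg.norm(np.cross(p_n - p0, e)))

# Static sensor, so the true pose is accurately known (here: identity).
R_true, t_true = np.eye(3), np.zeros(3)
n = np.array([0.0, 0.0, 1.0])         # unit normal of the plane z = 0
print(planar_error(R_true, t_true, np.array([1.0, 2.0, 0.01]), n, 0.0))
# -> 0.01: this observation's actual measurement error in meters
```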

2.2. Influencing Factors

According to the LiDAR point cloud data collection mechanism, we intuitively assume that the influencing factors of measurement error may include the distance and angle of the laser reflection point from the LiDAR center, as well as the extracted feature accuracy. The detailed meanings of these factors are shown in Figure 2.
In Figure 2a, the black arrow represents the laser beam, and the black dashed line represents the 0° direction of the LiDAR. In Figure 2b, the parallelogram represents the planar feature, the blue solid line represents the edge feature, and the green lines represent the distances from the points to the features. As shown in Figure 2a, the LiDAR sensor typically reports distances relative to itself in spherical coordinates. Therefore, the distance and angle factors are obviously associated with the measurement error of the LiDAR point cloud. Before the point cloud of a LiDAR sensor is used for matching positioning, the spherical data of the sensor need to be converted to Cartesian coordinates, and the vertical angle (namely, elevation angle) correction parameter file needs to be applied to correct the vertical offset of each laser relative to the sensor origin to obtain the greatest accuracy [38]. In addition, since the motor of a mechanical rotary LiDAR sensor can be set to rotate between 300 RPM and 1200 RPM, the horizontal angular resolution is usually 0.1–0.4°, approximately ten times finer than the vertical angular resolution. Therefore, the distance and elevation angle factors are a focus of this paper. Figure 2b shows the extraction errors of planar features and edge features, which can be obtained by the following formula:
$$\text{planar: } d_i = \frac{\left|A x_i + B y_i + C z_i + D\right|}{\sqrt{A^2 + B^2 + C^2}}, \qquad \text{edge: } d_i = \left\|\mathbf{e} \times \left(\mathbf{P}_i - \mathbf{P}_0\right)\right\| \tag{4}$$
where d_i is the distance from point P_i = [x_i  y_i  z_i]^T to the planar feature or edge feature. A, B, C, and D denote the coefficients of the extracted planar feature equation, corresponding to the normal vector n, and e denotes the unit vector of the associated edge feature. P_0 denotes a point on the line of the associated edge feature. The feature is determined by multiple points, so these points should lie on the feature and their distances to it should be zero. If the distances are not zero, the extracted feature contains errors. Therefore, the extraction accuracy of the corresponding feature can be obtained:
$$\sigma_f = \sqrt{\frac{\sum_{i=1}^{n} d_i^2}{n - r}} \tag{5}$$
where σ_f denotes the extraction accuracy of the associated feature, n is the total number of points used to fit the feature equation, and r is the number of coefficients of the feature equation.
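As an illustration of how the feature accuracy in (5) can be evaluated in practice, the sketch below (our own hedged example, assuming a least-squares plane fit in which r = 3 degrees of freedom are consumed by the fit) fits a plane to neighboring points and computes σ_f from the residual distances.

```python
import numpy as np

def plane_feature_accuracy(points):
    """Fit a plane to the neighbor points by SVD and return its unit
    normal (A, B, C), offset D, and the extraction feature accuracy
    sigma_f = sqrt(sum(d_i^2) / (n - r)) of Eq. (5)."""
    centroid = points.mean(axis=0)
    # The normal of the best-fit plane is the right singular vector
    # belonging to the smallest singular value of the centered points.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    D = -normal @ centroid
    d = points @ normal + D            # residual distances d_i
    n_pts, r = len(points), 3          # r = 3: assumed degrees of freedom of the fit
    sigma_f = np.sqrt((d ** 2).sum() / (n_pts - r))
    return normal, D, sigma_f

# Noisy points near the plane z = 0
pts = np.random.default_rng(0).normal(0.0, 0.01, (50, 3))
pts[:, :2] = np.random.default_rng(1).uniform(-1.0, 1.0, (50, 2))
print(plane_feature_accuracy(pts)[2])  # sigma_f on the order of 0.01 m
```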
Before point cloud features are extracted, if the LiDAR is moving while acquiring the point cloud, the self-motion distortion of each point needs to be calibrated in advance with the help of an INS or other prior pose information [39]. Otherwise, self-motion distortion of up to 0.5 m will greatly reduce the accuracy of the LiDAR point cloud coordinates [40]. This self-motion distortion is similar to the atmospheric delay error in each pseudo-range measurement of GNSS: before the measurement error of the pseudo-range positioning model is analyzed, the atmospheric error must be corrected or eliminated, for example by using a zero baseline or ultra-short baseline, to avoid interfering with the measurement error analysis. The analysis of measurement errors in LiDAR point cloud feature matching positioning should be treated similarly. In addition, it should be noted that in real LiDAR matching positioning there are various excitation errors caused by special circumstances, such as the linearization error of the positioning model caused by an inaccurate initial pose, errors caused by vibration or impact of the carrier, errors caused by interference from other light sources or mirror reflections, and errors caused by spatiotemporal changes in the scene map (moving objects, rainy and/or snowy weather, seasonal changes in green plants and/or tree trunks) [41,42,43]. Most measurements containing these excitation errors are classified as outliers in surveying adjustment. For the analysis of random measurement errors, it is inappropriate to consider errors caused by such special conditions; if all these excitation errors were taken into account, the resulting error model would inevitably be complex and difficult to apply in practice. The outliers can instead be eliminated by applying the commonly used principle of taking three times the standard deviation as the limit error (see the sketch after this paragraph), or by using the well-known Detection, Identification, and Adaptation (DIA) algorithm proposed by Teunissen in the GNSS field [44,45]. Therefore, this paper focuses on analyzing the main factors influencing a random error model that describes the measurement accuracy characteristics in the point cloud feature matching positioning model (2), so as to provide guiding support for establishing a practical basic random measurement error model.
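For completeness, a minimal sketch of the 3σ limit-error rule mentioned above (our illustration; the DIA procedure of [44,45] is considerably more elaborate):

```python
import numpy as np

def reject_outliers(errors, k=3.0):
    """Keep measurements within k standard deviations of the mean,
    i.e. the common limit-error rule of three times the standard
    deviation; the returned mask marks the retained observations."""
    errors = np.asarray(errors)
    mask = np.abs(errors - errors.mean()) <= k * errors.std()
    return errors[mask], mask
```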

3. Results

In order to study and analyze the factors affecting the random measurement error of the point cloud matching positioning model described in Section 2, the following experiments were designed: (a) a comparative analysis of different specifications of LiDARs commonly used for matching positioning, i.e., with a horizontal angle range of 360° and an elevation angle range of more than 30°; (b) a comparative analysis of different scanning point cloud densities; (c) a comparative analysis of a LiDAR self-scanned map and an HD map; (d) a comparative analysis of various urban road scenes; and (e) a comparative analysis of parameter estimation errors. Furthermore, the actual measurement errors Δl corresponding to the planar features and edge features of each epoch are also presented in short videos (Supplementary Materials) to support the experimental results.

3.1. Different Specifications

We selected the publicly available multi-LiDAR static calibration dataset [46] and our static dataset for experiments. The specifications of the sensors used are shown in Table 1, and the experimental scenes are shown in Figure 3.
Table 1 shows the specifications of four different LiDAR sensors. The Velodyne VLP-16, Ouster OS1-64, and Ouster OS0-128 are used in the public multi-LiDAR static calibration dataset; the Robosense Helios-32 is used in our static dataset. In Figure 3, the red, blue, and green axis rods mark the LiDAR sensor, and the surrounding red dots represent the point cloud. As shown in Figure 3, both datasets were collected in indoor scenes. Several planar features form the main components of the scenes, and both planar features and edge features are prominent, which is conducive to analyzing whether the differences in sensor specifications have a dominant impact on the actual measurement errors Δl in the LiDAR point cloud feature matching positioning model. Similarly to a GNSS short baseline or zero baseline, since these LiDAR sensors are stationary, the pose parameter x can be accurately known. Therefore, taking the measurement data of the Ouster OS0-128 as an example, Figure 4 plots its actual measurement errors (expressed as absolute values) and the distances, angles, and feature accuracies corresponding to those errors.
Figure 4a shows the distances D_i, angles θ, feature accuracies σ_f, and actual measurement errors Δl corresponding to the planar features, and Figure 4b shows those for the edge features. It can be seen that the distances and angles are evenly and discretely distributed, that the number of observations corresponding to planar features is significantly greater than that corresponding to edge features, and that the feature accuracies and actual measurement errors of the planar features are smaller than those of the edge features. Furthermore, histograms of the actual measurement errors corresponding to the planar features and edge features are plotted in Figure 5.
Figure 5 shows more clearly the difference in accuracy between the actual measurement errors corresponding to the planar features and the edge features; both approximately follow normal distributions. Since the number of observations corresponding to planar features is much greater than that corresponding to edge features, the subsequent experimental analysis mainly plots the actual measurement errors corresponding to planar features; those corresponding to edge features can be viewed in the short videos. Drawing on the method of analyzing the relationship between the GNSS pseudo-range measurement error and the elevation angle factor, we sort the distance, angle, and feature accuracy values in ascending order to analyze the relationship between these factors and the actual measurement error, as plotted in Figure 6 and sketched in code below.
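The ascending-sort analysis borrowed from GNSS can be expressed compactly; this snippet is our illustration, with hypothetical array names. Each candidate factor is sorted, the actual measurement errors are reordered accordingly, and a trend appears only if the factor actually drives the error.

```python
import numpy as np

def sort_by_factor(factor, abs_errors):
    """Sort one influencing factor (distance, angle, or feature accuracy)
    in ascending order and reorder the absolute actual measurement
    errors with it, mirroring the GNSS elevation-angle analysis."""
    order = np.argsort(factor)
    return factor[order], abs_errors[order]

# e.g., for the feature accuracy factor of one epoch:
# sigma_sorted, err_sorted = sort_by_factor(sigma_f, np.abs(delta_l))
# plotting err_sorted against sigma_sorted reveals the trend of Figure 6
```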
Figure 6 shows how the actual measurement errors of the Ouster OS0-128, Ouster OS1-64, Robosense Helios-32, and Velodyne VLP-16 change as the distance, angle, and feature accuracy values increase. For example, Figure 4a transforms into Figure 6a after the values of the influencing factors are sorted in ascending order. In Figure 6a, as the distance D_i increases, the corresponding actual measurement error (second row, first column of Figure 6a) shows no significant trend, and the angle factor behaves similarly to the distance factor. However, as the feature accuracy σ_f gradually increases, the actual measurement error shows a clear trend, which means that feature accuracy has a significant dominant influence on the actual measurement error. The remaining three sub-graphs in Figure 6 show similar behavior, even though the sensor specifications differ. Note that Figure 6a,b,d correspond to the same scene, namely that shown in Figure 3a, while Figure 6c belongs to the scene shown in Figure 3b; the extracted feature accuracies still play a dominant role in the actual measurement errors. A comparison of the numbers of observations in Figure 6a,b,d shows that the sensor specifications mainly affect the number of observations and have no significant impact on the actual measurement errors.

3.2. Different Densities

In addition to the specification of the LiDAR sensor, the density of the point cloud is also a factor worth comparing. Therefore, we applied different down-sampling settings to the Ouster OS0-128 data from Section 3.1 for comparative experiments: the point clouds of all channels form the experimental group Ouster OS0-128; only the point clouds whose channel IDs are multiples of 4 form the group Ouster OS0-128_32; and only the point clouds whose channel IDs are multiples of 8 form the group Ouster OS0-128_16 (a code sketch of this selection follows below). The last experimental group directly uses the data of the Ouster OS1-64. The relationship between the actual measurement errors and the influencing factors for the groups Ouster OS0-128_32 and Ouster OS0-128_16 is plotted in Figure 7.
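The channel-based down-sampling can be written in a few lines; the sketch below is our illustration, assuming each point carries the ring (channel) ID reported by the sensor driver.

```python
import numpy as np

def downsample_by_channel(points, ring_ids, keep_every):
    """Keep only points whose channel (ring) ID is a multiple of
    keep_every: 4 yields the OS0-128_32 group, 8 the OS0-128_16 group."""
    mask = (np.asarray(ring_ids) % keep_every) == 0
    return points[mask]
```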
Comparing Figure 7a,b with Figure 6a,b shows that the conclusion that feature accuracy plays a dominant role in the change in actual measurement error still holds under different point cloud densities; the density of the collected point cloud mainly affects the number of observations.

3.3. Different Prior Maps

In LiDAR point cloud feature matching positioning, point cloud features are extracted from the prior point cloud map. Point cloud maps can usually only reflect the scene at the moment of collection; in most cases, the scene at the same location may undergo local changes, such as the appearance, disappearance, or repositioning of movable objects. The prior maps used in the experiments in Section 3.1 and Section 3.2 are all local maps scanned by the sensor itself, while the point cloud accuracy of a prior HD map produced through multiple precision processes may be higher. Whether the findings of this paper are consistent under different prior maps is therefore also worth investigating. Accordingly, we used a high-precision map and the sensor self-scanned map for experiments. The HD map was produced by a professional team using the Leica RTC360 3D scanner and its corresponding data processing software, Cyclone Register 360 V1.5.0; the absolute accuracy of the resulting point cloud is reported to be up to 1 cm. The detailed results of the comparative experiment are shown in Figure 8.
Figure 8a shows the experimental scene displayed by the HD map; its location is the same as that of the scene scanned by the LiDAR sensor in Figure 3b. Figure 8b shows the actual measurement errors obtained using the HD map as the prior map. Figure 6c and Figure 8b compare the experimental results using different prior maps. It can be seen that feature accuracy is still the dominant factor in the actual measurement error, while the influence of distance and angle is not significant. At the same time, the feature accuracy obtained using the LiDAR self-scanned map is slightly better than that obtained using the HD map, and the corresponding actual measurement error is also slightly smaller. This may be due to a small change in the scene between the LiDAR self-scanned map and the HD map, such as the addition of test personnel and the small vehicle platform. In general, whether with a high-precision map or a LiDAR self-scanned map, feature accuracy has an important impact on the actual measurement error, which is consistent with the previous experimental conclusions.

3.4. Different Scenes

In Section 3.1, two different scenes were used for verification experiments, but both were indoor scenes. In order to analyze the consistency of the experimental results in common outdoor road scenes, we selected multiple typical scenes from public experimental data, using the same sensor for comparative experiments; its specifications are shown in Table 2. The selected scenes include an open square, a narrow street, an intersection with dense buildings, a highway with shrubs, and a traffic intersection with scattered poles, drawn from the public Hong Kong UrbanNav Dataset and the Ford Multi-AV Seasonal Dataset [47,48]. The detailed experimental results are shown in Figure 9.
Figure 9 shows photos, point cloud maps, and the relationship between actual measurement errors and planar feature accuracies for these typical urban road scenes. It can be seen that the accuracy of planar features in different scenes varies slightly, but they can accurately reflect the changes in actual measurement errors. For example, the highway in Figure 9d is surrounded by many shrubs, which form low-precision planar features, and these features usually cause large measurement errors. The planar features with high accuracy are road surfaces, and their actual measurement errors are relatively small. These are accurately reflected by the accuracy factors of planar features. Note that other factors are omitted here, and only the feature accuracy factor is plotted. In fact, the experimental results for these typical outdoor scenes are basically consistent with the aforementioned experimental results. Neither distance nor angle factors have significant effects. This can be observed through the short videos in the Supplementary Materials.

3.5. Parameter Estimation Verification

From the above comparative experimental analysis, it can be seen that feature accuracy plays a dominant role in the actual measurement errors. Therefore, we can use the feature accuracy information to establish the random error model in (2) and further verify the validity of the experimental results. Taking the observation data of the intersection scene with dense buildings in Section 3.4 as an example, we plot the statistical values of the actual measurement errors over intervals of feature accuracy and perform model fitting, as shown in Figure 10. The parameters of the cofactor matrix Q in (2) are shown in Table 3.
As shown in Figure 10, the standard deviations (blue dots) of the actual measurement errors and the feature accuracies show an approximately linear relationship, which again demonstrates the important influence of feature accuracy on the actual measurement errors. In order to establish a simple and practical model, we directly fitted a straight line, represented by the red line in the figure, from which the diagonal elements Q_i of the cofactor matrix Q shown in Table 3 were obtained; the a priori variance factor was set to σ_0^2 = (0.05 m)^2. We then set up three experimental groups for the positioning solution analysis: one applying our model, one in which Q was the identity matrix, and one in which the Q matrix was set consistently with the classic LIO-SAM algorithm. The cumulative distributions of the estimation errors for this dataset are shown in Figure 11; a code sketch of the feature-accuracy-based weighting follows below.
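A minimal sketch of how the fitted line and the resulting cofactor matrix could enter the adjustment (our illustration using the planar-feature parameters of Table 3; the variable names are hypothetical):

```python
import numpy as np

def cofactor_diag(sigma_f, a0=0.03, b0=0.58, sigma0=0.05):
    """Diagonal elements Q_i = (a0 + b0 * sigma_f)^2 / sigma0^2 of the
    cofactor matrix, with std = a0 + b0 * sigma_f fitted as in Figure 10."""
    return (a0 + b0 * np.asarray(sigma_f)) ** 2 / sigma0 ** 2

def weighted_solution(B, l, sigma_f):
    """Weighted least-squares pose solution with feature-accuracy-based
    weights: P = Q^{-1}, x_hat = (B' P B)^{-1} B' P l."""
    W = np.diag(1.0 / cofactor_diag(sigma_f))
    N = B.T @ W @ B
    return np.linalg.solve(N, B.T @ W @ l)
```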
As shown in Figure 11, the errors of the pose parameter estimate x̂ obtained by applying our model are smaller than those of the other two groups. At the 95.5% probability level, the pose estimation errors are reduced to about half of those of the control groups, which shows that the random error model established with the feature accuracy factor is effective. Furthermore, we applied the same model parameters from Table 3 to the observation data of the narrow street scene in Section 3.4 and obtained similar results, as shown in Figure 12.
As shown in Figure 12, the random error model established with the feature accuracy factor remains effective in other scenes. These positioning experiments were conducted online by simulating the real-time point cloud data stream with the rosbag play command in ROS, so the model can also be applied while the sensor continuously collects large point clouds in a live SLAM system. Note that we used only a simple linear model for fitting here; other, more appropriate models could further improve the estimation.

4. Discussion

In Section 3, we compared the effects of distance, angle, and feature accuracy on the actual measurement errors under different LiDAR specifications, different point cloud scanning densities, different prior maps, and different urban road scenes. From the results of these comparative experiments, it can be seen that the feature accuracy factor has a dominant influence on the actual measurement errors, while the influence of distance and angle factors is not particularly significant. Generally, the lower the feature accuracy, the greater the actual measurement error. Through the use of the relationship between feature accuracy and actual measurement error, a random error model of the corresponding observation vector can be constructed to improve the accuracy of parameter estimation.
From the perspective of the LiDAR point cloud feature matching positioning mechanism, the parameter estimation model constructed by (1) is realized by matching with the extracted features. In essence, the observation of the error model is the difference between the features in the currently scanned point cloud (map) and the prior map. If these two types of features belong to the same planar feature or edge feature in the real 3D scene, then the pose parameter estimation that can make these features coincide is the true solution of the model. Therefore, the accuracy of the features extracted from the point cloud should be a very important factor affecting the measurement error. Note that the above experimental results should not be interpreted as indicating that the distance and angle factors are unimportant. They are still related to whether the real scene can be accurately measured (reproduced) in the spherical coordinate system of LiDAR. It is simply the case that on the basis of the excellent measurement accuracy of these LiDAR sensors, the feature accuracy factor has a dominant influence on the actual measurement error in the model (2).
In mathematical statistics, the observation vector, as a set of random variables, can be described by its first moment (expectation) and second-order central moment (variance and covariance). In other words, normal observations are formed by the combination of a trend component (expectation) and a random component. Therefore, the random measurement errors can be traced and analyzed in reverse through a result-oriented method; that is, the trend component (expectation, the actual pose parameters) is subtracted from the normal observations to obtain the random component for analysis. For example, the theoretical measurement accuracy of GNSS carrier-phase observations is 0.01 cycles based on the signal measurement principle. However, if the random measurement error model is established directly with this theoretical accuracy, the estimated result is not accurate in practical applications. Through reverse tracing analysis, it was found that the elevation angle factor has a dominant influence on GNSS measurements; therefore, the elevation model is often used to describe random measurement errors in GNSS positioning, for example in the well-known GNSS data processing software GAMIT V10.71 developed by the Massachusetts Institute of Technology. The LiDAR sensor using point cloud feature matching positioning is similar: through reverse tracing analysis, it is found that, compared with the distance and angle factors, the feature extraction accuracy has a dominant influence. Since the matching positioning model used is based on features, this is understandable. Note that the experiments in Section 3 were designed as single-variable control experiments, which avoids the difficulty of attributing effects when multiple variables are mixed; the experimental results proved consistent across these single-variable experiments. However, experiments with mixed effects of multiple variables (such as different scanning densities, different urban road scenes, and different LiDAR specifications) could provide a more comprehensive understanding. The short videos in the Supplementary Materials also show the same results for the actual measurement errors under different scanning densities and different urban road scenes.

5. Conclusions

In this paper, the random measurement error analysis method of observations in the GNSS field is referenced to study the influence of factors such as distance, angle, and feature accuracy on the random measurement error of LiDAR. Experimental results show that even under different sensor specifications, point cloud densities, prior maps, and urban road scenes, the feature accuracy factor has a more significant impact on random measurement errors than distance and angle factors. When the random measurement error model is constructed using the feature accuracy factor, it can effectively reduce the error of pose parameter estimation, with an improvement effect of about 50%.
In the process of LiDAR point cloud matching positioning, there are also excitation measurement errors caused by special factors, such as moving objects, carrier jitter, strong light exposure, etc., which will affect the estimation of parameters in the LiDAR positioning model. Therefore, based on the random error model established by normal measurements, identifying and suppressing the influence of these special outlier measurements will be the next step in research.

Supplementary Materials

The supporting information of the short videos mentioned in this paper can be downloaded at https://doi.org/10.5281/zenodo.15124495 (accessed on 18 March 2025).

Author Contributions

G.L. conceived the idea and designed the experiment with S.P. and W.G. G.L. wrote the main manuscript. W.G. and S.P. reviewed the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science and Technology Major Project of Jiangsu Province (Grant No. BG2024003).

Data Availability Statement

The public multi-LiDAR static calibration dataset can be downloaded at https://github.com/TIERS/tiers-lidars-dataset (accessed on 14 April 2025). The public Hong Kong UrbanNav Dataset can be downloaded at https://github.com/IPNL-POLYU/UrbanNavDataset (accessed on 14 April 2025). The Ford Multi-AV Seasonal Dataset can be downloaded at https://avdata.ford.com (accessed on 14 April 2025). The remaining additional data in this study are available on request from the corresponding author.

Acknowledgments

We would like to thank everyone who contributed to this research, including in data collection, manuscript review, and project support.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LiDAR: Light Detection and Ranging
GNSS: Global Navigation Satellite System
HD map: High-Definition Map
INS: Inertial Navigation System
ICP: Iterative Closest Point
NDT: Normal Distributions Transform
RPM: Revolution(s) Per Minute
DIA: Detection, Identification, and Adaptation
ROS: Robot Operating System
GAMIT: GPS Analysis at MIT

References

1. Raj, R.; Kos, A. A Comprehensive Study of Mobile Robot: History, Developments, Applications, and Future Research Perspectives. Appl. Sci. 2022, 12, 6951.
2. Dzedzickis, A.; Subačiūtė-Žemaitienė, J.; Šutinys, E.; Samukaitė-Bubnienė, U.; Bučinskas, V. Advanced Applications of Industrial Robotics: New Trends and Possibilities. Appl. Sci. 2022, 12, 135.
3. Vogt, J. Where is the human got to go? Artificial intelligence, machine learning, big data, digitalisation, and human–robot interaction in Industry 4.0 and 5.0. AI Soc. 2021, 36, 1083–1087.
4. Bae, I.; Hong, J. Survey on the Developments of Unmanned Marine Vehicles: Intelligence and Cooperation. Sensors 2023, 23, 4643.
5. Reis, W.P.N.; Junior, M.O. Sensors applied to automated guided vehicle position control: A systematic literature review. Int. J. Adv. Manuf. Technol. 2021, 113, 21–34.
6. Parekh, D.; Poddar, N.; Rajpurkar, A.; Chahal, M.; Kumar, N.; Joshi, G.P.; Cho, W. A Review on Autonomous Vehicles: Progress, Methods and Challenges. Electronics 2022, 11, 2162.
7. Wang, Z.; Zhan, J.; Duan, C.; Guan, X.; Lu, P.; Yang, K. A Review of Vehicle Detection Techniques for Intelligent Vehicles. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 3811–3831.
8. Lee, D.; Jung, M.; Yang, W.; Kim, A. LiDAR odometry survey: Recent advancements and remaining challenges. Intell. Serv. Robot. 2024, 17, 95–118.
9. Belkin, I.; Abramenko, A.; Yudin, D. Real-time lidar-based localization of mobile ground robot. Procedia Comput. Sci. 2021, 186, 440–448.
10. Xue, H.; Fu, H.; Ren, R.; Zhang, J.; Liu, B.; Fan, Y.; Dai, B. LiDAR-based Drivable Region Detection for Autonomous Driving. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Prague, Czech Republic, 27 September–1 October 2021; pp. 1110–1116.
11. Hassan, T.; Fath-Allah, T.; Elhabiby, M.; Awad, A.; El-Tokhey, M. Detection of GNSS No-Line of Sight Signals Using LiDAR Sensors for Intelligent Transportation Systems. Surv. Rev. 2021, 54, 301–309.
12. Aldibaja, M.; Suganuma, N.; Yoneda, K.; Yanase, R. Challenging Environments for Precise Mapping Using GNSS/INS-RTK Systems: Reasons and Analysis. Remote Sens. 2022, 14, 4058.
13. Zhou, B.; Xie, D.; Chen, S.; Mo, H.; Li, C.; Li, Q. Comparative Analysis of SLAM Algorithms for Mechanical LiDAR and Solid-State LiDAR. IEEE Sens. J. 2023, 23, 5325–5338.
14. Van Nam, D.; Gon-Woo, K. Solid-State LiDAR based-SLAM: A Concise Review and Application. In Proceedings of the IEEE International Conference on Big Data and Smart Computing, Jeju Island, Republic of Korea, 17–20 January 2021; pp. 302–305.
15. Aldao, E.; Santos, L.M.G.-D.; González-Jorge, H. LiDAR Based Detect and Avoid System for UAV Navigation in UAM Corridors. Drones 2022, 6, 185.
16. Chen, K.; Zhan, K.; Pang, F.; Yang, X.; Zhang, D. R-LIO: Rotating Lidar Inertial Odometry and Mapping. Sustainability 2022, 14, 10833.
17. Zeng, Q.; Kan, Y.; Tao, X.; Hu, Y. LiDAR Positioning Algorithm Based on ICP and Artificial Landmarks Assistance. Sensors 2021, 21, 7141.
18. Zhang, J.; Khoshelham, K.; Khodabandeh, A. Fast Converging Lidar-Aided Precise Point Positioning: A Case Study with Low-Cost Gnss. ISPRS 2023, X-1/W1-202, 687–694.
19. Sakib, S.M.N. LiDAR technology—An overview. IUP J. Electr. Electron. Eng. 2022, 15, 36–57.
20. Zhang, J.; Singh, S. LOAM: Lidar odometry and mapping in real-time. Robot. Sci. Syst. 2014, 2, 1–9.
21. Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 4758–4765.
22. Wang, H.; Wang, C.; Chen, C.; Xie, L. F-LOAM: Fast LiDAR Odometry and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Prague, Czech Republic, 27 September–1 October 2021; pp. 4390–4396.
23. Huang, F.; Wen, W.; Zhang, J.; Hsu, L.-T. Point Wise or Feature Wise? A Benchmark Comparison of Publicly Available Lidar Odometry Algorithms in Urban Canyons. IEEE Intell. Transp. Syst. Mag. 2022, 14, 155–173.
24. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
25. Mendes, E.; Koch, P.; Lacroix, S. ICP-based pose-graph SLAM. In Proceedings of the IEEE International Symposium on Safety, Security, and Rescue Robotics, Lausanne, Switzerland, 23–27 October 2016; pp. 195–200.
26. Biber, P.; Strasser, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27 October–1 November 2003; Volume 3, pp. 2743–2748.
27. Xia, S.; Chen, D.; Wang, R.; Li, J.; Zhang, X. Geometric Primitives in LiDAR Point Clouds: A Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 685–707.
28. Donoso, F.; Austin, K.; McAree, P. How do ICP variants perform when used for scan matching terrain point clouds? Robot. Auton. Syst. 2016, 87, 147–161.
29. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 25–29 October 2020; pp. 5135–5142.
30. Yuan, C.; Liu, X.; Hong, X.; Zhang, F. Pixel-Level Extrinsic Self Calibration of High Resolution LiDAR and Camera in Targetless Environments. IEEE Robot. Autom. Lett. 2021, 6, 7517–7524.
31. Heidemann, H.K. Lidar Base Specification, 13th ed.; U.S. Geological Survey Techniques and Methods: Reston, VA, USA, 2018; Book 11, Chapter B4, p. 101.
32. Liu, J.; Sun, Q.; Fan, Z.; Jia, Y. TOF Lidar Development in Autonomous Vehicle. In Proceedings of the IEEE 3rd Optoelectronics Global Conference, Shenzhen, China, 4–7 September 2018; pp. 185–190.
33. Hu, J.S.K.; Kuai, T.; Waslander, S.L. Point Density-Aware Voxels for LiDAR 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 8459–8468.
34. Liu, R.; Wang, J.; Zhang, B. High Definition Map for Automated Driving: Overview and Analysis. J. Navig. 2020, 73, 324–341.
35. Won, D.H.; Ahn, J.; Lee, S.-W.; Lee, J.; Sung, S.; Park, H.-W.; Park, J.-P.; Lee, Y.J. Weighted DOP With Consideration on Elevation-Dependent Range Errors of GNSS Satellites. IEEE Trans. Instrum. Meas. 2012, 61, 3241–3250.
36. Zhang, H.; Ji, S.; Wang, Z.; Chen, W. Detailed assessment of GNSS observation noise based using zero baseline data. Adv. Space Res. 2018, 62, 2454–2466.
37. Teunissen, P.J.G. Adjustment Theory: An Introduction; TU Delft OPEN Publishing: Delft, The Netherlands, 2024.
38. Bergelt, R.; Khan, O.; Hardt, W. Improving the intrinsic calibration of a Velodyne LiDAR sensor. In Proceedings of the IEEE Sensors, Scotland, UK, 29 October–1 November 2017; pp. 1–3.
39. Gentil, L.; Vidal-Calleja, T.; Huang, S. 3D Lidar-IMU Calibration Based on Upsampled Preintegrated Measurements for Motion Distortion Correction. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, Australia, 21–25 May 2018; pp. 2149–2155.
40. Groll, L.; Kapp, A. Effect of Fast Motion on Range Images Acquired by Lidar Scanners for Automotive Applications. IEEE Trans. Signal Process. 2007, 55, 2945–2953.
41. Kim, G.; Eom, J.; Park, Y. Investigation on the occurrence of mutual interference between pulsed terrestrial LIDAR scanners. In Proceedings of the IEEE Intelligent Vehicles Symposium, Seoul, Republic of Korea, 28 June–1 July 2015; pp. 437–442.
42. Raisuddin, A.M.; Cortinhal, T.; Holmblad, J.; Aksoy, E.E. 3D-OutDet: A Fast and Memory Efficient Outlier Detector for 3D LiDAR Point Clouds in Adverse Weather. In Proceedings of the IEEE Intelligent Vehicles Symposium, Jeju Island, Republic of Korea, 2–5 June 2024; pp. 2862–2868.
43. Wang, Z.; Liu, G. Improved LeGO-LOAM method based on outlier points elimination. Measurement 2023, 214, 112767.
44. Teunissen, P.J.G. Distributional theory for the DIA method. J. Geod. 2018, 92, 59–80.
45. Zaminpardaz, S.; Teunissen, P.J.G. DIA-datasnooping and identifiability. J. Geod. 2019, 93, 85–101.
46. Qingqing, L.; Xianjia, Y.; Queralta, J.P.; Westerlund, T. Multi-Modal Lidar Dataset for Benchmarking General-Purpose Localization and Mapping Algorithms. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyoto, Japan, 23–27 October 2022; pp. 3837–3844.
47. Hsu, L.-T.; Huang, F.; Ng, H.-F.; Zhang, G.; Zhong, Y.; Bai, X.; Wen, W. Hong Kong UrbanNav: An open-source multisensory dataset for benchmarking urban navigation algorithms. Navigation 2023, 70, navi.602.
48. Agarwal, S.; Vora, A.; Pandey, G.; Williams, W.; Kourous, H.; McBride, J. Ford Multi-AV Seasonal Dataset. Int. J. Robot. Res. 2020, 39, 1367–1376.
Figure 1. Schematic diagram of LiDAR scanning measurement.
Figure 2. Influencing factors of measurement error.
Figure 3. Experimental scenes around the sensors. (a) Public multi-LiDAR static calibration dataset. (b) Our static dataset.
Figure 4. Actual measurement errors and the distances, angles, and feature accuracies corresponding to the measurement errors. (a) Planar feature. (b) Edge feature.
Figure 5. Histogram of actual measurement errors. (a) Planar feature. (b) Edge feature.
Figure 6. Relationship between the influencing factors and the actual measurement errors of the corresponding planar features. (a) Ouster OS0-128. (b) Ouster OS1-64. (c) Robosense Helios-32. (d) Velodyne VLP-16.
Figure 7. Relationship between influencing factors and actual measurement errors of corresponding planar features at different densities. (a) Ouster OS0-128_32. (b) Ouster OS0-128_16.
Figure 8. Experimental scene and performance of actual measurement errors using HD map. (a) Experimental scene. (b) Robosense Helios-32.
Figure 9. Multiple typical urban road scenes. (a) Open square. (b) Narrow street. (c) Intersection with dense buildings. (d) Highway with shrubs. (e) Traffic intersection with scattered poles.
Figure 10. Relationship between the standard deviations of actual measurement errors and the feature accuracies. (a) Planar feature. (b) Edge feature.
Figure 11. Cumulative distribution of estimation errors.
Figure 12. Cumulative distribution of positioning errors in the narrow street scene.
Table 1. Sensor specifications of the datasets.

Sensor | Channels | Range | Field of View | Angular Resolution | Frequency
Velodyne VLP-16 | 16 | 100 m | V: −15° to +15°, H: 0° to 360° | V: 2.0°, H: 0.2° | 10 Hz
Robosense Helios-32 ¹ | 32 | 150 m | V: −55° to +15°, H: 0° to 360° | V: 0.5°, H: 0.2° | 10 Hz
Ouster OS1-64 | 64 | 170 m | V: −21.2° to +21.2°, H: 0° to 360° | V: 0.7°, H: 0.18° | 10 Hz
Ouster OS0-128 | 128 | 75 m | V: −45° to +45°, H: 0° to 360° | V: 0.7°, H: 0.18° | 10 Hz

¹ The sensor used in our static dataset.
Table 2. Specifications of the sensor used.

Sensor | Channels | Range | Field of View | Angular Resolution | Frequency
Velodyne HDL-32E | 32 | 100 m | V: −30° to +10°, H: 0° to 360° | V: 1.33°, H: 0.2° | 10 Hz
Table 3. Parameters of the cofactor matrix.

Model | Planar Feature a_0 | Planar Feature b_0 | Edge Feature a_0 | Edge Feature b_0
Q_i = (a_0 + b_0 σ_f)² / σ_0² | 0.03 m | 0.58 m | 0.08 m | 0.27 m

