Article

3-D Point Cloud Registration Algorithm Based on Greedy Projection Triangulation

1 Institution of Information and Control Engineering, Shenyang Jianzhu University, Shenyang 110168, China
2 Architectural Design and Research Institute, Shenyang Jianzhu University, Shenyang 110168, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(10), 1776; https://doi.org/10.3390/app8101776
Submission received: 21 August 2018 / Revised: 20 September 2018 / Accepted: 20 September 2018 / Published: 30 September 2018
(This article belongs to the Special Issue Intelligent Imaging and Analysis)

Abstract:
To address the registration problem in current machine vision, a new three-dimensional (3-D) point cloud registration algorithm that combines fast point feature histograms (FPFH) and greedy projection triangulation is proposed. First, the feature information is comprehensively described using the FPFH feature descriptor, and the local correlation of the feature information is established using greedy projection triangulation. Thereafter, the sample consensus initial alignment method is applied to implement initial registration. By adjusting the initial attitude between the two point clouds, improved initial registration values can be obtained. Finally, the iterative closest point method is used to obtain a precise conversion relationship, completing accurate registration. Registration experiments were performed on both simple and complex target objects. The registration speed increased by 1.1% and the registration accuracy increased by 27.3% to 50% in the experiments on the target objects. The experimental results show that the accuracy and speed of registration are improved and that the target object is efficiently registered using greedy projection triangulation, which significantly improves the efficiency of matching feature points in machine vision.


1. Introduction

With the rapid development of optical measurement technology and three-dimensional (3-D) imaging [1,2,3], point cloud data has received substantial attention as a special information format that contains complete 3-D spatial data. The application of the 3-D image information is widespread in the fields of 3-D reconstruction for medical applications [4], 3-D object recognition, reverse engineering of mechanical components [5], virtual reality, and many others such as image processing and machine vision [6,7].
There have been many efforts to achieve point cloud registration. The classic algorithm for this purpose is the iterative closest point algorithm [8], proposed by Besl and McKay. This algorithm can be efficiently applied to registration problems in simple situations. However, if there is significant variance in the initial positions of the two point clouds, it easily falls into a local optimum, increasing the possibility of inaccurate registration. To provide improved initial parameters, it is necessary to perform initial registration before accurate registration using algorithms such as the sample consensus initial registration algorithm [9]. Due to the large size and complexity of point cloud data models, describing feature points is one of the most important and decisive steps of initial registration. Various methods have been developed to obtain feature information, such as local binary patterns (LBP) [10], the local reference frame (LRF) [11], signatures of histograms of orientations (SHOT) [12], and point feature histograms (PFH) [13]. However, these feature operators provide only a single description of the feature information, with high feature dimensions and high computational complexity.
Other efforts have been made in terms of feature matching. The scale-invariant feature transform (SIFT) [14,15,16] utilizes difference-of-Gaussians (DoG) images to calculate key points. It describes local image features and obtains the corresponding 3-D feature points through mapping relationships. It has a certain stability with respect to viewpoint change and affine transformation; however, its matching speed is its main limitation. The speeded-up robust features (SURF) algorithm can be used to extract image feature points [17,18,19,20] and implement image matching according to their correlation. However, this algorithm relies too heavily on the gradient direction of the pixels in the local area, which yields unsatisfactory feature matching results. The intrinsic shape signature (ISS) algorithm has been proposed for feature extraction to complete the initial registration process [21,22]; however, its wide search range for feature point pairs and low computational efficiency are its limitations. A method for interpolating point cloud models using basis functions has been proposed for establishing local correlation to reduce computational complexity [23]. Traditional methods thus have limitations, such as the inability to comprehensively describe feature information and the slow matching of feature point pairs. These issues limit the accuracy and speed of 3-D point cloud registration and significantly impact its application in practical fields. Based on traditional sample consensus initial registration and iterative closest point accurate registration, a new point cloud registration algorithm is proposed herein. The proposed algorithm combines the fast point feature histogram (FPFH) feature description with greedy projection triangulation.
The FPFH feature descriptor describes feature information accurately and comprehensively, and greedy projection triangulation reflects the topological connections between data points and their neighbors, establishes locally optimal correlation, narrows the search scope, and eliminates unnecessary matching operations. This combination solves the problems of slow speed and low accuracy in traditional point cloud registration, leading to improvements in optical 3-D measurement technology. The effectiveness of the proposed algorithm is experimentally verified by performing point cloud registration on target objects.
The paper consists of four sections. In Section 2, the point cloud registration algorithm is specified. In Section 3, experiments and analysis performed using the Point Cloud Library (PCL) are presented. Finally, the conclusions are presented in Section 4.

2. Point Cloud Registration Algorithm

Because of the complexity of the target, complete information can only be obtained by scanning from multiple stations in different directions. The data scanned from each direction are expressed in their own coordinate systems and must then be unified into a common coordinate system. Control points and target points are set in the scan area so that adjacent areas share multiple control points or control targets with the same name; the adjacent scan data are thus brought into the same coordinate system through the forced attachment of the control points. The specific algorithm is as follows.
First, the FPFH of the point cloud is calculated, and local correlation is established using the greedy projection triangulation network to speed up the search for the closest eigenvalues. Because the relative position between the two point cloud models is unknown, sample consensus initial alignment is used to obtain an approximate rotation-translation matrix that realizes the initial transformation. Starting from this initial value, the iterative closest point algorithm then refines the result to obtain a more accurate matrix. The point cloud registration flowchart is shown in Figure 1.

2.1. Feature Information Description

FPFH is a simplified version of the point feature histogram (PFH), a histogram of point features reflecting the local geometric features around a given sample point. All neighboring points in the neighborhood $K$ of the sample point $P$ are examined, and a local $UVW$ coordinate system is defined as follows:
$$\begin{cases} u = n_s \\ v = u \times \dfrac{P_t - P_s}{\left\| P_t - P_s \right\|} \\ w = u \times v. \end{cases}$$
The relationship between pairs of points in the neighborhood K is represented by the parameters ( α , β , θ ) and can be obtained as follows:
$$\begin{cases} \alpha = v \cdot n_t \\ \beta = u \cdot \dfrac{P_t - P_s}{\left\| P_t - P_s \right\|} \\ \theta = \arctan\left( w \cdot n_t ,\; u \cdot n_t \right), \end{cases}$$
where $P_s$ and $P_t$ ($s \neq t$) denote the point pair and $n_s$ and $n_t$ denote their corresponding normals in the sample point neighborhood $K$.
The eigenvalues of all point pairs are then calculated, and the PFH of each sample point $P_c$ is statistically assembled. Next, the neighborhood $K$ of each point is determined to form a simplified point feature histogram (SPFH), and the SPFHs are then integrated into the final FPFH. Hence, each sample point is uniquely represented by its FPFH feature descriptor. The FPFH can be calculated using the following equation:
$$\mathrm{FPFH}(P_c) = \mathrm{SPFH}(P_c) + \frac{1}{k} \sum_{i=1}^{k} \frac{1}{w_k} \, \mathrm{SPFH}(P_i),$$
where w k denotes the distance between the sample point P c and the neighboring point P k in the known metric space.

2.2. Greedy Projection Triangulation

Greedy projection triangulation bridges computer vision and computer graphics. It converts the scattered point cloud into an optimized spatial triangle mesh, thereby reflecting the topological connection relationship between data points and their neighboring points, and maintaining the global information of the point cloud data [24]. The established triangulation network reflects the topological structure of the target object that is represented by the scattered data-set. The triangulation process is shown in Figure 2. The specific steps are given as follows:
Step 1: Consider a point $V$ on the surface of the three-dimensional object and its normal vector. The tangent plane perpendicular to the normal vector is determined first.
Step 2: The point $V$ and its vicinity are projected onto the tangent plane passing through $V$, forming the point set $\{S\}$. The edges between pairs of points in $\{S\}$ are then arranged linearly in ascending order of length.
Step 3: The local projection method takes the shortest remaining edge at each stage and removes it from memory. If the edge does not intersect any of the current triangulation edges, it is added to the triangulation; otherwise, it is discarded. When the memory is empty, the triangulation process ends.
Step 4: The triangulation yields the connection relationships of the points, which are returned to the three-dimensional space to form the spatial triangulation of the point $V$ and its nearby points.
Greedy projection triangulation can establish a reasonable data structure for a large number of scattered point clouds in the 3-D space. When positioning a point, the path is unique, and the tetrahedron can be located accurately and quickly, thereby narrowing the search range and eliminating unnecessary matching. This fundamentally improves the overall efficiency of matching feature points.

2.3. Sample Consensus Initial Registration

The sample consensus initial alignment is used for initial registration. Assuming that there exists a source cloud O s = { P i } and a target cloud O t = { Q j } , then the specific steps are as follows:
Step 1: Based on the FPFH feature descriptor of each sample point, greedy projection triangulation is performed on the target point cloud to establish local correlation of the scattered point cloud data.
Step 2: A number of sampling points are selected in the source point cloud O s . In order to ensure that the sampling points are representative, the distance between two sampling points must be greater than the preset minimum distance threshold d .
Step 3: Feature points whose feature values are close to those of the sample points in the source point cloud $O_s$ are searched for in the target point cloud $O_t$. Because greedy projection triangulation establishes a reasonable data structure for the target point cloud before feature matching is performed, the search directly locates the tetrahedron with a large correlation and looks for the corresponding point pairs within this local scope.
Step 4: The transformation matrix between the corresponding points is obtained. The performance of registration is evaluated according to the total distance error function by solving the corresponding point transformation, which is expressed as follows:
$$H(l_i) = \begin{cases} \dfrac{1}{2} l_i^2, & l_i \le m_i \\[4pt] \dfrac{1}{2} m_i \left( 2 l_i - m_i \right), & l_i > m_i, \end{cases}$$
where $m_i$ is a specified threshold and $l_i$ is the distance difference after the corresponding point transformation. When the registration process is completed, the transformation with the smallest error among all candidates is considered the optimal transformation matrix for initial registration.

2.4. Iterative Closest Point Accurate Registration

The initial transformation matrix is the key to improved matching in accurate registration. An optimized rotation-translation matrix $[R_0, T_0]$ is obtained by the initial registration and is used as the initial value for accurate registration, from which a more accurate transformation relationship is obtained by the iterative closest point algorithm.
Based on the optimal rotation-translation matrix obtained from the initial registration, the source point cloud $O_s$ is transformed into $O_s'$, which is used together with $O_t$ as the initial set for accurate registration. For each point in the source point cloud, the nearest corresponding point in the target point cloud is determined to form an initial corresponding point pair, and corresponding point pairs exceeding the direction vector threshold are deleted. The rotation matrix $R$ and translation vector $T$ are then determined. Given that $R$ and $T$ have six degrees of freedom while the number of points is huge, a series of new $R$ and $T$ values is obtained by continuous optimization. The nearest neighbor of a point changes with its position after each conversion; therefore, the process iterates continuously to find the nearest neighbors. The objective function is constructed as follows:
$$f(R, T) = \frac{1}{N_p} \sum_{i=1}^{N_p} \left\| O_t^{\,i} - R \, O_s'^{\,i} - T \right\|^2 ,$$
When the change in the objective function is smaller than a given value, the iteration termination condition is considered satisfied; that is, accurate registration is complete.

3. Experiment and Analysis

During the experiment, a Kinect was used as the 3-D vision sensor for point cloud data acquisition. The collected original point cloud data were processed on the Geomagic Studio 12 (Geomagic, North Carolina, USA) platform, and the experiment was completed in Microsoft Visual C++ (Microsoft, Washington, USA). For the traditional algorithm, two point clouds of the same object are collected in different orientations, and initial and fine registration are performed without greedy projection triangulation. Greedy projection triangulation is added to address the limitations in registration speed and accuracy, and the superiority of the proposed algorithm is analyzed by comparison with the traditional algorithm.

3.1. Point Cloud Registration Experiment for Simple Target Object

In this experiment, a cup is used as an example for registration. Figure 3a shows the original point cloud data and Figure 3b shows the cup point cloud data after removing the background. Figure 3c shows the registration result obtained by the traditional algorithm, while Figure 3d shows the result obtained using the registration algorithm proposed in this paper. The red regions in the point cloud represent the source point cloud data, while the green and blue regions represent the target point cloud and the rotated point cloud data, respectively.
Table 1 lists the experimental parameters. Table 2 lists the results of the target point cloud conversion obtained using the traditional point cloud registration algorithm and the results obtained using the proposed algorithm, which reflect the relative transformation relationship of the target object. Table 3 compares the registration times of different algorithms.
When analyzing the above experiments, the attitude of the source cloud is considered as a reference and the attitude of the target object is decomposed into three directions, namely X, Y, and Z. The rotation angle in three directions and the matching error distance between the source cloud and the transformed point cloud are considered as the evaluation indices. The rotation angle and the registration error distance in this experiment are shown in Table 4.
From Table 3, it can be observed that for the same point cloud sample with the same experimental parameters, the initial registration time of the traditional algorithm is 0.340 s. Because of the combination of the FPFH feature description and greedy projection triangulation, the initial registration time of the proposed algorithm is 0.252 s. Table 4 compares the two algorithms: the average error distances obtained are 1.58 mm for the traditional algorithm and 1.49 mm for the proposed algorithm. As shown in Table 2, the point cloud is transformed by a different transformation matrix, and the average error distance obtained by the proposed algorithm is smaller.

3.2. Point Cloud Registration Experiment for Complex Target Object

In this experiment, point cloud models of the same person in different orientations are collected and then registered using the different algorithms. Figure 4a shows the results of 3-D reconstruction. Figure 4b,c show the registration results of the traditional algorithm and the proposed algorithm, respectively. As can be seen from the figure, the blue and red point clouds are more highly integrated in Figure 4c than in Figure 4b, indicating that the proposed algorithm is more accurate than the traditional algorithm. Table 5 compares the registration times of the different algorithms. Table 6 shows the rotation angles and registration error distances obtained in this experiment. Eight groups of affine transformations are performed on the input point cloud data to verify the reliability of the algorithm. Table 7 compares the average registration error distance and total registration time over the eight groups of experiments. The average registration error is reduced by between 27.3% and 50%, and the total registration time is reduced by about 1.1%. The average registration error distance of the proposed algorithm is thus smaller, and its total registration time shorter, than those of the traditional algorithm, which verifies its reliability.
Compared with the results obtained from the traditional algorithm, it is concluded that the proposed algorithm has higher registration accuracy and faster registration speed. Its advantages can be attributed to the following factors:
(a)
The FPFH feature descriptor describes feature information accurately and comprehensively and avoids the errors in matching feature point pairs.
(b)
Greedy projection triangulation reflects the topological connections between data points and their neighbors, establishes locally optimal correlation, narrows the search scope, and reduces unnecessary matching operations.
(c)
The combination of the FPFH feature description and the greedy projection triangulation can match similar point pairs accurately and quickly, which is the key to efficient registration.

4. Conclusions

Based on the traditional sample consensus initial alignment and iterative closest point algorithms, a new point cloud registration algorithm based on the combination of the FPFH feature description and greedy projection triangulation was proposed herein. The 3-D point cloud data enriches the information available from a two-dimensional image, and the data information is completely preserved. The FPFH comprehensively describes the local geometric feature information around each sample point, which simplifies feature extraction and improves the accuracy of the feature description. Greedy projection triangulation solves the problem of the wide search range for feature points during registration, so the number of matching operations is reduced.
In the registration experiments on the target objects, the registration speed increased by 1.1% and the registration accuracy improved by 27.3% to 50%. The results show that the optimized spatial triangular mesh established by greedy projection triangulation narrows the search range for feature points, which improves the registration speed and accuracy. The initial registration determines an approximate rotation-translation relationship between the two point cloud models; using it as the initial value, accurate registration is performed to obtain a more precise relative transformation relationship. Greedy projection triangulation thus optimizes the traditional registration algorithm, making the registration process faster and more accurate.

Author Contributions

J.L. carried out the theoretical algorithm research; D.B. carried out the experimental research and performed the analysis; L.C. conducted the experimental data acquisition and analysis; J.L. and D.B. wrote the paper.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 11704263), the Liaoning Province Natural Science Foundation (grant number 201602616), and the Liaoning Province Department of Education Scientific Research Project (grant number 2015443).

Acknowledgments

The authors would like to thank Ziyi Meng and Xudong Wang for insightful discussions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Huang, Y.; Da, F.P.; Tao, H.J. An automatic registration algorithm for point cloud based on feature extraction. Chin. J. Lasers 2015, 42, 250–256. [Google Scholar] [CrossRef]
  2. Chen, K.; Zhang, D.; Zhang, Y. Point cloud data processing method of cavity 3D laser scanner. Acta Opt. Sin. 2013, 33, 125–130. [Google Scholar] [CrossRef]
  3. Wei, S.B.; Wang, S.Q.; Zhou, C.H. An Iterative closest point algorithm based on biunique correspondence of point clouds for 3D reconstruction. Acta Opt. Sin. 2015, 35, 252–258. [Google Scholar] [CrossRef]
  4. Logozzo, S.; Kilpelä, A.; Mäkynen, A.; Zanetti, E.M.; Franceschini, G. Recent advances in dental optics—Part II: Experimental tests for a new intraoral scanner. Opt. Lasers Eng. 2014, 54, 187–196. [Google Scholar] [CrossRef]
  5. Calì, M.; Oliveri, S.M.; Ambu, R.; Fichera, G. An Integrated Approach to Characterize the Dynamic Behaviour of a Mechanical Chain Tensioner by Functional Tolerancing. Int. J. Mech. Eng. Educ. 2018, 64, 245–257. [Google Scholar]
  6. Won-Ho, S.; Jin-Woo, P.; Jung-Ryul, K. Traffic Safety Evaluation Based on Vision and Signal Timing Data. Proc. Eng. Technol. Innov. 2017, 7, 37–40. [Google Scholar]
  7. Hsu, H.C.; Chu, L.M.; Wang, Z.K.; Tsao, S.C. Position Control and Novel Application of SCARA Robot with Vision System. Adv. Technol. Innov. 2017, 2, 40–45. [Google Scholar]
  8. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 15, 239–256. [Google Scholar] [CrossRef]
  9. Qiu, L.; Zhou, Z.; Guo, J.; Lv, J. An Automatic Registration Algorithm for 3D Maxillofacial Model. 3D Res. 2016, 7, 20. [Google Scholar] [CrossRef]
  10. Xu, J.F.; Liu, Z.G.; Han, Z.W.; Liu, Y. 3D Reconstruction of Contact Network Components Based on SIFT and LBP Point Cloud Registration. J. China Railw. Soc. 2017, 39, 76–81. [Google Scholar] [CrossRef]
  11. Chen, X. Research on Feature Extraction and Recognition of Imaging Lidar Target; National University of Defense Technology: Changsha, China, 2015. [Google Scholar]
  12. Jia, Y.J.; Xiong, F.G.; Han, X.; Kuang, L.Q. SHOT-based multi-scale key point detection technology. J. Laser Opt. Prog. 2018. Available online: http://kns.cnki.net/kcms/detail/31.1690.TN.20180227.1658.018.html (accessed on 20 September 2004).
  13. Huang, J.J. Research on Real-Time 3D Reconstruction of Mobile Scenes Based on PFH and Information Fusion Algorithm; Donghua University: Shanghai, China, 2014. [Google Scholar]
  14. Wu, S.G.; He, S.; Yang, X. The Application of SIFT Method towards Image Registration. Adv. Mater. Res. 2014, 1044, 1392–1396. [Google Scholar] [CrossRef]
  15. Liu, J.F. Feature Matching of Fuzzy Multimedia Image Based on Improved SIFT Matching. Recent Adv. Electr. Electron. Eng. 2016, 9, 34–38. [Google Scholar] [CrossRef]
  16. Shen, Y.; Pan, C.K.; Liu, H.; Gao, B. Point Cloud Registration Method Based on Improved SIFT-ICP Algorithm. Trans. Chin. Soc. Agric. Mach. 2017, 48, 183–189. [Google Scholar]
  17. Manish, I.P.; Vishvjit, K.T.; Shishir, K.S. Image Registration of Satellite Images with Varying Illumination Level Using HOG Descriptor Based SURF. Proc. Comput. Sci. 2016, 93, 382–388. [Google Scholar] [CrossRef]
  18. Li, J.F.; Wang, G.; Li, Q. Improved SURF Detection Combined with Dual FLANN Matching and Clustering Analysis. Appl. Mech. Mater. 2014, 556, 2792–2796. [Google Scholar] [CrossRef]
  19. Huang, L.; Chen, C.; Shen, H.; He, B. Adaptive registration algorithm of color images based on SURF. Measurement 2015, 66, 118–124. [Google Scholar] [CrossRef]
  20. Yan, W.D.; She, H.W.; Yuan, Z.B. Robust Registration of Remote Sensing Image Based on SURF and KCCA. J. Indian Soc. Remote Sens. 2014, 42, 291–299. [Google Scholar] [CrossRef]
  21. Renzhong, L.; Man, Y.; Yu, T.; Yangyang, L.; Huanhuan, Z. Point Cloud Registration Algorithm Based on the ISS Feature Points Combined with Improved ICP Algorithm. J. Laser Opt. Prog. 2017, 54, 312–319. [Google Scholar] [CrossRef]
  22. Liu, W.Q.; Chen, S.L.; Wu, Y.D.; Cai, G.R. Fast CPD Building Point Cloud Registration Algorithm Based on ISS Feature Points; Jimei University: Xiamen, China, 2016; Volume 21, pp. 219–227. [Google Scholar]
  23. Wang, L.H. Technical Research on 3D Point Cloud Data Processing; Beijing Jiaotong University: Beijing, China, 2011. [Google Scholar]
  24. Xie, F.J. Research and Implementation of Key Techniques of 3D Point Cloud Surface Reconstruction on Vehicle Dimensions Measuring System; Hefei University of Technology: Hefei, China, 2017. [Google Scholar]
Figure 1. Point cloud registration chart. FPFH (fast point feature histograms).
Figure 2. Greedy projection triangulation schematic.
Figure 3. (a) Original point cloud; (b) Processed point cloud; (c) Traditional algorithm; and (d) Proposed algorithm.
Figure 4. (a,b) 3-D reconstruction; (c,d) Traditional algorithm; (e,f) Proposed algorithm.
Table 1. The experimental parameters.

| Source Point Cloud (points) | Target Point Cloud (points) | Threshold (m) | Maximum Number of Iterations | Transform Matrix Difference (m) | Mean Square Error (m) |
|---|---|---|---|---|---|
| 7009 | 5566 | 0.01 | 500 | 1 × 10⁻¹⁰ | 0.1 |

The last four columns are the iterative closest point accurate registration parameters.
Table 2. Point cloud conversion results of different algorithms.

Traditional algorithm, transformation matrix of initial registration (m):
$$\begin{bmatrix} 0.999 & 0.008 & 0.031 & 0.049 \\ 0.003 & 0.985 & 0.170 & 0.040 \\ 0.032 & 0.170 & 0.985 & 0.027 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Traditional algorithm, transformation matrix of accurate registration (m):
$$\begin{bmatrix} 0.999 & 0.007 & 0.017 & 0.057 \\ 0.004 & 0.983 & 0.185 & 0.049 \\ 0.018 & 0.185 & 0.983 & 0.026 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Proposed algorithm, transformation matrix of initial registration (m):
$$\begin{bmatrix} 0.999 & 0.008 & 0.031 & 0.049 \\ 0.030 & 0.985 & 0.170 & 0.040 \\ 0.032 & 0.170 & 0.985 & 0.028 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Proposed algorithm, transformation matrix of accurate registration (m):
$$\begin{bmatrix} 0.999 & 0.012 & 0.005 & 0.062 \\ 0.011 & 0.980 & 0.201 & 0.059 \\ 0.007 & 0.200 & 0.980 & 0.025 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Table 3. Registration time of different algorithms.

| Algorithm | Total Registration Time (s) | Initial Registration Time (s) | Accurate Registration Time (s) |
|---|---|---|---|
| Traditional algorithm | 0.347 | 0.340 | 0.007 |
| Proposed algorithm | 0.257 | 0.252 | 0.005 |
Table 4. Experimental results of different algorithms.

| Algorithm | X-Direction Rotation Angle (rad) | Y-Direction Rotation Angle (rad) | Z-Direction Rotation Angle (rad) | Average Error Distance (cm) |
|---|---|---|---|---|
| Traditional algorithm | 0.186 | 0.018 | −0.625 | 0.158 |
| Proposed algorithm | 0.202 | 0.007 | −0.617 | 0.149 |
Table 5. Registration time of different algorithms.

| Orientation | Algorithm | Total Registration Time (s) | Initial Registration Time (s) | Accurate Registration Time (s) |
|---|---|---|---|---|
| 1 | Traditional algorithm | 11.680 | 11.428 | 0.252 |
| 1 | Proposed algorithm | 11.553 | 11.336 | 0.217 |
| 2 | Traditional algorithm | 8.287 | 8.196 | 0.091 |
| 2 | Proposed algorithm | 8.029 | 7.955 | 0.074 |
Table 6. Experimental results of different algorithms.

| Orientation | Algorithm | X-Direction Rotation Angle (rad) | Y-Direction Rotation Angle (rad) | Z-Direction Rotation Angle (rad) | Average Error Distance (cm) |
|---|---|---|---|---|---|
| 1 | Traditional algorithm | 0.283 | 0.702 | −0.172 | 0.015 |
| 1 | Proposed algorithm | 0.139 | 0.561 | −0.002 | 0.011 |
| 2 | Traditional algorithm | 0.053 | 0.469 | −0.587 | 0.009 |
| 2 | Proposed algorithm | 0.003 | 0.495 | −0.625 | 0.005 |
Table 7. Comparison of average error distance and total registration time over multiple sets of experiments.

| Group | Registration Error of Traditional Algorithm (cm) | Registration Error of Proposed Algorithm (cm) | Average Registration Error Reduction (%) | Total Registration Time of Traditional Algorithm (s) | Total Registration Time of Proposed Algorithm (s) | Total Registration Time Reduction (%) |
|---|---|---|---|---|---|---|
| 1 | 0.015 | 0.011 | 36.4 | 11.680 | 11.553 | 1.1 |
| 2 | 0.014 | 0.011 | 27.3 | 11.669 | 11.549 | 1.0 |
| 3 | 0.014 | 0.010 | 40.0 | 11.684 | 11.559 | 1.1 |
| 4 | 0.015 | 0.011 | 36.4 | 11.681 | 11.556 | 1.1 |
| 5 | 0.013 | 0.010 | 30.0 | 11.685 | 11.558 | 1.1 |
| 6 | 0.015 | 0.010 | 50.0 | 11.673 | 11.551 | 1.1 |
| 7 | 0.015 | 0.011 | 36.4 | 11.678 | 11.551 | 1.1 |
| 8 | 0.014 | 0.011 | 27.3 | 11.683 | 11.558 | 1.1 |


MDPI and ACS Style

Liu, J.; Bai, D.; Chen, L. 3-D Point Cloud Registration Algorithm Based on Greedy Projection Triangulation. Appl. Sci. 2018, 8, 1776. https://doi.org/10.3390/app8101776

