An Automatic 3D Point Cloud Registration Method Based on Biological Vision

Abstract: When measuring surface deformation, traditional point cloud registration methods cannot be applied, because the overlap between the point clouds before and after deformation is small and the accuracy of the initial value of the registration cannot be guaranteed. To solve this problem, a complete solution is proposed: first, at least three cones are fixed to the target; then, the initial values of the transformation matrix are calculated from the cone vertices; on this basis, the point cloud registration is refined by the iterative closest point (ICP) algorithm using the neighboring point clouds of the cone vertices. To improve the automation of this solution, an accurate and automatic point cloud registration method based on biological vision is proposed. First, the three-dimensional (3D) coordinates of the cone vertices are obtained through multi-view observation, feature detection, data fusion, and shape fitting. In the shape fitting, a closed-form solution for the cone vertex is derived on the basis of the quadratic form. Second, a random strategy is designed to calculate the initial values of the transformation matrix between two point clouds. Then, combined with ICP, point cloud registration is realized automatically and precisely. The simulation results showed that, when the intensity of Gaussian noise ranged from 0 to 1 mr (where mr denotes the average mesh resolution of the models), the rotation and translation errors of point cloud registration were less than 0.1° and 1 mr, respectively. Lastly, a camera-projector system was developed to dynamically measure surface deformation during ablation tests in an arc-heated wind tunnel, and the experimental results showed that the measurement precision for surface deformation exceeded 0.05 mm when the deformation was smaller than 4 mm.


Introduction
Near-space is the region connecting traditional aeronautics and space. Near-space supersonic vehicles have great potential, but when flown for a long time in an aerothermal environment, the surface of a vehicle can deform, which causes functional failure. Thus, the deformation properties of materials in an aerothermal environment urgently need to be explored. An arc-heated wind tunnel is the main device used to simulate an aerothermal environment. There are many methods to reconstruct 3D shape data at different times during ablation tests, but the alignment of the resulting point clouds is difficult, because their overlap is too small.
Point cloud registration involves calculating a rigid transformation matrix, consisting of a rotation matrix and a translation vector, that minimizes the alignment error between two point clouds; it has been widely used for simultaneous localization and mapping (SLAM) [1][2][3], multi-view point cloud registration [4,5], object recognition [6,7], etc. The classical iterative closest point (ICP) algorithm is the most widely used method for point cloud registration [8]. It was proposed by Besl in 1992 and has been used to solve the registration problem of free-form surfaces. The basic idea is to search for the closest corresponding point in a test point cloud for each point in a reference point cloud. From the set of all the closest points, a transformation matrix between the two point clouds is calculated, yielding a registration error. If the registration error does not satisfy the stopping criterion, the transformed test point cloud is taken as a new test point cloud, and the above steps are repeated until the stopping criterion is satisfied. In the classical ICP algorithm, the registration error takes two forms, i.e., point-to-point and point-to-plane. In general, the ICP algorithm based on the point-to-plane registration error converges at a faster rate [9]. Using the classical ICP algorithm as a foundation, researchers have proposed many variants [10][11][12][13][14]. The key step of these methods is searching for the set of closest points. However, in a real scenario of surface deformation, it is very difficult to accurately determine the closest points without any prior information. Moreover, these methods are very sensitive to the initial values of the transformation matrix; if the quality of the initial values is not good enough, they easily become trapped in local minima. Another way to solve the point cloud registration problem is to use probabilistic methods, whose core idea is to transform the registration problem into an optimization problem over probability model parameters [15][16][17][18][19][20]. Compared with ICP and its variants, these methods are more robust to noise and outliers, but their computational complexity is high. Like ICP methods, they can only align point clouds with large overlaps, and a high-quality initial value of the transformation matrix is also required. In the application of surface deformation measurement, the overlap between point clouds is small and the quality of the initial values cannot be guaranteed; thus, if the above methods are used directly, the registration error is large.
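As a concrete illustration of the point-to-point ICP loop described above, the following minimal sketch (an illustration only, not the authors' implementation; it assumes NumPy and SciPy's `cKDTree` for the closest-point search, and the function names are our own) alternates nearest-neighbor correspondence search with a least-squares rigid-transform update:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # keep right-handed
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(test, ref, max_iter=50, tol=1e-8):
    """Point-to-point ICP aligning `test` to `ref`; returns total R, t and error."""
    src = test.copy()
    tree = cKDTree(ref)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)                # closest-point correspondences
        R, t = best_rigid_transform(src, ref[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.mean(dist ** 2)
        if abs(prev_err - err) < tol:              # stopping criterion
            break
        prev_err = err
    return R_total, t_total, err
```

With a reasonable initial alignment the loop converges in a few iterations; with a poor one it can stall in a local minimum, which is exactly the failure mode discussed above.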
To solve the problem of surface deformation measurement, a complete solution is proposed; Figure 1 shows the flowchart. Firstly, the target is fixed to a mounting bracket carrying at least three cones. Secondly, a cone vertex detection algorithm is derived to extract feature points. Thirdly, the point cloud before deformation is accurately aligned to the point cloud after deformation through an automatic registration algorithm, using the neighboring point cloud of each cone vertex, which is not deformed during the ablation tests. Then, the surface deformation can be obtained. This paper is organized as follows: in Section 2, we introduce the detection principles of cone vertices in detail; in Section 3, we derive an automatic registration algorithm for point clouds; in Section 4, we introduce the research methodology, including cone vertex detection, automatic registration, and surface deformation measurement; in Section 5, we present the research results (Sections 5.1 and 5.2 assess the accuracy and robustness of the cone vertex detection algorithm and the automatic registration algorithm, respectively, and Section 5.3 provides the results of surface deformation measurement); in Section 6, we discuss the research results; and, in Section 7, we conclude with the contributions of this study and the next steps in research.

Figure 1.
A flowchart of surface deformation measurement using the proposed method.

Cone Vertex Detection
As shown in Figure 2, the perception of geometric features in a 3D scene by human beings relies on multi-view observation and data fusion, which confer the advantages of high detection accuracy and robustness to noise. The detection process is executed in several steps. Firstly, a target is independently observed from different viewpoints. Its 3D point clouds are projected onto the retina, and corresponding 2D images are generated. Secondly, the features of interest in each 2D image are extracted. All 2D feature points are reprojected onto the surface of the target, and the corresponding 3D feature points are obtained. Thirdly, all 3D feature points observed from different viewpoints are fused, and fake 3D feature points (e.g., fake vertices) are deleted according to the analysis made by the brain. Lastly, the corresponding geometric shape is fitted using the neighboring point cloud of each 3D feature point on the surface of the target in order to improve the precision of the detected 3D feature points.

Multi-View Observation
Since the measured space is three-dimensional, three base planes need to be selected as observation planes in order to observe a target from all directions, for example, the XOY, YOZ, and ZOX planes. In each observation plane, the target can be observed from different viewpoints. Without loss of generality, the XOY observation plane can be taken as an example, as shown in Figure 3a. At viewpoint I, the observed 3D points are denoted p_i^1 (i = 1, 2, ..., n), where n is the number of points, and their 2D projections onto the YOZ plane form a 2D point cloud. The center point of the 2D point cloud is computed, and the width w1 and height h1 of its minimum enclosing rectangle (MER) are calculated. In the YOZ plane, the imaging process of the retina can then be simulated to generate a binary image I_1 whose width W1 and height H1 are proportional to w1 and h1 through a zoom coefficient k. The pixel coordinates of the image center are calculated, and the 2D point cloud is shifted into the image pixel coordinate system. If the gray value of the pixels hit by projected points is directly set to 255 and that of the other pixels is set to 0, the target area in the binary image is only a set of discrete pixels rather than a connected domain, which cannot be used to detect corners. Therefore, geometric constraints are introduced: for an arbitrary pixel p and each projected triangle patch T_j, the vectors from p to the three triangle vertices are formed, and the sum of the angles between these vectors is denoted θ_j. If p lies inside T_j or exactly on the border of T_j, then θ_j must be equal to 360°. On this basis, the image binarization under geometric constraints can be executed according to the following equation:
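The projection-and-rasterization step above can be sketched as follows (a minimal NumPy illustration; the function name `simulate_view`, the margin, and the rounding scheme are assumptions, since the paper's Equations (1)-(9) did not survive extraction intact):

```python
import numpy as np

def simulate_view(points, k=1.0, margin=10):
    """Project a 3D cloud orthographically onto the YOZ plane and rasterize it
    into a binary image sized from the minimum enclosing rectangle (MER)."""
    p2d = points[:, 1:3]                           # drop X: projection onto YOZ
    center = p2d.mean(axis=0)                      # center of the 2D point cloud
    w, h = p2d.max(axis=0) - p2d.min(axis=0)       # MER width and height
    W = int(np.ceil(k * w)) + margin               # image size scaled by zoom k
    H = int(np.ceil(k * h)) + margin
    img = np.zeros((H, W), dtype=np.uint8)
    # shift the cloud so its center lands on the image center, then rasterize
    px = np.round(k * (p2d - center) + np.array([W / 2, H / 2])).astype(int)
    px = px[(px[:, 0] >= 0) & (px[:, 0] < W) & (px[:, 1] >= 0) & (px[:, 1] < H)]
    img[px[:, 1], px[:, 0]] = 255
    return img
```

As the text notes, the result is a set of discrete bright pixels, not a connected domain, which is why the geometric-constraint binarization below is needed.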

I_1(p) = 255 if min_j |θ_j − 360°| = 0, and I_1(p) = 0 otherwise.

Figure 3c shows the result of image binarization under geometric constraints, with which the first observation is completed.
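The binarization rule can be sketched as follows (a simplified illustration; the function names, the pixel-center convention, and the numerical tolerance are assumptions): a pixel is set to 255 when the angle sum subtended by some projected triangle equals 360° within tolerance.

```python
import numpy as np

def angle_sum(p, tri):
    """Sum of angles (degrees) between the vectors from point p to the three
    vertices of triangle tri; equals 360 iff p is inside or on the border."""
    v = tri - p                                    # vectors p->A, p->B, p->C
    total = 0.0
    for i in range(3):
        a, b = v[i], v[(i + 1) % 3]
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        total += np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return total

def binarize(width, height, triangles, tol=1e-6):
    """Set a pixel to 255 when min_j |theta_j - 360| is (numerically) zero,
    i.e., when the pixel center lies inside some projected triangle."""
    img = np.zeros((height, width), dtype=np.uint8)
    for y in range(height):
        for x in range(width):
            p = np.array([x + 0.5, y + 0.5])       # pixel-center convention
            if min(abs(angle_sum(p, t) - 360.0) for t in triangles) < tol:
                img[y, x] = 255
    return img
```

The brute-force double loop is for clarity only; a practical implementation would vectorize or rasterize per triangle.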
As shown in Figure 3a, the point cloud can be rotated by α degrees around the Z-axis; the point cloud p_i^2 (i = 1, 2, ..., n) at position II is then obtained according to Equation (10). From Equations (1)-(9), the simulated image I_2 at position II can be generated. In the same way, all simulated images in this observation plane can be obtained. When the XOY, YOZ, and ZOX planes are all taken as observation planes, 3s simulated binary images I_u (u = 1, 2, ..., 3s) can be obtained in total, where s is the number of viewpoints per plane.

Cone Vertex Recognition
The Harris operator can be used to detect corners in all simulated images. The v-th detected corner in image I_u is denoted q_v^u, where r_u is the number of corners detected in image I_u. Its corresponding point in 2D space can be calculated according to Equation (12), and the closest point to it in the 2D point cloud is then found. Since the correspondence of points remains unchanged during the projection from the 3D point cloud to the 2D point cloud, the point in the 3D point cloud corresponding to q_v^u can be identified. As shown in Figure 4a,b, all 3D feature points gathered from the different viewpoints are clustered on the basis of Euclidean distance. η denotes the number of categories, and the number of members of each category C_i is counted. Since a cone vertex is a robust feature point, it should be observed from the most viewpoints; thus, all categories C_i whose number of members is smaller than ψ should be deleted, as shown in Figure 4c, where ψ is a threshold set for the number of observations. Figure 4d shows the rough localization result of a cone vertex.
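The clustering-and-filtering step can be sketched as follows (a greedy agglomerative illustration under our own naming; `lam` plays the role of the distance threshold λ and `psi` the observation-count threshold ψ):

```python
import numpy as np

def cluster_feature_points(points, lam, psi):
    """Merge the two closest cluster centers until all pairwise center
    distances are >= lam, then drop clusters with fewer than psi members
    (i.e., feature points observed from too few viewpoints)."""
    clusters = [[p] for p in np.asarray(points, dtype=float)]
    while len(clusters) > 1:
        centers = np.array([np.mean(c, axis=0) for c in clusters])
        d = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
        np.fill_diagonal(d, np.inf)                # ignore self-distances
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if d[i, j] >= lam:                         # no pair left to merge
            break
        clusters[i].extend(clusters[j])            # merge the closest pair
        del clusters[j]
    # keep only clusters supported by at least psi observations
    return [np.mean(c, axis=0) for c in clusters if len(c) >= psi]
```

Each surviving cluster center is the mean of its members, matching step (d) of the clustering procedure; isolated fake detections fall below the ψ threshold and are discarded.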
Each roughly localized cone vertex is then refined through shape fitting.

Shape Fitting
Without loss of generality, the first detected cone vertex can be taken as an example. Its neighboring point cloud is denoted p_i (i = 1, 2, ..., ρ), where ρ is the number of neighboring points. The quadratic form of the conic surface can be written as:

a1·x² + a2·y² + a3·z² + a4·xy + a5·yz + a6·zx + a7·x + a8·y + a9·z + a10 = 0. (14)

Equation (14) can be rewritten in matrix form as E·φ = 0, where each row of E consists of the monomials [x², y², z², xy, yz, zx, x, y, z, 1] evaluated at one neighboring point, and φ = [a1, a2, ..., a10]^T. E can be decomposed using a singular value decomposition (SVD) as E = U_E·S_E·V_E^T; then, φ is the last column of V_E. The quadratic form of the conic surface can also be rewritten in homogeneous matrix form as x̃^T·F·x̃ = 0, where x̃ = [x, y, z, 1]^T and F is the symmetric 4 × 4 coefficient matrix assembled from φ; F can be decomposed using an SVD as F = U_F·S_F·V_F^T.
The homogeneous coordinates of the cone vertex are given by the last column of V_F (see proof in Appendix A) and denoted [x_v, y_v, z_v, w_v]^T; the inhomogeneous coordinates of the cone vertex are then [x_v/w_v, y_v/w_v, z_v/w_v]^T. In the same way, the coordinates of all cone vertices can be obtained.
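The closed-form recovery of the cone vertex can be sketched as follows (a minimal NumPy illustration with our own variable names): the quadric coefficients φ are the last right-singular vector of the design matrix E, and the apex is the null vector of the assembled symmetric 4 × 4 matrix F, since the apex is the singular point of the cone.

```python
import numpy as np

def fit_cone_apex(pts):
    """Fit a quadric x~^T F x~ = 0 (homogeneous coords) to 3D points on a
    conic surface and return the apex as the null vector of F via SVD."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Design matrix E: one row [x^2, y^2, z^2, xy, yz, zx, x, y, z, 1] per point.
    E = np.column_stack([x*x, y*y, z*z, x*y, y*z, z*x, x, y, z, np.ones_like(x)])
    # phi minimizing ||E phi|| with ||phi|| = 1 is the last right-singular vector.
    phi = np.linalg.svd(E)[2][-1]
    a, b, c, d, e, f, g, h, i, j = phi
    F = np.array([[a,   d/2, f/2, g/2],
                  [d/2, b,   e/2, h/2],
                  [f/2, e/2, c,   i/2],
                  [g/2, h/2, i/2, j  ]])
    v = np.linalg.svd(F)[2][-1]        # null vector of F = homogeneous apex
    return v[:3] / v[3]                # dehomogenize
```

At least nine points in general position on the cone are needed so that the null space of E is one-dimensional; in practice the whole neighboring point cloud of a detected vertex is used.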

Automatic Registration Algorithm
The detected cone vertices of the reference and test point clouds cannot be directly used to calculate the transformation matrix between the two clouds, because the correspondence between the two vertex sets is unknown. The transformation matrix consists of a 3 × 3 rotation matrix R and a 3 × 1 translation vector T. To solve the correspondence problem and improve the robustness of the algorithm, a random strategy is used to calculate the correspondence between the two sets of cone vertices.
The transformation between corresponding vertices is given by Equation (21). The matrices M1 and M2 are constructed from three cone vertices of the reference and test point clouds, respectively (Equation (22)), and their center points are calculated using Equation (23). The origin of the coordinate system is shifted to the center points of M1 and M2 (Equation (24)); then, only a rotation transformation exists between the centered M1 and M2 (Equation (25)), and their covariance matrix can be decomposed using an SVD (Equation (26)). Because the signs of the singular vectors are ambiguous, R must be one of eight candidate forms. The rotation error is defined in Equation (27): the first item is the data term, representing the geometric error of the rotation estimate; the second item is the constraint term, limiting the coordinate system to being right-handed; and trace(·) is a function that calculates the sum of the diagonal elements of a matrix. The threshold of the rotation error is denoted e_θ. The following steps are repeated until the minimum rotation error e_min satisfies e_min ≤ e_θ or the number of iterations is greater than a threshold N_max: (a) take three different elements from each of the two vertex sets; (b) calculate R, T, and e_min according to Equations (22)-(28). Since each element is sampled with equal probability, the value of N_max can be calculated using Equation (29). Once the correspondence test is satisfied, the corresponding point of each vertex in the test point cloud is identified. On this basis, taking the solution of Equation (28) as the initial value, the rotation matrix R and translation vector T are refined using the ICP algorithm. Figure 5 shows the flowchart of the automatic registration algorithm.
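The random strategy can be sketched as follows (an illustration under stated assumptions: the sampling loop, thresholds, and function names are our own, and the exact acceptance test of Equations (27)-(29) is replaced by a residual check over all vertices):

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rotation R and translation T mapping points A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # right-handed
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def random_correspondence(V1, V2, e_thresh=1e-3, n_max=1000, seed=0):
    """Randomly pair vertex triplets from V1 and V2 until the implied rigid
    transform maps every vertex of V1 onto some vertex of V2 within e_thresh."""
    rng = np.random.default_rng(seed)
    for _ in range(n_max):
        i = rng.choice(len(V1), 3, replace=False)   # random triplet from set 1
        j = rng.choice(len(V2), 3, replace=False)   # random triplet from set 2
        R, T = rigid_transform(V1[i], V2[j])
        mapped = V1 @ R.T + T
        d = np.linalg.norm(mapped[:, None, :] - V2[None, :, :], axis=2)
        if d.min(axis=1).max() < e_thresh:
            # refine using the full correspondence implied by nearest neighbors
            return rigid_transform(V1, V2[d.argmin(axis=1)])
    return None, None                                # no credible correspondence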

Cone Vertex Detection
The algorithm's robustness and accuracy can be evaluated by the detection rate and the location error. The detection rate is defined as D = N_D/N_T, and the mean location error is defined as E = (1/N_D)·Σ E_i, where E_i represents the location error of the i-th cone vertex.
A cone model provided by [21] was adopted to test the algorithm performance.To test the influence of noise intensity on the detection of cone vertices, Gaussian noise was independently added to the X-, Y-, and Z-axes of each scene point.The standard deviation σGN of Gaussian noise ranged from 0 to 1 mr with a step size of 0.1 mr, where mr denotes the average mesh resolution of the models.

Automatic Registration
As shown in Figure 6, the reference and test point clouds each consisted of five cones, related by the rigid transformation expressed in Equation (21). The ideal rotation matrix and translation vector are denoted R̃ and T̃, respectively, and the calculated rotation matrix and translation vector are denoted R and T, respectively. Rotation and translation errors under Gaussian noise with different intensities were used to evaluate the algorithm's robustness and accuracy. The rotation error ε_r is defined as the angle of the residual rotation between R̃ and R, and the translation error ε_T is defined as the norm of the difference between T̃ and T, normalized by d_res, where d_res is equal to the average mesh resolution (d_res = 1 mr). To test the influence of Gaussian noise with different intensities on registration performance, Gaussian noise was independently added to the X-, Y-, and Z-axes of each scene point; the standard deviation and step size were the same as in Section 4.1. In the experiment, the Euler angles corresponding to R̃ were [40°, 40°, 40°], and the rotation sequence was "XYZ".
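Since the printed error formulas did not survive extraction, the sketch below uses a standard choice consistent with the reported units (degrees and mr): the geodesic angle of the residual rotation, and the translation difference normalized by the mesh resolution. The function names are our own.

```python
import numpy as np

def rotation_error_deg(R_true, R_est):
    """Geodesic rotation error: the angle (degrees) of the residual rotation
    R_true^T R_est, recovered from its trace."""
    c = (np.trace(R_true.T @ R_est) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def translation_error_mr(T_true, T_est, d_res=1.0):
    """Translation error expressed in multiples of the mesh resolution d_res."""
    return np.linalg.norm(np.asarray(T_true) - np.asarray(T_est)) / d_res
```

For example, an estimate that is off by a 10° rotation about any axis yields a rotation error of exactly 10°, regardless of the axis.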

Surface Deformation Measurement
A camera-projector system based on the proposed method, shown in Figure 7a, was developed to dynamically measure surface deformation during ablation tests in an arc-heated wind tunnel. The resolutions of the projector and camera images were 1280 × 800 px and 2456 × 2058 px, respectively. The CCD size was 2/3″, with a pixel size of 3.45 × 3.45 μm, and the focal length was 75 mm. The data processing platform was a Dell laptop with an Intel(R) Core(TM) i7-6700HQ CPU @ 2.6 GHz and 16 GB RAM. The aim was to evaluate the system's precision. Firstly, a model with a size of 40 × 40 × 5 mm was fixed to a special device with six cones, as shown in Figure 7b. Secondly, the device was placed on a translation stage with a precision of 0.01 mm, as shown in Figure 7c. Thirdly, surface ablation was simulated by moving the model on the translation stage. The camera-projector system was used to reconstruct the model surface and the six cones at time 0 and time k, yielding the reference point cloud and test point cloud, respectively. Fourthly, the reference and test point clouds were aligned using the proposed registration method, after which the model's surface deformation could be measured. Lastly, the measurement result was compared with the reading from the translation stage, and the measurement error of the camera-projector system was calculated. The root-mean-square (RMS) error was introduced to evaluate this measurement error.
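The RMS evaluation of the comparison against the stage readings can be written compactly (a trivial sketch; the function name is our own):

```python
import numpy as np

def rms(readings, measurements):
    """Root-mean-square of the deviations between translation stage readings
    and measured surface deformations."""
    d = np.asarray(measurements, dtype=float) - np.asarray(readings, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))
```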

Cone Vertex Detection
Figure 8 shows the point clouds of the cone models with Gaussian noise added at different intensities. Here, "+" represents the detected position of the cone vertex. Table 1 shows the statistical results of cone vertex detection.

Surface Deformation Measurement
Figure 12 shows the model's surface deformation at different times. Table 2 compares the translation stage readings with the measurement results.

Discussions
(1) As shown in Figure 8c, when σGN = 1 mr, the conic surface was very rough, but its vertex could still be detected accurately, which fully demonstrates the robustness of the algorithm to Gaussian noise. Table 1 shows the statistical results of the detection rate and location error under Gaussian noise with different intensities. It can be seen that the location error increased with increasing noise intensity. In general, detection of a cone vertex was deemed successful if the location error was lower than 5 mr. Thus, the algorithm can maintain a high detection rate under Gaussian noise with intensity ranging from 0 to 1 mr.
(2) Figures 9-11 indicate that the rotation and translation errors increase with increasing noise intensity. When σGN = 1 mr, most details on the conic surfaces were lost, but the point clouds could still be aligned accurately, which demonstrates the robustness of the algorithm to Gaussian noise. In general, point cloud registration was deemed successful if the rotation error was lower than 5° and the translation error was lower than 5 mr. Thus, the algorithm performs well under Gaussian noise with an intensity ranging from 0 to 1 mr.
(3) In general, the surface deformation of a near-space supersonic vehicle is no more than 4 mm during ablation tests in an arc-heated wind tunnel. Table 2 shows the statistical results, where it can be seen that the precision of surface deformation measurement exceeds 0.05 mm when the surface deformation is smaller than 4 mm.

Conclusions
In order to solve the problem of surface deformation measurement during ablation tests in an arc-heated wind tunnel, in this study, we proposed an automatic point cloud registration method. As compared with other registration methods, we provided a complete solution comprising two parts: (1) a way to guarantee high-quality initial values and overlaps for aligning point clouds that have deformed considerably; (2) a strategy to automatically compute rigid transformations between point clouds. Inspired by the 2D artificial targets used to improve the precision of camera calibration and photogrammetry, we introduced 3D artificial targets to obtain accurate registration results, which was the key idea for solving the problem of surface deformation measurement; most state-of-the-art approaches only consider how to align point clouds more accurately and robustly given a sufficiently large overlap. Simulations and experiments were conducted, and the research results indicated the following: (1) the proposed method performed well under Gaussian noise with an intensity ranging from 0 to 1 mr; when σGN = 1 mr, the rotation and translation errors were smaller than 0.025° and 1 mr, respectively; (2) the error of surface deformation measurement was smaller than 0.05 mm when the deformation was no more than 4 mm. In addition to surface deformation measurement, the proposed method can also be applied to experimental studies of soft matter.
However, some aspects still need to be studied: (1) The procedure of cone vertex detection should be made more efficient. Compared with the Harris operator used in cone vertex detection, the Radon or Hough transform may be simpler [22,23], but its robustness needs to be evaluated. (2) More varieties of 3D artificial targets should be introduced. An artificial neural network (ANN) could be adopted to train a classifier, with rotational projection statistics (RoPS) taken as the classifier's inputs [6]. The classifier could be used to recognize different geometric features and delete fake features, which would improve the efficiency of data fusion and decrease the time cost.

Figure 2 .
Figure 2. The principles of cone vertex detection based on biological vision.

As shown in Figure 3b, A, B, and C are three vertices of an arbitrary triangle patch on the target surface, and its corresponding triangle in the image is denoted T_j (j = 1, 2, ..., t), where t is the number of triangle patches; both the patch and T_j are connected domains. The coordinates of A, B, and C and of their images A', B', and C' are known, and an arbitrary pixel of image I_1 is tested against them, where m is the number of pixels in image I_1. In the clustering of 3D feature points, r_u is the number of corners detected in image I_u and 3s is the number of simulated images; the clustering proceeds as follows: (a) η is the initial number of categories; (b) d_ij represents the distance between the centers of categories C_i and C_j; if the minimum of {d_ij} is smaller than the distance threshold λ, the two categories are merged and η = η − 1; (c) step (b) is repeated until the minimum of {d_ij} is equal to or greater than λ; (d) the coordinate of each clustering center is obtained by calculating the mean value of the members of its category.

Figure 4 .
Figure 4.The principles of cone vertex recognition.(a) Corner detection; (b) clustering of feature points; (c) data filtering; (d) rough localization.


If e_min ≤ e_θ, the cone vertices in M1 and M2 are corresponding points, and the solution of the rotation and translation is credible; otherwise, the sampling steps are repeated until e_min ≤ e_θ is satisfied.

Figure 5 .
Figure 5. Flowchart of the automatic registration algorithm.

where N_D is the number of detected cone vertices and N_T is the total number of cone vertices.

Figures 9-11.
Figures 9 and 10 show the reference and test point clouds, respectively, where Gaussian noise was added with different intensities. Here, "+" represents the detected position of the cone vertex. Figure 11 shows the relationship between registration error and noise intensity.

Table 1 .
The statistical results of cone vertex detection (mr denotes the average mesh resolution of the models).