An Experimental Study of a New Keypoint Matching Algorithm for Automatic Point Cloud Registration

Abstract: Light detection and ranging (LiDAR) systems mounted on moving or stationary platforms provide 3D point cloud data for various purposes. In applications where the area or object of interest must be measured twice or more with a shift, precise registration of the obtained point clouds is crucial for generating a consistent model from the overlapping data. Automatic registration of the point clouds into a common coordinate system using the iterative closest point (ICP) algorithm or its variants is one of the most frequently applied methods in the literature, and a number of studies focus on improving the registration algorithms to achieve better results. This study proposes and tests a different approach for automatic keypoint detection and matching in the coarse registration of point clouds before fine registration with the ICP algorithm. In the suggested algorithm, the keypoints are matched considering their geometrical relations, expressed by means of the angles and distances among them. The aim is thereby to improve the quality of the 3D model obtained through the fine registration process, which is carried out using the ICP method. The performance of the new algorithm was assessed using the root mean square error (RMSE) of the 3D transformation in the rough alignment stage, as well as the a priori and a posteriori RMSE values of the ICP algorithm. The new algorithm was also compared with the point feature histogram (PFH) descriptor and matching algorithm, accompanied by two commonly used detectors. As a result of the comparisons, the advantages and disadvantages of the suggested algorithm are discussed. The datasets employed in the experiments comprise scans of a 6 cm × 6 cm × 10 cm Aristotle sculpture acquired in a laboratory environment, scans of a building facade acquired outdoors, and the publicly available Stanford bunny data.
In each case study, the proposed algorithm provided satisfactory performance, with superior accuracy and fewer ICP iterations compared to the other coarse registration methods. For the point clouds coarsely registered with the proposed method, the fine registration accuracies in terms of RMSE after the ICP iterations are ~0.29 cm for the Aristotle and Stanford bunny sculptures and ~2.0 cm for the building facade.


Introduction
The use of 3-dimensional (3D) models in various applications has increased considerably in this century with the advances in laser scanning technologies [1]. Measurement devices equipped with LiDAR (light detection and ranging) sensors are commonly used for the acquisition of 3D point cloud data, and the algorithms developed for processing point cloud data provide successful results in generating as-built models for comparison with the actual objects. The scanning process for generating accurate 3D point cloud data of objects and/or land parts of different scales is rather fast and practical compared to conventional photogrammetric and surveying techniques [2]. The suitability of a measurement technique depends on the application; however, local feature descriptors are superior for keypoint matching and point cloud registration purposes [11].
In the literature, there are different algorithms designed and used for keypoint detection, description and matching purposes in point cloud registration [4]. Because the significant contribution of the algorithm proposed in this study lies in the coarse registration process, the literature on coarse registration methods is cited extensively here. Basically, coarse registration methods consist of two main steps: the detection step, which includes the determination of either keypoints, lines, planes, surfaces or global features of point clouds; and the description step, where the conjugates are determined [10,12]. In the literature, the four-points congruent sets (4PCS) algorithm and its many variants are used for point-based coarse registration [13]. The 4PCS approach reduces the number of selected keypoints and simultaneously applies description and matching to determine the conjugate points [12]. In addition, for determining the geometric features of the keypoints in point-based registration, descriptors such as the point feature histogram (PFH), fast PFH [14], signature of histograms of orientations (SHOT) [15] or semantic analysis [16,17] methods are commonly used [12]. For the detection step, various point-based detectors have been released so far [18]. The Förstner operator [19], 3D Harris [20], local surface patches (LSP) [21], normal aligned radial feature (NARF) [22], 3D scale invariant feature transform (3D SIFT) [23,24] and intrinsic shape signatures (ISS) [25] algorithms are commonly used as keypoint detectors.
On the other hand, primitive-based registration approaches are also commonly applied for coarse registration; these approaches include algorithms that use lines [26], curves [27], planes [9], or surfaces [28]. The global feature-based registration methods, including the normal distributions transform (NDT) based algorithms [29] as well as the algorithms that transform 3D point clouds into 1D histograms and 2D images [30], are also among the frequently used algorithms in practice. In this article, a point-based coarse registration approach is proposed and tested.
In this study, a new automatic keypoint description and matching method for the coarse registration of point clouds is introduced and tested. The original contribution of the proposed algorithm lies mainly in its descriptor and matching parts. In the description part, the keypoints are divided into subsets using the mathematical combination method. The distances between the reference keypoint and the saliency points in the subset, and the angles between the constituted connection lines, are calculated in a combinational manner. Using the combination technique to constitute the subsets of keypoints, together with the method used for calculating the distances and angles between the points and lines, is the novelty of the proposed algorithm. Contrary to the other methods commonly in use in the literature, the proposed method considers the calculated angles independently of the surface or surface normal. In the matching part of the new approach, the sums of the differences of the distances and of the angle values created for a certain keypoint set in the two point clouds are taken into account. For matching the subsets, the summations of the differences of the distances and of the angles, respectively, are used as single values. The set of points whose summed distance and angular differences are closest to zero is considered to contain the conjugate points. In addition to the descriptor and matching algorithms of the new method, its detector algorithm contains novelty as well. Unlike the ISS and LSP methods, in the proposed detector a single point with the highest curvature is selected from each voxel (cube), and the most suitable keypoints are determined accordingly.
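The geometric reasoning above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it computes the (n-1) distances and (n-2) angle cosines of one keypoint subset and scores two subsets by their summed absolute differences; the function names are invented for this sketch.

```python
import numpy as np

def subset_features(points):
    """Distances and angle cosines for one keypoint subset.

    points[0] is taken as the reference keypoint; the rest are the other
    salient points of the subset (an illustrative convention, not the
    paper's notation).
    """
    ref, others = points[0], points[1:]
    vecs = others - ref                      # connection lines from the reference
    dists = np.linalg.norm(vecs, axis=1)     # (n-1) Euclidean distances
    # (n-2) angle cosines between consecutive connection lines
    cosines = np.array([
        vecs[i].dot(vecs[i + 1]) / (dists[i] * dists[i + 1])
        for i in range(len(vecs) - 1)
    ])
    return dists, cosines

def similarity(feat_a, feat_b):
    """Summed absolute distance and angle differences (closer to 0 = better match)."""
    da, ca = feat_a
    db, cb = feat_b
    return np.abs(da - db).sum(), np.abs(ca - cb).sum()
```

Because only relative distances and angles enter the features, a rigidly moved copy of the same subset scores a similarity of zero, which is exactly the property the matching step exploits.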
The developed automatic keypoint matching approach for coarse registration was assessed through the results of fine registration using the ICP algorithm [5]. The 3D similarity transformation method was employed in the algorithm. The tests of the study were carried out using three datasets. The first dataset was obtained through laser scanning measurements in a laboratory environment and was used to model a small sculpture. The second dataset included the point clouds of a building facade with regular geometrical details, obtained through terrestrial laser scanning measurements from stationary and mobile platforms. Further information regarding the measurements and the laser scanning sensors used is provided in the second section of the article. Besides the two datasets obtained through laser scanning by the authors, the tests were repeated using the publicly available bunny point cloud from the Stanford 3D scanning repository (http://graphics.stanford.edu/data/3Dscanrep/, accessed on 15 March 2021) to make the accuracy values obtained with the proposed algorithm reproducible.
The new algorithm, suggested for coarse registration in this study, first filters the point cloud and downsamples the dataset. Then, it identifies the keypoints considering the angles and distances among the point cloud spots. The transformation parameters for the coarse registration are automatically calculated with the Gauss-Markov least-squares adjustment (LSA) model. Finally, fine registration is applied with the ICP method. In order to provide a comparison between the new keypoint descriptor and matching algorithm and the other methods currently in use, the registration processes using both the sculpture and the building facade data were repeated. Accordingly, the ISS and LSP algorithms for keypoint detection and point feature histograms (PFH) for keypoint description and matching were used as well. In the conclusions, the tested algorithms are compared in terms of the accuracy of the transformation before and after ICP, as well as the success of the 3D models generated with each algorithm. The new algorithm outperformed the other tested algorithms by providing smaller RMSE values for the transformation and a smaller number of iterations in the ICP process.
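Once conjugate keypoints are found, the coarse transformation parameters can be estimated from the matched pairs. The paper uses a Gauss-Markov least-squares adjustment; the sketch below substitutes the closed-form Kabsch/SVD solution of the same least-squares problem, so treat it as an illustrative stand-in rather than the authors' method.

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form R, t with dst ~ R @ src + t (Kabsch / SVD).

    Shown as an illustrative alternative to the iterative Gauss-Markov
    adjustment used in the paper; both minimize the same squared
    residuals over the matched keypoint pairs.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # proper rotation, det(R) = +1
    t = cd - R @ cs
    return R, t
```

Applied to the matched keypoint pairs of the two clouds, R and t give the rough alignment that is subsequently refined by ICP.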
The organization of this paper is as follows: detailed information on the laser scanning measurements and the datasets used is given in Section 2. The theoretical background of the tested keypoint detection and matching algorithms, as well as the newly suggested formulation, is explained in the subsections of Section 2 as well. The mathematical formulation and fundamental theory of the ICP algorithm are summarized in the last subsection of Section 2. The numerical results with the comparative test statistics are provided in Section 3. A comprehensive discussion based on the obtained results is given in Section 4. Finally, the major conclusions as well as the recommendations for future studies are included in Section 5.

Datasets Used in the Tests
In the tests of the point cloud registration algorithms, three datasets were used. The first dataset was obtained from indoor measurements of an Aristotle sculpture (approximate size: 6 cm × 6 cm × 10 cm) (see Figure 1) in the ITU Geomatics Engineering Department Surveying Laboratory. A NextEngine 3D Laser Scanner Ultra HD was used for scanning the sculpture [31]. The dimensional accuracy of the scanner is 0.1 mm in macro mode and 0.3 mm in wide mode [31]. Considering the size and accuracy of the laser scanner used, it is practical and cost-effective equipment for 3D model generation of small areas or objects in industrial applications.
The point cloud data obtained by scanning the southern facade of the ITU Yilmaz Akdoruk guest-house building in the Maslak Campus area (Figure 2a) constitute the second dataset of the study. The facades of the building were scanned using static terrestrial laser scanning (TLS) and mobile laser scanning (MLS) techniques, and the 3D models of the facades were generated by combining the point cloud data obtained from these two techniques. In the TLS measurements, a Leica ScanStation C10 terrestrial laser scanner, with 4 mm distance measurement accuracy, 12″ angle measurement accuracy and 6 mm point position accuracy (for a single measurement), was used [32]. A Riegl VMX-450 mobile laser scanning system (see Figure 2b) was used for the MLS measurements of the building facades [33]. In the Riegl VMX-450 system, two Riegl VQ-450 laser scanners with 8 mm position accuracy are integrated with an IMU/GNSS unit.
The third dataset used for testing the proposed algorithm is the bunny sculpture point cloud made available by the Stanford 3D scanning repository (http://graphics.stanford.edu/data/3Dscanrep/, accessed on 15 March 2021). The bunny data is known as one of the Stanford models, which was scanned with a Cyberware 3030 MS scanner at the Stanford University Computer Graphics Laboratory (see Figure 3). The Cyberware 3030 MS scanner has 0.1 mm dimensional accuracy. This data is frequently used as a sample model for testing keypoint matching algorithms in the literature.

Keypoint Detection
The essential first step of the 3D modeling workflow is the registration of the point cloud data of partially scanned objects. This process requires successfully merging the overlapping point clouds using the estimated transformation parameters. For precise estimation of the transformation parameters, and hence high performance of the point cloud registration, detecting and matching the keypoints rigorously is crucial.
In order to give comparative results on the performance of the detection algorithm suggested and tested in this study, the local surface patches (LSP) and intrinsic shape signatures (ISS) algorithms were also chosen and applied for keypoint detection. The LSP algorithm is a point-wise saliency measurement method that detects the saliency of a vertex considering the shape index SI(p), which is based on the maximum and minimum principal curvatures (C_max, C_min) at the vertex as given in Equation (1) [21,34]:

SI(p) = \frac{1}{2} - \frac{1}{\pi} \arctan\!\left(\frac{C_{max}(p) + C_{min}(p)}{C_{max}(p) - C_{min}(p)}\right)  (1)

and the mean shape index \mu_{SI} is as given in Equation (2):

\mu_{SI}(p) = \frac{1}{N} \sum_{q \in Nb(p)} SI(q)  (2)

where Nb(p) denotes the set of points in the support of p, q is a member of this set, and N = |Nb(p)| is the number of points in the support of p. Accordingly, a feature point survives the pruning steps when its SI is significantly greater or smaller than \mu_{SI}, i.e., SI(p) ≥ (1 + α)\mu_{SI}(p) ∨ SI(p) ≤ (1 − β)\mu_{SI}(p), where α and β are scalar parameters defining the magnitude of the differences from the mean that are considered significant [34].
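The LSP saliency test above can be sketched as follows, under the assumption that the principal curvatures and neighbor index lists are already available; the default `alpha` and `beta` values are illustrative only.

```python
import numpy as np

def shape_index(c_max, c_min):
    """Shape index from principal curvatures (Equation (1)); requires c_max > c_min."""
    return 0.5 - np.arctan((c_max + c_min) / (c_max - c_min)) / np.pi

def lsp_keypoints(si, neighbors, alpha=0.05, beta=0.05):
    """Keep point i when SI(i) deviates enough from its neighborhood mean (Equation (2)).

    si        : array of shape-index values, one per point
    neighbors : list of index lists, the support of each point
    """
    keep = []
    for i, nbrs in enumerate(neighbors):
        mu = si[nbrs].mean()                 # mean shape index over the support of p
        if si[i] >= (1 + alpha) * mu or si[i] <= (1 - beta) * mu:
            keep.append(i)
    return keep
```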
The intrinsic shape signatures (ISS) algorithm is based on the eigenvalue decomposition of the scatter matrix of the points belonging to the support of p, given in Equation (3) [25]:

\Sigma(p) = \frac{1}{N} \sum_{q \in Nb(p)} (q - \mu_p)(q - \mu_p)^{T}  (3)

where \Sigma(p) is the scatter matrix of the support of point p, \mu_p is the centroid of the support, and the eigenvalues of \Sigma(p) in order of decreasing magnitude are λ1, λ2, λ3. In the process, the points whose ratios between two successive eigenvalues are below a threshold value (Th) are retained (Equation (4)):

\frac{\lambda_2(p)}{\lambda_1(p)} < Th_{12} \;\wedge\; \frac{\lambda_3(p)}{\lambda_2(p)} < Th_{23}  (4)
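The eigenvalue-ratio test of Equations (3) and (4) can be sketched for a single support region as follows; the threshold values are illustrative, and the scatter matrix is centered on the neighborhood centroid.

```python
import numpy as np

def iss_keep(neigh, th12=0.6, th23=0.6):
    """ISS saliency test for one support region `neigh` (k x 3 array).

    The eigenvalues of the scatter matrix, taken in decreasing order, must
    satisfy lambda2/lambda1 < th12 and lambda3/lambda2 < th23 (Equation (4)).
    Threshold defaults are illustrative, not values from the paper.
    """
    centered = neigh - neigh.mean(axis=0)
    scatter = centered.T @ centered / len(neigh)   # Equation (3)
    lam = np.linalg.eigvalsh(scatter)[::-1]        # lambda1 >= lambda2 >= lambda3
    return lam[1] / lam[0] < th12 and lam[2] / lam[1] < th23
```

An anisotropic, fully 3D neighborhood passes the test, while a flat, symmetric patch (two nearly equal leading eigenvalues) is rejected, which is exactly what keeps ISS keypoints off featureless planes.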
Further explanations and details of the special cases on the implementation of the ISS algorithm with case studies are given by Tombari et al. [34].
In the proposed and tested detection algorithm given here, processing steps similar to those of the LSP and ISS point-wise saliency measurement methods are followed, and the surface curvatures at the points are considered in order to identify the keypoints. While estimating the surface curvature at each point, the covariance analysis method, which uses the ratio between the minimum and the summation of the eigenvalues, was preferred. However, while the given algorithms work directly on the point cloud without any intermediate tessellation, the detector concentrates on samples in regions of high curvature and employs estimates of local variations and quadric error metrics [35]. In the algorithm, voxel-based filtering is applied to enhance the computational efficiency considering the maximum surface curvatures. Accordingly, the point cloud spots are partitioned in the xyz-space into three-dimensional blocks with an appropriate block size decided according to the resolution of the dataset. In each block, the laser spots are further organized into an octree partition structure with a set of three-dimensional voxels (as seen in Figure 4) [36]. Figure 5 gives an overview of the major steps of the suggested keypoint detection algorithm. The keypoint detection algorithm was developed on the Matlab platform.
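The voxel-based selection described above, keeping a single highest-curvature point per voxel with curvature taken as the ratio of the minimum eigenvalue to the eigenvalue sum, might look like the following sketch (a simplified illustration, not the authors' Matlab code; the octree refinement is omitted).

```python
import numpy as np

def curvature(neigh):
    """Surface variation lambda_min / (lambda1 + lambda2 + lambda3) of the local covariance."""
    c = neigh - neigh.mean(axis=0)
    lam = np.linalg.eigvalsh(c.T @ c / len(neigh))   # ascending eigenvalues
    return lam[0] / lam.sum()

def best_per_voxel(points, curvatures, voxel_size):
    """Keep the single highest-curvature point index of each occupied voxel."""
    keys = np.floor(points / voxel_size).astype(int)
    best = {}
    for i, key in enumerate(map(tuple, keys)):
        if key not in best or curvatures[i] > curvatures[best[key]]:
            best[key] = i
    return sorted(best.values())
```

A perfectly planar neighborhood yields a curvature near zero, so flat regions contribute no candidates, while each voxel in a curved region contributes exactly one.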

Keypoint Description and Matching
After extracting the keypoints from the point clouds using 3D detectors, which analyze local neighborhoods in order to identify the points of interest as described in the previous section, the neighborhood of a keypoint is described with a 3D descriptor that projects the neighborhood into a suitable space. In the end, descriptors defined on different surfaces are matched with each other [34]. According to this process, 3D keypoint descriptors provide a description of the environment in the neighborhood of a point within the cloud, and this description usually depends on geometrical relationships. Points in two different point clouds having similar feature descriptors mostly correspond to the same surface point.
Principal component analysis (PCA) based keypoint descriptors and matching algorithms are commonly used in various applications, including object recognition tasks such as removing roof details of 3D building models. For example, Yoshimura et al. [4] applied PCA efficiently for determining the roof corners and sharp roof lines of buildings. The PCA technique allows the saliencies to be analyzed according to the geometric properties of the objects. In their study, Yoshimura et al. [4] considered that the points of a wall and the roof fringes extend along a straight line and hence fit the equation of a straight line (ax + by + c = 0). Thus, they determined the building boundaries considering the lines belonging to the orthogonal projections of the detected points, matched according to the building edge length and the angle similarity among the issued points.
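A PCA line fit of the kind used by Yoshimura et al. [4] can be sketched as follows; this is a generic illustration of fitting ax + by + c = 0 to projected points, not the implementation of that study.

```python
import numpy as np

def fit_line_2d(pts):
    """Fit a 2D line a*x + b*y + c = 0 to projected points via PCA.

    The line normal (a, b) is the eigenvector of the covariance matrix
    with the smallest eigenvalue; the dominant eigenvector gives the
    line direction along the wall or roof fringe.
    """
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    a, b = eigvecs[:, 0]                         # smallest-variance direction = normal
    c = -(a * centroid[0] + b * centroid[1])
    return a, b, c
```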
The point feature histograms (PFH) are also commonly used as descriptors [37]. In addition to point matching, the PFH descriptor is used to characterize points in a point cloud, such as points on an edge, corner, or plane. This algorithm uses a Darboux frame (see Figure 6) that is constructed between all point pairs within the local neighborhood of a point [14,38]. The source point of the Darboux frame is the point with the smaller angle between its surface normal and the line connecting the point pair p_s and p_t. If n_{s/t} are the corresponding point normals, the Darboux frame u, v, w is constructed as given in Equations (5)-(7):

u = n_s  (5)

v = u \times \frac{(p_t - p_s)}{\|p_t - p_s\|_2}  (6)

w = u \times v  (7)
Figure 6. Darboux frame between a point pair (adapted from [38]).
Three angular features α, φ and θ, computed based on the Darboux frame, are given in Equations (8)-(10):

\alpha = v \cdot n_t  (8)

\phi = u \cdot \frac{(p_t - p_s)}{d}  (9)

\theta = \arctan(w \cdot n_t,\; u \cdot n_t)  (10)

where d is the distance between p_s and p_t (Equation (11)):

d = \|p_t - p_s\|_2  (11)

Three angles and a distance element, in addition to the two normal vectors, are used to describe the geometrical relationship of the point pairs in the PFH method. These four elements, including the angles and the distance, are added to the histogram of the point p together with the mean percentage of the point pairs in the neighborhood of p that have similar relationships. In PFH, these histograms are calculated for all possible point pairs among the k neighbors of the point p [37]. As a different approach, in the four-points congruent sets (4PCS) algorithm, four points are selected from the reference point cloud and their correspondences are searched for in the complementing point cloud; the corresponding points are found according to defined similarity relations (see Figure 7) [39]. In Figure 7, the point e in the reference point cloud (I_1) is selected as the intersection of the coplanar connection lines of the two point pairs (a-b and c-d). The corresponding point of e is e' in the complementing point cloud (I_2). In Equations (12) and (13), the r_1 and r_2 ratios are expressed based on the coplanarity of the intersection point, and the correspondences of these ratios are sought in the complementing point cloud as well:

r_1 = \frac{\|a - e\|}{\|a - b\|}  (12)

r_2 = \frac{\|c - e\|}{\|c - d\|}  (13)
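The Darboux-frame pair features of Equations (5)-(11) can be computed directly; the sketch below assumes unit-length surface normals are already available.

```python
import numpy as np

def pfh_pair_features(ps, ns, pt, nt):
    """Darboux-frame features (alpha, phi, theta, d) for one point pair.

    ps, pt : source and target points; ns, nt : their unit normals.
    Follows Equations (5)-(11) of the text.
    """
    diff = pt - ps
    d = np.linalg.norm(diff)                     # Equation (11)
    u = ns                                        # Equation (5)
    v = np.cross(u, diff / d)                     # Equation (6)
    w = np.cross(u, v)                            # Equation (7)
    alpha = v.dot(nt)                             # Equation (8)
    phi = u.dot(diff) / d                         # Equation (9)
    theta = np.arctan2(w.dot(nt), u.dot(nt))      # Equation (10)
    return alpha, phi, theta, d
```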
The processing with the 4PCS algorithm is basically completed in four steps: (i) selecting four points from the point cloud (I_1) considering an adopted threshold; (ii) calculating the two intersecting diagonal lengths (x = ‖b − a‖ and y = ‖d − c‖, as seen in Figure 7); (iii) estimating the probable intersection point of the two diagonal elements for each point set, following the expressions given in Equations (14) and (15):

e_1 = a + r_1 (b - a)  (14)

e_2 = c + r_2 (d - c)  (15)

with the difference of these values compared against a certain threshold δ as in Equation (16):

\|e_1 - e_2\| < \delta  (16)

and finally, (iv) determining the most suitable four points, which satisfy the comparison criterion in Equation (16). In the proposed descriptor and matching algorithm, the cosines of the angles between the connecting lines and the Euclidean distances along them are considered. Thus, the cosine similarities and the distance equalities of the 3D vectors are taken into account for matching the keypoints (Equation (17)). In the PFH method, three angles connected to the surface normal are used, while (n-2) angles between the connecting lines are calculated for n keypoints in the proposed method. In order to find the geometric similarities of the keypoint sets in the two point clouds, the summation of the differences of the angle values calculated between the keypoints in both point clouds is taken into account (see Figure 8).
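Steps (ii)-(iv) of the 4PCS check reduce to comparing the two diagonal intersection estimates; a sketch of Equations (14)-(16), with the threshold δ supplied by the caller:

```python
import numpy as np

def intersection_gap(a, b, c, d, r1, r2):
    """Distance between the two diagonal intersection estimates.

    e1 and e2 coincide (gap ~ 0) when the four points form a congruent
    set with intersection ratios r1, r2; the gap is what 4PCS compares
    against the threshold delta in Equation (16).
    """
    e1 = a + r1 * (b - a)                        # Equation (14)
    e2 = c + r2 * (d - c)                        # Equation (15)
    return np.linalg.norm(e1 - e2)               # compared against delta
```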
As seen in Equation (17), the suggested algorithm is based on the scalar products of the vectors, which is another difference between the new algorithm and the commonly used PFH algorithm, which uses both cross and scalar products. The 4PCS method does not use any angle values; it only uses point distance ratios. In the algorithm proposed here, (n-1) Euclidean distances are calculated for n keypoints. For the geometric similarity of the keypoints of the point clouds, the summation of the differences of the distances between the keypoints is taken into account.
In the proposed algorithm, the angle cosines are described with Equation (17):

\cos\theta = \frac{d_1 \cdot d_2}{\|d_1\|\,\|d_2\|}  (17)

where d_1 and d_2 are the distance vectors between the salient point and the two other keypoints in the combination subset, and θ is the angle between the two distance vectors. In the algorithm, firstly the Euclidean distances among the points in the combination subset and the relevant angle cosines are calculated, and then the similarity of the constructed geometries for each selected salient point set is considered. Algorithm 1 gives the pseudocode of the new keypoint descriptor.

Algorithm 1. Generation of point cloud features.
Input: A point cloud P and the number of sub-points to be selected from the point cloud, N_C. Output: Point cloud feature matrix F.

1: F ← ∅
2: N_p ← number of points in P
3: C ← all N_C-element combinations of the point labels {1, …, N_p}
4: for i = 1 to |C| do
5:    F[i] ← ∅
6:    for j = 1 to N_C do
7:       p_r ← C[i][j]
8:       p_b ← C[i] \ {p_r}
9:       a ← f_A(p_r, p_b)
10:      d ← f_D(p_r, p_b)
11:      append (p_r, a, d) to F[i]
12:    end for
13:    store F[i] in F
14: end for
15: return F

The notations used in the algorithm are as follows: F is the feature matrix for the descriptor (a null matrix in the beginning); N_p is the total number of keypoints in the point cloud P; C is the matrix holding all possible subsets of the point combinations; and N_C is the number of the identified keypoints in each subset. p_r denotes the reference point, and p_b the benchmark points. f_A is the function for calculating the angle cosines, and f_D is the function for calculating the Euclidean distances among the feature points.
According to this algorithm, firstly, each point in the keypoint list is assigned an identification number (ID = 1 to N_p, where N_p is the total number of points). This ID information is used to label the points in the performed operations. Then, using the labels of the points, the possible subsets (C) consisting of N_C elements are generated from all points. Generating the subsets is a basic combination process in the algorithm and is expressed in line 3 of the pseudocode (see Algorithm 1).
As a result of the described process, each salient point in a subset with N_C elements is considered in turn as the reference point (p_r), the remaining points in the subset are taken as the benchmarks (p_b), and the process is repeated iteratively. Hence, the angle and distance values (f_A and f_D) are calculated and subsequently written into a separate row of the F[i] matrix belonging to the addressed point ID. The part of the process given so far (lines 7 to 11 in Algorithm 1) runs only the descriptor part of the registration and allows us to define the distinctive features required for matching, based on the geometry described by the angular and distance relations among the potential keypoints.
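The descriptor loop of Algorithm 1 can be sketched with `itertools.combinations`; this is a simplified illustration in which each feature row keeps the reference label, the (N_C - 1) distances and the (N_C - 2) angle cosines.

```python
import itertools
import numpy as np

def cloud_features(P, n_c):
    """Feature rows for every n_c-point subset of the keypoint array P.

    For each subset, each point in turn acts as the reference p_r; a row
    holds (label, distances to the benchmarks p_b, angle cosines between
    consecutive connection lines). A sketch of Algorithm 1, not the
    authors' Matlab implementation.
    """
    F = []
    for subset in itertools.combinations(range(len(P)), n_c):
        rows = []
        for r in subset:
            pb = [q for q in subset if q != r]
            vecs = P[pb] - P[r]
            dists = np.linalg.norm(vecs, axis=1)           # f_D: (n_c - 1) distances
            cosines = np.array([                           # f_A: (n_c - 2) cosines
                vecs[i].dot(vecs[i + 1]) / (dists[i] * dists[i + 1])
                for i in range(len(vecs) - 1)
            ])
            rows.append((r, dists, cosines))
        F.append(rows)
    return F
```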
In the proposed algorithm, the descriptor part includes calculating the Euclidean distances between all points and the angles between all vectors in the subset. Figure 9 illustrates the calculation of the cosine similarity in the algorithm. According to this illustration, if we consider a subset having four points, one point is accepted as the reference at each iteration, and this generates two angle cosines and three distance elements between the reference and the benchmarks. These numbers of parameters can be generalized as (n-2) angle cosines and (n-1) distance elements when the number of points in the subset is denoted by n. The points identified as keypoints as a result of the iterative similarity checks constitute the matrix elements. Following the identification process, the keypoints are ready to be matched. Algorithm 2 gives the pseudocode of the applied matching process. In the algorithm, sub-point sets are selected from the source and target point clouds in order to match the keypoints between the two point clouds. Accordingly, P_S, P_T, N_C and T_A are the input parameters of the algorithm. P_S and P_T are the source and target point clouds, respectively. N_C is the number of points in the subsets selected from each point cloud in each iteration. These parameters are also included in Algorithm 1 with the same symbols. T_A is a threshold parameter for deciding the angle-based geometrical similarities. The output of the algorithm, a matrix named M_M, contains the information of the matched point sets selected from the source and target point clouds showing the highest similarity.
In Algorithm 2, the properties of the point subsets, which are determined by using Algorithm 1 for the source (indicated with index S) and target (indicated with index T) point clouds, are calculated. This process is seen in lines 1 and 2 of the algorithm, and the calculated properties are expressed with F_S and F_T, which hold the outputs of Algorithm 1. Then, the distances between points and the angles between vectors of the keypoint sets in the source point cloud are compared with the same properties of the keypoint sets in the target point cloud. The similarity checks are coded in lines 6 and 7 of Algorithm 2. In these lines, the variables S_A and S_D are the differences between the angle similarity properties and the distance similarity properties, respectively. Then, the ID information (L_S and L_T) of the points to which the relevant features belong, together with the similarities calculated from these features, is stored in an M_ST matrix.
After the S_A and S_D calculations, which are carried out in a fixed clockwise order, the M_ST matrix is sorted so that its rows have descending S_A values. The rows with S_A values smaller than the threshold T_A are then selected in the matrix. These selected rows correspond to the points in the subsets with the highest similarity between the source (S) and target (T) point clouds. Thereafter, these rows, which are recorded in a matrix M_M, are reordered to have descending S_D values, and hence the optimum subset of points for matching is determined. In summary, the points are roughly selected by angular similarity and then ordered with respect to the distance-based similarity. In the final step of our description and matching algorithm, the points whose distance differences are close to zero are selected as the common keypoints of the point clouds. Algorithm 2. Input: source and target point clouds P_S and P_T, the number of sub-points to be selected from each point cloud N_C, and the distance threshold T_A for the similarity of the angle-based features. Output: a matrix M_M storing the information of the matched sub-point sets of P_S and P_T.
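The angle-based pre-selection and distance-based ordering described above can be sketched as follows. This is a rough illustration with a hypothetical data layout (one feature tuple per subset); the real Algorithm 2 iterates over sub-point sets drawn from the clouds:

```python
import numpy as np

def match_subsets(F_S, F_T, T_A):
    """Pair source/target subsets by geometric similarity.

    F_S, F_T: lists of (label, cosines, dists) per subset; T_A: angle
    similarity threshold. Returns candidate pairs ordered so that the
    smallest distance difference (S_D) comes first, mirroring the
    final selection of near-zero distance differences.
    """
    rows = []
    for L_S, cos_s, d_s in F_S:
        for L_T, cos_t, d_t in F_T:
            S_A = np.abs(np.asarray(cos_s) - np.asarray(cos_t)).sum()  # angle-based difference
            S_D = np.abs(np.asarray(d_s) - np.asarray(d_t)).sum()      # distance-based difference
            if S_A < T_A:                  # rough angular pre-selection
                rows.append((L_S, L_T, S_A, S_D))
    # order the surviving candidates by distance-based similarity
    return sorted(rows, key=lambda r: r[3])
```

The pair whose distance differences are closest to zero ends up at the head of the returned list.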
The processing steps of the descriptor and matching algorithms (see the pseudocode given in Algorithms 1 and 2) of the proposed method are also summarized with a flowchart in Figure 10.

Iterative Closest Point (ICP) Algorithm
The iterative closest point algorithm completes the registration of two coarsely aligned point clouds; hence, it aims to provide a combined point cloud dataset at the end of the process. Basically, an ICP algorithm follows four steps: (i) detecting and selecting keypoints; (ii) matching the points according to the minimum distance difference principle; (iii) calculating the rotations R(α_x, α_y, α_z) and translations T(δ_x, δ_y, δ_z); (iv) providing an optimum alignment as a result of the iterative process [5,41]. The iterations are continued until the root mean square error of the transformation residuals decreases below a predetermined threshold value (τ > 0) or a given iteration number (a) is reached [5]. Figure 11 shows the ICP algorithm steps. The ICP algorithm can be used with various geometric data, including point sets, polylines, implicit and parametric curves, and triangulated faceted surfaces. In principle, the ICP algorithm handles these geometric representations by evaluating the closest points in two datasets. In the formulation of the algorithm, P denotes a "data" (target) shape, which is moved (registered) to be in best alignment with X as a "model" (source) shape [5]. The distance "d" between an individual data point →p of P and the model X is shown in Equation (18).
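Equation (18) did not survive the text extraction; in Besl and McKay's formulation [5] the distance between a data point →p and the model shape X takes the standard form (reconstructed here for readability):

```latex
d(\vec{p}, X) = \min_{\vec{x} \in X} \left\lVert \vec{x} - \vec{p} \right\rVert
```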
In the equation, the minimum is attained at the closest point of X to the data point. One of the fundamental issues in the ICP algorithm is the 3D coordinate transformation. Before the ICP registration, the coarse alignment of the point clouds involves a coordinate transformation process as well. The coarse registration of the point clouds requires the estimation of the 3D Helmert transformation parameters, including three translations, three rotation angles and a scale factor [42]. Different from the transformation in the coarse alignment, the number of estimated parameters in the ICP fine registration is only six, covering the translations and rotations; because the scale factor is estimated once in the coarse registration, it is excluded from the parameters estimated in the ICP iterations. Depending on the accuracy of the estimated 3D transformation, the registration process is classified as either coarse or fine registration [43]. Coarse registration provides a rough alignment of the point clouds and may be sufficient depending on the purpose of the study. However, in applications where higher accuracy is required, fine registration is needed, and the fine registration process begins after an initial alignment of the point clouds with coarse registration [44].
The seven-parameter Helmert similarity transformation is commonly applied in the registration of point clouds. The conjugate keypoints, which have been detected, identified and matched for the point clouds (conjugate keypoints q_i, p_i with i = 1, 2, ..., m for the first and second point clouds) in the coarse registration section, are used for estimating the seven transformation parameters (δ_x, δ_y, δ_z, α_x, α_y, α_z, s). The 3D Helmert similarity transformation is formulated as given in Equation (19). In the equation, q_i and p_i are the Cartesian coordinate vectors of a conjugate keypoint in the first and second point clouds, respectively. The rotation parameters are evaluated in an orthogonal rotation matrix R(α_x, α_y, α_z), which is multiplied with a scale factor (s) and added to the translation parameters T(δ_x, δ_y, δ_z) [45].
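Equation (19) is likewise missing from the extracted text; the seven-parameter Helmert similarity transformation described here has the standard form (a reconstruction consistent with the notation above, with q_i transformed into the frame of p_i):

```latex
p_i = T(\delta_x, \delta_y, \delta_z) + s \, R(\alpha_x, \alpha_y, \alpha_z) \, q_i , \qquad i = 1, 2, \ldots, m
```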
The seven transformation parameters given in Equation (19) are estimated using the known coordinates of at least three conjugate keypoints in both point clouds, and when more common keypoints are available, the transformation parameters can also be calculated with a least squares adjustment by adopting the condition given in Equation (20) [46], where v is the residual matrix and P is the weight matrix of the observables contributing to the parameter estimation in the mathematical model of the least squares adjustment procedure. After estimating the transformation parameters with sufficient accuracy, it is possible to transform the coordinates of the target point cloud (q_i; i = 1, 2, ..., m) into the source point cloud (p_i; i = 1, 2, ..., m) successfully. The crucial issue in this step is to determine the transformation parameters as precisely as possible. In order to increase the accuracy of the parameters, improving the estimates step by step iteratively is an effective and commonly applied approach.
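With unit weights (P = I), the least-squares estimate described above has a well-known closed-form solution. The sketch below is an illustration, not the authors' Gauss-Markov implementation; it recovers s, R and T from matched keypoints via the SVD-based Umeyama/Kabsch approach:

```python
import numpy as np

def estimate_similarity(q, p):
    """Closed-form least-squares estimate of the 3D similarity
    (Helmert) transform p ≈ T + s·R·q from conjugate keypoints.

    q, p: (m, 3) arrays of matched target/source keypoints, m >= 3.
    Minimizes the sum of squared residuals with unit weights,
    i.e. the v'Pv -> min condition with P = I.
    """
    q, p = np.asarray(q, float), np.asarray(p, float)
    qc, pc = q - q.mean(0), p - p.mean(0)        # reduce to centroids
    H = qc.T @ pc                                 # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # orthogonal rotation
    s = np.sum(S * np.diag(D)) / (qc ** 2).sum()  # least-squares scale
    T = p.mean(0) - s * R @ q.mean(0)             # translation
    return s, R, T
```

With noise-free correspondences the parameters are recovered exactly; with redundant, noisy keypoints the estimate is the least-squares optimum, which can then serve as the starting point for the iterative refinement mentioned in the text.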
Other than the conventional ICP method as introduced by Besl and McKay [5], many variants of this algorithm have been developed and used. In these variants, attempts were made to increase the performance of the original algorithm in terms of accuracy as well as computational efficiency [47]. Zhang [48] enhanced the conventional ICP technique by replacing the error function with a robust kernel [49]. Chen and Medioni [50] modified the algorithm by replacing the point-to-point distance with a point-to-tangent-plane definition [51].
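The four ICP steps listed above can be condensed into a short point-to-point sketch. This is a deliberately naive illustration (brute-force closest-point search, rigid alignment only, scale assumed fixed by the coarse registration), not any of the cited variants:

```python
import numpy as np

def icp_point_to_point(data, model, tau=1e-6, max_iter=50):
    """Minimal point-to-point ICP sketch (rigid, no scale).

    data: (m, 3) coarsely aligned "data" cloud P; model: (k, 3)
    "model" cloud X. Iterates closest-point pairing, rigid
    least-squares alignment, and an RMSE-based stop rule (tau).
    """
    data = np.asarray(data, float).copy()
    model = np.asarray(model, float)
    prev_rmse = np.inf
    for _ in range(max_iter):
        # (ii) pair each data point with its closest model point
        d2 = ((data[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        closest = model[np.argmin(d2, axis=1)]
        rmse = np.sqrt(np.min(d2, axis=1).mean())
        if prev_rmse - rmse < tau:           # (iv) convergence test
            break
        prev_rmse = rmse
        # (iii) rigid alignment of data onto the paired closest points
        dc, cc = data - data.mean(0), closest - closest.mean(0)
        U, _, Vt = np.linalg.svd(dc.T @ cc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        T = closest.mean(0) - R @ data.mean(0)
        data = (R @ data.T).T + T
    return data, rmse
```

Production implementations replace the brute-force pairing with a k-d tree and add outlier rejection, but the loop structure is the same.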

Performance of the Keypoint Detection Algorithms
The case studies testing the proposed detection algorithm introduced in this study were carried out using three different datasets: the building facade, the Aristotle sculpture and the Stanford bunny sculpture point clouds. The datasets have been described in Section 2.1. In the proposed detection algorithm, the mean curvature of the surface was calculated. Then, the data were filtered according to an optimum curvature criterion that considers the changes of the calculated curvatures at the points. After that, a voxel-based filter was applied (see Figure 12). The result of filtering the terrestrial laser scanning point cloud data of the building facade can be seen in Figure 12a with the voxels, and the filtering results of the sculptures, which were colored according to the curvature values, are given in Figure 12b for the Aristotle and Figure 12c for the Stanford bunny. Filtering the point cloud is one of the crucial initial steps applied prior to identifying the corresponding keypoints.
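The voxel-based filtering step can be illustrated with a minimal sketch (a generic voxel thinning, not the authors' exact filter), which keeps one centroid per occupied voxel:

```python
import numpy as np

def voxel_filter(points, voxel_size):
    """Voxel-based thinning applied after the curvature filtering step:
    keep one representative (the centroid) per occupied voxel."""
    pts = np.asarray(points, float)
    keys = np.floor(pts / voxel_size).astype(int)  # integer voxel indices
    buckets = {}
    for key, p in zip(map(tuple, keys), pts):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in buckets.values()])
```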
In order to provide comparative results on the keypoint detection routines and a discussion of their role in ICP performance, we applied the intrinsic shape signatures (ISS) and local surface patch (LSP) methods in the numerical tests using all datasets. The theoretical background and formulations of these methods have already been described in Section 2.2.1. As a result of the keypoint detection process using the point cloud data of the Aristotle sculpture, the numbers of detected keypoints for each point cloud are ~60 points with the ISS algorithm and ~180 points with the LSP algorithm. Figure 13 visualizes the density and distribution of the keypoints detected with each algorithm. In comparison, the number of keypoints identified with the new algorithm is 18 for the Aristotle sculpture. For the Stanford bunny sculpture, the numbers of detected keypoints for each point cloud are ~150 points with the ISS algorithm and ~350 points with the LSP algorithm, whereas the proposed algorithm identified 14 keypoints (see Figures 12c and 19).

Using the point cloud datasets obtained from the terrestrial laser scanning (TLS) and mobile laser scanning (MLS) measurements of the building facade, the ISS algorithm revealed ~34 keypoints in the TLS point cloud data and ~88 keypoints in the MLS point cloud data, whereas the LSP algorithm detected ~200 keypoints in each of the TLS and MLS point cloud datasets. Figure 14 shows the distributions of the keypoints detected from the TLS and MLS point clouds using the ISS and LSP algorithms. Using the proposed algorithm, the approximate number of keypoints detected according to the filtering of the calculated curvatures in the TLS and MLS point clouds is 30 for the building facade (see Figures 12a and 18).

Performance Tests of Keypoint Descriptor and Matching Algorithms
In the proposed descriptor algorithm, the angle cosine values between the 3D direction vectors are calculated, and the cosine similarities among the conjugate geometries are searched. The significant originality of the registration algorithm, which is introduced and tested in this article, lies in its description and matching approach. The theoretical details of the description and matching part of the algorithm are explained in Section 2.2.2. In the point cloud data of the Aristotle sculpture, five of the 18 keypoints were matched by the description and matching routine of our algorithm. Figure 15 shows the distribution of the matched keypoints of the Aristotle sculpture. In addition to the new algorithm, the keypoints that were matched according to the keypoint histograms obtained from the PFH descriptor, with the help of the ISS and LSP detectors, were also used in the study. As a result of the ISS+PFH process, six of ~60 identified keypoints were matched for the Aristotle sculpture point cloud datasets (Figure 16). With the LSP+PFH algorithm, eight of ~180 identified keypoints were matched for the Aristotle sculpture (Figure 17).
In the description and matching procedure carried out for the building facade using the terrestrial (TLS) and mobile (MLS) laser scanning data, the automatic processes of the algorithms (ISS+PFH, LSP+PFH and the proposed algorithm) did not succeed in matching the conjugate points. This is mainly because of the large discrepancies among the identified keypoint sets in the point clouds, and hence the failure to establish similarity relations for matching. Unlike the automatic description and matching process, a semi-automatic way of processing, where the saliency points are identified using the criteria formulated in Section 2.2.1 and selected manually according to their suitability for similarity, gave results. In this part of the process with the building facade point clouds, the filter and detection procedures were carried out considering the mean curvatures at the points using our detection algorithm. Thereafter, the keypoints identified as the output of the detection algorithm were fed manually into the proposed description and matching algorithm. Figure 18 shows the conjugate points matched using semi-automatic matching of the keypoints identified with the new detector (Algorithms 1 and 2).
In the point cloud data of the Stanford bunny sculpture, five of the 14 keypoints were matched by the description and matching routine of our algorithm. The second plot of Figure 19 shows the distribution of the matched keypoints of the bunny sculpture. In addition to the new algorithm, the ISS+PFH process was able to match only three of the ~150 detected keypoints. On the other hand, the LSP+PFH algorithm did not provide any matching among the ~350 identified keypoints.

Numerical Validations of the Applied Algorithms in Fine Registration with the ICP Method
High-accuracy determination of the coordinate transformation parameters between the point clouds is crucial for combining the datasets to generate a seamless model of 3D objects. Appropriately aligning the point clouds with pre-defined transformation parameters in the coarse registration phase plays an essential role in the overall performance of the fine registration using ICP. In this study, we applied the Gauss-Markov adjustment method in calculating the transformation parameters using the similarity relations among the matched points determined in Section 3.2, and we evaluated the results considering the root mean square errors (RMSE) of the transformation and the iteration numbers of the computational convergence.
In the tests, the three case study datasets (the Aristotle sculpture, the building facade and the Stanford bunny sculpture) were used. The numerical validations were carried out employing three different coarse registration algorithms (ISS+PFH, LSP+PFH and the newly proposed algorithm). For each dataset, using the matched keypoints output by each coarse registration algorithm, the fine registrations of the point cloud datasets were carried out with the iterative closest point (ICP) method. The success of the tested coarse registration algorithms, including the proposed one, was assessed and compared through the fine registration accuracies in terms of the RMSE values reached in the ICP iterations. The iteration number at which the ICP converged was also considered as another measure to assess the contribution of the coarse registration algorithms to the efficiency of the fine registration process.
The first test was carried out using coarsely aligned point clouds with the matched keypoints for the Aristotle sculpture using the ISS+PFH algorithm. Table 1 shows the transformation parameters calculated with the six conjugate points identified with the ISS+PFH algorithm and used for the coarse alignment of the point clouds. In the same table, the transformation parameters are given for small and big rotation angles between the axes of the two coordinate systems subject to the transformation. When the two sets of transformation parameters are compared, it is seen that the calculated parameters have higher accuracy, and thus the RMSE value of the transformation is smaller, when the rotation angles are bigger. In Table 1, m_0 is the RMSE value of the transformation; δ_x, δ_y, δ_z are the translation parameters with their RMSE values (in centimeters); α_x, α_y, α_z are the rotation parameters with their RMSE values (in radians); and s is the scale factor with its accuracy. Figure 20 summarizes the results obtained from the iterative refining of the transformation parameters using ICP for the Aristotle sculpture datasets. The illustrations in the figure show the point clouds before and after the transformation using the final transformation parameters of ICP. As seen from the graphic below the illustrations, the RMSE value of the ICP converged at the 20th iteration, dropping from 7.0 cm to 0.9 cm when the keypoints were matched using the ISS+PFH algorithm. The second test was carried out using coarsely aligned point clouds with the matched keypoints for the Aristotle sculpture, but this time using the LSP+PFH algorithm. Table 2 shows the transformation parameters calculated with the eight conjugate points identified with the LSP+PFH algorithm and used for the coarse alignment of the point clouds.
According to the statistics given in the table, the transformation parameters calculated for the point clouds having big rotation angles between the coordinate axes have higher accuracy and thus provide a relatively more precise transformation. Figure 21 shows the graphical illustrations of the point clouds before and after the fine registration with ICP, and the decreasing RMSE values of the transformation with the increasing number of iterations in the graphic below the figure. Considering this graphic, it is seen that the RMSE value of the ICP converged at the 30th iteration, decreasing from ~6.0 cm to ~1.0 cm using the LSP+PFH matching algorithm. Compared with the ICP results of the point clouds roughly aligned using the ISS+PFH algorithm, the final accuracy obtained in the second test is slightly worse, and the convergence of the iterations took longer. In summary, the ICP process after both the ISS+PFH and LSP+PFH matching gave similar accuracies of around ~1.0 cm. In the last test with the Aristotle sculpture data, the fine registration of the point clouds, which were aligned using the newly proposed algorithm, was carried out with the ICP method. Table 3 gives the transformation parameters calculated with the five conjugate points identified with the new algorithm and used for the coarse alignment of the point clouds.
According to the statistics given in the table, the accuracy of the transformation between the point clouds using the conjugate points from the new algorithm was higher than the accuracies obtained in the previous two tests. Additionally, similar to the previous tests, higher accuracy transformation parameters were obtained with the bigger rotation angles. In Figure 22, the performance of the ICP method after the coarse registration with the new algorithm is demonstrated with visual graphics of the point clouds before and after matching. The RMSE value of the ICP converged at the 10th iteration, which indicates a significant improvement in the performance of the registration compared to the other algorithms tested previously. The RMSE of the iterations dropped to ~0.29 cm, while it was not better than ~0.90 cm for the other algorithms. The ICP method was applied to the building facade data as well. In the coarse registration process, the TLS and MLS point cloud keypoints and the conjugate points identified semi-automatically using the new algorithm were used. In the evaluation of the building facade datasets, the automatic ISS+PFH and LSP+PFH algorithms could not be applied successfully in the coarse registration because no keypoints were successfully matched.
Therefore, the keypoints for the coarse alignment of the point clouds were identified manually from the saliency points, such as window corners (see Figure 18), and introduced to the semi-automatic matching process with the PFH algorithm. However, this trial was also not successful in matching with the PFH algorithm. The different patterns, characteristics and resolutions of the two point cloud datasets, acquired from terrestrial and mobile platforms using different sensors, possibly caused this inconsistency, and therefore both the automatic and semi-automatic processes failed in matching the keypoints. The only successful solution for matching the keypoints of the building facade dataset was applying semi-automatic matching with the newly suggested method, in which the keypoints were identified manually from the saliency points and fed into the new matching algorithm.
In Table 4, the transformation parameters between the TLS and MLS point clouds in the coarse registration process, calculated using the common points identified semi-automatically with the new algorithm, are given. In Figure 23, the performance of the ICP method for the building facade datasets is shown. According to the given graphics, the RMSE value of the ICP converged at the 50th iteration and dropped to ~0.02 m. The ICP method was also applied using the Stanford bunny dataset. The coarsely aligned point clouds, with the five keypoints matched using the new algorithm, were inserted into the process. Table 5 shows the transformation parameters calculated as a result of the coarse alignment. In Figure 24, the performance of the ICP method after the coarse registration with the new algorithm is demonstrated with visual graphics of the bunny point clouds before and after matching. The RMSE value dropped to ~0.29 cm at the 15th iteration of the ICP.
Because the LSP+PFH method did not provide any conjugate keypoints as a result of the matching process using the bunny data, coarse alignment was not possible with it. The ISS+PFH method output only three conjugate keypoints, which were not evenly distributed on the object. Using the point clouds coarsely aligned with the three conjugate keypoints from the ISS+PFH method, the ICP algorithm provided an RMSE value of ~0.50 cm at the 15th iteration for the bunny dataset.


Discussion
We developed and introduced a new method for automatic keypoint detection, description, and matching for 3D point cloud coarse registration in this study. The first part of the new method is a 3D keypoint detection algorithm, which was formulated based on working principles similar to those of the ISS and LSP methods. In the second part, a 3D descriptor (Algorithm 1) and a 3D keypoint matching algorithm (Algorithm 2) were designed and coded. This second part differs from its counterparts in defining the geometrical configuration among the saliency points according to differential numerical combinations (see Section 2.2.2) as well as in computational efficiency; hence, it constitutes an original contribution. In order to compare the new algorithm with commonly used counterparts, we included the ISS and LSP detectors with the PFH descriptor routine in the numerical tests. The transformation between the point clouds using the points matched by the algorithms was carried out with the Helmert similarity transformation method. The fine registration was carried out using the ICP method as formulated in Section 2.2.3. The validation results of the three case studies clarified the superior performance of the new algorithm and its contribution to the fine registration performance with ICP, in terms of combining two point clouds with higher registration accuracy and a smaller number of iterations.
In the numerical tests using point cloud data obtained from shifted perspectives with ultra HD laser scanning measurements with the same 3D laser scanner for small sculptures (the Aristotle and the Stanford bunny), the automatic keypoint detection, description, and matching algorithms worked successfully in coarse registration of the point clouds. However, comparisons of the RMSE values of the transformation in coarse registration proved the superiority of the new algorithm over its tested counterparts (see the test results in Section 2.2.3.). Fine registration with the ICP method followed the coarse registration with each tested detection, description, and matching algorithm using the point cloud data of the sculptures. In the tests with the ICP algorithm, the efficiency achieved in the iterative improvement of the registration confirmed the superior performance of the new algorithm in matching the keypoints for automatic registration of the point cloud datasets. With the ICP method, an RMSE of ~0.29 cm was obtained within a reasonable convergence time using the new algorithm, whereas the accuracy achievable with the keypoint sets matched by ISS+PFH and LSP+PFH was, at best, ~1.00 cm for the Aristotle sculpture and ~0.50 cm for the Stanford bunny (only ISS+PFH was available for the bunny), with a longer convergence time. This result proved the importance of initial alignment with the coarse registration process for generating improved models from point clouds measured with a shift. In the literature, it is possible to find similar studies investigating the performance of automatic keypoint detection algorithms and their effects on the generation of 3D models. Among these, Yew and Lee [52] recently carried out research with the ISS+FPFH and ISS+3DFeat-Net algorithms and reached conclusions that confirm our results regarding the performance.
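The RMSE figures used in these comparisons can be computed as the root mean square of the residual distances between the transformed source keypoints and their conjugates; a minimal sketch, assuming matched coordinate arrays and a similarity transform with scale s, rotation R, and translation t (the function name is illustrative):

```python
import numpy as np

def registration_rmse(src, dst, s, R, t):
    """RMSE of residual distances after applying the similarity
    transform (scale s, rotation R, translation t) to src."""
    residuals = dst - (s * src @ R.T + t)
    return np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
```

Evaluated on the conjugate keypoints, this gives the coarse-registration RMSE; evaluated on closest-point pairs after each ICP iteration, it gives the convergence measure discussed above.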
Another experiment with the tested algorithms was carried out with the point cloud datasets of a building facade, obtained from outdoor measurements using two different laser scanners, one mounted on a stationary tripod at a ground station and the other on a mobile land vehicle. In this case, the automatic description and matching algorithms failed in the registration with each detector. An important reason for the failure of the automatic registration process is the difference in resolution and quality between the two point cloud datasets, since they were obtained using different laser scanners on stationary and mobile platforms. Semi-automatic selection and evaluation of the keypoints from the building facade data compensated for the problem. Consequently, matching the keypoints for the transformation in coarse registration was carried out with keypoints fed manually to the semi-automated algorithm. In the ICP tests with the building facade data, the point clouds initially aligned using the new algorithm provided applicable results in terms of the final RMSE value of the ICP (~2.0 cm) within a reasonable number of converging iterations; hence, its competence in combining point clouds obtained by measurements from different platforms was demonstrated.
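The Helmert similarity transformation used for the coarse alignment, estimated here from manually selected conjugate keypoints, admits an SVD-based closed-form solution; the following is a minimal sketch under the assumption of matched keypoint arrays (the function name is illustrative, and the paper's own implementation may differ):

```python
import numpy as np

def helmert_similarity(src, dst):
    """Estimate a 3D similarity (Helmert) transform dst ~ s * R @ src + t
    from matched keypoints via the SVD-based closed-form solution."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                                # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ D @ U.T                                 # rotation
    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)  # scale factor
    t = dst.mean(axis=0) - s * R @ src.mean(axis=0)    # translation
    return s, R, t
```

The reflection guard (matrix D) keeps the estimate a proper rotation even for near-planar keypoint configurations, which matters when only a few conjugate points, such as those selected on a facade, are available.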
The proposed algorithm has both strengths and weaknesses in practice. It provides smaller RMSE values in fewer iterations than the ISS+PFH and LSP+PFH algorithms, and it performs better in describing and matching the keypoints. Although the proposed algorithm detects even fewer keypoints than the other coarse registration algorithms, fine registration of the point clouds coarsely aligned with it yielded smaller RMSE values in the ICP method. The method is superior to ISS+PFH and LSP+PFH especially in the coarse registration of point clouds with different resolutions obtained from different laser scanners.
Because the description and matching algorithm of the proposed method is combination-based, and the distance and angle values are calculated for all point combinations in the subset, a large number of keypoint combinations are examined; this lengthens the description and matching process and increases the memory and processor burden on the computer system. The relatively cumbersome and limited capabilities of the platform on which the algorithm was coded create another practical disadvantage for the proposed method.
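The combinatorial growth behind this burden is easy to quantify: if k-point subsets of n detected keypoints are examined, the count grows as C(n, k). A small sketch (the choice of k = 3 and k = 4 here is illustrative, not taken from the paper):

```python
from math import comb

# Number of k-point subsets whose distances and angles would be
# evaluated in a combination-based description scheme.
for n in (10, 50, 100):
    print(f"n={n}: triples={comb(n, 3)}, quadruples={comb(n, 4)}")
# n=100 already yields 161,700 triples and 3,921,225 quadruples.
```

This polynomial growth in the subset count is why restricting the number of candidate saliency points is important for keeping the description and matching process tractable.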
The proposed algorithm is point-based. Especially when working with large datasets, failure to find conjugate points and/or mismatching problems constitute another disadvantage of the new algorithm. In such cases, semi-automatic operation of the algorithm (with manual selection of the saliency points to be matched) is suggested and provides accurate results.
In large-scale experiments, the proposed algorithm works semi-automatically. It needs to be enhanced in detecting keypoints in large-scale datasets if it is to be applied fully automatically in such experiments. In our experience, this weakness also holds for the other coarse registration algorithms (e.g., ISS+PFH, LSP+PFH).

Conclusions
The new algorithm developed and tested in this study is promising in terms of both the accuracy of the resulting products and the computational efficiency. However, further development is still needed, especially for combining point cloud data with varying accuracies and resolutions obtained from different platforms and/or larger areas. Improving the algorithms with intelligent techniques capable of handling the stochastic information of the datasets can lead to the expected improvements in the results. Toward this goal, in future work, we plan to focus on integrating deep learning features into our algorithms together with variance estimation modules. To improve computational efficiency and optimize the use of computer processors, we aim to port the automatic matching algorithm to another programming platform with parallel processing capability. By developing the automatic keypoint detection and selection algorithm further, we plan to expand the research toward evaluating large-scale point cloud datasets.