Virtual Namesake Point Multi-Source Point Cloud Data Fusion Based on FPFH Feature Difference

Point cloud data come from many sources, such as point cloud models obtained after bundle adjustment of aerial images, point clouds acquired by vehicle-borne light detection and ranging (LiDAR), point clouds acquired by terrestrial laser scanning, etc. Different sensors use different processing methods, and each has its own advantages and disadvantages in terms of accuracy, range and point cloud magnitude. Point cloud fusion can combine the advantages of each point cloud to generate a point cloud with higher accuracy. Following the classic Iterative Closest Point (ICP) algorithm, a virtual namesake point multi-source point cloud data fusion method based on Fast Point Feature Histograms (FPFH) feature difference is proposed. For multi-source point clouds with noise, different sampling resolutions and local distortion, it achieves a better registration effect and improves the accuracy of the low-precision point cloud. To find the corresponding point pairs in the ICP algorithm, we use the FPFH feature difference, which combines surrounding neighborhood information and is robust to noise, to generate virtual namesake points that serve as corresponding point pairs for registration. Specifically, voxels are established around each transformed point and, according to the F2 distance between the FPFH descriptors of the target point cloud and the source point cloud, a convolutional neural network outputs a virtual corresponding point closer to the theoretical one, achieving multi-source point cloud registration. Compared with the ICP algorithm, which selects corresponding points from the existing points, this method is more reasonable and more accurate, and can correct a low-precision point cloud in detail. The experimental results show that the accuracy of our method is equivalent to that of the best baseline algorithm on clean point clouds and on point clouds of different resolutions. When the point cloud contains noise or distortion, our method outperforms the other algorithms. For a low-precision point cloud, it better matches the target point cloud in detail, with better stability and robustness.


Introduction
In recent years, point cloud data have been applied to more fields, such as robotics and autonomous driving, face recognition, gesture recognition, etc. For autonomous driving systems above L3, high-precision maps have become an indispensable component. A high-precision map is a special map with centimeter-level accuracy and detailed lane information compared to general navigation maps. It can describe roads more comprehensively and in detail and reflect the real situation of the road more accurately [1]. High-precision maps are one of the important applications of high-precision point clouds. There are three methods of obtaining high-precision map point cloud data: mobile surveying vehicle collection, drone aerial survey and 1:500 topographic maps [2]. The sensors utilized in each data acquisition scheme are different due to the heterogeneity of sensors, so the point cloud data obtained differ greatly in accuracy and range. Specifically, some sensors obtain point clouds with high accuracy but small range, while others obtain point clouds with low accuracy but large range. Therefore, how to fuse point clouds from different sources is an important problem. The main contributions of this paper are as follows:

•	By introducing Fast Point Feature Histograms (FPFH) features and using a CNN to learn from the F2 distance of FPFH features to obtain matching probabilities, we improve robustness to point cloud noise and point cloud resolution.

•	We use voxels, which exploit the spatial information around a point, to generate virtual points, increasing the accuracy and stability of finding corresponding points.

•	We report experimental evaluations against other existing works on clean, noisy, different-resolution and distorted datasets.
The remainder of the paper is organized as follows: We begin with a review of related work in Section 2. The main steps of our method are described in detail in Section 3. Our experiment results are discussed in Section 4, and conclusions are drawn in Section 5.

Related Work
Multi-source point cloud fusion is essentially the precise registration of two point clouds, obtained from scans of the same scene, that differ in accuracy, resolution and range. ICP is a commonly used algorithm for precise registration; it is an optimal matching algorithm based on least squares. The algorithm takes the closest point in the target point cloud as the corresponding point of each source point, takes the distance between the closest point pairs as the objective and calculates the optimal rigid body transformation between the point clouds to complete the matching. In 1992, Besl and McKay proposed the iterative closest point algorithm to realize free-form surface registration and automatic registration of raw point clouds, which became the basic algorithm for automatic point cloud registration [5]. The ICP algorithm selects the point with the smallest Euclidean distance as the corresponding point, calculates the rigid transformation matrix from the corresponding point pairs and iteratively obtains the optimal transformation. Using the shortest Euclidean distance as the criterion for corresponding points is simple and fast, but it causes many mismatches and reduces registration accuracy. At the same time, the ICP algorithm may easily become trapped in a local minimum when there is no good initial position or when there is noise in the point cloud. Aiming to improve the correct rate of corresponding points, scholars have proposed a series of improvements. Censi et al. proposed using the distance from a point to a line to determine the corresponding point; even though this method avoids the shortest-distance search and improves search efficiency, the registration accuracy is reduced [10]. Chen et al.
added the point cloud normal vector to the original ICP algorithm and improved the point-to-point model to a point-to-surface model [11]. Building on Chen's work, Mitra et al. calculated the distance between the corresponding points of the two point clouds based on geometric Euclidean distance and used different registration methods depending on that distance [12]. Luo et al. formed a triangle from the three points closest to the target point and took the foot of the perpendicular to the triangle as the corresponding point [13]. Segal et al. proposed the G-ICP algorithm, which uses the covariance matrix to play a role similar to weights, eliminates some bad corresponding point pairs and creatively integrates point-to-point ICP and point-to-surface ICP [14].
With the continuing development of technology, deep learning, a subfield of machine learning, has been applied in various fields, including point cloud registration. Elbaz et al. [8] used convolutional neural networks to aggregate local features to complete point cloud registration. Li Danyu [15] used a convolutional neural network structure over three orthogonal views to distinguish the target from other objects among 3D point cloud candidates by filtering and matching the data set. In 2019, Baidu's autonomous driving group proposed the first end-to-end high-precision point cloud registration network, which achieves registration accuracy comparable to prior state-of-the-art geometric methods [9]. The basic idea is to match dozens of robust key points, extracted by a module named the Weighting Layer, between the target point cloud and the source point cloud. The network first introduces semantic features to automatically avoid dynamic objects and select key points that are easy to match; it then generates grid points around the key points, regenerates features for both key points and grid points, uses the feature difference between them to compute a matching probability and fuses these probabilities to generate a virtual, robust namesake point for optimized pose calculation.
These improved methods share a prerequisite: it is assumed that the corresponding point pairs found in the target point cloud and the source point cloud are exactly the same physical points. However, due to different sensors, scanning angles, scanning resolutions and other factors, it is difficult to guarantee that the coordinates of the corresponding point pairs representing a given location in space are exactly the same; such point pairs carry a certain coordinate error. In the literature [9], the concept of virtual namesake points is proposed, but that work mainly targets point cloud data obtained by two scans from the same type of sensor and focuses on excluding key points on dynamic objects, which is not suitable for different sensors. For the registration of point cloud data with different accuracy, this article proposes virtual namesake point multi-source point cloud data fusion based on FPFH feature difference. This method does not try to find a namesake point among the existing points of the target point cloud, but directly generates a virtual namesake point with deep learning, which improves the accuracy of the ICP algorithm. At the same time, in overlapping areas, the low-precision point cloud is corrected to a certain extent so that it fits the high-precision point cloud more closely.

Methodology
Aiming at the problem of multi-source point cloud registration with different accuracy, different resolution and noise, this paper proposes a virtual namesake point multi-source point cloud data fusion method based on FPFH feature difference to improve the traditional ICP algorithm. In the ICP algorithm, the step of finding the corresponding point is replaced by a virtual point search based on the FPFH feature difference. In this process, a K-dimensional tree [16] search is used. The K-dimensional tree is a data structure that partitions a K-dimensional data space and is used to represent a collection of points in that space. The traditional ICP algorithm spends a lot of time in the repeated iterative search for the closest point, which reduces computational efficiency; using the K-dimensional tree to search for the corresponding point can effectively reduce the computational complexity and realize a fast search of the neighborhood relationship. We embed the multi-core, multi-threaded OpenMP parallel processing mode to accelerate the fast feature histogram extraction of key points while extracting FPFH features. After performing the optimal rigid transformation, the low-precision target point cloud is corrected in detail to obtain better-precision corresponding points and a conversion matrix, so the low-precision point cloud better matches the high-precision point cloud. The flowchart of this method is shown in Figure 1.
Our method mainly includes the following key processes: (1) Finding virtual namesake points based on FPFH feature difference: calculate the FPFH feature vector FPFH_p_i of the current point p_i(X_i, Y_i, Z_i) ∈ R^3, i = 1, …, N_p in the source point cloud P, then transform p_i into the coordinate frame of the target point cloud to obtain the point p_i′(X_i′, Y_i′, Z_i′), and construct a 2 × 2 × 2 voxel grid around p_i′ to acquire 8 voxel center points, from which the virtual namesake point is generated. (2) Point cloud registration based on virtual namesake points: use the virtual namesake point pairs to estimate the optimal rigid transformation, as in ICP. (3) Further correction of the overlapping area: once again, we acquire virtual namesake points as in step (1), then replace the low-precision points with these virtual namesake points, correcting the points in the low-precision point cloud to improve its accuracy.
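To illustrate the neighborhood search described above, the following pure-Python sketch builds a K-dimensional tree over 3-D points and queries the nearest neighbor. A production system would use an optimized library such as PCL; all function names here are illustrative, not taken from the paper.

```python
def build_kdtree(points, depth=0):
    """Recursively build a k-d tree over 3-D points (list of (x, y, z) tuples)."""
    if not points:
        return None
    axis = depth % 3                       # cycle through x, y, z splitting axes
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def _dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def nearest(node, target, depth=0, best=None):
    """Return the tree point closest to `target` in Euclidean distance."""
    if node is None:
        return best
    point = node["point"]
    if best is None or _dist2(point, target) < _dist2(best, target):
        best = point
    axis = depth % 3
    diff = target[axis] - point[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, depth + 1, best)
    # Only descend the far side if the splitting plane is closer than the best hit.
    if diff * diff < _dist2(best, target):
        best = nearest(far, target, depth + 1, best)
    return best
```

Compared with a brute-force scan over all points in every ICP iteration, the tree prunes whole subtrees whose splitting plane is farther than the current best candidate, which is the source of the speed-up noted above.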

Finding Virtual Namesake Point Based on FPFH Feature Difference
The process of finding a virtual namesake point is divided into two steps: the first step extracts the F2 difference of FPFH features, and the second step calculates the coordinates of the virtual namesake point.
In the first step, extracting the F2 difference of FPFH features, we select FPFH [17] for two reasons. The first is the advantages of FPFH itself: since the histogram lives in a high-dimensional hyperspace, it provides a measurable information space for the feature representation of the point cloud, and FPFH is robust to point cloud noise, so corresponding points can still be found correctly under different scanning resolutions or scanning accuracies. The second reason is that the FPFH feature is a descriptor that combines information from the surrounding neighborhood points; calculating the virtual namesake point requires the difference between the transformed point and its surrounding neighborhood, and FPFH provides a high-dimensional vector that makes full use of neighborhood spatial information to improve accuracy. Here is a brief introduction to the steps of FPFH feature extraction: (1) Suppose the point cloud P and its coordinates are known, and consider the r-neighborhood of a point, that is, a sphere with point p_0 as the center and r as the radius. The points inside the sphere form the neighborhood of p_0, as shown in Figure 2, where P_k1, P_k2, P_k3, P_k4, P_k5 are neighborhood points; each point in the neighborhood is connected to every other point, forming point pairs. (2) Construct the local coordinate system (u, v, w) of each point pair, as shown in Figure 3, and compute the four features f_1, f_2, f_3, f_4 of the point pair. After calculating the four features of each point pair in the neighborhood, each feature is divided into five intervals; since three of the features are used for binning, there are 5^3 = 125 intervals, so a 125-dimensional histogram is generated. This 125-dimensional feature is the Point Feature Histogram (PFH).
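The four point-pair features can be computed as in Rusu's PFH formulation. The pure-Python sketch below builds the local (u, v, w) frame for one point pair from the two points and their normals and returns the features, here named alpha, phi, theta and d for f_1 through f_4; the helper names are our own, not from the paper.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def scale(a, s):
    return tuple(x*s for x in a)

def pair_features(ps, ns, pt, nt):
    """Four PFH point-pair features in the Darboux frame (u, v, w):
    ps, pt -- source/target points; ns, nt -- their unit normals."""
    d_vec = tuple(t - s for s, t in zip(ps, pt))
    d = math.sqrt(dot(d_vec, d_vec))
    u = ns                                    # u: the source normal
    v = cross(u, scale(d_vec, 1.0 / d))       # v: perpendicular to u and the baseline
    w = cross(u, v)                           # w: completes the orthogonal frame
    alpha = dot(v, nt)                        # f_1
    phi = dot(u, scale(d_vec, 1.0 / d))       # f_2
    theta = math.atan2(dot(w, nt), dot(u, nt))  # f_3
    return alpha, phi, theta, d               # f_4 is the pair distance
```

For two points on a plane with identical normals, all three angular features are zero and only the distance d is nonzero, which is why planar regions yield flat histograms.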
The FPFH feature is obtained by integrating simplified PFH [18] features. The neighborhood construction is shown in Figure 4, which, compared to Figure 2, only connects the center point to its neighboring points. Of the above features, only f_1, f_3, f_4 are selected, and each is divided into 11 intervals according to its range, so that 33 values represent the SPFH feature; the SPFH features in the neighborhood are then weighted to acquire the FPFH feature:
FPFH(p) = SPFH(p) + (1/k) Σ_{k=1}^{k} (1/w_k) · SPFH(p_k),
where w_k is the weight, which represents the Euclidean distance between the points of the pair. The second step is to calculate the coordinates of the virtual namesake point. The deep learning part is implemented with a Convolutional Neural Network (CNN) and SoftMax. For the input of the CNN, the first step is to compute, for a point p_i(X_i, Y_i, Z_i), p_i ∈ R^3, i = 1, …, N_p in the source point cloud P, the FPFH feature FPFH_p_i, and keep it. Then use the initial transformation matrix R, T to transform the point p_i, and call the transformed point p_i′(X_i′, Y_i′, Z_i′) the transformation point. Put the transformation point p_i′ into the target point cloud. Through the neighborhood search method, the neighborhood is divided into (2r/s + 1) × (2r/s + 1) × (2r/s + 1) voxels, where r is the width of the neighborhood and s is the size of a voxel, and each voxel contains some points of the target point cloud.
As shown in Figure 5, there are 8 voxels after the neighborhood of the current point is divided, where q_j, j = 1, …, 8 are the voxel center points, p_i′ is the point transformed into the target point cloud and the other points are its neighborhood points in the target point cloud. In each voxel, the points it contains are used to extract the FPFH feature of the voxel center point q_j, yielding the FPFH features FPFH_q_j, j = 1, …, 8 of the 8 voxel centers. Finally, the F2 distance between each FPFH_q_j, j = 1, …, 8 and the FPFH feature FPFH_p_i of the point p_i in the source point cloud is calculated; these distances are the input of the 3D CNN, and a SoftMax operation completes the step. The CNN learns a similarity measure between the source feature and the target feature and, more importantly, suppresses matching noise. The SoftMax operation converts the matching cost into a probability, denoted w_j, j = 1, …, 8. The finally generated virtual namesake point is calculated by Formula (4), as shown in Figure 6:
p_i* = Σ_{j=1}^{8} w_j · q_j. (4)
Through the CNN, we can exploit its powerful expressive ability in similarity learning, automatically learn deeper and more abstract feature information and directly generate the corresponding virtual namesake point, avoiding the search for a corresponding point among the existing points. This improves the accuracy of finding corresponding points and thereby the accuracy of point cloud matching.
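The fusion step above can be sketched as follows. Note that this replaces the learned 3D CNN scoring with a plain SoftMax over negative F2 (Euclidean) feature distances, so it is only a simplified stand-in for the paper's network; the function also accepts any number of voxel centers, although the paper uses 8, and the temperature parameter is our own assumption.

```python
import math

def f2_distance(fa, fb):
    """Euclidean (F2) distance between two FPFH descriptors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fa, fb)))

def virtual_namesake_point(centers, center_fpfhs, source_fpfh, temperature=1.0):
    """Fuse voxel-center points into one virtual corresponding point.

    centers      -- voxel-center coordinates q_j
    center_fpfhs -- FPFH descriptor of each center, computed from the target cloud
    source_fpfh  -- FPFH descriptor of the transformed source point p_i
    The paper scores the matching cost with a 3-D CNN; here a plain SoftMax
    over negative feature distances stands in for that network.
    """
    costs = [f2_distance(f, source_fpfh) for f in center_fpfhs]
    exps = [math.exp(-c / temperature) for c in costs]
    total = sum(exps)
    weights = [e / total for e in exps]       # SoftMax: cost -> probability w_j
    # Virtual point = probability-weighted sum of voxel centers (cf. Formula (4)).
    return tuple(sum(w * q[k] for w, q in zip(weights, centers)) for k in range(3))
```

When one voxel center matches the source descriptor far better than the others, its weight dominates and the virtual point collapses onto that center; when all matches are equally good, the virtual point is the centroid of the centers.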

Point Cloud Registration Algorithm Based on Virtual Namesake Point
The entire improved point cloud registration algorithm is similar to the classic ICP algorithm; the biggest change is to use the virtual point search method proposed in Section 3.1 to replace the closest-point correspondence step of classic ICP. The algorithm is summarized as follows: (1) First select two point cloud datasets P, Q with different accuracy; take the low-precision point cloud P as the source point cloud, the relatively high-precision point cloud Q as the target point cloud and the points p_i, p_i ∈ R^3, i = 1, …, N_p of the source point cloud as the candidate points.
(2) Use the initial conversion matrix R, T to transform all points in the source point cloud, obtaining the conversion points p_i′, p_i′ ∈ R^3, i = 1, …, N_p.
(3) Put all the conversion points p_i′ into the target point cloud Q, find the neighboring points of each conversion point in Q, set a threshold, calculate the Euclidean distance between the conversion point and its nearest neighbor in the target point cloud and compare this distance with the threshold. If it is greater than the threshold, the conversion point has no reliable correspondence and is excluded from the candidate points.

(9) After obtaining the corresponding points, calculate the conversion matrix R′, T′ by least squares; the calculation principle is to minimize the objective function of Formula (6):
E(R′, T′) = Σ_i || R′ p_i + T′ − q_i ||^2. (6)
(10) Repeat the above steps until the number of iterations is reached or the objective function is essentially unchanged.
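The least-squares step (9) admits the standard closed-form SVD solution (the Arun/Kabsch method). The NumPy sketch below is one conventional implementation of that solution, not code taken from the paper.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, T minimizing sum ||R p_i + T - q_i||^2
    via the standard SVD (Kabsch/Arun) solution.
    P, Q -- (N, 3) arrays of corresponding source and target points."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)             # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = q_bar - R @ p_bar
    return R, T
```

Inside the iteration, the corresponding points q_i would be the virtual namesake points generated in Section 3.1 rather than existing target points.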

Point Cloud in Overlapping Areas for Further Correction
In the previous section, the low-precision point cloud was registered with reference to the high-precision point cloud, and the optimal rigid transformation matrix was calculated. However, this is only a rotation and translation of the entire point cloud; local geometric differences between the point clouds remain, so the accuracy improvement from the ICP-style algorithm is limited to the whole, and further improvement is required in detailed areas.
For the point cloud in the overlapping area of the source and target point clouds, after the registration algorithm based on virtual namesake points has been performed, the virtual namesake points are searched for again based on the FPFH feature difference; that is, starting from the optimal position already obtained, virtual namesake points are found in order to improve the low-precision point cloud. The improvement steps are similar to those in Section 3.2 and are performed on all candidate points marked in Section 3.3. As shown in Figure 7, the black points are the high-precision point cloud, the red points are the point cloud to be registered and the green points are the point cloud after registration. Since the correction direction and magnitude of each point are based on the characteristics of the point itself, this approach not only achieves fine fitting in a small range, but also keeps the correction amount between regions free of large jumps, ensuring the continuity of the entire map area.


Experimental Data and Baseline Algorithms
The experimental data in this paper come from the WHU-TLS dataset released by the research group of the Institute of Space Intelligence of Wuhan University and from the Stanford 3D Scanning Repository [19–21]. The WHU-TLS dataset contains more than 1.74 billion 3D points collected by terrestrial laser scanning in 11 different environments. We select four representative types of point cloud data for the experiments: Bunny, Sign Board, Sculpture and Chair, which correspond to the data in the result figures below. Multi-source point clouds differ in resolution and may contain local distortion and noise; in order to verify the ability of our method to handle these differences, we applied Gaussian noise addition, point cloud downsampling and point cloud distortion to the experimental data, respectively. We compare the performance of our method with the following registration algorithms: ICP [5], Normal Distributions Transform (NDT) [6] and Fast Global Registration (FGR) [22].

Evaluation Metrics
We evaluate the registration by computing the mean isotropic rotation and translation errors, where R_id, t_id are the ground-truth rotation matrix and translation vector and R_i, t_i are the rotation matrix and translation vector calculated by the algorithm. The error of the rotation matrix is calculated by Formula (8) and expressed as an angle:
Error(R) = arccos( (tr(R_id^{-1} R_i) − 1) / 2 ). (8)
Similarly, the translation error is given by Formula (9):
Error(t) = || t_id − t_i ||_2. (9)
In addition, the Chamfer Distance (CD) is used to evaluate the distance between point clouds. For two point clouds S_1, S_2, the CD is calculated by Formula (10):
CD(S_1, S_2) = (1/|S_1|) Σ_{x∈S_1} min_{y∈S_2} ||x − y||_2 + (1/|S_2|) Σ_{y∈S_2} min_{x∈S_1} ||x − y||_2. (10)
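These metrics can be implemented directly. The NumPy sketch below follows the conventional definitions; the paper's exact normalization of the Chamfer Distance may differ slightly.

```python
import numpy as np

def rotation_error_deg(R_gt, R_est):
    """Isotropic rotation error: the angle of R_gt^{-1} R_est, in degrees.
    For a rotation matrix, the inverse equals the transpose."""
    cos_angle = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

def translation_error(t_gt, t_est):
    """Isotropic translation error: Euclidean distance between vectors."""
    return float(np.linalg.norm(np.asarray(t_gt, float) - np.asarray(t_est, float)))

def chamfer_distance(S1, S2):
    """Symmetric Chamfer Distance between two point sets (one common convention)."""
    S1, S2 = np.asarray(S1, float), np.asarray(S2, float)
    d = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=-1)  # pairwise distances
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```

The clip inside the rotation error protects arccos from tiny numerical overshoots beyond [-1, 1] that occur when the estimated matrix is only approximately orthogonal.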

Experiment Analysis
Clean Data: We do not perform any processing on the four types of point cloud data other than applying a random rigid transformation, keeping them in a clean state. Each type of point cloud has a known correct correspondence for evaluating the accuracy of the algorithms. The experimental results are shown in Table 1. On clean data, the ICP algorithm is the most accurate overall, except that it falls into a local optimum on the Sign Board data; FGR is the most stable. Ours is slightly worse than FGR, but FGR is very sensitive to noise; overall, ours is better than NDT. The qualitative results are shown in Figure 8, where the red point cloud represents the transformed source point cloud and the blue point cloud represents the target point cloud. Because this method mainly targets precise registration, the effect is shown with enlarged details of the data.

Different Resolution Data:
The resolution of point cloud data from different sources is not necessarily the same; clouds can be dense or sparse. In order to simulate the difference in point cloud sampling in reality, we down-sampled the experimental data to test the registration effect at different resolutions; the simulated data are down-sampled to roughly two-thirds of the original points. Experimental results are shown in Table 2. On point cloud data with different resolutions, our algorithm is superior to the other methods, except that it is occasionally below ICP; in most cases, however, ICP falls into a local optimum. The qualitative results are shown in Figure 9, where the red point cloud represents the transformed source point cloud and the blue point cloud represents the target point cloud; for better display, details are enlarged. The target point cloud in the figure is also down-sampled for visualization, but note that the target point cloud used by the algorithm is not sampled; only the source point cloud is down-sampled.
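The down-sampling used to simulate resolution differences can be reproduced with simple random sampling; the default fraction mirrors the two-thirds ratio described above, while the seed is illustrative.

```python
import random

def downsample(points, fraction=2/3, seed=0):
    """Randomly keep approximately `fraction` of the points to simulate a
    coarser scanning resolution (roughly two-thirds, as in the experiment)."""
    rng = random.Random(seed)          # fixed seed keeps the experiment repeatable
    k = max(1, round(len(points) * fraction))
    return rng.sample(points, k)
```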
Gaussian Noise Data: In order to more accurately simulate the presence of noise in real point clouds and verify the ability of our algorithm to deal with noise, we added Gaussian noise to the experimental data. When preparing the simulation experiment, we first perform a random rigid transformation on the original data, save the transformation results and then dither the transformed data within a certain range to achieve the effect of Gaussian noise. The experimental results are shown in Table 3. In terms of rotation and translation error, the ICP algorithm is the most accurate: the added noise follows a normal distribution with an expected value of 0, which favors ICP. Our method is next and is better than NDT and FGR. In terms of CD distance, because our method can improve the accuracy of the point cloud in detail, it is the best overall. The qualitative results are shown in Figure 10, where the red point cloud represents the transformed source point cloud and the blue point cloud represents the target point cloud.
Distorted Data: Due to differences between sensors, there may be a certain distortion between point clouds, so a certain distortion is manually added to the experimental data. Since the point cloud is distorted, the true value of the transformation relationship cannot be obtained, and only the CD distance is applicable for evaluation.
The experimental results are shown in Table 4. On the whole, our method is the best because it can adaptively correct the surrounding neighborhood of each point. The qualitative results are shown in Figure 11, where the red point cloud represents the transformed source point cloud and the blue point cloud represents the target point cloud.
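The Gaussian "dithering" used to build the noisy test data can be simulated as below; the noise scale sigma is an assumed value, not taken from the paper.

```python
import random

def jitter(points, sigma=0.01, seed=0):
    """Add zero-mean Gaussian noise to every coordinate, mimicking the
    dithering used to create the noisy test data (sigma is an assumed scale)."""
    rng = random.Random(seed)
    return [tuple(c + rng.gauss(0.0, sigma) for c in p) for p in points]
```

Because the noise has zero mean, the least-squares optimum is largely unchanged in expectation, which is consistent with the observation above that ICP's pose error degrades little under this perturbation.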

Conclusions
A method of virtual namesake point multi-source point cloud data fusion based on FPFH feature difference is proposed. It synthesizes a probability from the F2 distance between the voxel center points and the existing points in the target point cloud and then generates virtual namesake points for registration according to this probability. The use of voxels, FPFH features and CNN estimation improves the accuracy of point cloud fusion. We compared our algorithm with the classic ICP algorithm, the NDT algorithm and the FGR algorithm. Through experiments and accuracy evaluation, for clean point clouds and point clouds with different resolutions, our method matches the accuracy of the ICP and FGR algorithms and is better than the NDT algorithm; when the point cloud contains noise or distortion, our method is better than the other algorithms. Since the FPFH feature extraction dominates the calculation process, we will conduct a further study on operating efficiency.