# MPCR-Net: Multiple Partial Point Clouds Registration Network Using a Global Template


## Abstract


## 1. Introduction

- Some algorithms require the structures of pairwise point clouds to be the same. If the geometric structures of the pairwise point clouds are quite different, the registration accuracy will decrease;
- Some algorithms can register two partially overlapping point clouds through partial-to-partial point-cloud registration methods. However, these methods rely on individually training on specific partial data of the point cloud to establish the correspondence point relationship between the pairwise point clouds, and the registration accuracy is very sensitive to changes in the number of data points.

- A multiple partial point cloud registration method based on a global template is proposed. Each partial point cloud is gradually registered to the global template in patches, which can effectively improve the accuracy of the point cloud registration.
- A clipping network for the global template point cloud, TPCC-Net (clipping network for template point cloud), was designed. In TPCC-Net, the features of partial point clouds and the global template point cloud are extracted and fused through a neural network, and the correspondence points of each partial point cloud are cut out from the global template point cloud. Compared to the existing registration algorithm based on deep learning, this method can reduce the correspondence point estimation error and improve registration efficiency.
- A parameter estimation network for rigid body transformation, TMPE-Net (parameter estimating network for transformation matrix), was designed. The learnable features of a partial point cloud and its correspondence points generated through TPCC-Net were extracted through a neural network, and the parameters of the rigid body transformation matrix were estimated to minimize the learnable feature gap between the partial point cloud and the global template point cloud.

## 2. Related Work

#### 2.1. Classic Registration Algorithms

#### 2.2. Deep Learning-Based Registration Algorithms

## 3. MPCR-Net

#### 3.1. Overview

- Suppose there are n partial point clouds ${S}_{1},\dots ,{S}_{n}$ and a global template point cloud $T$, ${S}_{1},\dots ,{S}_{n}$ and $T$ are used as the inputs of TPCC-Net. In the TPCC-Net, the feature matrices $F\left({S}_{1}\right),\dots ,F\left({S}_{n}\right)$ of ${S}_{1},\dots ,{S}_{n}$ and $F\left(T\right)$ of $T$ are obtained through the point cloud feature perceptron, respectively;
- Global feature vectors $\phi\left({S}_{1}\right),\dots ,\phi\left({S}_{n}\right)$ are obtained by pooling $F\left({S}_{1}\right),\dots ,F\left({S}_{n}\right)$; then, the fusion features ${F}_{1},\dots ,{F}_{n}$ are obtained by splicing $\phi\left({S}_{1}\right),\dots ,\phi\left({S}_{n}\right)$ and $F\left(T\right)$ [42];
- Index features ${M}_{1},\dots ,{M}_{n}$ of ${F}_{1},\dots ,{F}_{n}$ are obtained through the index feature perceptron, and the indexes ${M}_{1}^{\prime},\dots ,{M}_{n}^{\prime}$ are obtained by normalizing, filtering, and addressing ${M}_{1},\dots ,{M}_{n}$. According to ${M}_{1}^{\prime},\dots ,{M}_{n}^{\prime}$, partial template clouds ${T}_{1}^{\prime},\dots ,{T}_{n}^{\prime}$ corresponding to ${S}_{1},\dots ,{S}_{n}$ respectively are cut out from $T$;
- Partial template clouds and partial point clouds are input into the TMPE-Net in the form of correspondence point groups $\left\{\left({S}_{1},{T}_{1}^{\prime}\right),\left({S}_{2},{T}_{2}^{\prime}\right),\dots ,\left({S}_{n},{T}_{n}^{\prime}\right)\right\}$. In the TMPE-Net, the global feature vectors $\left\{\left(\phi\left({S}_{1}\right),\phi\left({T}_{1}^{\prime}\right)\right),\left(\phi\left({S}_{2}\right),\phi\left({T}_{2}^{\prime}\right)\right),\dots ,\left(\phi\left({S}_{n}\right),\phi\left({T}_{n}^{\prime}\right)\right)\right\}$ of these groups are obtained from the point cloud feature perceptron and the pooling layer, and each group of global feature vectors is spliced to obtain the global fusion features ${F}_{1}^{\prime},\dots ,{F}_{n}^{\prime}$;
- The dimension of ${F}_{1}^{\prime},\dots ,{F}_{n}^{\prime}$ is reduced through the transformation parameter perceptron, which outputs the transformation parameter vectors ${Z}_{1},\dots ,{Z}_{n}$; the rigid body transformation matrices ${G}_{1},\dots ,{G}_{n}$ are then constructed from ${Z}_{1},\dots ,{Z}_{n}$;
- Rigid body transformations are performed on ${S}_{1},\dots ,{S}_{n}$ according to transformation matrices ${G}_{1},\dots ,{G}_{n}$, and the above steps are repeated iteratively to calculate ${G}_{1},\dots ,{G}_{n}$ until ${G}_{1},\dots ,{G}_{n}$ meet the stop condition C; then, all the iteration results are combined to construct the optimal rigid body transformation matrix ${G}_{1}^{\prime},\dots ,{G}_{n}^{\prime}$;
- ${G}_{1}^{\prime},\dots ,{G}_{n}^{\prime}$ are used to register ${S}_{1},\dots ,{S}_{n}$ to $T$, and the ICP algorithm refines the registration results to obtain point clouds ${W}_{1},\dots ,{W}_{n}$. ${W}_{1},\dots ,{W}_{n}$ are adjusted into the same coordinate system, and a fully registered point cloud ${W}^{\prime}$ is obtained by splicing ${W}_{1},\dots ,{W}_{n}$. Subsequent 3D reconstruction tasks, such as surface reconstruction, can then be performed on ${W}^{\prime}$.
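The iterative loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `tpcc_net` and `tmpe_net` are hypothetical callables standing in for the two networks, and the stop condition C is read here as the incremental transform approaching identity.

```python
import numpy as np

def register_partial_clouds(partials, template, tpcc_net, tmpe_net,
                            max_iters=8, tol=1e-4):
    """Sketch of the MPCR-Net loop. `tpcc_net(S, T)` is assumed to return
    the partial template cloud T_i' cut from the global template, and
    `tmpe_net(S, T_part)` a 4x4 rigid transformation matrix; neither
    signature comes from the paper."""
    results = []
    for S in partials:
        G_total = np.eye(4)                      # accumulated transform G_i'
        for _ in range(max_iters):
            T_part = tpcc_net(S, template)       # correspondence points in T
            G = tmpe_net(S, T_part)              # incremental rigid transform
            S_h = np.hstack([S, np.ones((len(S), 1))])
            S = (G @ S_h.T).T[:, :3]             # apply G to the partial cloud
            G_total = G @ G_total                # compose iteration results
            # stop condition C (assumed): increment is close to identity
            if np.linalg.norm(G - np.eye(4)) < tol:
                break
        results.append((S, G_total))
    return results
```

In this reading, composing the per-iteration matrices by left-multiplication yields the optimal overall transform for each partial cloud, which is then refined by ICP.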

#### 3.2. TPCC-Net

#### 3.2.1. Mathematical Formulation

#### 3.2.2. Network Architecture

#### 3.2.3. Working Process

- Extract and fuse point cloud features
- Suppose point clouds ${S}_{i}$ and $T$ contain ${N}_{x}$ and ${N}_{y}$ data points, respectively, with ${N}_{x}<{N}_{y}$. Input ${S}_{i}$ and $T$ to the point cloud feature perceptron, which consists of five multi-layer perceptrons (MLPs), similar to PointNet. The dimensions of ${S}_{i}$ and $T$ are both increased to 1024 by the convolution layers of the point cloud feature perceptron, generating the feature matrices $F\left({S}_{i}\right)\in {\mathbb{R}}^{{N}_{x}\times 1024}$ and $F\left(T\right)\in {\mathbb{R}}^{{N}_{y}\times 1024}$ of ${S}_{i}$ and $T$. Weights are shared between the MLPs used for ${S}_{i}$ and $T$.
- Use the max-pooling function to downsample $F\left({S}_{i}\right)$ to generate a global feature vector $\phi\left({S}_{i}\right)$ corresponding to ${S}_{i}$.
- Join $\phi\left({S}_{i}\right)$ and $F\left(T\right)$ to build a point cloud fusion feature ${F}_{i}\in {\mathbb{R}}^{{N}_{y}\times 2048}$.
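The pooling-and-splicing step above can be illustrated with plain arrays; the MLP features are stood in for by random matrices, so only the shapes follow the text (the global vector of $S_i$ is repeated for every template point before concatenation, which is one common reading of "splicing").

```python
import numpy as np

# Shapes follow the text: S_i has N_x points, T has N_y, with N_x < N_y.
N_x, N_y, D = 100, 400, 1024
F_S = np.random.rand(N_x, D)     # stand-in for the MLP features of S_i
F_T = np.random.rand(N_y, D)     # stand-in for the MLP features of T

phi_S = F_S.max(axis=0, keepdims=True)           # max-pooling -> (1, 1024)
F_i = np.hstack([np.repeat(phi_S, N_y, axis=0),  # broadcast global vector
                 F_T])                           # fusion feature (N_y, 2048)
```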

- Construct the index vector
- Input ${F}_{i}$ to the index feature perceptron; the dimension of ${F}_{i}$ is reduced to one, and the index feature $M\in {\mathbb{R}}^{{N}_{y}\times 1}$ is output.
- Use the Tanh activation function to normalize $M$ and construct the index vector ${M}^{\prime}\in {\mathbb{R}}^{{N}_{y}\times 1}$.

- Predict the correspondence points
- Encode all data points in $T$ to construct the index ${M}_{T}\in {\mathbb{R}}^{{N}_{y}\times 2}$.
- Filter out the first ${N}_{x}$ elements closest to zero from the index vector ${M}^{\prime}\in {\mathbb{R}}^{{N}_{y}\times 1}$ to form the index element vector ${M}^{\prime\prime}\in {\mathbb{R}}^{{N}_{x}\times 1}$.
- Find the addresses of the above ${N}_{x}$ elements in ${M}^{\prime}$ to construct the index ${M}^{\prime\prime\prime}\in {\mathbb{R}}^{{N}_{x}\times 2}$.
- Since ${M}^{\prime\prime\prime}\subset {M}_{T}$, the elements corresponding to each index in ${M}^{\prime\prime\prime}$ can be extracted from the global template point cloud $T$, and all the extracted elements are combined to construct the estimated correspondence point set ${T}_{i}^{\prime}$ of ${S}_{i}$ in $T$. The correspondence point estimation process is shown in Figure 3, where the purple part is the correspondence point set ${T}_{i}^{\prime}$.
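A minimal sketch of the filtering-and-addressing step, under the assumption that the selection rule is "keep the template points whose normalized index score is closest to zero" (the exact rule is not spelled out in the text):

```python
import numpy as np

def clip_template(M_prime, T, n_keep):
    """Select from template T the n_keep points whose Tanh-normalized
    index score is closest to zero, returning the estimated
    correspondence set T_i' and the chosen addresses."""
    order = np.argsort(np.abs(M_prime))   # scores nearest zero first
    idx = order[:n_keep]                  # addresses, i.e. the index M'''
    return T[idx], idx

T = np.random.rand(400, 3)                 # global template point cloud
M_prime = np.tanh(np.random.randn(400))    # Tanh-normalized index vector
T_i, idx = clip_template(M_prime, T, 100)  # estimated correspondences
```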

#### 3.3. TMPE-Net

#### 3.3.1. Mathematical Formulation

#### 3.3.2. Network Architecture

#### 3.3.3. Working Process

- Extract and fuse the global feature vector of point clouds
- Input ${S}_{i}\in {\mathbb{R}}^{{N}_{x}\times 3}$ and its correspondence point set ${T}_{i}^{\prime}\in {\mathbb{R}}^{{N}_{x}\times 3}$ into the point cloud feature perceptron; the dimensions of ${S}_{i}$ and ${T}_{i}^{\prime}$ are both increased to 1024, generating the feature matrices $F\left({S}_{i}\right)\in {\mathbb{R}}^{{N}_{x}\times 1024}$ and $F\left({T}_{i}^{\prime}\right)\in {\mathbb{R}}^{{N}_{x}\times 1024}$ of ${S}_{i}$ and ${T}_{i}^{\prime}$. The weights of all convolutional layers in the point cloud feature perceptron are shared between ${S}_{i}$ and ${T}_{i}^{\prime}$.
- Use the max-pooling function to downsample $F\left({S}_{i}\right)$ and $F\left({T}_{i}^{\prime}\right)$ to construct the global feature vectors $\phi\left({S}_{i}\right)\in {\mathbb{R}}^{1\times 1024}$ and $\phi\left({T}_{i}^{\prime}\right)\in {\mathbb{R}}^{1\times 1024}$ corresponding to ${S}_{i}$ and ${T}_{i}^{\prime}$, respectively.
- Join $\phi\left({S}_{i}\right)$ and $\phi\left({T}_{i}^{\prime}\right)$ to build a global fusion feature ${F}_{i}^{\prime}\in {\mathbb{R}}^{1\times 2048}$ of the point clouds.

- Construct the parameter vector
- Estimate the rigid body transformation
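As an illustration of how a transformation parameter vector $Z_i$ could be turned into a rigid body transformation matrix $G_i$, the sketch below assumes a six-parameter encoding of three Euler angles plus a translation; the paper does not fix the exact parameterization, so this encoding is an assumption.

```python
import numpy as np

def transform_from_params(z):
    """Build a 4x4 rigid transformation from a hypothetical 6-vector
    z = (rx, ry, rz, tx, ty, tz): rotations about the x, y, z axes
    followed by a translation."""
    rx, ry, rz, tx, ty, tz = z
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    G = np.eye(4)
    G[:3, :3] = Rz @ Ry @ Rx   # composed rotation
    G[:3, 3] = (tx, ty, tz)    # translation
    return G
```

Whatever the encoding, the essential property is that the output is always a valid rigid transform (orthonormal rotation block, determinant one), so the network only has to regress a low-dimensional vector.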

#### 3.4. Loss Function

#### 3.5. Training

#### 3.5.1. Preprocessing of Training Data

- Each initial partial point cloud contained 568 data points, which were “cut” from a global template point cloud using the farthest point sampling (FPS) [42] algorithm.
- Gaussian noise at the 0.0075 level was added to the initial partial point cloud to simulate the deviation of the coordinate values between the scanned data points and the data points in the global template point cloud under noisy conditions.
- 284 outlier noise points were randomly added to the initial partial point cloud to simulate the disturbance of the scanned point cloud structure by environmental noise and sensor error. This increased the structural difference between the initial partial point cloud and the template point cloud.
- A random rigid body transformation matrix was created, through which the initial partial point cloud was subjected to random rotation transformation (±45° around each Cartesian coordinate axis) around the origin of the coordinate and a random translation transformation (±0.5 unit along each Cartesian coordinate axis) to obtain the partial point cloud.
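The preprocessing steps above can be sketched as follows; `farthest_point_sample` is a plain greedy FPS implementation, and the array sizes follow the text (568 sampled points, 0.0075-level Gaussian noise, 284 outliers). The random-transform step is omitted here since it is a standard rigid transform.

```python
import numpy as np

def farthest_point_sample(points, n):
    """Greedy FPS: repeatedly pick the point farthest from the set
    chosen so far (the sampling scheme cited as [42])."""
    chosen = [0]
    d = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n - 1):
        nxt = int(np.argmax(d))           # farthest remaining point
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

rng = np.random.default_rng(0)
template = rng.random((2000, 3))                       # global template stand-in
partial = farthest_point_sample(template, 568)         # step 1: "cut" by FPS
noisy = partial + rng.normal(0, 0.0075, partial.shape) # step 2: Gaussian noise
outliers = rng.random((284, 3))                        # step 3: outlier points
partial_aug = np.vstack([noisy, outliers])
```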

#### 3.5.2. Training Method

## 4. Experiments

#### 4.1. Experimental Environment

#### 4.2. Experiments Based on Untrained Models

#### 4.2.1. Evaluation Criteria for Experiments

1. Estimation accuracy
2. Registration error
3. Evaluation of work efficiency
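For concreteness, one common way to compute the rotation and translation errors ΔR and Δt reported in this section is sketched below: the rotation error as the geodesic angle between the estimated and ground-truth rotations, and the translation error as a Euclidean norm. The section does not spell out the formulas, so this is an assumption.

```python
import numpy as np

def registration_error(R_est, t_est, R_gt, t_gt):
    """Rotation error (degrees) as the angle of R_est^T R_gt,
    translation error as the Euclidean distance between vectors."""
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    dR = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # ΔR in degrees
    dt = float(np.linalg.norm(t_est - t_gt))             # Δt
    return dR, dt
```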

#### 4.2.2. Correspondence Point Estimation

- Using the FPS algorithm, the initial partial point cloud was sampled from the global template point cloud at sampling ratios ranging from 0.05 to 0.95; the sampling ratio is the ratio of the number of data points in the initial partial point cloud to that in the global template point cloud.
- The initial partial point cloud was rotated by 20° around each of the three Cartesian coordinate axes, with the coordinate origin as the center, and translated by 0.5 units along each of the three coordinate axes to obtain the partial point cloud.

#### 4.2.3. Point Cloud Registration

#### 4.2.4. Work Efficiency

#### 4.3. Experiments with Actual Workpieces

#### 4.3.1. Data Sampling and Processing

- Paint the surface of the hull; place the painted hull on the rotating table and ensure that the 3D scanner is aligned with the geometric center of the hull.
- Control the rotating table to rotate the hull to a certain angle.
- Use the 3D scanner to scan the hull and obtain its partial point cloud under the initial angle.
- Repeat steps b and c to obtain partial point clouds ${S}_{2}$–${S}_{7}$ of the hull at certain angles. The partial point clouds obtained are shown in Figure 16.

- Paste labels on the surface of the painted hull ‘a’.
- Using scanning steps similar to those used to obtain the partial point clouds of the egg-shaped pressure hull, obtain partial point clouds ${V}_{1}$–${V}_{7}$ of the hull at angles ${A}_{1}$–${A}_{7}$, respectively.
- In the measurement software Optical RevEng 2.4, which is provided by the 3D scanner manufacturer, use the turntable method and the label method to register point clouds ${V}_{1}\u2013{V}_{7}$, and use the ICP algorithm to fine-register the registration point clouds.
- Continuously adjust the fine registration point clouds manually according to the measurement results to make the registration point clouds closer to the hull.
- Perform surface reconstruction on the manually adjusted point clouds to obtain the actual digital model (Figure 17b) of hull ‘a’.

#### 4.3.2. Analysis of Registration Accuracy of Multiple Partial Point Clouds

## 5. Conclusions

- Using a global-template-based multiple partial point cloud registration method can fully guarantee the overlap rate between each partial point cloud and its corresponding partial template point cloud, thereby reducing the registration error and improving the point cloud reconstruction accuracy.
- Searching for correspondence points between partial point clouds and the global template point cloud through TPCC-Net does not require separate training for specific local data of point clouds, thereby effectively reducing the correspondence point estimation error.
- The rigid body transformation matrix parameters in the registration are estimated through TMPE-Net, and the estimation results are robust to changes in the data points. This overcomes the shortcoming of algorithms that cannot effectively register two point clouds whose numbers of data points differ significantly.

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References

1. Zhong, K.; Li, Z.; Zhou, X.; Li, Y.; Shi, Y.; Wang, C. Enhanced phase measurement profilometry for industrial 3D inspection automation. Int. J. Adv. Manuf. Technol. 2015, 76, 1563–1574.
2. Han, L.; Cheng, X.; Li, Z.; Zhong, K.; Shi, Y.; Jiang, H. A Robot-Driven 3D Shape Measurement System for Automatic Quality Inspection of Thermal Objects on a Forging Production Line. Sensors 2018, 18, 4368.
3. Liu, D.; Chen, X.; Yang, Y.-H. Frequency-Based 3D Reconstruction of Transparent and Specular Objects. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 660–667.
4. Yang, H.; Liu, R.; Kumara, S. Self-organizing network modelling of 3D objects. CIRP Ann. 2020, 69, 409–412.
5. Cheng, X.; Li, Z.; Zhong, K.; Shi, Y. An automatic and robust point cloud registration framework based on view-invariant local feature descriptors and transformation consistency verification. Opt. Lasers Eng. 2017, 98, 37–45.
6. Pulli, K. Multiview registration for large data sets. In Proceedings of the Second International Conference on 3-D Digital Imaging and Modeling (Cat. No. PR00062), Ottawa, ON, Canada, 8 October 1999; pp. 160–168.
7. Verdie, Y.; Yi, K.M.; Fua, P.; Lepetit, V. TILDE: A Temporally Invariant Learned DEtector. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
8. Ouellet, J.-N.; Hébert, P. Precise ellipse estimation without contour point extraction. Mach. Vis. Appl. 2008, 21, 59–67.
9. Zhang, Z.; Dai, Y.; Sun, J. Deep learning based point cloud registration: An overview. Virtual Real. Intell. Hardw. 2020, 2, 222–246.
10. Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85.
11. Besl, P.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
12. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155.
13. Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001; pp. 145–152.
14. Rusinkiewicz, S. A symmetric objective function for ICP. ACM Trans. Graph. 2019, 38, 1–7.
15. Kamencay, P.; Sinko, M.; Hudec, R.; Benco, M.; Radil, R. Improved feature point algorithm for 3D point cloud registration. In Proceedings of the 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 1–3 July 2019; pp. 517–520.
16. Yang, J.; Li, H.; Campbell, D.; Jia, Y. Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2241–2254.
17. Srivatsan Rangaprasad, A.; Xu, M.; Zevallos-Roberts, N.; Choset, H. Bingham Distribution-Based Linear Filter for Online Pose Estimation. In Proceedings of Robotics: Science and Systems XIII, Cambridge, MA, USA, 12–16 July 2017.
18. Eckart, B.; Kim, K.; Kautz, J. Fast and Accurate Point Cloud Registration using Trees of Gaussian Mixtures. arXiv 2018, arXiv:1807.02587.
19. Jost, T.; Hugli, H. A multi-resolution scheme ICP algorithm for fast shape registration. In Proceedings of the First International Symposium on 3D Data Processing Visualization and Transmission, Padua, Italy, 19–21 June 2002; pp. 540–543.
20. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
21. Chen, C.-S.; Hung, Y.-P.; Cheng, J.-B. A fast automatic method for registration of partially-overlapping range images. In Proceedings of the Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), Bombay, India, 7 January 1998; pp. 242–248.
22. Aiger, D.; Mitra, N.J.; Cohen-Or, D. 4-points congruent sets for robust pairwise surface registration. In ACM SIGGRAPH 2008 Papers, Proceedings of SIGGRAPH ’08, Los Angeles, CA, USA, 11–16 August 2008; Association for Computing Machinery: New York, NY, USA, 2008; pp. 1–10.
23. Mellado, N.; Aiger, D.; Mitra, N.J. Super 4PCS: Fast global pointcloud registration via smart indexing. In Computer Graphics Forum; Wiley Online Library: Strasbourg, France, 2014; pp. 205–215.
24. Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217.
25. Frome, A.; Huber, D.; Kolluri, R.; Bülow, T.; Malik, J. Recognizing objects in range data using regional point descriptors. In Proceedings of the European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; pp. 224–237.
26. Kurobe, A.; Sekikawa, Y.; Ishikawa, K.; Saito, H. CorsNet: 3D point cloud registration by deep neural network. IEEE Robot. Autom. Lett. 2020, 5, 3960–3966.
27. Zeng, A.; Song, S.; Niessner, M.; Fisher, M.; Xiao, J.; Funkhouser, T. 3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 199–208.
28. Pais, G.D.; Ramalingam, S.; Govindu, V.M.; Nascimento, J.C.; Chellappa, R.; Miraldo, P. 3DRegNet: A Deep Neural Network for 3D Point Registration. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 7191–7201.
29. Lu, W.; Wan, G.; Zhou, Y.; Fu, X.; Yuan, P.; Song, S. DeepVCP: An End-to-End Deep Neural Network for Point Cloud Registration. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 12–21.
30. Li, J.; Zhang, C.; Xu, Z.; Zhou, H.; Zhang, C. Iterative distance-aware similarity matrix convolution with mutual-supervised point elimination for efficient point cloud registration. In Proceedings of Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Part XXIV; pp. 378–394.
31. Qi, C.R.; Liu, W.; Wu, C.; Su, H.; Guibas, L.J. Frustum PointNets for 3D Object Detection from RGB-D Data. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 918–927.
32. Yuan, W.; Khot, T.; Held, D.; Mertz, C.; Hebert, M. PCN: Point Completion Network. In Proceedings of the 2018 International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018; pp. 728–737.
33. Aoki, Y.; Goforth, H.; Srivatsan, R.A.; Lucey, S. PointNetLK: Robust & Efficient Point Cloud Registration Using PointNet. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 7156–7165.
34. Lucas, B.D.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; pp. 121–130.
35. Wang, Y.; Solomon, J.M. Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 3523–3532.
36. Yuan, W.; Eckart, B.; Kim, K.; Jampani, V.; Fox, D.; Kautz, J. DeepGMR: Learning latent Gaussian mixture models for registration. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 733–750.
37. Sarode, V.; Li, X.; Goforth, H.; Aoki, Y.; Srivatsan, R.A.; Lucey, S.; Choset, H. PCRNet: Point Cloud Registration Network using PointNet Encoding. arXiv 2019, arXiv:1908.07906.
38. Wang, Y.; Solomon, J.M. PRNet: Self-Supervised Learning for Partial-to-Partial Registration. arXiv 2019, arXiv:1910.12240.
39. Yew, Z.J.; Lee, G.H. RPM-Net: Robust Point Matching Using Learned Features. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 11821–11830.
40. Choy, C.; Dong, W.; Koltun, V. Deep Global Registration. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 2511–2520.
41. Gojcic, Z.; Zhou, C.; Wegner, J.D.; Guibas, L.J.; Birdal, T. Learning Multiview 3D Point Cloud Registration. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1756–1766.
42. Moenning, C.; Dodgson, N.A. Fast Marching Farthest Point Sampling; University of Cambridge, Computer Laboratory: Cambridge, UK, 2003.
43. Olivas, E.S.; Guerrero, J.D.M.; Sober, M.M.; Benedito, J.R.M.; Lopez, A.J.S. Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods and Techniques; IGI Global: Hershey, PA, USA, 2009.
44. Orts-Escolano, S.; Morell, V.; Garcia-Rodriguez, J.; Cazorla, M. Point cloud data filtering and downsampling using growing neural gas. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013.
45. Sanyuan, Z.; Fengxia, L.; Yongmei, L.; Yonghui, R. A New Method for Cloud Data Reduction Using Uniform Grids; Springer: Berlin/Heidelberg, Germany, 2013.
46. Benhabiles, H.; Aubreton, O.; Barki, H. Fast simplification with sharp feature preserving for 3D point clouds. In Proceedings of the 11th International Symposium on Programming and Systems (ISPS), Algiers, Algeria, 22–24 April 2013.
47. Zhang, J.; Wang, M.; Wang, W.; Tang, W.; Zhu, Y. Investigation on egg-shaped pressure hulls. Mar. Struct. 2017, 52, 50–66.

**Figure 7.** The creation process of a global template point cloud, initial partial point cloud, and partial point cloud: (**a**) Chair model in ModelNet40; (**b**) Global template point cloud; (**c**) Initial partial point cloud; (**d**) Partial point cloud.

**Figure 8.** Accuracy of correspondence point estimation: (**a**) TPCC-Net, p = 0.90; (**b**) PRNet, p = 0.51; (**c**) RPM-Net, p = 0.25.

**Figure 9.** Average estimation accuracy of correspondence points with different proportions of correspondence points.

**Figure 10.** Effect of registration: (**a**) TMPE-Net, ΔR = 3.7°, Δt = 0.029; (**b**) PRNet, ΔR = 17.5°, Δt = 0.269; (**c**) RPM-Net, ΔR = 21.7°, Δt = 0.328.

**Figure 11.** Average registration error with different proportions of correspondence points: (**a**) Average calculation error of rotation angle; (**b**) Average calculation error of translation distance.

**Figure 12.** Average time of MPCR-Net, PRNet, and RPM-Net under different proportions of correspondence points.

**Figure 14.** CAD model and global template point cloud of the hull ‘a’: (**a**) CAD model; (**b**) Global template point cloud.

**Figure 17.** Scanning scene photo of the hull ‘a’ and the generated actual digital model: (**a**) Scanning scene; (**b**) Actual digital model.

**Figure 18.** The final 3D reconstruction results: (**a**) Fully registered point cloud; (**b**) Surface reconstruction.

**Figure 19.** Cloud maps of contour deviation of hulls a–i calculated by MPCR-Net.

| Environment | Configuration | |
|---|---|---|
| Software | Operating system | Windows 10 |
| | Deep learning framework | PyTorch 1.8.1 + CUDA 11.0 + cuDNN |
| | Programming language | Python 3.8.3 |
| | Point cloud processing library | Open3D |
| Hardware | CPU / Memory | Intel(R) Core(TM) i5-9400F / 16 GB |
| | Graphics card | Nvidia GeForce GTX 1070 8 GB |


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Su, S.; Wang, C.; Chen, K.; Zhang, J.; Yang, H.
MPCR-Net: Multiple Partial Point Clouds Registration Network Using a Global Template. *Appl. Sci.* **2021**, *11*, 10535.
https://doi.org/10.3390/app112210535
