# Data-Driven Point Cloud Objects Completion


## Abstract


## 1. Introduction

## 2. Network Architecture

#### 2.1. Problem Statement

#### 2.2. PCCNet Architecture

#### 2.3. Loss Function

## 3. Experiment

#### 3.1. Dataset and Implementation Details

#### 3.2. Evaluation of the Proposed PCCNet

#### 3.3. Comparison with Traditional Point Completion Works

## 4. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

1. Yue, X.; Wu, B.; Seshia, S.A.; Keutzer, K.; Sangiovanni-Vincentelli, A.L. A LiDAR Point Cloud Generator: From a Virtual World to Autonomous Driving. In Proceedings of the ACM International Conference on Multimedia Retrieval, Yokohama, Japan, 11–14 June 2018.
2. Wu, T.; Liu, J.; Li, Z.; Liu, K.; Xu, B. Accurate Smartphone Indoor Visual Positioning Based on a High-Precision 3D Photorealistic Map. Sensors 2018, 18, 1974.
3. Stets, J.D.; Sun, Y.; Corning, W.; Greenwald, S. Visualization and Labeling of Point Clouds in Virtual Reality. arXiv 2018, arXiv:1804.04111.
4. Wu, M.L.; Chien, J.C.; Wu, C.T.; Lee, J.D. An Augmented Reality System Using Improved-Iterative Closest Point Algorithm for On-Patient Medical Image Visualization. Sensors 2018, 18, 2505.
5. Balsa-Barreiro, J.; Lerma, J.L. A new methodology to estimate the discrete-return point density on airborne lidar surveys. Int. J. Remote Sens. 2014, 35, 1496–1510.
6. Balsa-Barreiro, J.; Lerma, J.L. Empirical study of variation in lidar point density over different land covers. Int. J. Remote Sens. 2014, 35, 3372–3383.
7. Ley, A.; D'Hondt, O.; Hellwich, O. Regularization and Completion of TomoSAR Point Clouds in a Projected Height Map Domain. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2104–2114.
8. Cai, Z.; Wang, C.; Wen, C.; Li, J. Occluded Boundary Detection for Small-Footprint Groundborne LiDAR Point Cloud Guided by Last Echo. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2272–2276.
9. Mallet, C.; Bretar, F. Full-waveform topographic lidar: State-of-the-art. ISPRS J. Photogramm. Remote Sens. 2009, 64, 1–16.
10. Zhou, G.; Zhou, X. Seamless Fusion of LiDAR and Aerial Imagery for Building Extraction. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7393–7407.
11. Zhou, G.; Song, C.; Simmers, J.; Cheng, P. Urban 3D GIS from LiDAR and digital aerial images. Comput. Geosci. 2004, 30, 345–353.
12. Zhang, J.; Lin, X. Advances in fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing. Int. J. Image Data Fusion 2016, 8, 1–31.
13. Wang, H.; Wang, C.; Luo, H.; Li, P.; Cheng, M.; Wen, C.; Li, J. Object Detection in Terrestrial Laser Scanning Point Clouds Based on Hough Forest. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1807–1811.
14. Sipiran, I.; Gregor, R.; Schreck, T. Approximate Symmetry Detection in Partial 3D Meshes. Comput. Graph. Forum 2014, 33, 131–140.
15. Speciale, P.; Oswald, M.R.; Cohen, A.; Pollefeys, M. A Symmetry Prior for Convex Variational 3D Reconstruction; Springer International Publishing: Cham, Switzerland, 2016; pp. 313–328.
16. Balsa-Barreiro, J.; Fritsch, D. Generation of 3D/4D Photorealistic Building Models. The Testbed Area for 4D Cultural Heritage World Project: The Historical Center of Calw (Germany). In Advances in Visual Computing; Springer International Publishing: Cham, Switzerland, 2015.
17. Balsa-Barreiro, J.; Fritsch, D. Generation of visually aesthetic and detailed 3D models of historical cities by using laser scanning and digital photogrammetry. Digit. Appl. Archaeol. Cult. Herit. 2018, 8, 57–64.
18. Chang, A.X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H. ShapeNet: An Information-Rich 3D Model Repository. arXiv 2015, arXiv:1512.03012.
19. Wu, J.; Zhang, C.; Xue, T.; Freeman, W.T.; Tenenbaum, J.B. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling. In Advances in Neural Information Processing Systems, 2016; pp. 82–90; arXiv:1610.07584.
20. Fan, H.; Su, H.; Guibas, L. A Point Set Generation Network for 3D Object Reconstruction from a Single Image. arXiv 2016, arXiv:1612.00603.
21. Yan, X.; Yang, J.; Yumer, E.; Guo, Y.; Lee, H. Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision. In Advances in Neural Information Processing Systems, 2016; pp. 1696–1704; arXiv:1612.00814.
22. Tatarchenko, M.; Dosovitskiy, A.; Brox, T. Octree Generating Networks: Efficient Convolutional Architectures for High-Resolution 3D Outputs. arXiv 2017, arXiv:1703.09438.
23. Lin, C.H.; Kong, C.; Lucey, S. Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), New Orleans, LA, USA, 2–7 February 2018.
24. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. arXiv 2016, arXiv:1612.00593.
25. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. arXiv 2017, arXiv:1706.02413.
26. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
27. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Medical Image Computing and Computer-Assisted Intervention; Springer International Publishing: Cham, Switzerland, 2016; pp. 424–432.
28. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
29. Tatarchenko, M.; Dosovitskiy, A.; Brox, T. Multi-View 3D Models from Single Images with a Convolutional Network; Springer International Publishing: Cham, Switzerland, 2016; pp. 231–257.
30. Xiang, Y.; Kim, W.; Chen, W.; Ji, J.; Choy, C.; Su, H.; Mottaghi, R.; Guibas, L.; Savarese, S. ObjectNet3D: A Large Scale Database for 3D Object Recognition. In European Conference on Computer Vision; Springer International Publishing: Cham, Switzerland, 2016.

**Figure 2.** Sample images of reconstruction and completion on Mobile Laser Scanning (MLS) point clouds. (**a**) The real street images. (**b**) The scanned point clouds. (**c**) The generated point clouds (rendered). (**d**) The merged point clouds.

**Figure 7.** Results on rendered images. (**a**) Rendered images. (**b**) Ground truth. (**c**) Point clouds generated by *PCCNet*.

**Figure 8.** Two samples of projection from the same viewpoint. (**a**) Rendered input images. (**b**) Projection of the ground truth. (**c**) Projection of the generated shapes.

**Figure 9.** Comparison of training loss curves. The red curve shows training without the projection loss; the blue curve shows training with it.

**Figure 10.** Car images from ObjectNet3D [30]. From left to right: original images, the results of *PCCNet*, and the results of *OGN*.

**Figure 11.** Car images from the Internet. From left to right: original images, the results of *PCCNet*, and the results of *PSGN*.

**Figure 12.** Results of MLS object completion. (**a**) Street images. (**b**) Original MLS point clouds (more than half of the points missing). (**c**) The completion results of *PCCNet*. (**d**) The completion results of [8].

**Figure 13.** More results on MLS data. (**a**) Street images. (**b**) Original MLS point clouds. (**c**) The completion results of *PCCNet*.

| Category | *PSGN* (CD) | *PCCNet_WP* | *PCCNet_WF* | *PCCNet_P* |
|---|---|---|---|---|
| Sofa | 0.00220 | 0.00201 | 0.00195 | 0.00161 |
| Airplane | 0.00100 | 0.00084 | 0.00092 | 0.00071 |
| Bench | 0.00251 | 0.00233 | 0.00231 | 0.00195 |
| Car | 0.00128 | 0.00136 | 0.00127 | 0.00123 |
| Chair | 0.00238 | 0.00210 | 0.00191 | 0.00181 |
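The Chamfer Distance (CD) used in the table above measures the average squared distance from each point to its nearest neighbour in the other cloud, summed over both directions; lower is better. A minimal NumPy sketch of this standard metric (an illustration, not the paper's implementation; `chamfer_distance` is a name chosen here):

```python
import numpy as np

def chamfer_distance(p1: np.ndarray, p2: np.ndarray) -> float:
    """Symmetric Chamfer Distance between two point clouds.

    p1: (N, 3) array, p2: (M, 3) array. For every point, take the squared
    distance to its nearest neighbour in the other cloud, then average each
    direction and sum the two means.
    """
    # Pairwise squared distances, shape (N, M), via broadcasting.
    diff = p1[:, None, :] - p2[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    # p1 -> p2 direction plus p2 -> p1 direction.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```

The brute-force (N, M) distance matrix keeps the sketch short; for large clouds a k-d tree query would replace it.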

| Category | *OGN* (IoU) | *PCCNet_WP* | *PCCNet_WF* | *PCCNet_P* |
|---|---|---|---|---|
| Sofa | 0.11204 | 0.19014 | 0.19310 | 0.21018 |
| Airplane | 0.14727 | 0.34216 | 0.28621 | 0.43376 |
| Bench | 0.04608 | 0.25839 | 0.26517 | 0.27712 |
| Car | 0.44141 | 0.31326 | 0.31591 | 0.33721 |
| Chair | 0.13935 | 0.20318 | 0.24133 | 0.25320 |
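The IoU scores above compare shapes on an occupancy grid: the number of voxels occupied in both shapes divided by the number occupied in either; higher is better. A minimal sketch assuming boolean occupancy grids as input (the voxelization step itself is not shown, and `voxel_iou` is a name chosen here):

```python
import numpy as np

def voxel_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union of two same-shaped boolean occupancy grids."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()  # voxels occupied in both
    union = np.logical_or(pred, gt).sum()   # voxels occupied in either
    # Two empty grids agree perfectly by convention.
    return float(inter / union) if union > 0 else 1.0
```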

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zhang, Y.; Liu, Z.; Li, X.; Zang, Y.
Data-Driven Point Cloud Objects Completion. *Sensors* **2019**, *19*, 1514.
https://doi.org/10.3390/s19071514
