# Deep Learning-Based Point Upsampling for Edge Enhancement of 3D-Scanned Data and Its Application to Transparent Visualization


## Abstract


## 1. Introduction

## 2. Eigenvalue-Based 3D Feature Values

## 3. Methods for Transparent Visualization and Edge Highlighting

#### 3.1. Stochastic Point-Based Rendering (SPBR)

#### 3.2. Opacity-Based Edge Highlighting

#### 3.3. Resampling for Controlling Point Density

## 4. Proposed Method for Edge-Highlighting Visualization

#### 4.1. Steps of the Proposed Method

- **STEP 1. Random downsampling of the 3D-edge regions:** Execute downsampling for the points in the 3D-edge regions. We randomly eliminate points with $f>{f}_{\mathrm{th}}$ such that the resultant point distribution obeys the selected opacity function (type (a), (b), or (c)). Points with $f<{f}_{\mathrm{th}}$ are eliminated.
- **STEP 2. Deep learning-based upsampling of the 3D-edge regions:** Execute the deep learning-based upsampling for the points obtained in STEP 1.
- **STEP 3. Point integration and visualization:** Merge the original 3D-scanned points, which include points of the non-edge regions, with the upsampled edge points obtained in STEP 2. Then, stochastic point-based rendering is applied to the integrated point dataset. In this step, we obtain a transparent image of the target 3D-scanned point cloud data with clear edge highlighting.
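STEP 1 can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the function name and the `alpha_of_f` callable (standing in for the opacity functions of types (a)–(c)) are hypothetical.

```python
import numpy as np

def downsample_edge_points(points, f, f_th, alpha_of_f, rng=None):
    """STEP 1 (sketch): discard non-edge points (f < f_th) and randomly thin
    the edge points so that the surviving density follows the selected
    opacity function.

    points     : (N, 3) array of 3D-scanned points
    f          : (N,) eigenvalue-based feature value of each point
    f_th       : feature threshold separating edge / non-edge regions
    alpha_of_f : callable mapping f to a keep probability in [0, 1]
                 (hypothetical stand-in for opacity functions (a)-(c))
    """
    if rng is None:
        rng = np.random.default_rng()
    edge = f > f_th                                   # non-edge points are eliminated
    keep = rng.random(edge.sum()) < alpha_of_f(f[edge])
    return points[edge][keep]

# Example: a linear ramp as a toy opacity function (hypothetical)
pts = np.random.rand(1000, 3)
feat = np.random.rand(1000)
thinned = downsample_edge_points(pts, feat, 0.25, lambda f: f)
```

The thinned edge points would then be passed to the upsampling network (STEP 2) and merged back with the original point cloud for rendering (STEP 3).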

#### 4.2. Proposed Upsampling Network

#### 4.2.1. Overview

#### 4.2.2. Preparing the Training Data and Ground Truth

#### 4.2.3. Generator

**Point Feature Extraction.** Feature extraction is important in the processing of discrete point cloud data, especially sparse edge point clouds. To extract complete edge features, we propose a point feature extraction module that simultaneously extracts the global feature and the context information inside local regions. PointNet [12] is effective for extracting global features of point clouds and performs well in various point cloud processing tasks. Thus, we adopt a multilayer perceptron (MLP) structure of dimension (32, 64, 64), similar to PointNet, to process each point and obtain global features with a size of $N\times {C}_{g}$ by max-pooling the output of the set of MLPs. However, global features alone cannot represent local geometric information. PointNet++ [27] is very effective and widely used for local feature extraction; for example, EC-Net [28,29] adopts PointNet++ to extract features from the input point cloud data. However, within each local region, PointNet++ still extracts the features of each point independently without considering the relationship between neighboring points. For 3D point cloud data with a small number of points, PointNet++ feature extraction is efficient, but for large-scale 3D-scanned point cloud data, which usually contain millions or even tens of millions of points, sampling and finding neighboring points in PointNet++ consume increasing memory and time as the number of points grows. Therefore, a fast and lightweight feature extraction module is necessary in our work, since the main objective of our study is to upsample 3D-scanned point clouds. Inspired by DGCNN [30], we define the local neighborhood in the feature space and adopt a set of edge convolutions to extract local features.
Given a sparse patch $\mathcal{P}$ with a size of $N\times 3$ as input, we compute the edge features of each point by applying MLPs with dimensions of (32, 64, 128) and obtain the local feature with a size of $N\times {C}_{l}$ after max-pooling among neighboring edge features, where ${C}_{l}$ is the number of feature channels. The local neighborhood is computed by a k-nearest-neighbor search in feature space and is dynamically updated, since each layer produces a different feature output. Then, we concatenate the local and global features to obtain the concatenated feature $F$ with a size of $N\times {C}_{p}$ and pass it to the next step for feature expansion.
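The DGCNN-style local feature path can be sketched as below. This is a minimal numpy illustration under simplifying assumptions: a single linear map stands in for the (32, 64, 128) MLPs, weights are random stand-ins, and both function names are hypothetical.

```python
import numpy as np

def knn_indices(feats, k):
    """Indices of the k nearest neighbors of each row, measured in feature
    space (as in DGCNN, the neighborhood is defined on features, not xyz)."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]          # skip self at position 0

def edge_conv(feats, k, weight):
    """One edge convolution (sketch): for each point i and neighbor j, form
    the edge feature [x_i, x_j - x_i], apply a shared linear map with ReLU
    (stand-in for an MLP), and max-pool over the k neighbors."""
    idx = knn_indices(feats, k)
    x_i = np.repeat(feats[:, None, :], k, axis=1)     # (N, k, C)
    x_j = feats[idx]                                  # (N, k, C)
    edge = np.concatenate([x_i, x_j - x_i], axis=-1)  # (N, k, 2C)
    return np.maximum(edge @ weight, 0).max(axis=1)   # (N, C_out)
```

Because the neighborhood is recomputed on each layer's output features, the graph is "dynamic": points that are close in feature space, not necessarily in Euclidean space, become neighbors.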

**Point Feature Expansion.** The purpose of feature expansion is to establish the mapping from known points to more points. At present, the mainstream feature expansion methods can be roughly divided into three categories: interpolation-based methods [27], reshaping-based methods [31], and folding-based methods [32]. Interpolation-based methods usually realize feature expansion through the interpolation relationship between points, but in some cases, this relationship is unclear. Reshaping-based methods usually first expand the dimensions of the input features through a deep network, such as a set of MLPs or fully connected (FC) layers, and then generate the target features through a simple reshaping operation. However, the expanded features obtained in this way stay close to the input features, which degrades the upsampling quality; for example, the newly generated points tend to cluster near the original points. Therefore, in our work, we adopt the point feature expansion method of [19], which is a folding-based method. Compared with other feature expansion methods, the folding-based method is more flexible and performs well in multiple applications [11,32]. In particular, the folding-based method not only avoids tedious multistep training but also promotes the generation of fine-grained information. This saves considerable memory and time in the upsampling of 3D-scanned point cloud data and produces more refined results.
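The core idea of folding-based expansion can be sketched as follows: duplicate each point feature $r$ times and attach a distinct 2D grid code to each copy, so the duplicates become distinguishable and can later be regressed to $r$ different points. This is a schematic numpy sketch in the spirit of FoldingNet [32] and the expansion of [19]; the function name and the grid extent are hypothetical.

```python
import numpy as np

def fold_expand(features, r):
    """Folding-based feature expansion (sketch): duplicate each per-point
    feature r times and append a unique 2D grid code to each copy.

    features : (N, C) per-point features
    returns  : (rN, C + 2) expanded features
    """
    n, c = features.shape
    dup = np.repeat(features, r, axis=0)                    # (rN, C)
    side = int(np.ceil(np.sqrt(r)))                         # small 2D grid
    axis = np.linspace(-0.2, 0.2, side)                     # extent is arbitrary
    grid = np.stack(np.meshgrid(axis, axis), -1).reshape(-1, 2)
    codes = np.tile(grid[:r], (n, 1))                       # (rN, 2)
    return np.concatenate([dup, codes], axis=1)             # (rN, C + 2)
```

Without the grid codes, the $r$ copies of a feature would be identical and any downstream regressor would map them to the same coordinates; the codes break this symmetry in a single forward pass, with no multistep training.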

**Coordinate Reconstruction.** For an expanded feature with a size of $rN\times {C}_{p}'$, we regress the 3D coordinates through a series of fully connected layers of dimension (64, 3) on the feature of each point and finally output a dense patch $S$ with a size of $rN\times 3$.
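The reconstruction step is a small per-point regressor. A minimal numpy sketch with the (64, 3) widths from the text; the weights here are random stand-ins, not trained parameters:

```python
import numpy as np

def reconstruct_coords(expanded, w1, b1, w2, b2):
    """Coordinate reconstruction (sketch): map each expanded feature row to
    xyz through two fully connected layers of widths (64, 3).

    expanded : (rN, C_p') expanded per-point features
    w1, b1   : first FC layer, C_p' -> 64 (ReLU)
    w2, b2   : second FC layer, 64 -> 3 (linear output)
    returns  : (rN, 3) dense patch S
    """
    h = np.maximum(expanded @ w1 + b1, 0)   # (rN, 64)
    return h @ w2 + b2                      # (rN, 3)
```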

#### 4.2.4. Discriminator

#### 4.2.5. Loss Function

## 5. Upsampling and Visualization Results and Evaluation of the Proposed Method

#### 5.1. Verifying the Robustness of Upsampling Networks on Simulated Edges

#### 5.2. Datasets and Implementation Details

#### 5.3. Application to Real 3D-Scanned Data

#### 5.3.1. Results of Upsampling and Visualization for 3D-Edge Regions

#### 5.3.2. Comparison with Existing Upsampling Networks

#### 5.4. Use of Statistical Outlier Removal (SOR) Filter for Noisy Data

#### 5.5. Visibility Improvement of Soft Edges Using Our Upsampling Network

## 6. Discussion

## 7. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K. See-Through Imaging of Laser-Scanned 3D Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds. In Proceedings of the ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016; Volume III-3.
- Kawakami, K.; Hasegawa, K.; Li, L.; Nagata, H.; Adachi, M.; Yamaguchi, H.; Thufail, F.I.; Riyanto, S.; Tanaka, S.; Brahmantara. Opacity-based edge highlighting for transparent visualization of 3D scanned point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. **2020**, 5, 373–380.
- Tanaka, S.; Hasegawa, K.; Shimokubo, Y.; Kaneko, T.; Kawamura, T.; Nakata, S.; Ojima, S.; Sakamoto, N.; Tanaka, H.T.; Koyamada, K. Particle-Based Transparent Rendering of Implicit Surfaces and its Application to Fused Visualization. In Proceedings of the Eurographics Conference on Visualization (EuroVis), Vienna, Austria, 5–8 June 2012.
- Uchida, T.; Hasegawa, K.; Li, L.; Adachi, M.; Yamaguchi, H.; Thufail, F.I.; Riyanto, S.; Okamoto, A.; Tanaka, S. Noise-robust transparent visualization of large-scale point clouds acquired by laser scanning. ISPRS J. Photogramm. Remote Sens. **2020**, 161, 124–134.
- Alexa, M.; Behr, J.; Cohen-Or, D.; Fleishman, S.; Levin, D.; Silva, C. Computing and rendering point set surfaces. IEEE Trans. Vis. Comput. Graph. **2003**, 9, 3–15.
- Lipman, Y.; Cohen-Or, D.; Levin, D.; Tal-Ezer, H. Parameterization-free projection for geometry reconstruction. ACM Trans. Graph. **2007**, 26, 22.
- Huang, H.; Li, D.; Zhang, H.; Ascher, U.; Cohen-Or, D. Consolidation of unorganized point clouds for surface reconstruction. ACM Trans. Graph. **2009**, 28, 1–7.
- Zhou, Y.; Tuzel, O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4490–4499.
- Shi, S.; Wang, X.; Li, H. PointRCNN: 3D Object Proposal Generation and Detection From Point Cloud. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 770–779.
- Huang, Z.; Yu, Y.; Xu, J.; Ni, F.; Le, X. PF-Net: Point Fractal Network for 3D Point Cloud Completion. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 7659–7667.
- Wen, X.; Li, T.; Han, Z.; Liu, Y.-S. Point Cloud Completion by Skip-Attention Network With Hierarchical Folding. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1936–1945.
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 652–660.
- Zhang, Y.; Rabbat, M. A Graph-CNN for 3D Point Cloud Classification. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 6279–6283.
- Liu, Y.; Fan, B.; Xiang, S.; Pan, C. Relation-Shape Convolutional Neural Network for Point Cloud Analysis. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8887–8896.
- Landrieu, L.; Simonovsky, M. Large-Scale Point Cloud Semantic Segmentation with Superpoint Graphs. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 4558–4567.
- Wang, W.; Yu, R.; Huang, Q.; Neumann, U. SGPN: Similarity Group Proposal Network for 3D Point Cloud Instance Segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 2569–2578.
- Yu, L.; Li, X.; Fu, C.-W.; Cohen-Or, D.; Heng, P.A. PU-Net: Point Cloud Upsampling Network. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 2790–2799.
- Yifan, W.; Wu, S.; Huang, H.; Cohen-Or, D.; Sorkine-Hornung, O. Patch-based Progressive 3D Point Set Upsampling. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–21 June 2019; pp. 5958–5967.
- Li, R.; Li, X.; Fu, C.-W.; Cohen-Or, D.; Heng, P.-A. PU-GAN: A Point Cloud Upsampling Adversarial Network. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 7203–7212.
- Chang, A.X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H. ShapeNet: An information-rich 3D model repository. arXiv **2015**, arXiv:1512.03012.
- Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1912–1920.
- West, K.F.; Webb, B.N.; Lersch, J.R.; Pothier, S.; Triscari, J.M.; Iverson, A.E. Context-Driven Automated Target Detection in 3D Data. In Proceedings of the Automatic Target Recognition XIV, Orlando, FL, USA, 21 September 2004.
- Rusu, R.B. Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. KI Künstliche Intell. **2010**, 24, 345–348.
- Weinmann, M.; Jutzi, B.; Mallet, C. Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. **2014**, II-3, 181–188.
- Jutzi, B.; Gross, H. Nearest neighbour classification on laser point clouds to gain object structures from buildings. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. **2009**, 38, 4–7.
- Corsini, M.; Cignoni, P.; Scopigno, R. Efficient and Flexible Sampling with Blue Noise Properties of Triangular Meshes. IEEE Trans. Vis. Comput. Graph. **2012**, 18, 914–924.
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. arXiv **2017**, arXiv:1706.02413.
- Yu, L.; Li, X.; Fu, C.-W.; Cohen-Or, D.; Heng, P.A. EC-Net: An Edge-Aware Point Set Consolidation Network. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 398–414.
- Chen, N.; Liu, L.; Cui, Z.; Chen, R.; Ceylan, D.; Tu, C.; Wang, W. Unsupervised Learning of Intrinsic Structural Representation Points. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9118–9127.
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J. Dynamic Graph CNN for Learning on Point Clouds. ACM Trans. Graph. **2019**, 38, 1–12.
- Achlioptas, P.; Diamanti, O.; Mitliagkas, I.; Guibas, L. Learning Representations and Generative Models for 3D Point Clouds. In Proceedings of the International Conference on Machine Learning (ICML), Stockholm, Sweden, 10–15 July 2018; pp. 40–49.
- Yang, Y.; Feng, C.; Shen, Y.; Tian, D. FoldingNet: Point Cloud Auto-Encoder via Deep Grid Deformation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 206–215.
- Yuan, W.; Khot, T.; Held, D.; Mertz, C.; Hebert, M. PCN: Point Completion Network. In Proceedings of the 2018 International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018; pp. 728–737.
- Fan, H.; Su, H.; Guibas, L. A Point Set Generation Network for 3D Object Reconstruction from a Single Image. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2463–2471.
- Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.; Wang, Z.; Paul Smolley, S. Least Squares Generative Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2794–2802.
- Visionair. Available online: http://www.infra-visionair.eu (accessed on 21 March 2021).
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 5–8 May 2015.
- Berger, M.; Levine, J.A.; Nonato, L.G.; Taubin, G.; Silva, C.T. A benchmark for surface reconstruction. ACM Trans. Graph. **2013**, 32, 1–17.
- Lague, D.; Brodu, N.; Leroux, J. Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (N-Z). ISPRS J. Photogramm. Remote Sens. **2013**, 82, 10–26.
- ShapeNetCore. Available online: https://shapenet.org (accessed on 4 June 2021).
- Rusu, R.B.; Cousins, S. 3D is Here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 1–4.
- Li, W.; Shigeta, K.; Hasegawa, K.; Li, L.; Yano, K.; Adachi, M.; Tanaka, S. Transparent Collision Visualization of Point Clouds Acquired by Laser Scanning. ISPRS Int. J. Geo-Inf. **2019**, 8, 425.

**Figure 1.** The architecture of the proposed upsampling network. Note that $N$ is the number of points in input patch $\mathcal{P}$, $r$ is the upsampling rate, and ${C}_{g}$, ${C}_{l}$, ${C}_{p}$, ${C}_{p}'$, ${C}_{d}$, and ${C}_{d}'$ are the numbers of feature channels. Given a sparse input patch $\mathcal{P}$ with $N$ points, we generate a dense patch $S$ with $rN$ points in the generator, which consists of feature extraction, feature expansion, and coordinate reconstruction. The goal of the discriminator is to distinguish whether its input is produced by the generator.

**Figure 2.** The upsampling results of the edge data for Joint (top) and Fandisk (bottom): (**a**) the original polygon models, (**b**) the original point clouds and extracted 3D edges, (**c**) the modified edge data, and (**d**) the upsampling results of the modified edge data.

**Figure 3.** Qualitative verification of block (top) and cover rear (bottom): (**a**) inputs, (**b**) ground truth, (**c**) PU-NET, (**d**) PU-GAN, (**e**) our proposed method.

**Figure 4.** The 3D laser-scanned data of the former Nakajima Residence, which is a traditional Southeast Asian folk house (24,074,424 points).

**Figure 5.** Extraction of edge points from the 3D-scanned data in Figure 4. Standard binary extraction is adopted using change-of-curvature ${C}_{\lambda}$ as the feature value $f$ and regarding the regions with $f>{f}_{\mathrm{th}}$ as the 3D-edge regions. The threshold parameter ${f}_{\mathrm{th}}$ is set to 0.25. (**a**) shows the initially extracted points, and (**b**) shows the result of applying our upsampling network.
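The change-of-curvature feature used for this binary extraction can be sketched as follows. This is a minimal numpy illustration of the standard eigenvalue-based measure ${C}_{\lambda}={\lambda}_{3}/({\lambda}_{1}+{\lambda}_{2}+{\lambda}_{3})$ computed on local covariance matrices (cf. the eigenvalue-based features of Weinmann et al.); the function name and the brute-force neighbor search are illustrative, not the paper's implementation.

```python
import numpy as np

def change_of_curvature(points, k=16):
    """Per-point change-of-curvature (sketch): for each point, take its k
    nearest neighbors, form the 3x3 covariance of the neighborhood, and
    compute C_lambda = l3 / (l1 + l2 + l3) with eigenvalues l1 >= l2 >= l3.
    Flat regions give C_lambda near 0; sharp edges give larger values."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]              # includes the point itself
    f = np.empty(len(points))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
        f[i] = lam[2] / max(lam.sum(), 1e-12)
    return f

# Binary edge extraction as in Figure 5: keep points with f > f_th
# (f_th = 0.25 in the paper's experiment).
```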

**Figure 6.** Edge-highlighting transparent visualizations of the 3D-scanned data in Figure 4. (**a**) is created based on the integrated point dataset composed of the original 3D-scanned points and the extracted edge points in Figure 5a. (**b**) shows a similar visualization in which the upsampled edge points in Figure 5b are used instead.

**Figure 7.** (**a**) Points on the extracted 3D edges of the gymnasium using the change-of-curvature and type (b) function. (**b**) Points obtained by executing our upsampling for the points in (**a**). The parameters are set as follows: ${f}_{\mathrm{th}}=0.3$, ${\alpha}_{\mathrm{min}}=0.2$, ${\alpha}_{\mathrm{max}}=1.0$, and $d=3.0$. Each large rectangle shows the enlarged image of the area indicated by the corresponding small rectangle.

**Figure 8.** Fused transparent visualization of the original 3D-scanned point cloud with the extracted edge points. (**a**) shows the fusion with the edge points before upsampling, and (**b**) shows the fusion after the upsampling using our proposed network.

**Figure 9.** (**a**) CAD model of the Resort Sofa Bed. (**b**) The extracted edge points obtained by adopting change-of-curvature and the type (a) function with ${f}_{\mathrm{th}}=0.2$ and ${\alpha}_{\mathrm{max}}=1.0$.

**Figure 10.** Results of the upsampling when using (**a**) PU-NET, (**b**) PU-GAN, and (**c**) our proposed deep learning network.

**Figure 11.** Ablation study on the point feature extraction module. (**a**) shows the upsampling result using only local features, and (**b**) shows the upsampling result using only global features.

**Figure 12.** 3D laser-scanned data used in our comparative experiments of deep learning networks (10,480,242 points). The scanned target is the campus building of Kyoto Women’s University, Japan.

**Figure 13.** Edge points extracted from the 3D laser-scanned points in Figure 12. We use these edge points for our comparative study of the upsampling. Each large rectangle shows the enlarged image of the area indicated by the small rectangle with the same shape.

**Figure 14.** Results of the upsampling when using (**a**) PU-NET, (**b**) PU-GAN, and (**c**) our proposed deep learning network. Each large rectangle shows the enlarged image of the area indicated by the small rectangle with the same shape, corresponding to Figure 13.

**Figure 15.** Frequency distribution of the point number with respect to the feature value (change-of-curvature ${C}_{\lambda}$) for the initially extracted edge points and the edge points upsampled by the three networks.

**Figure 16.** 3D laser-scanned point cloud of the Hachiman-Yama float used in the Gion Festival in Kyoto City, Japan (7,866,197 points).

**Figure 17.** (**a**) Edge points of the point cloud data in Figure 16. The set of edge points is created by extracting points using linearity ${L}_{\lambda}$ and the type (b) function and then applying our upsampling network. (**b**) Edge-highlighted transparent visualization using the points of (**a**) and the original 3D-scanned points in Figure 16.

**Figure 18.** (**a**) Edge points improved by the combined use of the SOR filter with our upsampling network. (**b**) Edge-highlighted transparent visualization using the points of (**a**) and the original 3D-scanned points in Figure 16.

**Figure 19.** Frequency distribution of the point number with respect to the feature value (linearity ${L}_{\lambda}$) for the initially extracted edge points, the upsampled edge points without the SOR filter, and the upsampled edge points with the SOR filter.

**Figure 20.** Upsampling experiment for 3D-scanned data with soft edges. The scanned target is a Japanese armor. Image (**a**) shows the 3D-scanned point cloud (9,094,466 points) acquired by our photogrammetric 3D scanning. Image (**b**) shows the extracted edge points (2,571,474 points) obtained by adopting change-of-curvature and the type (c) function with ${f}_{\mathrm{th}}=0.03$, ${F}_{\mathrm{th}}=0.2$, ${\alpha}_{\mathrm{min}}=0.2$, ${\alpha}_{\mathrm{max}}=1.0$, and $d=2.0$. Image (**c**) shows the soft-edge points increased by our upsampling network (10,285,896 points).

| Objects | Scale [m] | Data | Number of Points | Density ($10^{6}\ \mathrm{points/m^{3}}$) |
|---|---|---|---|---|
| Joint | $0.93\times 1.24\times 1.18$ | Original point cloud | 50,000 | 1.55 |
| | | Extracted edges | 2283 | 1.41 |
| | | Modified edges | 1072 | 1.19 |
| | | Upsampling result | 4290 | 2.99 |
| Fandisk | $1.29\times 1.40\times 0.72$ | Original point cloud | 50,000 | 2.05 |
| | | Extracted edges | 2518 | 1.95 |
| | | Modified edges | 1441 | 1.62 |
| | | Upsampling result | 5764 | 2.69 |

| Objects | Methods | Ratio of Edge Points (${C}_{\lambda}\ge 0.25$) | Cloud-to-Cloud Distance ($10^{-2}$) | Hausdorff Distance |
|---|---|---|---|---|
| Block | PU-NET | 65.26% | 3.19 | 0.31 |
| | PU-GAN | 58.53% | 3.42 | 0.32 |
| | Proposed method | 82.37% | 1.63 | 0.29 |
| Cover rear | PU-NET | 62.62% | 1.51 | 0.72 |
| | PU-GAN | 53.77% | 1.71 | 0.72 |
| | Proposed method | 79.83% | 1.15 | 0.67 |

| Data | Scale [m] | Number of Points | Density ($10^{5}\ \mathrm{points/m^{3}}$) |
|---|---|---|---|
| Original point cloud | $10.47\times 15.13\times 7.18$ | 24,074,424 | 8.46 |
| Initially extracted edges | | 3,821,874 | 3.33 |
| Upsampling edges | | 15,287,496 | 6.34 |

| Data | Scale [m] | Number of Points | Density ($10^{3}\ \mathrm{points/m^{3}}$) |
|---|---|---|---|
| Original point cloud | $25.59\times 43.42\times 20.04$ | 5,234,550 | 5.21 |
| Initially extracted edges | | 1,211,452 | 4.38 |
| Upsampling edges | | 4,845,808 | 4.84 |

| | Ratio of Edge Points (${C}_{\lambda}\ge 0.25$) | Cloud-to-Cloud Distance ($10^{-2}$) | Hausdorff Distance |
|---|---|---|---|
| PU-NET | 75.00% | 4.42 | 0.89 |
| PU-GAN | 84.02% | 3.53 | 1.44 |
| Proposed method | 92.71% | 2.65 | 0.64 |

| | Number of Edge Points | Ratio of Edge Points (${L}_{\lambda}\ge 0.25$) | Average ${L}_{\lambda}$ of the Edge Points |
|---|---|---|---|
| Initially extracted edges | 1,200,278 | 95.32% | 0.64 |
| Upsampling edges | 4,801,112 | 95.67% | 0.66 |
| SOR filter | 3,661,036 | 97.17% | 0.73 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Li, W.; Hasegawa, K.; Li, L.; Tsukamoto, A.; Tanaka, S. Deep Learning-Based Point Upsampling for Edge Enhancement of 3D-Scanned Data and Its Application to Transparent Visualization. *Remote Sens.* **2021**, *13*, 2526.
https://doi.org/10.3390/rs13132526
