# Intact Planar Abstraction of Buildings via Global Normal Refinement from Noisy Oblique Photogrammetric Point Clouds


## Abstract


## 1. Introduction

#### 1.1. Related Works

**Normal estimation:** The normal vectors of point clouds are fundamental geometric shape properties (i.e., representations of the geometry around a specific point) for planar segmentation and many other applications [7], and they are still under active study [8,9]. Satisfactory normal estimation involves two core steps: (1) selecting the region around each point and (2) choosing the geometric kernel used to fit the tangent space for that region, as shown in Figure 1. For the first issue, the most intuitive and widely adopted region is the k-neighborhood (or **r**-ball), based on the k-dimensional tree (kd-tree) [10]. Local region selection and covariance estimation can be improved by considering density anisotropy with different weightings [11] and by using robust statistical information [12], respectively. For the second issue, the standard approach is principal component analysis (PCA) [13], whose geometric interpretation is to fit a thin ellipsoid; the direction of the minor axis is the normal vector [14]. Jet fitting [15] is another popular method that uses a quadratic surface as the geometric kernel. Other geometric kernels, including spheres and splines, can also be fitted to a smooth surface for normal estimation and the computation of other geometric properties [16]. Recently, research has even demonstrated that the geometric kernel can be learned from exemplar datasets using deep neural networks [16]. Although these sophisticated geometric kernels suit complex shapes, urban environments (especially buildings) are dominated by planes, and noise in a local region is a more serious problem, which requires information from a larger context to alleviate its influence, as shown in Figure 1.
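The PCA approach described above can be sketched in a few lines: the covariance matrix of a point's neighborhood is eigendecomposed, and the eigenvector of the smallest eigenvalue (the minor axis of the fitted ellipsoid) is taken as the normal. A minimal illustration, not the authors' implementation:

```python
import numpy as np

def pca_normal(neighbors):
    """Estimate a point's normal from its k-neighborhood via PCA:
    the eigenvector of the covariance matrix with the smallest
    eigenvalue is the minor axis of the fitted ellipsoid, i.e. the normal."""
    pts = np.asarray(neighbors, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # eigenvector of the smallest eigenvalue

# Points sampled on the plane z = 0: the estimated normal is (up to sign) the z-axis.
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         np.zeros(200)])
n = pca_normal(plane)
```

Note that the sign of the normal is ambiguous; as the text mentions, the orientation is resolved later from the viewpoints of the oblique images.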

**Model fitting:** Model fitting-based planar extraction methods can be categorized by the approach used to determine the model, including random sample consensus (RANSAC) [17,18] and the Hough transform (HT) [19]. The HT has been used to detect various types of geometry in point clouds, including planes [20] and L-shaped structures [21]. However, the time and memory required to perform an HT on a large dataset are prohibitively high [22], which impedes its use in large-scale urban environments. Although RANSAC-based methods are more robust and handle outliers better, RANSAC extracts the plane supported by a maximal number of points in every iteration, so many spurious planes may be generated [23]. To prevent the detection of these spurious planes, follow-up works have extended RANSAC by considering local surface normal vectors [24] or by combining it with a region growing method [18]. However, as described above, local surface properties are sensitive to noise [25], which limits their use for photogrammetric point clouds from oblique images.
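The basic RANSAC plane detection discussed above can be sketched as follows; the iteration count and distance threshold are illustrative assumptions, not values from the cited works:

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.05, rng=None):
    """Minimal RANSAC plane detector: repeatedly fit a plane to three
    random points and keep the hypothesis supported by the most inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:            # degenerate (collinear) sample
            continue
        normal /= norm
        dists = np.abs((points - p0) @ normal)
        inliers = dists < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (p0, normal)
    return best_plane, best_inliers

# 300 points on z = 0 plus 30 off-plane outliers: the plane is recovered.
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(-1, 1, 300),
                             rng.uniform(-1, 1, 300), np.zeros(300)])
outliers = np.column_stack([rng.uniform(-1, 1, 30),
                            rng.uniform(-1, 1, 30), rng.uniform(0.2, 1.0, 30)])
plane, inliers = ransac_plane(np.vstack([plane_pts, outliers]))
```

This illustrates the spurious-plane problem as well: nothing here prevents a hypothesis that slices through unrelated structures, which is why the cited extensions add normal-vector checks.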

**Region growing:** Region growing-based [26] planar extraction methods generally involve three major factors: (1) the selection of the starting seed primitives; (2) the criteria to extend the seed region; and (3) the criteria to terminate growing. First, the choice of seed is not limited to points; seeds can also be triangles in a surface mesh [27] or initial planar primitives [28]. The most intuitive way to place a seed is random selection; however, if the seed primitive is located in a noisy area, the grown region may deviate from the expected one. To overcome this problem, points with good planarity should be chosen as seed points [29]. Second, for the growing criteria, the similarity of normal vectors and the distance between neighboring points and the current region are widely adopted [30,31,32]. Alternatively, neighboring patches can be merged into growing regions [33] by minimizing a deformable energy. Third, as the region grows larger, the fitting error increases monotonically, and growing terminates when the largest allowed error is reached. In an empirical evaluation, region growing achieved a higher recall on the retrieved primitives than the much slower model fitting-based method [34]. In addition, because each growing iteration uses only local information from a small neighborhood, the global context is ignored; region growing methods are therefore intrinsically sensitive to noise and prone to crossing object boundaries.
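The normal-similarity growing criterion can be sketched as below; for brevity this toy version compares each neighbor against the seed's normal only and omits the distance criterion:

```python
import numpy as np
from collections import deque

def grow_region(normals, seed, neighbors, angle_thresh_deg=15.0):
    """Region growing from a seed index: a neighbor joins the region when
    its normal deviates from the seed's normal by less than the angle
    threshold (a simplified smoothness criterion in the spirit of [26])."""
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    region, queue = {seed}, deque([seed])
    while queue:
        current = queue.popleft()
        for nb in neighbors[current]:
            if nb in region:
                continue
            # abs() makes the test invariant to the normal's sign ambiguity
            if abs(normals[nb] @ normals[seed]) >= cos_thresh:
                region.add(nb)
                queue.append(nb)
    return region

# Points 0-2 share a normal; points 3-4 lie on a perpendicular face.
normals = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1],
                    [1, 0, 0], [1, 0, 0]], dtype=float)
neighbors = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
region = grow_region(normals, 0, neighbors)
```

With noisy normals the dot-product test fires across the boundary between faces, which is exactly the boundary-crossing failure mode the paragraph describes.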

**Supervoxel clustering:** Similar to the well-studied superpixel clustering, the clustering of points into supervoxels can be used as an intermediate representation for further applications and is still at a developmental stage [35]. The most popular method for supervoxel clustering is Voxel Cloud Connectivity Segmentation (VCCS) [36]. VCCS first generates an adjacency graph using an octree and divides the space into a voxelized grid with a seed resolution and a voxel resolution. Supervoxels are then clustered using 39-dimensional local geometric features. VCCS supervoxels are reported to be highly efficient, and the results on RGB-D test data show no crossing of object boundaries. For further applications, such as object detection [37], classification [38] and segmentation [39], partitioning point clouds into supervoxels as basic entities offers three advantages over using single points. First, supervoxels provide more discernible structural features; second, the adjacency relationships are clearer; third, the computational complexity is significantly reduced, especially for large-scale urban scenes. Because supervoxels reduce computational complexity while preserving boundaries, they are also adopted in the first stage of this paper and augmented with contextual information to overcome the noise problem.

#### 1.2. Contributions

## 2. Methods

#### 2.1. Boundary-Preserved Supervoxel Clustering

Given a point cloud P = {p_1, p_2, …, p_n}, this step splits P into different clusters, C = {c_1, c_2, …, c_m}, where each cluster c_i contains a non-overlapping subset of P. Our proposed implementation is motivated by VCCS [36], which is publicly available in the PCL (Point Cloud Library) [40]. However, because the original method is designed in a sequential, iterative way and the clustering is, in fact, the bottleneck of the whole pipeline, we propose a parallel extension of the original supervoxel clustering method; furthermore, a boundary constraint is enforced to preserve sharp features.

Three categories of features are considered for each supervoxel c_i in urban environments: [x, y, z]^T represents the spatial position, [L, a, b]^T represents the color in CIELAB space, and [n_x, n_y, n_z]^T represents the noisy normal vector estimated with the orientation determined from the viewpoints of the aerial oblique images of the SfM-MVS pipeline. Normal vectors are used instead of the FPFH (fast point feature histograms) feature [42] because they are more intuitive and efficient in constraining planar structures. The feature distance is normalized and weighted over the three categories, as in the original method of Papon et al. [36]. In the original implementation, the expansion of all of the leaf points is performed sequentially and becomes the bottleneck of the whole pipeline. In this paper, this step is parallelized by caching the provisional testing results and updating all supervoxels after a whole epoch of expansion tests. In practice, we found no significant difference between the results of the two strategies, but there was an almost linear speedup with respect to the number of parallel threads.
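A normalized, weighted feature distance of the kind described above can be sketched as follows; the weights and normalizers here are illustrative assumptions, not the exact values of Papon et al. [36]:

```python
import numpy as np

def feature_distance(f1, f2, seed_res, w_spatial=0.4, w_color=0.2, w_normal=1.0):
    """Weighted distance between two supervoxel feature vectors
    f = (xyz, lab, normal). The weights and normalizing constants are
    illustrative, not the values of the original VCCS implementation."""
    d_s = np.linalg.norm(f1["xyz"] - f2["xyz"]) / seed_res  # normalize by seed resolution
    d_c = np.linalg.norm(f1["lab"] - f2["lab"]) / 100.0     # rough CIELAB scale
    d_n = 1.0 - abs(f1["normal"] @ f2["normal"])            # normal dissimilarity, sign-invariant
    return np.sqrt(w_spatial * d_s**2 + w_color * d_c**2 + w_normal * d_n**2)

f = {"xyz": np.zeros(3), "lab": np.array([50.0, 0.0, 0.0]),
     "normal": np.array([0.0, 0.0, 1.0])}
g = {"xyz": np.zeros(3), "lab": np.array([50.0, 0.0, 0.0]),
     "normal": np.array([1.0, 0.0, 0.0])}
d_same = feature_distance(f, f, seed_res=1.0)
d_diff = feature_distance(f, g, seed_res=1.0)
```

Weighting the normal term highest reflects the paper's emphasis on constraining planar structures.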

The eigenvalues and eigenvectors of the covariance matrix of each supervoxel are denoted σ_1 < σ_2 < σ_3 and η_1, η_2, η_3, respectively. The following constraint is adopted: the spectral measures computed from these eigenvalues must be smaller than the thresholds τ_1 and τ_2, respectively. Enforcing this planarity constraint preserves the boundary of each supervoxel c_i, which improves the successive growth of the maximum planar support region, as described below.
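A spectral planarity test of this kind can be illustrated as below; the thresholds tau1 and tau2 and the exact combination of eigenvalues are assumptions standing in for Equations (2) and (3), which are not reproduced here:

```python
import numpy as np

def is_planar(points, tau1=0.01, tau2=0.05):
    """Spectral planarity test for a supervoxel: a plane-like point set has
    a smallest covariance eigenvalue that is small both absolutely and
    relative to the largest one. Thresholds are illustrative assumptions."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # eigvalsh returns eigenvalues in ascending order: s1 <= s2 <= s3
    s1, s2, s3 = np.linalg.eigvalsh(centered.T @ centered / len(pts))
    return s1 < tau1 and s1 / max(s3, 1e-12) < tau2

rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, 500),
                         rng.uniform(-1, 1, 500), np.zeros(500)])
blob = rng.uniform(-1, 1, (500, 3))   # a volumetric blob is not planar
```

A supervoxel straddling a sharp edge fails such a test, which is how the boundary constraint rejects it.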

#### 2.2. Hierarchical Generation of the Maximum Planar Support Region

For each supervoxel, this step generates a maximum planar support region S_i, where S_i = {c_i, c_j ∈ C | j = 1, 2, …, n, j ≠ i}, and S_i is assumed to be planar.

The region is grown hierarchically for each supervoxel c_i ∈ C: (1) a k-nearest-neighbor set is created around the supervoxel c_i (k = 16); (2) the success of the expansion is tested, as described later; (3) if the expansion fails, k is decreased by a factor of two; and (4) steps (2) and (3) are repeated until no supervoxel is inserted, as shown in Figure 3c. In practice, the expansion may fail if the supervoxel is located near a sharp feature, because the neighbor set around the supervoxel may cross the sharp feature. Thus, a directional search strategy is used for this type of supervoxel, based on the fact that the shape of a supervoxel is generally well abstracted by a quadrangle, and the expansion prefers the direction with more consistent normal vectors. The two core differences from the above strategy are (1) determining the four adjacent neighbors around the supervoxel c_i and (2) halving the thresholds of the criteria determined by Equations (2) and (3).

Because the plane is fitted to the points of the whole support region S_i, rather than to those of the supervoxels c_i and c_j alone, the effects of noise are smoothed and reduced. A loose bound is placed on the angle between the plane and a candidate supervoxel, θ(**n**_i, **n**), to prevent supervoxels on other planes from being accepted during the growing phase, where **n** represents the normal vector of the primitives and a threshold of 15° [45] is selected for τ_3.

A mutual validation test then retains only the supervoxel pairs {(c_i, c_j) | c_i ∈ S_j and c_j ∈ S_i}, which serve as the observations during the global optimization, as discussed below.
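The mutual containment test reduces to a simple filter over the support regions; a minimal sketch:

```python
def mutual_pairs(support):
    """Keep only supervoxel pairs that mutually contain each other in their
    support regions: c_i in S_j and c_j in S_i. These pairs become the
    observations of the global optimization. `support` maps each supervoxel
    index i to its support region S_i (a set of indices)."""
    return {(i, j)
            for i, S_i in support.items()
            for j in S_i
            if i < j and i in support.get(j, set())}

# c0 and c1 validate each other; c2 contains c0, but c0's region lacks c2.
support = {0: {1}, 1: {0, 2}, 2: {0}}
pairs = mutual_pairs(support)
```

Only the symmetric pair (0, 1) survives, mirroring Figure 5: one-sided containment (panel a) is rejected, mutual containment (panel b) is kept.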

#### 2.3. Global Optimization of Normal Vectors

Each validated pair of supervoxels yields an observation U_ij. Instead of directly enforcing the equality of normal vectors [46], because the unit-length constraint on the normal vectors may be violated in iterative least-squares optimization, we indirectly optimize a rotation vector **r** = [r_1, r_2, r_3]^T for each supervoxel. Optimizing the normal vector itself would require renormalizing it to unit length, losing one degree of freedom; it would also make it difficult to determine the thresholds for robust estimation and outlier removal, as described below. The data term of the optimization is the angle difference within each pair, U_ij, after reorientation, parametrized in angle-axis notation by **r** ∈ $\mathcal{R}$^3. To remove the ambiguity of rotation in the global model, and to make the method robust to outliers, an L_2-norm regularization term and a robust loss function ρ(·) are included in the optimization, where R ∈ $\mathcal{R}$^{3×3} represents the rotation matrix obtained from **r** via the Rodrigues formula [47], λ represents a weight parameter that balances the two terms, |S| represents the number of supervoxels in the maximum planar support region, and ‖**r**‖_2 represents the angle of the rotation around the fixed axis **r**. The Rodrigues representation is used instead of Euler angles because it is smooth around the zero rotation. Both the data and regularization terms have the same physical meaning (an angle), which makes the weight parameter easy to determine; λ = 0.1 is used in this study. The standard Huber loss [48] is used for loss evaluation. The Huber loss requires a parameter a (15° is selected) [49] separating the inlier and outlier regions: for inliers it is identical to the squared loss and grows quadratically; for outliers it grows linearly to reduce their influence, as in Equation (5). In the iterative solver, the 3σ law [50] is used to remove obvious outliers.

After solving for the optimal rotation vector **r** of each supervoxel, the normal vectors are reoriented and assigned to the underlying points for further segmentation. The obtained normal vectors are not only robust to noise but also clearly retain the sharp features of buildings, as shown in Figure 6. The RMSE (root mean square error) is used to evaluate the results, where N represents the number of points, **n** represents the estimated normal vector, and **n**^o represents the reference normal vector.
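Since the exact RMSE expression is not reproduced here, the following sketch assumes the residuals are Euclidean distances between sign-aligned unit normals:

```python
import numpy as np

def normal_rmse(estimated, reference):
    """RMSE between estimated and reference unit normals. The residual is
    made sign-invariant (a normal and its negation describe the same plane)
    before averaging; the exact formula in the paper may differ."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    signs = np.sign(np.sum(est * ref, axis=1, keepdims=True))
    signs[signs == 0] = 1.0                      # arbitrary sign for orthogonal pairs
    resid = np.linalg.norm(est * signs - ref, axis=1)
    return float(np.sqrt(np.mean(resid**2)))

# Perfect agreement (including a flipped normal) gives RMSE 0.
est = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
ref = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
err = normal_rmse(est, ref)
```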

#### 2.4. Plane Extraction Guided by the Maximum Planar Support Region

Two criteria guide the point-level growth of each plane. First, if the distance d(p_i, p_j) is less than the distance threshold τ_d, the seed point p_i and its neighbor p_j are considered spatially connected; τ_d is determined from the average point spacing (1.5 times the spacing is used in this study). Second, if the angle between their normal vectors is less than τ_a (15° is used), the seed and the neighbor are assumed to belong to the same smooth planar patch. Finally, the points of each extracted plane are projected onto the fitted plane, where P_proj represents the projected point on the plane, Q represents a given point on the plane, P represents the point to be projected, and **n** represents the normal vector of the plane. The final set of planes is referred to as the planar abstraction.
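With Q on the plane and n its unit normal, the projection is the standard formula P_proj = P − ((P − Q)·n) n; a one-line sketch:

```python
import numpy as np

def project_to_plane(P, Q, n):
    """Project point P onto the plane passing through Q with normal n:
    P_proj = P - ((P - Q) . n) n, after normalizing n to unit length."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return P - ((P - Q) @ n) * n

# Projecting (1, 2, 3) onto the plane z = 0 drops the z-component.
p_proj = project_to_plane([1.0, 2.0, 3.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```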

## 3. Experimental Evaluations and Analysis

#### 3.1. Qualitative Evaluations

#### 3.1.1. Evaluations of the Global Normal Optimization

#### 3.1.2. Evaluations of Large-Scale Tilewise Planar Extraction

#### 3.1.3. Evaluations of the Abstraction Quality of a Single Building

#### 3.2. Quantitative Analysis

The number of unsegmented points (N_up) is used to assess the robustness to noise. The final metric is points correctness (PC), defined as PC = P_Correctness / P_total, where P_Correctness represents the number of correctly segmented points and P_total represents the total number of points.
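The evaluation metrics can be sketched as below; the plane-level completeness and correctness formulas, TP/(TP+FN) and TP/(TP+FP), are the standard retrieval definitions and are an assumption here, since the text does not spell them out:

```python
def plane_metrics(tp, fn, fp, pts_correct, pts_total):
    """Plane-level completeness and correctness (standard TP/FN/FP
    definitions, assumed) plus points correctness (PC) as defined in
    the text: the fraction of correctly segmented points."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    pc = pts_correct / pts_total
    return completeness, correctness, pc

# Building 1, IPA row of Table 2: TP = 13, FN = 1, FP = 10.
comp, corr, pc = plane_metrics(13, 1, 10, pts_correct=100, pts_total=200)
```

With FP = 10 against TP = 13, correctness lands near 57%, consistent with the roughly 50% correctness discussed for IPA below.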

Note that N_up is not reported for PLINKAGE and is therefore not counted. The quantitative indicators of the extracted planes are summarized in Table 2. For Building 1, the number of FP from our method is higher than that from RG-PCA; because of the high noise level, many planes cannot be detected by RG-PCA at all, which also explains the high N_up values of the different segmentation methods. Moreover, over-segmentation caused by noise results in low completeness and correctness values for RG-PCA. IPA maintains a high completeness rate, but its correctness peaks at approximately 50%. The errors in the IPA results are mainly located at sharp features with serious deformation. For all three buildings tested, the time cost of IPA is acceptable, as shown in Figure 14.

## 4. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

- Haala, N.; Rothermel, M.; Cavegn, S. Extracting 3D urban models from oblique aerial images. In Proceedings of the IEEE in Urban Remote Sensing Event (JURSE), Lausanne, Switzerland, 30 March–1 April 2015. [Google Scholar]
- Hu, H.; Zhu, Q.; Du, Z.; Zhang, Y.; Ding, Y. Reliable spatial relationship constrained feature point matching of oblique aerial images. Photogramm. Eng. Remote Sens.
**2015**, 81, 49–58. [Google Scholar] [CrossRef] - Gerke, M.; Nex, F.; Remondino, F.; Jacobsen, K.; Kremer, J.; Karel, W.; Huf, H.; Ostrowski, W. Orientation of oblique airborne image sets-experiences from the ISPRS/EUROSDR benchmark on multi-platform photogrammetry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.
**2016**, 41, 185–191. [Google Scholar] [CrossRef] - Xie, L.; Hu, H.; Wang, J.; Zhu, Q.; Chen, M. An asymmetric re-weighting method for the precision combined bundle adjustment of aerial oblique images. ISPRS J. Photogram. Remote Sens.
**2016**, 117, 92–107. [Google Scholar] [CrossRef] - Koci, J.; Jarihani, B.; Leon, J.X.; Sidle, R.C.; Wilkinson, S.N.; Bartley, R. Assessment of UAV and ground-based Structure from Motion with multi-view stereo photogrammetry in a gullied savanna catchment. ISPRS Int. J. Geo-Inf.
**2017**, 6, 328. [Google Scholar] [CrossRef] - Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F. State of the art in high density image matching. Photogramm. Rec.
**2014**, 29, 144–166. [Google Scholar] [CrossRef] - Xiong, B.; Elberink, S.O.; Vosselman, G. Building modeling from noisy photogrammetric point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci.
**2014**, 1, 197–204. [Google Scholar] [CrossRef] - Guerrero, P.; Kleiman, Y.; Ovsjanikov, M.; Mitra, N.J. PCPNet Learning Local Shape Properties from Raw Point Clouds. Comput. Graph. Forum
**2018**, 37, 75–85. [Google Scholar] [CrossRef] [Green Version] - Boulch, A.; Marlet, R. Deep learning for robust normal estimation in unstructured point clouds. Comput. Graph. Forum
**2016**, 35, 281–290. [Google Scholar] [CrossRef] - Muja, M.; Lowe, D.G. Scalable nearest neighbor algorithms for high dimensional data. IEEE Trans. Pattern Anal. Mach. Intell.
**2014**, 36, 2227–2240. [Google Scholar] [CrossRef] [PubMed] - Huang, H.; Li, D.; Zhang, H.; Ascher, H.; Cohen-Or, D. Consolidation of unorganized point clouds for surface reconstruction. ACM Trans. Graph.
**2009**, 28, 176. [Google Scholar] [CrossRef] - Kalogerakis, E.; Nowrouzezahrai, D.; Simari, P.; Singh, K. Extracting lines of curvature from noisy point clouds. Comput.-Aided Des.
**2009**, 41, 282–292. [Google Scholar] [CrossRef] [Green Version] - Lee, W.; Park, J.; Kim, J.; Kim, W.; Yu, C. New approach to accuracy verification of 3D surface models: An analysis of point cloud coordinates. J. Prosthodont. Res.
**2016**, 60, 98–105. [Google Scholar] [CrossRef] [PubMed] - Lin, C.; Chen, J.; Su, P.; Chen, C. Eigen-feature analysis of weighted covariance matrices for LiDAR point cloud classification. ISPRS J. Photogramm. Remote Sens.
**2014**, 94, 70–79. [Google Scholar] [CrossRef] - Cazals, F.; Pouget, M. Estimating differential quantities using polynomial fitting of osculating jets. Comput. Aided Geom. Des.
**2005**, 22, 121–146. [Google Scholar] [CrossRef] [Green Version] - Dimitrov, A.; Gu, R.; Golparvar Fard, M. Non-Uniform B-Spline Surface Fitting from Unordered 3D Point Clouds for As-Built Modeling. Comput.-Aided Civ. Infrastruct. Eng.
**2016**, 31, 483–498. [Google Scholar] [CrossRef] - Li, L.; Yang, F.; Zhu, H.; Li, D.; Li, Y.; Tang, L. An improved RANSAC for 3D point cloud plane segmentation based on normal distribution transformation cells. Remote Sens.
**2017**, 9, 433. [Google Scholar] [CrossRef] - Chen, D.; Zhang, L.; Mathiopoulos, P.T.; Huang, X. A methodology for automated segmentation and reconstruction of urban 3-D buildings from ALS point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.
**2014**, 7, 4199–4217. [Google Scholar] [CrossRef] - Hulik, R.; Spanel, M.; Smrz, P.; Materna, Z. Continuous plane detection in point-cloud data based on 3D Hough Transform. J. Vis. Commun. Image Represent.
**2014**, 25, 86–97. [Google Scholar] [CrossRef] - Limberger, F.A.; Oliveira, M.M. Real-time detection of planar regions in unorganized point clouds. Pattern Recognit.
**2015**, 48, 2043–2053. [Google Scholar] [CrossRef] - Wang, Y.; Zhu, X. Automatic feature-based geometric fusion of multiview TomoSAR point clouds in urban area. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.
**2015**, 8, 953–965. [Google Scholar] [CrossRef] - Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P. Hough-transform and extended ransac algorithms for automatic detection of 3D building roof planes from lidar data. In Proceedings of the ISPRS Workshop on Laser Scanning, Espoo, Finland, 12–14 September 2007. [Google Scholar]
- Yan, J.; Shan, J.; Jiang, W. A global optimization approach to roof segmentation from airborne lidar point clouds. ISPRS J. Photogramm. Remote Sens.
**2014**, 94, 183–193. [Google Scholar] [CrossRef] - Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point-cloud shape detection. Comput. Graph. Forum
**2007**, 26, 214–226. [Google Scholar] [CrossRef] - Yu, Y.; Wu, Q.; Khan, Y.; Chen, M. An adaptive variation model for point cloud normal computation. Neural Comput. Appl.
**2015**, 26, 1451–1460. [Google Scholar] [CrossRef] - Rabbani, T.; Van Den Heuvel, F.; Vosselmann, G. Segmentation of point clouds using smoothness constraint. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.
**2006**, 36, 248–253. [Google Scholar] - He, M.; Cheng, Y.; Nie, Y.; Zhao, Z.; Zhang, F. An Algorithm of Combining Delaunay TIN Models and Region Growing for Buildings Extraction. In Proceedings of the International Conference on Computer Science and Technology, Guilin, China, 25–29 July 2017. [Google Scholar]
- Xu, Y.; Yao, W.; Hoegner, L.; Stilla, U. Segmentation of building roofs from airborne LiDAR point clouds using robust voxel-based region growing. Remote Sens. Lett.
**2017**, 8, 1062–1071. [Google Scholar] [CrossRef] - Nurunnabi, A.; West, G.; Belton, D. Outlier detection and robust normal-curvature estimation in mobile laser scanning 3D point cloud data. Pattern Recognit.
**2015**, 48, 1404–1419. [Google Scholar] [CrossRef] - Qin, L.; Wu, W.; Tian, Y.; Xu, W. Lidar filtering of urban areas with region growing based on moving-window weighted iterative least-squares fitting. IEEE Geosci. Remote Sens. Lett.
**2017**, 14, 841–845. [Google Scholar] [CrossRef] - Amini Amirkolaee, H.; Arefi, H. 3D Semantic Labeling using Region Growing Segmentation Based on Structural and Geometric Attributes. J. Geomat. Sci. Technol.
**2017**, 7, 1–16. [Google Scholar] - Guo, B.; Li, Q.; Huang, X.; Wang, C. An improved method for power-line reconstruction from point cloud data. Remote Sens.
**2016**, 8, 36. [Google Scholar] [CrossRef] - Tseng, Y.; Tang, K.; Chou, F. Surface reconstruction from LiDAR data with extended snake theory. In Proceedings of the International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, Ezhou, China, 27–29 August 2007. [Google Scholar]
- Cgal. Available online: https://doc.cgal.org/latest/Point_set_shape_detection_3/index.html (accessed on 5 August 2018).
- Lin, Y.; Wang, C.; Zhai, D.; Li, W.; Li, J. Toward better boundary preserved supervoxel segmentation for 3D point clouds. ISPRS J. Photogramm. Remote Sens.
**2018**, 143, 39–47. [Google Scholar] [CrossRef] - Papon, J.; Abramov, A.; Schoeler, M.; Worgotter, F. Voxel cloud connectivity segmentation-supervoxels for point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 Jun 2013. [Google Scholar]
- Wang, H.; Wang, C.; Luo, H. 3-D point cloud object detection based on supervoxel neighborhood with Hough forest framework. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.
**2015**, 8, 1570–1581. [Google Scholar] [CrossRef] - Zhu, Q.; Li, Y.; Hu, H.; Wu, B. Robust point cloud classification based on multi-level semantic relationships for urban scenes. ISPRS J. Photogramm. Remote Sens.
**2017**, 129, 86–102. [Google Scholar] [CrossRef] - Dong, Z.; Yang, B.; Hu, P.; Scherer, S. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds. ISPRS J. Photogramm. Remote Sens.
**2018**, 137, 112–133. [Google Scholar] [CrossRef] - Rusu, R.B.; Cousins, S. 3D is here: Point cloud library (PCL). In Proceedings of the IEEE International Conference on Robotics and automation (ICRA), Shanghai, China, 11–13 October 2011. [Google Scholar]
- Kong, T.Y.; Rosenfeld, A. Digital topology: Introduction and survey. Comput. Vis. Graph. Image Process.
**1989**, 48, 357–393. [Google Scholar] [CrossRef] - Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA’092009), Kobe, Japan, 12–17 May 2009. [Google Scholar]
- Matei, B.C.; Sawhney, H.S.; Samarasekera, S.; Kim, J.; Kumar, R. Building segmentation for densely built urban regions using aerial lidar data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008. [Google Scholar]
- Ramón, M.J.; Pueyo, E.L.; Oliva-Urcia, B.; Larrasoaña, J.C. Virtual directions in paleomagnetism: A global and rapid approach to evaluate the NRM components. Front. Earth Sci.
**2017**, 5, 8. [Google Scholar] [CrossRef] - Vo, A.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens.
**2015**, 104, 88–100. [Google Scholar] [CrossRef] - Avron, H.; Sharf, A.; Greif, C.; Cohen-Or, D. ℓ 1-Sparse reconstruction of sharp point set surfaces. ACM Trans. Graph.
**2010**, 29, 135. [Google Scholar] [CrossRef] - Belongie, S. “Rodrigues’ Rotation Formula.” From MathWorld—A Wolfram Web Resource, Created by Eric W. Weisstein. Available online: http://mathworld.wolfram.com/RodriguesRotationFormula.html (accessed on 17 October 2018).
- Friedman, J.; Hastie, T.; Tibshirani, R. The Elements of Statistical Learning; Springer: Berlin, Germany, 2001; Volume 1. [Google Scholar]
- Hu, H.; Ding, Y.; Zhu, Q.; Wu, B.; Xie, L.; Chen, M. Stable least-squares matching for oblique images using bound constrained optimization and a robust loss function. ISPRS J. Photogramm. Remote Sens.
**2016**, 118, 53–67. [Google Scholar] [CrossRef] - Pukelsheim, F. The three sigma rule. Am. Stat.
**1994**, 48, 88–91. [Google Scholar] - Agarwal, S.; Mierle, K. Ceres Solver. 2013. Available online: https://github.com/ceres-solver/ceres-solver (accessed on 5 May 2018).
- Stein, S.C.; Schoeler, M.; Papon, J.; Wörgötter, F. Object Partitioning Using Local Convexity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
- Lu, X.; Yao, J.; Tu, J.; Li, K.; Li, L.; Liu, Y. Pairwise linkage for point cloud segmentation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci.
**2016**, 3, 201–208. [Google Scholar] - Chen, Y.; Cheng, L.; Li, M.; Wang, J.; Tong, L.; Yang, K. Multiscale grid method for detection and reconstruction of building roofs from airborne LiDAR data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.
**2014**, 7, 4081–4094. [Google Scholar] [CrossRef]

**Figure 1.** Different region selections and geometric kernels for normal estimation. Principal component analysis (PCA) (**a**), jet fitting (**b**) and MLS sphere fitting (**c**) generally use local regions, and select planes, quadratic surfaces, and spheres as the geometric kernel, respectively. In addition, the geometric kernel can also be learned from exemplar datasets (**d**). However, for noisy photogrammetric point clouds in urban environments, a large context should be selected to keep sharp features, such as the sharp edges of a building.

**Figure 3.** Directional expansion of the maximal support regions. (**a**) Photogrammetric point cloud, (**b**) supervoxels, (**c**) growing region of a supervoxel located in a plane, and (**d**) growing region of a supervoxel located at a sharp feature.

**Figure 4.** Constraints used during region growing of the support region. (**a**) Planarity constraint determined from the spectral features. (**b**) Angle deviation in the supervoxels.

**Figure 5.** Mutual validation of the connectivity test. The dots represent supervoxels and the rounded rectangles represent support regions. (**a**) Only one side is contained: c_j ∈ S_i but c_i ∉ S_j. (**b**) The shaded dots are mutually contained: c_i ∈ S_j and c_j ∈ S_i.

**Figure 6.** Visualization of normal vectors. (**a**) A cube model with 240,000 points. (**d**) The same model with 0.5% noise. (**b**,**e**) Results from the proposed method; the RMSEs are 0.0022 and 0.0231, respectively. (**c**,**f**) Results estimated from local information only; the RMSEs are 0.0707 and 0.433, respectively. The proposed method produces more consistent normal vectors.

**Figure 7.** Visualization of normal vectors. The left column shows the photogrammetric point clouds, the middle column shows the normal vectors optimized by IPA, and the right column shows the initial normal vectors estimated by PCA.

**Figure 12.** Comparison of the planar abstraction quality of Building 2. (**a**) Building point cloud, (**b**) planar abstractions, (**c**) extracted planes approximated by a set of planar polygons, and (**d**) magnified views of the circled regions in (**c**).

**Figure 13.** Comparison of the planar abstraction quality of Building 3. (**a**) Building point cloud, (**b**) planar abstractions, and (**c**) extracted planes approximated by a set of planar polygons.

| Tile | Method | Number of Small Holes | Number of Fragments |
|---|---|---|---|
| Tile 1 | IPA | 3 | 9 |
| Tile 1 | RG-PCA | 69 | 10 |
| Tile 1 | RANSAC | 29 | 106 |
| Tile 2 | IPA | 15 | 44 |
| Tile 2 | RG-PCA | 172 | 39 |
| Tile 2 | RANSAC | 43 | 287 |
| Tile 3 | IPA | 2 | 13 |
| Tile 3 | RG-PCA | 46 | 11 |
| Tile 3 | RANSAC | 24 | 137 |

| Building | Number of Points | Method | N_up | TP | FN | FP |
|---|---|---|---|---|---|---|
| Building 1 | 127,665 | IPA | 362 | 13 | 1 | 10 |
| | | RG-PCA | 24,281 | 7 | 7 | 4 |
| | | RANSAC | 4159 | 10 | 4 | 16 |
| | | LCCP | 0 | 2 | 12 | 87 |
| | | PLINKAGE | / | 9 | 5 | 35 |
| Building 2 | 278,893 | IPA | 227 | 15 | 0 | 0 |
| | | RG-PCA | 19,323 | 10 | 5 | 10 |
| | | RANSAC | 5705 | 11 | 4 | 22 |
| | | LCCP | 0 | 2 | 13 | 106 |
| | | PLINKAGE | / | 15 | 0 | 72 |
| Building 3 | 258,245 | IPA | 2356 | 24 | 0 | 22 |
| | | RG-PCA | 30,553 | 17 | 7 | 29 |
| | | RANSAC | 7805 | 23 | 3 | 26 |
| | | LCCP | 0 | 3 | 21 | 397 |
| | | PLINKAGE | / | 16 | 8 | 161 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zhu, Q.; Wang, F.; Hu, H.; Ding, Y.; Xie, J.; Wang, W.; Zhong, R.
Intact Planar Abstraction of Buildings via Global Normal Refinement from Noisy Oblique Photogrammetric Point Clouds. *ISPRS Int. J. Geo-Inf.* **2018**, *7*, 431.
https://doi.org/10.3390/ijgi7110431
