# Relation-Constrained 3D Reconstruction of Buildings in Metropolitan Areas from Photogrammetric Point Clouds


## Abstract


## 1. Introduction

- We designed a top-down space partitioning strategy that converts a building’s bounding space into abundant but not redundant CSG-BRep trees, based on the geometric relations in the building structure. A subset of the leaf cells from the partitioning CSG-BRep tree is then selected to build a bottom-up reconstructing CSG-BRep tree, which forms the geometric model of the building. Because these cells are produced according to the building structure, they allow for a configuration that can model buildings of arbitrary shapes, including façade intrusions and extrusions;
- To best approximate the building geometries, we developed a novel optimization strategy for the selection of the 3D space cells, in which 3D facets and edges are extracted from the topological relations and used to constrain the selection process. The use of multiple space entities introduces more measurements and constraints into the modelling, such that the completeness, fitness, and regularity of the modelling result are guaranteed simultaneously;
- We designed two useful rule-based schemes that automatically analyze the topological relations between different building parts based on the CSG-BRep trees and assign semantic information to each building surface, converting the models into an international standard—CityGML [19]. Therefore, the final building models can be used not only in applications related to geometric analysis, but also in applications where topological or semantic information is required.

## 2. Related Work

#### 2.1. Model-Driven Methods

#### 2.2. Data-Driven Methods

#### 2.3. Hybrid-Driven Methods

#### 2.4. Space-Partition-Based Methods

#### 2.5. CityGML Building Models

## 3. Relation-Constrained 3D Reconstruction of Buildings Using CSG-BRep Trees

#### 3.1. Overview of the Approach

#### 3.2. Partitioning CSG-BRep Tree for 3D Space Decomposition

#### 3.2.1. Extraction and Refinement of Planar Primitives Based on Geometric Relations

Planar primitives **P** = {P_{1}, P_{2}, …, P_{n}} are first extracted using RANSAC, and the planar coefficients, comprising the normal vector **n**_{i} and a distance coefficient d_{i} corresponding to each P_{i} ∈ **P**, are obtained. To enhance the regularities between the planar primitives, the initial planar primitives are refined based on geometric relations, namely parallelism, orthogonality, z-symmetry, and co-planarity, as defined in [18]. Three additional relations, verticality, horizontality, and xy-parallelism, are also considered in this work. These geometric relations are mathematically described in Table 1, together with the angle threshold ε and the Euclidean distance threshold d. Note that the values of ε and d are also set as in the work of [18]. In Table 1, **n**_{z} denotes the unit vector along the z-axis, and **n**^{xy} denotes the projection of the normal **n** onto the xy-plane.

The normal vectors of the horizontal planes are refined directly to **n**_{z}. The normal vectors of the vertical planes are first forced to be orthogonal to **n**_{z} and are later adjusted in the xy-plane. The vertical and oblique planes are further clustered based on the parallel relation, and the average normal is computed for each cluster. If the average normal is parallel, orthogonal, or z-symmetric with respect to any existing refined normal **n**’ (the first refined normal is **n**_{z} if a horizontal plane exists), it is adjusted accordingly; otherwise, the averaged normal is included in the set of refined normals. For the oblique planes, the xy-parallel relation is also checked against the existing refined normals. This process is repeated in order from the vertical to the oblique plane clusters, until all of the plane clusters are refined.
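The refinement loop can be sketched as follows. This is a simplified subset of the relations in Table 1 (only horizontality, verticality, and parallelism; z-symmetry and xy-parallelism are omitted), and the threshold value is an assumption for illustration:

```python
import math

ANG_EPS = math.radians(5)  # angle threshold epsilon (value assumed for illustration)

def _norm(v):
    n = math.sqrt(sum(a * a for a in v))
    return tuple(a / n for a in v)

def _angle(u, v):
    # orientation-free angle between two unit vectors, in [0, pi/2]
    d = abs(sum(a * b for a, b in zip(u, v)))
    return math.acos(max(-1.0, min(1.0, d)))

def refine_normal(n, refined, n_z=(0.0, 0.0, 1.0)):
    """Snap a plane normal using horizontality, verticality, and parallelism
    (a simplified subset of the relations in Table 1)."""
    n = _norm(n)
    if _angle(n, n_z) < ANG_EPS:                      # horizontal plane
        return n_z
    if abs(_angle(n, n_z) - math.pi / 2) < ANG_EPS:   # vertical plane: flatten z
        n = _norm((n[0], n[1], 0.0))
    for r in refined:                                 # parallel to a refined normal?
        if _angle(n, r) < ANG_EPS:
            return r
    refined.append(n)                                 # new cluster representative
    return n
```

Processing plane normals in order from vertical to oblique, as described above, makes each new cluster inherit the already-regularized directions.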

#### 3.2.2. Generation of the Partitioning CSG-BRep Tree

For each planar primitive P_{i}, two infinite boxes P_{iL} and P_{iR} (corresponding to the spaces on the left and right sides of P_{i}, respectively) are generated and used to split the parent cell into two child cells by using the intersection Boolean operator, as shown in Figure 3. The leaf cells generated by the recursive partitioning form the 3D cell complex **C**.
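The half-space bookkeeping behind a single split can be sketched as follows; this is a hypothetical minimal representation for illustration, not the paper's BRep implementation:

```python
# A convex cell is represented as a list of half-spaces (a, b), meaning a.x <= b.

def split_cell(cell, n, d):
    """Split a cell by the plane n.x = d: intersecting with the two opposing
    infinite half-spaces plays the role of P_iL and P_iR in Figure 3."""
    left = cell + [(n, d)]                           # n.x <= d
    right = cell + [(tuple(-c for c in n), -d)]      # n.x >= d
    return left, right

def contains(cell, p, tol=1e-9):
    """True if point p satisfies every half-space of the cell."""
    return all(sum(a * x for a, x in zip(av, p)) <= b + tol for av, b in cell)

# the building bounding box (here a unit cube) as six half-spaces
BOX = [((1, 0, 0), 1.0), ((-1, 0, 0), 0.0),
       ((0, 1, 0), 1.0), ((0, -1, 0), 0.0),
       ((0, 0, 1), 1.0), ((0, 0, -1), 0.0)]
```

Recursively splitting the bounding box with every refined planar primitive yields the leaf cells of the partitioning tree.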

#### 3.3. Reconstructing CSG-BRep Tree Based on Topological-Relation Constraints

Because the cell complex **C** is generated based on the planar primitives representing the building surfaces, it can be assumed that each C ∈ **C** is either occupied by the building (inside the building) or empty (outside the building). Therefore, the building geometry can be retrieved by selecting the occupied cells. The selected cells correspond to a set of leaf nodes in the partitioning CSG-BRep tree, and because the parent–child relations are well recorded in the tree, they can easily be converted into a reconstructing CSG-BRep tree with the union Boolean operator, as shown in Figure 5, to form the geometric model of the building. Thus, the key step that determines the quality of the geometric model is the selection of the occupied 3D cells. To ensure optimal selection, the 2D and 1D topological relations between the 3D cells (as shown in Figure 6), which can be represented as 3D facets and 3D edges, are extracted and used to constrain the selection process by introducing additional fidelity and regularity measurements.
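As a minimal sketch of this conversion (Figure 5), the parent–child links of the partitioning tree can be reused to assemble the reconstructing tree from the selected leaves; the dictionary-based representation below is a simplification assumed for illustration, not the paper's data structure:

```python
def reconstructing_tree(parent, selected_leaves):
    """Collect the nodes of the reconstructing CSG-BRep tree: every selected
    (occupied) leaf plus the ancestors that union them together.
    `parent` maps each node to its parent (None at the root)."""
    keep = set()
    for leaf in selected_leaves:
        node = leaf
        while node is not None and node not in keep:
            keep.add(node)          # walk up until an already-kept ancestor
            node = parent.get(node)
    return keep
```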

#### 3.3.1. Extraction of 2D Topology between 3D Cells

A facet connected to a pair of cells C_{i} and C_{j} (C_{i}, C_{j} ∈ **C**) is denoted as F_{ij} (as shown in Figure 7), and a facet connected to a single cell C_{k} (C_{k} ∈ **C**) is denoted as F_{k} (only facets on the surfaces of the building bounding box are connected to single cells). The facets connected to pairs of cells are denoted as **F**^{2} and those connected to single cells are denoted as **F**^{1}. To extract these two types of facets, an iterative extraction and update process based on 2D topological relations is used as follows. For a cell C_{i} ∈ **C**, the 3D bounding faces are directly obtained from its boundary representation. For each face F_{i} obtained from C_{i}, if there is a facet F_{j} that is co-planar with F_{i}, the 2D topological relation between F_{i} and F_{j} is analyzed. Based on the topological relation between F_{i} and F_{j}, a set of new facets is generated by executing 2D Boolean operations. A set of operations is defined as shown in Table 2 (where the red polygon denotes F_{i} and the blue polygon denotes F_{j}) to determine whether the new facets are added to **F**^{1} or **F**^{2}. This iteration continues until all of the bounding faces of the cells in **C** have been checked. Because each facet in **F**^{2} records the indices of two connected cells, the adjacency relationships between the 3D cells are also obtained.
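The iterative facet update can be sketched with axis-aligned rectangles standing in for the general 2D polygon Booleans of Table 2. This simplification handles only the overlapping case and drops the non-overlapped remainders that the full method would return to **F**^{1}:

```python
def rect_intersection(a, b):
    """Intersection of two axis-aligned rectangles (x0, y0, x1, y1), or None."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

def insert_face(face, cell_id, F1, F2):
    """F1 holds facets known from one cell as (rect, cell); F2 holds facets
    between two cells as (rect, cell_a, cell_b). The overlap with an existing
    co-planar facet becomes a pair facet F_ij and moves to F2."""
    for j, (rect_j, other) in enumerate(F1):
        inter = rect_intersection(face, rect_j)
        if inter is not None:
            F1.pop(j)                           # the full method would keep the
            F2.append((inter, other, cell_id))  # non-overlapped remainders in F1
            return
    F1.append((face, cell_id))
```

Because each entry in F2 carries the two cell indices, the cell adjacency graph falls out of this bookkeeping for free.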

#### 3.3.2. Extraction of 1D Topology between 3D Cells

Two sets **E**^{1} and **E**^{2} are initialized to store the edges connected to single facets and to pairs of facets, respectively. Because in no situation is an edge connected to a single facet, **E**^{1} is only a temporary set that stores the candidate edges. For a facet F_{i} ∈ **F**, the edges constituting the boundary of F_{i} are obtained from its boundary representation. For each boundary edge E_{i}, if there is an edge E_{j} ∈ **E**^{1} co-linear with E_{i}, the 1D topological relation between them is computed. Based on their topological relation, new edges are generated and added into **E**^{1} or **E**^{2}, as described in Table 3, where the red and blue line segments refer to E_{i} and E_{j}, respectively. Note that for an edge E_{ij} ∈ **E**^{2}, i and j refer to the indices of the two connected facets.
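The 1D analogue operates on co-linear edges as intervals along their shared supporting line; again, a simplified sketch that keeps only the overlap bookkeeping of Table 3:

```python
def interval_overlap(a, b):
    """Overlap of two intervals along the shared supporting line, or None."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

def insert_edge(edge, facet_id, E1, E2):
    """E1 stores candidate edges seen on a single facet; an overlap with a
    new boundary edge yields an edge E_ij shared by two facets, kept in E2."""
    for j, (seg, other) in enumerate(E1):
        ov = interval_overlap(edge, seg)
        if ov is not None:
            E1.pop(j)
            E2.append((ov, other, facet_id))
            return
    E1.append((edge, facet_id))
```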

#### 3.3.3. Optimal Selection with Topological-Relation Constraints

The optimal selection is formulated as the minimization of a global energy of the form E(**l**) = D_{Cell}(C, **l**_{C}) + γD_{Facet}(F, **l**_{F}) + ηD_{Edge}(E, **l**_{E}) (Equation (1)). Here, **l** is the binary labelling that assigns the labels **l**_{C}, **l**_{F}, and **l**_{E}, respectively, to each cell, facet, and edge (**l**_{C}, **l**_{F}, **l**_{E} ∈ {0, 1}, where 1 and 0 denote that a cell/facet/edge is selected or not selected); γ and η are two parameters that control the weights of the topological constraints; and D_{Cell}(C, **l**_{C}), D_{Facet}(F, **l**_{F}), and D_{Edge}(E, **l**_{E}) are the energy terms corresponding to the cell, facet, and edge complexes, respectively, which measure the completeness, fitness, and regularity of the modelling.

The objective is to find the binary labelling **l** that minimizes the global energy E(**l**) while **l**_{C}, **l**_{F}, and **l**_{E} meet the constraints on the topological relations between the corresponding space units. Thus, the binary labelling is converted into an integer linear programming (ILP) problem [41], where Equation (1) is the objective function, and the topological relations between the space units formulate the constraints of the ILP problem.

From the centroid of each cell C ∈ **C**, 37 rays with an angle interval of 30° are drawn to reduce potential errors in cases where the plane detection is incomplete due to missing data, or where a ray happens to intersect a building edge or corner. Based on the intersections of the multiple rays, the probability of C being occupied is formulated as in Equation (2): P(C) = N_{odd}(rays)/N(rays), where N_{odd}(rays) is the number of rays that have an odd number of intersection points with the planar primitives, and N(rays) is the total number of rays.
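A 2D stand-in for this ray-parity test: rays are cast from the query point against line segments (playing the role of the planar primitives), and the occupancy probability is the fraction of rays with an odd number of intersections. The 12-ray sampling here is an illustrative simplification of the 37-ray 3D pattern:

```python
import math

def ray_hits(origin, theta, segments):
    """Number of intersections of a 2D ray with a set of line segments."""
    ox, oy = origin
    dx, dy = math.cos(theta), math.sin(theta)
    hits = 0
    for (x1, y1), (x2, y2) in segments:
        ex, ey = x2 - x1, y2 - y1
        den = dx * ey - dy * ex
        if abs(den) < 1e-12:            # ray parallel to segment
            continue
        wx, wy = x1 - ox, y1 - oy
        t = (wx * ey - wy * ex) / den   # parameter along the ray
        s = (wx * dy - wy * dx) / den   # parameter along the segment
        if t > 1e-9 and 0.0 <= s <= 1.0:
            hits += 1
    return hits

def occupancy_probability(point, segments, n_rays=12):
    """Fraction of rays with an odd number of intersections (cf. Equation (2))."""
    odd = sum(ray_hits(point, 2 * math.pi * k / n_rays, segments) % 2
              for k in range(n_rays))
    return odd / n_rays

SQUARE = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]
```

With a closed boundary, an interior point sees every ray exit an odd number of times, while an exterior point sees only even counts, so the vote is robust to a few degenerate rays.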

In the cell energy term D_{Cell}, N(**C**) is the number of cells in **C**, and **l**_{C} ∈ {0, 1} is the binary label assigned to C, where **l**_{C} = 1 denotes that C is occupied and 0 denotes that it is empty.

If a facet F ∈ **F** is supported by points in the building point cloud, it is defined as occupied; otherwise, it is defined as empty. The points with perpendicular distances to facet F smaller than d (the same distance threshold used to determine the co-planar relation; see Table 1) are regarded as the supporting points of F.

Here, $\mathcal{P}_{F}$ is the set of supporting points of F, and Polygon($\mathcal{P}_{F}$) is the polygon extracted from $\mathcal{P}_{F}$ by the alpha-shape algorithm [42].
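The supporting-point test itself is a plane-distance filter; a minimal sketch, with the facet plane given as **n**·x = d and **n** assumed to be a unit vector:

```python
def supporting_points(points, n, d_plane, dist_thresh):
    """Points whose perpendicular distance to the facet plane n.x = d_plane
    is below the co-planarity threshold d of Table 1 (n assumed unit)."""
    return [p for p in points
            if abs(sum(a * b for a, b in zip(n, p)) - d_plane) < dist_thresh]
```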

In the facet energy term D_{Facet}, N(**F**) is the number of facets in **F**, and **l**_{F} ∈ {0, 1} is the binary label assigned to F, where **l**_{F} = 1 denotes that F is occupied and 0 denotes that it is empty.

For each edge E ∈ **E**, the edge angle between the two connected facets is computed, and a regularity value A(E) is assigned to E based on the edge angle, as in Equation (6):

In the edge energy term D_{Edge}, N(**E**) denotes the number of edges in **E**, and A(E) is the regularity value assigned to an edge E ∈ **E**.

Two thresholds, ε_{C} and ε_{F} (e.g., ε_{C} = 0.5 and ε_{F} = 0.3), are applied as given in Equations (8) and (9):

In addition, **l**_{C}, **l**_{F}, and **l**_{E} satisfy the following constraints:

The first constraint means that a facet F ∈ **F** should be selected (**l**_{F} = 1) when only one of its connected cells is selected (**l**_{C} = 1); otherwise, it cannot be selected (**l**_{F} = 0). The second constraint means that an edge E ∈ **E** should be selected (**l**_{E} = 1) when both of its connected facets are selected (**l**_{F} = 1); otherwise, it cannot be selected (**l**_{E} = 0).
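For intuition, the following brute-force search solves a toy instance of the labelling problem. The paper solves the full ILP with a dedicated solver (e.g., Gurobi); the simple energy and the XOR-derived facet labels below are illustrative stand-ins for Equation (1) and the first constraint, not the paper's exact terms:

```python
from itertools import product

def select_cells(cell_prob, facets, gamma=1.0):
    """Exhaustive stand-in for the ILP: enumerate binary cell labels, derive
    facet labels from the first constraint (a facet is selected iff exactly
    one of its two cells is), and minimize a toy completeness + fitness energy.
    `facets` is a list of (cell_i, cell_j, support) with support in [0, 1]."""
    best_energy, best_labels = None, None
    for labels in product((0, 1), repeat=len(cell_prob)):
        # cell term: penalize disagreement with the occupancy probability
        e = sum(p if l == 0 else 1.0 - p for p, l in zip(cell_prob, labels))
        # facet term: selecting a poorly supported facet is expensive
        for i, j, support in facets:
            if labels[i] ^ labels[j]:   # facet lies on the model surface
                e += gamma * (1.0 - support)
        if best_energy is None or e < best_energy:
            best_energy, best_labels = e, labels
    return best_labels
```

Deriving the facet and edge labels from the cell labels inside the objective is exactly what the linear ILP constraints enforce, which is why the labelling can be handed to a standard solver.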

#### 3.4. Generation of CityGML Building Models

#### 3.4.1. Retrieving Topologies between Building Parts

[…], and the first type has higher priority than the second type. We also considered planar primitives that have significantly concave shapes as extended surfaces, but only when the building point cloud is not missing large amounts of data. Figure 11 shows the results of the building-component analyses of two example buildings based on the CSG-BRep trees.

#### 3.4.2. Building Surface Type Classification

Here, z_{min} and z_{max} denote the minimum and maximum elevation values of the buildings, respectively.

First, a normal vector **n**’ is computed based on the vertexes of a surface. Second, a ray **r** is originated from a point p on the surface and extends along **n**’. If **r** has an even number of intersection points with the other building surfaces, the final normal vector is **n** = **n**’; otherwise, **n** = −**n**’. Note that, if **n** = −**n**’, the vertexes of the current surface are stored in the inverted order.
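The classification rules of the decision table above can be sketched directly. The elevation cut-offs follow the table where stated; the angle tolerance and the distance d are assumed values:

```python
import math

ANG_EPS = math.radians(5)  # angle tolerance epsilon (value assumed)

def classify_surface(n, z_mean, z_min, z_max, d=0.2):
    """CityGML surface type from the oriented surface normal n and the mean
    surface elevation, following the decision table of Section 3.4.2."""
    nx, ny, nz = n
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    theta = math.acos(min(1.0, abs(nz) / norm))      # angle to the z-axis
    if theta >= math.pi / 2 - ANG_EPS:               # near-vertical surface
        return "WallSurface"
    if theta > ANG_EPS:                              # oblique surface
        return "RoofSurface" if nz > 0 else "WallSurface"
    if nz > 0:                                       # horizontal, facing up
        low = z_mean < z_min + (z_max - z_min) / 3 and z_mean < 10.0
        return "OuterFloorSurface" if low else "RoofSurface"
    # horizontal, facing down
    return "GroundSurface" if z_mean < z_min + d else "OuterCeilingSurface"
```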

## 4. Experimental Analysis and Discussion

#### 4.1. Dataset Description

#### 4.2. Qualitative Evaluation of the Reconstruction Results

#### 4.3. Quantitative Evaluation and Comparisons with Previous Methods

Here, ||p − $\mathcal{M}_{B}$|| denotes the Euclidean distance between a point p in the input point cloud and the generated geometric model $\mathcal{M}_{B}$ of a building B, and N($\mathcal{P}_{B}$) denotes the number of points in the building point cloud $\mathcal{P}_{B}$.
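A simplified version of this fidelity measure can be sketched with the model approximated by its supporting planes rather than the full BRep surfaces; each plane is given as **n**·x = d:

```python
import math

def point_to_plane(p, n, d):
    """Perpendicular distance from point p to the plane n.x = d."""
    nn = math.sqrt(sum(c * c for c in n))
    return abs(sum(a * b for a, b in zip(n, p)) - d) / nn

def mean_model_distance(points, planes):
    """Average nearest-plane distance: a simplified stand-in for the
    point-to-model distance ||p - M_B|| against the full BRep model."""
    return sum(min(point_to_plane(p, n, d) for n, d in planes)
               for p in points) / len(points)
```

The true measure clips each distance to the bounded model surfaces; the unbounded-plane version shown here underestimates the distance near facade boundaries.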

#### 4.4. Evaluation of Building Surface Detection and Classification

#### 4.5. Discussion

## 5. Conclusions

## Author Contributions

## Funding

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Lafarge, F.; Mallet, C. Creating large-scale city models from 3D point clouds: A robust approach with hybrid representation. Int. J. Comput. Vis. **2012**, 99, 69–85.
- Li, M.; Rottensteiner, F.; Heipke, C. Modelling of buildings from aerial LiDAR point clouds using TINs and label maps. ISPRS J. Photogramm. Remote Sens. **2019**, 154, 127–138.
- Zhang, L.; Li, Z.; Li, A.; Liu, F. Large-scale urban point cloud labelling and reconstruction. ISPRS J. Photogramm. Remote Sens. **2018**, 138, 86–100.
- Chen, D.; Zhang, L.; Mathiopoulos, P.T.; Huang, X. A methodology for automated segmentation and reconstruction of urban 3-D buildings from ALS point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2014**, 7, 4199–4217.
- Sampath, A.; Shan, J. Segmentation and reconstruction of polyhedral building roofs from aerial lidar point clouds. IEEE Trans. Geosci. Remote Sens. **2010**, 48, 1554–1567.
- Zhou, Q.Y.; Neumann, U. 2.5D dual contouring: A robust approach to creating building models from aerial lidar point clouds. In Computer Vision—ECCV 2010; Daniilidis, K., Maragos, P., Paragios, N., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6313.
- Zhou, Q.Y.; Neumann, U. 2.5D building modelling with topology control. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 20–25 June 2011; pp. 2489–2496.
- Fritsch, D.; Becher, S.; Rothermel, M. Modeling façade structures using point clouds from dense image matching. In Proceedings of the International Conference on Advances in Civil, Structural and Mechanical Engineering—CSM 2013, Hong Kong, China, 3–4 August 2013; pp. 57–64.
- Zhu, Q.; Li, Y.; Hu, H.; Wu, B. Robust point cloud classification based on multi-level semantic relationships for urban scenes. ISPRS J. Photogramm. Remote Sens. **2017**, 129, 86–102.
- Ye, L.; Wu, B. Integrated image matching and segmentation for 3D surface reconstruction in urban areas. Photogramm. Eng. Remote Sens. **2018**, 84, 35–48.
- Tutzauer, P.; Becker, S.; Fritsch, D.; Niese, T.; Deussen, O. A study of the human comprehension of building categories based on different 3D building representations. Photogramm. Fernerkund. Geoinf. **2016**, 15, 319–333.
- Kada, M.; McKinley, L. 3D building reconstruction from LiDAR based on a cell decomposition approach. Int. Arch. Photogram. Remote Sens. Spat. Inf. Sci. **2009**, 38, 47–52.
- Verma, V.; Kumar, R.; Hsu, S. 3D building detection and modelling from aerial LiDAR data. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 2006; pp. 2213–2220.
- Poullis, C. A framework for automatic modelling from point cloud data. IEEE Trans. Pattern Anal. Mach. Intell. **2013**, 35, 2563–2575.
- Xie, L.; Zhu, Q.; Hu, H.; Wu, B.; Li, Y.; Zhang, Y.; Zhong, R. Hierarchical regularization of building boundaries in noisy aerial laser scanning and photogrammetric point clouds. Remote Sens. **2018**, 10, 1996–2018.
- Xiong, B.; Oude Elberink, S.; Vosselman, G. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds. ISPRS J. Photogramm. Remote Sens. **2014**, 93, 227–242.
- Nan, L.; Wonka, P. PolyFit: Polygonal surface reconstruction from point clouds. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2353–2361.
- Verdie, Y.; Lafarge, F.; Alliez, P. LOD generation for urban scenes. ACM Trans. Graph. **2015**, 34, 15–30.
- Gröger, G.; Plümer, L. CityGML—Interoperable semantic 3D city models. ISPRS J. Photogramm. Remote Sens. **2012**, 71, 12–33.
- Kolbe, T.H.; Gröger, G.; Plümer, L. CityGML—3D city models and their potential for emergency response. Geospat. Inf. Technol. Emerg. Response **2008**, 5, 257–274.
- Biljecki, F.; Ledoux, H.; Stoter, J. Generation of multi-LOD 3D city models in CityGML with the procedural modelling engine Random3DCity. ISPRS Ann. Photogram. Remote Sens. Spat. Inf. Sci. **2016**, 4, 51–59.
- Tang, L.; Ying, S.; Li, L.; Biljecki, F.; Zhu, H.; Zhu, Y.; Yang, F.; Su, F. An application-driven LOD modelling paradigm for 3D building models. ISPRS J. Photogramm. Remote Sens. **2020**, 161, 194–207.
- Gomes, A.J.; Teixeira, J.G. Form feature modelling in a hybrid CSG/BRep scheme. Comput. Graph. **1991**, 15, 217–229.
- Wang, R.; Peethambaran, J.; Chen, D. LiDAR point clouds to 3-D urban models: A review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2018**, 11, 606–627.
- Musialski, P.; Wonka, P.; Aliaga, D.G.; Wimmer, M.; Van Gool, L.; Purgathofer, W. A survey of urban reconstruction. Comput. Graph. Forum **2013**, 32, 146–177.
- Haala, N.; Kada, M. An update on automatic 3D building reconstruction. ISPRS J. Photogramm. Remote Sens. **2010**, 65, 570–580.
- Huang, H.; Brenner, C.; Sester, M. A generative statistical approach to automatic 3D building roof reconstruction from laser scanning data. ISPRS J. Photogramm. Remote Sens. **2013**, 79, 29–43.
- Poullis, C.; You, S.; Neumann, U. Rapid creation of large-scale photorealistic virtual environments. In Proceedings of the 2008 IEEE Virtual Reality Conference, Reno, NV, USA, 2008; pp. 153–160.
- Dorninger, P.; Pfeifer, N. A comprehensive automated 3D approach for building extraction, reconstruction and regularization from airborne laser scanning point clouds. Sensors **2008**, 8, 7323–7343.
- Maas, H.-G.; Vosselman, G. Two algorithms for extracting building models from raw laser altimetry data. ISPRS J. Photogramm. Remote Sens. **1999**, 54, 153–163.
- Vosselman, G.; Dijkman, S. 3D building model reconstruction from point clouds and ground plans. Int. Arch. Photogram. Remote Sens. Spat. Inf. Sci. **2001**, 34, 37–44.
- Sohn, G.; Huang, X.; Tao, V. Using a binary space partitioning tree for reconstructing polyhedral building models from airborne lidar data. Photogramm. Eng. Remote Sens. **2008**, 74, 1425–1438.
- Jung, J.; Jwa, Y.; Sohn, G. Implicit regularization for reconstructing 3D building rooftop models using airborne LiDAR data. Sensors **2017**, 17, 621–648.
- Poullis, C.; You, S. Automatic reconstruction of cities from remote sensor data. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 153–160.
- Chen, D.; Wang, R.; Peethambaran, J. Topologically aware building rooftop reconstruction from airborne laser scanning point clouds. IEEE Trans. Geosci. Remote Sens. **2017**, 55, 7032–7052.
- Oude Elberink, S. Target graph matching for building reconstruction. Int. Arch. Photogram. Remote Sens. Spat. Inf. Sci. **2009**, 38, 49–54.
- Oude Elberink, S.; Vosselman, G. Building reconstruction by target based graph matching on incomplete laser data: Analysis and limitations. Sensors **2009**, 9, 6101–6118.
- Li, M.; Nan, L.; Liu, S. Fitting boxes to Manhattan scenes using linear integer programming. Int. J. Digit. Earth **2016**, 9, 806–817.
- Li, M.; Wonka, P.; Nan, L. Manhattan-world urban reconstruction from point clouds. In Computer Vision—ECCV 2016, Proceedings; Sebe, N., Leibe, B., Welling, M., Matas, J., Eds.; Springer: Berlin, Germany, 2016; pp. 54–69.
- Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point cloud shape detection. Comput. Graph. Forum **2007**, 26, 214–226.
- Schrijver, A. Theory of Linear and Integer Programming; John Wiley & Sons: New York, NY, USA, 1998; pp. 229–237.
- Prandi, F.; De Amicis, R.; Piffer, S.; Soave, M.; Cadzow, S.; Boxi, E.G.; D’Hondt, E. Using CityGML to deploy smart-city services for urban ecosystems. Int. Arch. Photogram. Remote Sens. Spat. Inf. Sci. **2013**, 4, W1.
- Baig, S.U.; Abdul-Rahman, A. Generalization of buildings within the framework of CityGML. Geo-Spat. Inf. Sci. **2013**, 16, 247–255.
- Liang, J.; Edelsbrunner, H.; Fu, P.; Sudhakar, P.V.; Subramaniam, S. Analytical shape computation of macromolecules: I. Molecular area and volume through alpha shape. Proteins **1998**, 33, 1–17.
- Gurobi Optimization. Available online: https://www.gurobi.com/ (accessed on 31 December 2020).
- Bentley, ContextCapture. Available online: https://www.bentley.com/en/products/brands/contextcapture (accessed on 31 December 2020).
- Li, Y.; Wu, B.; Ge, X. Structural segmentation and classification of mobile laser scanning point clouds with large variations in point density. ISPRS J. Photogramm. Remote Sens. **2019**, 153, 151–165.
- Gruen, A.; Schrotter, G.; Schubiger, S.; Qin, R.; Xiong, B.; Xiao, C.; Li, J.; Ling, X.; Yao, S. An Operable System for LoD3 Model Generation Using Multi-Source Data and User-Friendly Interactive Editing; ETH Centre, Future Cities Laboratory: Singapore, 2020.

**Figure 1.**Flowchart of the proposed approach. Abbreviations: RANSAC, Random Sample Consensus; CSG-BRep, constructive solid geometry and boundary representation; CityGML, City Geography Markup Language.

**Figure 2.**Planar primitives extracted from the point cloud and refinement of normal vectors of the planes.

**Figure 3.** Illustration of a partition in the partitioning CSG-BRep tree. (**a**) A parent cell A and a planar primitive P_{i}, with the two infinite boxes P_{iL} and P_{iR} generated from it; (**b**) two child cells split from A; (**c**) the partitioning process in the CSG-BRep tree, where ∩ denotes the intersection Boolean operator.

**Figure 5.** Conversion from the partitioning CSG-BRep tree to the reconstructing CSG-BRep tree. (**a**) Partitioning CSG-BRep tree; (**b**) reconstructing CSG-BRep tree built from the selected leaf nodes. The 2D illustration of the space decomposition in (**a**) is simplified without expanding the bounding box or the planar primitives. The links highlighted in red in (**a**) denote the parent–child relations inherited by the reconstructing CSG-BRep tree, and ∪ denotes the union Boolean operator.

**Figure 6.** 2D and 1D topological relations between the 3D cells. (**a**) 2D topological relations between 3D cells are derived from the interfaces between cells and represented as 3D facets; (**b**) 1D topological relations between 3D cells are derived from the intersecting lines and represented as 3D edges.

**Figure 7.** Illustration of a 3D facet F_{ij} extracted from the interface between two cells C_{i} and C_{j}.

**Figure 8.** Examples of 3D facet and edge complexes extracted from the cell complex. (**a**) Planar primitives extracted from building point clouds; (**b**) cell complex generated by the partitioning CSG-BRep tree; (**c**) facet complex extracted from 2D topological relations (only facets connected with pairs of cells are shown); (**d**) edge complex extracted from 1D topological relations.

**Figure 9.** Geometric models of a building formed by occupied cells selected with and without edge regularity constraints. (**a**) Original building point cloud; (**b**) planar primitives in colored points; (**c**) optimal selection result without the edge regularity constraint (γ = 1, η = 0); (**d**) optimal selection result with the edge regularity constraint (γ = 1, η = 5).

**Figure 10.** Illustration of the topology analysis between building parts. (**a**) A 2D diagram of a building with two parts and the inner surface between them; (**b**) the space partitioning based on planar primitives P_{1}, P_{2}, …, P_{6}, and the partitioning CSG-BRep tree shown in (**d**); (**c**) the building model formed by the reconstructing CSG-BRep tree shown in (**e**). The planar primitive P_{3} is the extended surface of the inner surface, and the branches created by P_{3} in the partitioning tree and the reconstructing tree correspond to the different parts of the building.

**Figure 11.** Building-component analyses of two example buildings based on the CSG-BRep trees. (**a**) Planar segments extracted from building point clouds using RANSAC; (**b**) generated building models with multiple parts (coded by different colors) determined using the CSG-BRep trees.

**Figure 12.**Building surface types defined by CityGML [19].

**Figure 15.** Building point clouds in the central area of Hong Kong. (**a**) 105 buildings with relatively complete representations; (**b**) building point clouds at the boundaries of this area representing incomplete building shapes.

**Figure 16.**Reconstruction results of the 105 buildings in CityGML format. Red, grey, and yellow indicate roof, wall, and outer-floor surfaces, respectively.

**Figure 17.** Reconstruction results of five challenging types of buildings shown in CityGML format. Red, grey, yellow, and pink indicate roof, wall, outer-floor, and outer-ceiling surfaces, respectively. (**a**) Buildings with complex structures; (**b**) buildings with curved surfaces; (**c**) buildings with typical European architectural styles; (**d**) buildings with missing data; (**e**) buildings with true 3D structures.

**Figure 20.** Quantitative evaluation of building surface detection. (**a**) Comparison with the ground-truth in 2D, where TP means true positive detection, FN means false negative detection, and FP means false positive detection; (**b**) comparison with the ground-truth in elevation, where red indicates that the generated model is higher than the ground-truth and blue indicates that it is lower.

**Figure 21.** Detailed reconstruction results of buildings with false surface detections. From top to bottom are the false detections pointed to by arrows a–d in Figure 20. (**a**) 2D comparisons between the generated models and the manually digitized ground-truth; (**b**) the manually digitized ground-truth overlaid on the ortho image; (**c**) the building point clouds where the false detections appeared; (**d**) the generated building models.

**Figure 22.** Misclassifications between WallSurface and RoofSurface. (**a**) RoofSurface misclassified as WallSurface; (**b**) WallSurface misclassified as RoofSurface.

**Table 1.** Mathematical descriptions of the geometric relations between planar primitives.

| Relation Type | Description |
|---|---|
| Horizontality | P_{i} is horizontal if θ(**n**_{i}, **n**_{z}) < ε. |
| Verticality | P_{i} is vertical if θ(**n**_{i}, **n**_{z}) > π/2 − ε. |
| Parallelism | P_{i} and P_{j} are parallel if θ(**n**_{i}, **n**_{j}) < ε. |
| Orthogonality | P_{i} and P_{j} are orthogonal if θ(**n**_{i}, **n**_{j}) > π/2 − ε. |
| Z-symmetry | P_{i} and P_{j} are z-symmetric if \|θ(**n**_{i}, **n**_{z}) − θ(**n**_{j}, **n**_{z})\| < ε. |
| XY-parallelism | P_{i} and P_{j} are xy-parallel if θ(**n**_{i}^{xy}, **n**_{j}^{xy}) < ε. |
| Co-planarity | Two parallel plane primitives P_{i} and P_{j} are co-planar if \|d_{i} − d_{j}\| < d. |

**Table 2.** Operations for updating the facet sets **F**^{1} and **F**^{2} based on the 2D topological relations.

| Topological Relation | Operations |
|---|---|
| Separated | Add F_{i} to **F**^{1}. |
| Connecting | Add F_{i} to **F**^{1}. |
| Full-overlapping | Remove F_{j} from **F**^{1}; add F_{ij} to **F**^{2}. |
| Partially-overlapping 1 | Remove F_{j} from **F**^{1}; add F_{ij} to **F**^{2}; add F_{j}’ to **F**^{1}. |
| Partially-overlapping 2 | Remove F_{j} from **F**^{1}; add F_{ij} to **F**^{2}; add F_{i}’ to **F**^{1}. |
| Partially-overlapping 3 | Remove F_{j} from **F**^{1}; add F_{ij} to **F**^{2}; add F_{j}’, F_{j}’’ to **F**^{1}. |
| Partially-overlapping 4 | Remove F_{j} from **F**^{1}; add F_{ij} to **F**^{2}; add F_{i}’, F_{i}’’ to **F**^{1}. |
| Contained | Remove F_{j} from **F**^{1}; add F_{ij} to **F**^{2}; add F_{j}’, F_{j}’’ to **F**^{1}. |
| Containing | Remove F_{j} from **F**^{1}; add F_{ij} to **F**^{2}; add F_{i}’, F_{i}’’ to **F**^{1}. |
| Intersecting | Remove F_{j} from **F**^{1}; add F_{ij} to **F**^{2}; add F_{i}’ to **F**^{1}; and F_{j}’ to **F**^{1}. |

**Table 3.** Operations for updating the edge sets **E**^{1} and **E**^{2} based on the 1D topological relations.

| Topological Relation | Operations |
|---|---|
| Separated | Add E_{i} to **E**^{1}. |
| Connecting | Add E_{i} to **E**^{1}. |
| Fully-overlapping | Remove E_{j} from **E**^{1}; add E_{ij} to **E**^{2}. |
| Partially-overlapping 1 | Remove E_{j} from **E**^{1}; add E_{ij} to **E**^{2}; add E_{j}’ to **E**^{1}. |
| Partially-overlapping 2 | Remove E_{j} from **E**^{1}; add E_{ij} to **E**^{2}; add E_{i}’ to **E**^{1}. |
| Contained | Remove E_{j} from **E**^{1}; add E_{ij} to **E**^{2}; add E_{j}’, E_{j}’’ to **E**^{1}. |
| Containing | Remove E_{j} from **E**^{1}; add E_{ij} to **E**^{2}; add E_{i}’, E_{i}’’ to **E**^{1}. |
| Intersecting | Remove E_{j} from **E**^{1}; add E_{ij} to **E**^{2}; add E_{i}’ to **E**^{1}; add E_{j}’ to **E**^{1}. |

| Normal Direction | Orientation | Elevation Information | Surface Type |
|---|---|---|---|
| \|θ(**n**’, **n**_{z})\| ≥ π/2 − ε | – | – | WallSurface |
| ε < \|θ(**n**’, **n**_{z})\| < π/2 − ε | **n**’·**n**_{z} > 0 | – | RoofSurface |
| | **n**’·**n**_{z} < 0 | – | WallSurface |
| \|θ(**n**’, **n**_{z})\| ≤ ε | **n**’·**n**_{z} > 0 | $\overline{z}$ < z_{min} + (z_{max} − z_{min})/3 and $\overline{z}$ < 10 m | OuterFloorSurface |
| | | otherwise | RoofSurface |
| | **n**’·**n**_{z} < 0 | $\overline{z}$ < z_{min} + d | GroundSurface |
| | | otherwise | OuterCeilingSurface |

| Predicted \ GT | GroundSurface | WallSurface | RoofSurface | OuterFloorSurface | OuterCeilingSurface | Precision |
|---|---|---|---|---|---|---|
| GroundSurface | 105 | 0 | 0 | 0 | 0 | 100% |
| WallSurface | 0 | 4170 | 242 | 0 | 0 | 95% |
| RoofSurface | 0 | 104 | 2735 | 0 | 0 | 96% |
| OuterFloorSurface | 0 | 0 | 0 | 24 | 0 | 100% |
| OuterCeilingSurface | 0 | 0 | 0 | 0 | 241 | 100% |
| Recall | 100% | 98% | 92% | 100% | 100% | |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Li, Y.; Wu, B.
Relation-Constrained 3D Reconstruction of Buildings in Metropolitan Areas from Photogrammetric Point Clouds. *Remote Sens.* **2021**, *13*, 129.
https://doi.org/10.3390/rs13010129

**AMA Style**

Li Y, Wu B.
Relation-Constrained 3D Reconstruction of Buildings in Metropolitan Areas from Photogrammetric Point Clouds. *Remote Sensing*. 2021; 13(1):129.
https://doi.org/10.3390/rs13010129

**Chicago/Turabian Style**

Li, Yuan, and Bo Wu.
2021. "Relation-Constrained 3D Reconstruction of Buildings in Metropolitan Areas from Photogrammetric Point Clouds" *Remote Sensing* 13, no. 1: 129.
https://doi.org/10.3390/rs13010129