# Building Point Detection from Vehicle-Borne LiDAR Data Based on Voxel Group and Horizontal Hollow Analysis


## Abstract


## 1. Introduction

## 2. Methods

### 2.1. Voxel Group-Based Shape Recognition

#### 2.1.1. Voxelization

#### 2.1.2. Generating Voxel Groups

**Building the 3D voxel grid system**. An appropriate voxel size S is set to build a regular 3D voxel grid system. Each LiDAR point is assigned to a voxel according to its 3D coordinates. The minimum value of the LiDAR point coordinates $({x}_{\mathrm{min}},{y}_{\mathrm{min}},{z}_{\mathrm{min}})$ is the origin of the 3D voxel grid system. For each LiDAR point, the row, column, and layer number $(i,j,k)$ of its corresponding voxel are recorded to construct a two-way index.
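As an illustration, the voxel indices and the two-way index can be built in a few lines. The function and variable names below are ours, not the paper's:

```python
import numpy as np

def voxelize(points, s):
    """Assign each LiDAR point to a voxel of edge length s.

    Returns the (i, j, k) index of every point plus a voxel -> point-id map;
    together with the per-point indices this forms a two-way index.
    Illustrative sketch, not the paper's implementation.
    """
    origin = points.min(axis=0)                        # (x_min, y_min, z_min)
    ijk = np.floor((points - origin) / s).astype(int)  # row, column, layer per point
    voxel_to_points = {}
    for pid, key in enumerate(map(tuple, ijk)):
        voxel_to_points.setdefault(key, []).append(pid)
    return ijk, voxel_to_points

pts = np.array([[0.0, 0.0, 0.0], [0.7, 0.0, 0.0], [0.0, 0.0, 1.2]])
ijk, index = voxelize(pts, 0.5)
# with S = 0.5 m, point 1 falls in the next column and point 2 two layers up
```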

**Dividing voxels in each column (in the vertical direction)**. The voxels sharing the same row and column $(i,j)$ form a vertical column, but they may belong to different targets, such as pedestrians or vehicles below the canopy of street trees. Therefore, these voxels must be separated to ensure that each voxel group contains only one object’s points, as shown in Figure 2c. Accordingly, the elevation difference between voxel ${V}_{(i,j,k)}$ and the voxel above it, ${V}_{(i,j,k+1)}$, is calculated:
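A minimal sketch of this vertical division, under the assumption that the gap is measured between occupied voxel layers within one column (the paper's exact elevation criterion may differ); `t_s` mirrors the threshold ${T}_{S}$ = 0.2 m:

```python
def split_column(layers, voxel_size=0.5, t_s=0.2):
    """Split a sorted list of occupied layer numbers k within one (i, j)
    column wherever the elevation gap between consecutive occupied voxels
    exceeds t_s. Illustrative sketch, not the paper's exact criterion.
    """
    groups, current = [], [layers[0]]
    for prev, k in zip(layers, layers[1:]):
        # empty space between the top of voxel `prev` and the bottom of voxel `k`
        gap = (k - prev - 1) * voxel_size
        if gap > t_s:
            groups.append(current)
            current = []
        current.append(k)
    groups.append(current)
    return groups

split_column([0, 1, 2, 7, 8])  # -> [[0, 1, 2], [7, 8]]
```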

**Merging voxel columns in the horizontal direction**. The full λ-schedule algorithm is used to merge the voxel columns in the horizontal direction. This algorithm [55] was first used to segment SAR images; its segmentation principle is based on the Mumford–Shah energy equation, which weighs the difference in object attributes against the complexity of the object boundary [11]. The merging cost ${t}_{i,j}$ of each pair of adjacent voxel columns $({S}_{i},{S}_{j})$ is calculated as follows:

1. Perform a simple region growing over all voxel columns in the horizontal direction, based on connectivity, to obtain several rough clusters $\{{C}_{1},{C}_{2},\dots ,{C}_{n},\dots \}$.
2. For each cluster ${C}_{n}$, compute the merging cost of every pair of adjacent voxel columns using Equation (6) and sort the pairs into a list.
3. Merge the pair $({S}_{i},{S}_{j})$ with the smallest cost ${t}_{i,j}$ into a new voxel column ${S}_{ij}$ and update the merging cost values.
4. Repeat steps 2 and 3 until ${t}_{i,j}$ exceeds the threshold ${T}_{End}$ or all voxel columns within ${C}_{n}$ have been merged into one group.
5. Repeat steps 2–4 until all clusters are processed.
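The iterative merging above can be sketched as a greedy loop. The cost function below is only a Mumford–Shah-style surrogate for Equation (6), which is not reproduced in this excerpt, and each column's attributes are simplified to a (mean attribute, point count) pair:

```python
def lambda_schedule_merge(cols, adjacency, t_end):
    """Greedy full lambda-schedule-style merging sketch.

    cols: column id -> (mean_attribute, point_count)
    adjacency: iterable of adjacent id pairs
    Repeatedly merges the cheapest adjacent pair until the cost exceeds t_end.
    The cost is an illustrative surrogate, not the paper's Equation (6).
    """
    adjacency = {frozenset(p) for p in adjacency}

    def cost(a, b):
        (ma, na), (mb, nb) = cols[a], cols[b]
        return (na * nb / (na + nb)) * (ma - mb) ** 2  # Mumford-Shah-style term

    while adjacency:
        pair = min(adjacency, key=lambda p: cost(*p))
        a, b = tuple(pair)
        if cost(a, b) > t_end:
            break
        (ma, na), (mb, nb) = cols[a], cols[b]
        cols[a] = ((ma * na + mb * nb) / (na + nb), na + nb)  # merged column
        del cols[b]
        # rewire b's neighbours to a and drop the merged pair / self-loops
        adjacency = {frozenset(a if x == b else x for x in p)
                     for p in adjacency if p != pair}
        adjacency = {p for p in adjacency if len(p) == 2}
    return cols
```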

The computational complexity of the full λ-schedule algorithm is $O(mn\,{\mathrm{log}}_{2}(mn))$ for a 2D image of m × n pixels [56]. For a 3D voxel grid system the complexity is higher, so the original 3D voxel grid system must be divided into pieces to reduce the number of voxel columns involved in each pass. Finally, all voxel columns are combined into a higher-level structure, the voxel group. The LiDAR points within each voxel group belong to the same real-world object and share the same geometric properties and shape information.

#### 2.1.3. Shape Recognition of Each Voxel Group

**Finding the center voxel**. The point density of each voxel within a voxel group is calculated to find the densest voxel ${V}_{md}$. The center coordinate of the points in this voxel is then calculated:

**Determining the variation range of the neighborhood size**. Centering on $\left(\overline{X},\overline{Y},\overline{Z}\right)$, the minimum neighborhood size ${R}_{\mathrm{min}}$ is determined as the radius that includes the minimum number ${N}_{p}$ of points required for PCA. Given an increment ${R}_{i}$, the neighborhood size is increased until the radius reaches the boundary of the voxel group. The variation range of the neighborhood size $\left[{R}_{\mathrm{min}},{R}_{\mathrm{max}}\right]$ is thus obtained.

**Calculating the dimensionality features and entropy feature**. The dimensionality features ${a}_{1d}$, ${a}_{2d}$, ${a}_{3d}$ and the entropy feature ${E}_{f}({V}_{p}^{R})$ within ${V}_{p}^{R}$ ($R\in \left[{R}_{\mathrm{min}},{R}_{\mathrm{max}}\right]$) are calculated by Equation (9). In this paper, P denotes the center coordinate of the points in the selected voxel. The optimal neighborhood size can then be obtained:
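Since Equation (9) is not reproduced in this excerpt, the sketch below assumes the standard PCA formulation of Demantké et al. [35]: with eigenvalue square roots $\sigma_1 \ge \sigma_2 \ge \sigma_3$, ${a}_{1d}=(\sigma_1-\sigma_2)/\sigma_1$, ${a}_{2d}=(\sigma_2-\sigma_3)/\sigma_1$, ${a}_{3d}=\sigma_3/\sigma_1$, and the optimal radius minimizes the entropy ${E}_{f}=-\sum a_{id}\,\mathrm{ln}\,a_{id}$:

```python
import numpy as np

def dimensionality_entropy(neighborhood):
    """Dimensionality features a_1d, a_2d, a_3d and entropy E_f for one
    neighborhood of points (n x 3 array), following Demantke et al. [35]
    (assumed here; Equation (9) is not reproduced in this excerpt).
    """
    cov = np.cov(neighborhood.T)
    # sigma_1 >= sigma_2 >= sigma_3: square roots of the sorted eigenvalues
    sig = np.sqrt(np.maximum(np.linalg.eigvalsh(cov)[::-1], 0.0))
    a1d = (sig[0] - sig[1]) / sig[0]   # linear
    a2d = (sig[1] - sig[2]) / sig[0]   # planar
    a3d = sig[2] / sig[0]              # spherical
    feats = np.array([a1d, a2d, a3d])
    logs = np.log(feats, where=feats > 0, out=np.zeros(3))  # ln(0) terms -> 0
    return feats, -np.sum(feats * logs)

def optimal_radius(points, center, radii):
    """Choose the R in [R_min, R_max] whose neighborhood minimizes E_f.
    Assumes every radius already captures at least N_p points."""
    def entropy_at(r):
        mask = np.linalg.norm(points - center, axis=1) <= r
        return dimensionality_entropy(points[mask])[1]
    return min(radii, key=entropy_at)
```

For a perfectly linear neighborhood, ${a}_{1d}$ tends to 1 and the entropy to 0, which is why the minimum-entropy radius picks the scale at which the shape is most unambiguous.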

### 2.2. Category-Oriented Merging

#### 2.2.1. Removing Ground Points

**Extracting candidate voxel groups that contain ground points**. The elevation difference between the lowest and highest points is calculated for each planar voxel group whose surface normal vector makes an angle greater than 85° with the horizontal plane:

**Combining the connected regions**. If the elevation difference between two adjacent candidate voxel groups containing ground points is less than 0.3 m, the two voxel groups are merged. This process is repeated, and the area of the final combined voxel group is calculated:
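This combining step can be sketched as a connected-components pass over the adjacency graph of candidate groups; the area computation and subsequent filtering are omitted here:

```python
from collections import defaultdict, deque

def combine_ground_regions(groups, adjacency, dz_max=0.3):
    """Merge adjacent candidate ground voxel groups whose average elevations
    differ by less than dz_max (0.3 m in the text), via BFS over the
    adjacency graph. Sketch; `groups` maps group id -> mean elevation.
    """
    nbrs = defaultdict(set)
    for a, b in adjacency:
        if abs(groups[a] - groups[b]) < dz_max:
            nbrs[a].add(b)
            nbrs[b].add(a)
    seen, regions = set(), []
    for start in groups:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            g = queue.popleft()
            comp.append(g)
            for n in nbrs[g] - seen:
                seen.add(n)
                queue.append(n)
        regions.append(sorted(comp))
    return regions
```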


**Removing ground points**. An area threshold of 10 m^{2} is set to filter out combined voxel groups that are too small. The average elevation of each remaining candidate voxel group is then recorded, and outliers, which usually indicate a suspended flat roof, are rejected. All points within the remaining candidate voxel groups are labeled as ground points and removed before the next step.

#### 2.2.2. Category-Oriented Merging

### 2.3. Horizontal Hollow Ratio-Based Building Point Identification

**Extracting the outline**. The proposed method projects each segment’s voxels onto the horizontal plane to form a two-dimensional grid and employs the simple and efficient method proposed by Yang [1] to extract the contour grids: if the eight neighbor grids of a background grid (one containing no points) are not all background grids, the background grid is labeled as a contour grid. The aim of this step is to reduce the amount of calculation in the next step.
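The quoted rule amounts to a one-cell binary dilation of the occupancy grid restricted to background cells; a sketch:

```python
import numpy as np

def contour_grids(occupied):
    """Label contour grids on a 2D occupancy image, per the rule quoted from
    Yang [1]: a background grid with at least one occupied grid among its
    eight neighbors is a contour grid. Sketch using a padded boolean array.
    """
    occ = np.pad(occupied.astype(bool), 1)
    contour = np.zeros_like(occ)
    # mark every cell that has an occupied cell in its 8-neighborhood
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == dj == 0:
                continue
            contour |= np.roll(np.roll(occ, di, axis=0), dj, axis=1)
    contour &= ~occ           # only background grids qualify
    return contour[1:-1, 1:-1]
```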

**Generating the convex hull**. Given the contour grids of a segment, the convex hull of the segment is calculated by Graham’s scan. The convex hull area ${S}_{C}$ of the segment can then be calculated.
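A compact implementation, using the monotone-chain variant of Graham's scan together with the shoelace formula for ${S}_{C}$:

```python
def convex_hull(points):
    """Convex hull via the monotone-chain variant of Graham's scan;
    returns hull vertices in counter-clockwise order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):  # z-component of (a - o) x (b - o)
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(hull):
    """Convex-hull area S_C by the shoelace formula."""
    n = len(hull)
    return abs(sum(hull[i][0]*hull[(i+1) % n][1] - hull[(i+1) % n][0]*hull[i][1]
                   for i in range(n))) / 2
```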

**Calculating horizontal hollow ratio**. The proposed method defines the horizontal hollow ratio of each voxel cluster to indicate the above feature:

**Calculating the threshold**. Otsu’s method is an automatic, unsupervised threshold selection technique. It selects the threshold that best separates the two classes produced by the segmentation; the interclass separability criterion maximizes the statistical difference between the class characteristics while minimizing the differences within each class [59]. As Figure 7 indicates, the hollow ratio of buildings is far smaller than that of other objects; therefore, using Otsu’s method to obtain the threshold T that divides buildings from other objects by their hollow ratios should achieve a good result:
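A standard histogram-based implementation of Otsu's method, shown as a sketch rather than the paper's exact code:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's method [59]: pick the cut that maximizes the between-class
    variance of the histogram of hollow ratios."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)               # probability of class 0 for each cut
    mu = np.cumsum(p * centers)     # cumulative first moment
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu) ** 2 / (w0 * (1 - w0))
    between[~np.isfinite(between)] = 0  # cuts leaving one class empty
    return centers[np.argmax(between)]
```

For a clearly bimodal hollow-ratio distribution, any cut between the two modes maximizes the between-class variance, so the returned threshold lands between the building and non-building clusters.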

## 3. Results and Discussion

### 3.1. Study Area and Experimental Data

Because the amount of test data is huge, the proposed method cannot process it in one pass; therefore, we clipped the raw test data into 12 parts according to the road segments. The experimental region contains both a downtown area and an urban residential area with numerous commercial and residential buildings. A shopping mall, a skyscraper, an apartment building, and a high-rise office building are the main buildings in the study area. Because of the extensive road greening, a large number of street trees exist in the study area, causing strong variation in the point densities of building façades. It is therefore sometimes difficult to separate buildings from the trees surrounding them.

### 3.2. Extraction Results of Building Points

### 3.3. Evaluation of Extraction Accuracy

#### 3.3.1. Building-Based Evaluation for Overall Experimental Area

#### 3.3.2. Point-Based Evaluation for Individual Building

### 3.4. Experiment Discussion

## 4. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest


## References

1. Yang, B.; Wei, Z.; Li, Q.; Li, J. Semiautomated building facade footprint extraction from mobile LiDAR point clouds. IEEE Geosci. Remote Sens. Lett. **2013**, 10, 766–770.
2. Hong, S.; Jung, J.; Kim, S.; Cho, H.; Lee, J.; Heo, J. Semi-automated approach to indoor mapping for 3D as-built building information modeling. Comput. Environ. Urban Syst. **2015**, 51, 34–46.
3. Remondino, F. Heritage recording and 3D modeling with photogrammetry and 3D scanning. Remote Sens. **2011**, 3, 1104–1138.
4. Cheng, L.; Wang, Y.; Chen, Y.; Li, M. Using LiDAR for digital documentation of ancient city walls. J. Cult. Herit. **2016**, 17, 188–193.
5. He, M.; Zhu, Q.; Du, Z.; Hu, H.; Ding, Y.; Chen, M. A 3D shape descriptor based on contour clusters for damaged roof detection using airborne LiDAR point clouds. Remote Sens. **2016**, 8, 189.
6. Cheng, L.; Gong, J.; Li, M.; Liu, Y. 3D building model reconstruction from multi-view aerial imagery and LiDAR data. Photogramm. Eng. Remote Sens. **2011**, 77, 125–139.
7. Wu, B.; Yu, B.; Yue, W.; Shu, S.; Tan, W.; Hu, C.; Huang, Y.; Wu, J.; Liu, H. A voxel-based method for automated identification and morphological parameters estimation of individual street trees from mobile laser scanning data. Remote Sens. **2013**, 5, 584–611.
8. Rutzinger, M.; Rottensteiner, F.; Pfeifer, N. A comparison of evaluation techniques for building extraction from airborne laser scanning. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. **2009**, 2, 11–20.
9. Sampath, A.; Shan, J. Segmentation and reconstruction of polyhedral building roofs from aerial LiDAR point clouds. IEEE Trans. Geosci. Remote Sens. **2010**, 48, 1554–1567.
10. Kim, K.; Shan, J. Building roof modeling from airborne laser scanning data based on level set approach. ISPRS J. Photogramm. Remote Sens. **2011**, 66, 484–497.
11. Yanming, C.; Liang, C.; Manchun, L.; Jiechen, W.; Lihua, T.; Kang, Y. Multiscale grid method for detection and reconstruction of building roofs from airborne LiDAR data. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. **2014**, 7, 4081–4094.
12. Sun, S.; Salvaggio, C. Aerial 3D building detection and modeling from airborne LiDAR point clouds. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. **2013**, 56, 1440–1449.
13. Rottensteiner, F. Automatic generation of high-quality building models from LiDAR data. IEEE Comput. Graphics Appl. **2003**, 23, 42–50.
14. Li, Q.; Chen, Z.; Hu, Q. A model-driven approach for 3D modeling of pylon from airborne LiDAR data. Remote Sens. **2015**, 7, 11501–11524.
15. Wang, R. 3D building modeling using images and LiDAR: A review. Int. J. Image Data Fusion **2013**, 4, 273–292.
16. Xu, H.; Cheng, L.; Li, M.; Chen, Y.; Zhong, L. Using octrees to detect changes to buildings and trees in the urban environment from airborne LiDAR data. Remote Sens. **2015**, 7, 9682–9704.
17. Xu, S.; Vosselman, G.; Oude Elberink, S. Detection and classification of changes in buildings from airborne laser scanning data. Remote Sens. **2015**, 7, 17051–17076.
18. Pu, S.; Rutzinger, M.; Vosselman, G.; Oude Elberink, S. Recognizing basic structures from mobile laser scanning data for road inventory studies. ISPRS J. Photogramm. Remote Sens. **2011**, 66, S28–S39.
19. Vo, A.V.; Truong-Hong, L.; Laefer, D.F. Aerial laser scanning and imagery data fusion for road detection in city scale. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015.
20. Tong, L.; Cheng, L.; Li, M.; Wang, J.; Du, P. Integration of LiDAR data and orthophoto for automatic extraction of parking lot structure. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. **2014**, 7, 503–514.
21. Cheng, L.; Zhao, W.; Han, P.; Zhang, W.; Shan, J.; Liu, Y.; Li, M. Building region derivation from LiDAR data using a reversed iterative mathematic morphological algorithm. Opt. Commun. **2013**, 286, 244–250.
22. Mongus, D.; Lukač, N.; Žalik, B. Ground and building extraction from LiDAR data based on differential morphological profiles and locally fitted surfaces. ISPRS J. Photogramm. Remote Sens. **2014**, 93, 145–156.
23. Chun, L.; Beiqi, S.; Xuan, Y.; Nan, L.; Hangbin, W. Automatic buildings extraction from LiDAR data in urban area by neural oscillator network of visual cortex. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. **2013**, 6, 2008–2019.
24. Berger, C.; Voltersen, M.; Hese, S.; Walde, I.; Schmullius, C. Robust extraction of urban land cover information from HSR multi-spectral and LiDAR data. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. **2013**, 6, 2196–2211.
25. Zhang, J.; Lin, X. Advances in fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing. Int. J. Image Data Fusion **2016**, 2016, 1–31.
26. Becker, S.; Haala, N. Grammar supported facade reconstruction from mobile LiDAR mapping. Int. Arch. Photogramm. Remote Sens. **2009**, 38, 13–16.
27. Chan, T.O.; Lichti, D.D.; Glennie, C.L. Multi-feature based boresight self-calibration of a terrestrial mobile mapping system. ISPRS J. Photogramm. Remote Sens. **2013**, 82, 112–124.
28. Manandhar, D.; Shibasaki, R. Auto-extraction of urban features from vehicle-borne laser data. Int. Arch. Photogramm. Remote Sens. **2002**, 34, 650–655.
29. Hammoudi, K.; Dornaika, F.; Soheilian, B.; Paparoditis, N. Extracting outlined planar clusters of street facades from 3D point clouds. In Proceedings of the 2010 Canadian Conference on Computer and Robot Vision (CRV), Ottawa, ON, Canada, 31 May–2 June 2010.
30. Jochem, A.; Höfle, B.; Rutzinger, M. Extraction of vertical walls from mobile laser scanning data for solar potential assessment. Remote Sens. **2011**, 3, 650–667.
31. Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P. Hough-transform and extended RANSAC algorithms for automatic detection of 3D building roof planes from LiDAR data. Proc. ISPRS **2007**, 36, 407–412.
32. Rutzinger, M.; Elberink, S.O.; Pu, S.; Vosselman, G. Automatic extraction of vertical walls from mobile and airborne laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. **2009**, 38, W8.
33. Moosmann, F.; Pink, O.; Stiller, C. Segmentation of 3D LiDAR data in non-flat urban environments using a local convexity criterion. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009.
34. Munoz, D.; Vandapel, N.; Hebert, M. Directional associative markov network for 3D point cloud classification. In Proceedings of the Fourth International Symposium on 3D Data Processing, Visualization and Transmission, Atlanta, GA, USA, 18 June 2008.
35. Demantké, J.; Mallet, C.; David, N.; Vallet, B. Dimensionality based scale selection in 3D LiDAR point clouds. Proc. ISPRS **2011**, 38, W52.
36. Yang, B.; Dong, Z. A shape-based segmentation method for mobile laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. **2013**, 81, 19–30.
37. Yang, B.; Dong, Z.; Zhao, G.; Dai, W. Hierarchical extraction of urban objects from mobile laser scanning data. ISPRS J. Photogramm. Remote Sens. **2015**, 99, 45–57.
38. Li, B.; Li, Q.; Shi, W.; Wu, F. Feature extraction and modeling of urban building from vehicle-borne laser scanning data. Proc. ISPRS **2004**, 35, 934–939.
39. Aijazi, A.K.; Checchin, P.; Trassoudaine, L. Segmentation based classification of 3D urban point clouds: A super-voxel based approach with evaluation. Remote Sens. **2013**, 5, 1624–1650.
40. Yang, B.; Wei, Z.; Li, Q.; Li, J. Automated extraction of street-scene objects from mobile LiDAR point clouds. Int. J. Remote Sens. **2012**, 33, 5839–5861.
41. Truong-Hong, L.; Laefer, D.F. Quantitative evaluation strategies for urban 3D model generation from remote sensing data. Comput. Graphics **2015**, 49, 82–91.
42. Weishampel, J.F.; Blair, J.; Knox, R.; Dubayah, R.; Clark, D. Volumetric LiDAR return patterns from an old-growth tropical rainforest canopy. Int. J. Remote Sens. **2000**, 21, 409–415.
43. Riano, D.; Meier, E.; Allgöwer, B.; Chuvieco, E.; Ustin, S.L. Modeling airborne laser scanning data for the spatial generation of critical forest parameters in fire behavior modeling. Remote Sens. Environ. **2003**, 86, 177–186.
44. Chasmer, L.; Hopkinson, C.; Treitz, P. Assessing the three-dimensional frequency distribution of airborne and ground-based LiDAR data for red pine and mixed deciduous forest plots. Proc. ISPRS **2004**, 36, 8W.
45. Popescu, S.C.; Zhao, K. A voxel-based LiDAR method for estimating crown base height for deciduous and pine trees. Remote Sens. Environ. **2008**, 112, 767–781.
46. Reitberger, J.; Schnörr, C.; Krzystek, P.; Stilla, U. 3D segmentation of single trees exploiting full waveform LiDAR data. ISPRS J. Photogramm. Remote Sens. **2009**, 64, 561–574.
47. Wang, C.; Tseng, Y. DEM generation from airborne LiDAR data by an adaptive dual-directional slope filter. Proc. ISPRS **2010**, 38, 628–632.
48. Jwa, Y.; Sohn, G.; Kim, H. Automatic 3D powerline reconstruction using airborne LiDAR data. Int. Arch. Photogramm. Remote Sens. **2009**, 38, 105–110.
49. Douillard, B.; Underwood, J.; Kuntz, N.; Vlaskine, V.; Quadros, A.; Morton, P.; Frenkel, A. On the segmentation of 3D LiDAR point clouds. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011.
50. Park, H.; Lim, S.; Trinder, J.; Turner, R. Voxel-based volume modelling of individual trees using terrestrial laser scanners. In Proceedings of the 15th Australasian Remote Sensing and Photogrammetry Conference, Alice Springs, Australia, 13–17 September 2010.
51. Hosoi, F.; Nakai, Y.; Omasa, K. 3D voxel-based solid modeling of a broad-leaved tree for accurate volume estimation using portable scanning LiDAR. ISPRS J. Photogramm. Remote Sens. **2013**, 82, 41–48.
52. Stoker, J. Visualization of multiple-return LiDAR data: Using voxels. Photogramm. Eng. Remote Sens. **2009**, 75, 109–112.
53. Liang, C.; Yang, W.; Yu, W.; Lishan, Z.; Yanming, C.; Manchun, L. Three-dimensional reconstruction of large multilayer interchange bridge using airborne LiDAR data. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. **2015**, 8, 691–708.
54. Cheng, L.; Tong, L.; Wang, Y.; Li, M. Extraction of urban power lines from vehicle-borne LiDAR data. Remote Sens. **2014**, 6, 3302–3320.
55. Redding, N.J.; Crisp, D.J.; Tang, D.; Newsam, G.N. An efficient algorithm for Mumford–Shah segmentation and its application to SAR imagery. In Proceedings of the 1999 Conference on Digital Image Computing: Techniques and Applications, Perth, Australia, 1999; pp. 35–41.
56. Robinson, D.J.; Redding, N.J.; Crisp, D.J. Implementation of a Fast Algorithm for Segmenting SAR Imagery; DSTO-TR-1242; Defense Science and Technology Organization: Sydney, Australia, 2002.
57. Korah, T.; Medasani, S.; Owechko, Y. Strip histogram grid for efficient LiDAR segmentation from urban environments. In Proceedings of the 2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Colorado Springs, CO, USA, 20–25 June 2011.
58. Aljumaily, H.; Laefer, D.F.; Cuadra, D. Big-data approach for 3D building extraction from aerial laser scanning. J. Comput. Civil Eng. **2015**, 30, 04015049.
59. Otsu, N. A threshold selection method from gray-level histograms. Automatica **1975**, 11, 23–27.

**Figure 2.** Construction of voxel groups. (**a**) Point cloud distribution of several objects in a 3D voxel grid system; (**b**) Street lamp point cloud and the generated voxels, a typical case in which voxels with the same horizontal coordinates and adjacent elevations belong to the same target; (**c**) Schematic of the process of dividing voxel distributions along the same vertical direction; (**d**) Profile of part of a street tree’s canopy, a case in which adjacent voxels whose points belong to one object have small elevation differences.

**Figure 5.** Voxel group-based shape recognition. (**a**) Raw LiDAR point clouds, including building facades, street trees, street lamps, cars, and the ground; (**b**) Generated voxel groups; voxels of the same color belong to the same voxel group; (**c**) LiDAR points within each voxel group; points of the same color belong to the same voxel group; (**d**) Shape recognition results.

**Figure 6.** Category-oriented merging. (**a**) Merging results; points of the same color belong to the same segment; (**b**–**e**) I: single real-world objects composed of several voxel groups; points of the same color belong to the same voxel group; II: shapes within one object; red denotes linear points, green denotes surface points, and blue denotes spherical points.

**Figure 7.** Horizontal hollow ratio-based building point identification (**a**–**c**). **Left**: top view of point cloud segments of several buildings, trees, and cars. **Right**: overlay of the convex hull and point cloud of each segment.

**Figure 9.** Experimental area. (**a**) Aerial orthophoto of the experimental area; the red line denotes the SSW mobile mapping system’s driving route; (**b**) Raw VLS data of the experimental area.

**Figure 10.** Building point extraction results. (**a**) Extraction results for buildings in the experimental region; (**b**,**c**) The proposed method successfully detected various building shapes, including skyscrapers and low cottages; (**d**) The proposed method effectively separated a building from the trees attached to it; (**e**) The method could also recognize buildings with sparse LiDAR points or missing partial structures.

**Figure 11.** Point-based evaluation of an individual building. (**a**) Automatic extraction result for a building; (**b**) Manual extraction result for the same building; (**c**) Overlay, with correct, erroneous, and missing points denoted in blue, red, and yellow, respectively.

**Figure 12.** Comparison of building extraction results between the proposed method and the method of Yang et al. [37]. (**a**–**e**) Left to right: street image, raw VLS data, the result of the proposed method, and the result of Yang’s method for each building.

| | Linear | Planar | Spherical |
|---|---|---|---|
| Linear | If $\mathrm{arccos}\langle \overrightarrow{{p}_{s}},\overrightarrow{{p}_{c}}\rangle \le {10}^{\circ}$ and $\left|e{t}_{s}-e{t}_{p}\right|\le {T}_{e}$ and $\Vert {o}_{s}(x,y,z)-{o}_{p}(x,y,z)\Vert <{T}_{o}$; else if $\mathrm{arccos}\langle \overrightarrow{{p}_{s}},\overrightarrow{{p}_{c}}\rangle \ge {80}^{\circ}$ and ${S}_{Min}\le {T}_{md}$ | If $\mathrm{arccos}\langle \overrightarrow{{p}_{s}},\overrightarrow{{n}_{c}}\rangle \le {10}^{\circ}$ or $\mathrm{arccos}\langle \overrightarrow{{p}_{s}},\overrightarrow{{n}_{c}}\rangle \ge {80}^{\circ}$, and ${S}_{Min}\le {T}_{md}$ | If $\Vert {o}_{s}(x,y)-{o}_{p}(x,y)\Vert <{T}_{o}$ and ${S}_{Min}\le {T}_{md}$ |
| Planar | If $\mathrm{arccos}\langle \overrightarrow{{n}_{s}},\overrightarrow{{p}_{c}}\rangle \le {10}^{\circ}$ or $\mathrm{arccos}\langle \overrightarrow{{n}_{s}},\overrightarrow{{p}_{c}}\rangle \ge {80}^{\circ}$, and ${S}_{Min}\le {T}_{md}$ | If $\mathrm{arccos}\langle \overrightarrow{{n}_{s}},\overrightarrow{{n}_{c}}\rangle \le {10}^{\circ}$ and $\left|e{t}_{s}-e{t}_{p}\right|\le {T}_{e}$; else if $\mathrm{arccos}\langle \overrightarrow{{n}_{s}},\overrightarrow{{n}_{c}}\rangle \ge {80}^{\circ}$ and ${S}_{Min}\le {T}_{md}$ | |
| Spherical | If $\Vert {o}_{s}(x,y)-{o}_{p}(x,y)\Vert <{T}_{o}$ | If $\Vert {o}_{s}(x,y,z)-{o}_{p}(x,y,z)\Vert <{T}_{o}$ | |

| Items | | Values | Description | Setting Basis |
|---|---|---|---|---|
| Voxel group generation | $S$ | 0.5 m | The voxel size | Empirical |
| | ${T}_{S}$ | 0.2 m | To divide adjacent voxels in the vertical direction | Data source |
| | ${T}_{End}$ | 0.85 | To terminate the growth of voxel group generation | Chen et al. [11] |
| Shape recognition | ${N}_{p}$ | 5 pts | Minimum number of points for PCA | Empirical |
| | ${R}_{i}$ | 0.1 m | The increment of the search radius | Empirical |
| Category-oriented merging | ${T}_{e}$ | 0.1 m | Maximal elevation difference between two voxel groups | Data source |
| | ${T}_{o}$ | 0.5 m | Maximal distance between two voxel groups’ centers | Empirical |
| | ${T}_{md}$ | 0.15 m | Maximal minimum Euclidean distance between two voxel groups | Data source |
| Building point identification | $T$ | Automatic | The threshold of the horizontal hollow ratio to identify building points | Calculation |
| | $\overline{H}$ | 2.5 m | Minimum average height of a voxel cluster | Data source |
| | $Csa$ | 3 m^{2} | Minimum cross-sectional area of a voxel cluster | Data source |

| Type | TP (points) | FN (points) | FP (points) | Completeness (%) | Correctness (%) | Average Com (%) | Average Corr (%) |
|---|---|---|---|---|---|---|---|
| Low-rise | 15,744 | 500 | 1239 | 96.9 | 92.7 | 94.8 | 93.1 |
| | 54,399 | 2233 | 349 | 96.1 | 99.4 | | |
| | 6750 | 0 | 598 | 100 | 91.9 | | |
| | 6830 | 377 | 135 | 94.8 | 98.1 | | |
| | 30,752 | 3234 | 3827 | 90.5 | 88.9 | | |
| | 38,580 | 0 | 5122 | 100 | 88.3 | | |
| | 20,751 | 1705 | 512 | 92.4 | 97.6 | | |
| | 8048 | 0 | 1147 | 100 | 87.5 | | |
| | 23,606 | 3234 | 336 | 87.7 | 98.6 | | |
| | 12,083 | 1473 | 1639 | 89.1 | 88.1 | | |
| Medium-rise | 167,478 | 934 | 2126 | 99.4 | 98.7 | 95.0 | 95.7 |
| | 85,670 | 543 | 1408 | 99.4 | 98.4 | | |
| | 194,255 | 1560 | 3210 | 99.2 | 98.4 | | |
| | 198,123 | 846 | 1042 | 99.6 | 99.5 | | |
| | 125,507 | 6835 | 773 | 94.8 | 99.4 | | |
| | 237,798 | 11,732 | 10,592 | 95.3 | 95.7 | | |
| | 50,687 | 10,466 | 5872 | 82.9 | 89.6 | | |
| | 219,639 | 9897 | 5396 | 95.7 | 97.6 | | |
| | 45,340 | 3699 | 1146 | 92.5 | 97.5 | | |
| | 25,536 | 2229 | 5587 | 92.0 | 82.0 | | |
| High-rise | 115,343 | 14,306 | 388 | 89.0 | 99.7 | 91.0 | 99.4 |
| | 186,558 | 6697 | 2993 | 96.5 | 98.4 | | |
| | 253,489 | 14,368 | 1152 | 94.6 | 99.5 | | |
| | 206,176 | 6467 | 1388 | 97.0 | 99.3 | | |
| | 209,904 | 38,477 | 3387 | 84.5 | 98.4 | | |
| | 320,217 | 26,779 | 432 | 92.3 | 99.9 | | |
| | 153,428 | 26,186 | 0 | 85.4 | 100.0 | | |
| | 144,498 | 10,957 | 0 | 93.0 | 100.0 | | |
| | 54,596 | 9874 | 0 | 84.7 | 100.0 | | |
| | 133,353 | 9248 | 652 | 93.5 | 99.5 | | |
| Complex | 313,922 | 22,428 | 1716 | 93.3 | 99.5 | 91.9 | 99.0 |
| | 34,455 | 1798 | 254 | 95.0 | 99.3 | | |
| | 26,540 | 739 | 613 | 97.3 | 97.7 | | |
| | 11,945 | 4250 | 0 | 73.8 | 100.0 | | |
| | 17,711 | 2415 | 136 | 88.0 | 99.2 | | |
| | 608,188 | 24,904 | 0 | 96.1 | 100.0 | | |
| | 281,385 | 26,653 | 342 | 91.3 | 99.9 | | |
| | 282,115 | 11,010 | 2341 | 96.2 | 99.2 | | |
| | 19,957 | 832 | 687 | 96.0 | 96.7 | | |
| | 312,765 | 27,144 | 4336 | 92.0 | 98.6 | | |

| | Point Organization | Shape Recognition | Merging | Total |
|---|---|---|---|---|
| The proposed method (s) | 4.32 | 9.91 | 9.45 | 23.68 |
| Yang’s method (s) | 7.67 | 10.44 | 16.96 | 35.07 |

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Wang, Y.; Cheng, L.; Chen, Y.; Wu, Y.; Li, M. Building Point Detection from Vehicle-Borne LiDAR Data Based on Voxel Group and Horizontal Hollow Analysis. *Remote Sens.* **2016**, *8*, 419.
https://doi.org/10.3390/rs8050419
