Extraction and Simplification of Building Façade Pieces from Mobile Laser Scanner Point Clouds for 3D Street View Services

Extraction and analysis of building façades are key processes in three-dimensional (3D) building reconstruction and realistic geometrical modeling of the urban environment, with many applications such as smart city management, autonomous navigation through the urban environment, fly-through rendering, 3D street view, virtual tourism, urban mission planning, etc. This paper proposes a building facade piece extraction and simplification algorithm based on morphological filtering with point clouds obtained by a mobile laser scanner (MLS). First, this study presents a point cloud projection algorithm that uses high-accuracy orientation parameters from the position and orientation system (POS) of the MLS to convert large volumes of point cloud data to a raster image. Second, this study proposes a feature extraction approach based on morphological filtering of the point cloud projection that can obtain building facade features in image space. Third, this study designs an inverse transformation of the point cloud projection to convert building facade features from image space to 3D space. A facade plane detection algorithm restricted by the building facade features is implemented to reconstruct façade pieces for street view services. The results of building facade extraction experiments with large volumes of MLS point clouds show that the proposed approach is suitable for various types of building facade extraction. The geometric accuracy of the building façades is 0.66 m in the x direction, 0.64 m in the y direction, and 0.55 m in the vertical direction, which is at the same level as the spatial resolution (0.5 m) of the point cloud.


Introduction
3D scanning is gaining popularity as a novel surveying and mapping technique and is gradually becoming the main means of obtaining and updating spatial geographic data [1,2]. Mobile laser scanning (MLS) systems are utilized for collecting high-density point clouds in urban areas to obtain roadside data. MLS systems use adequate scanning angles for measuring vertical features along the road, such as trees, buildings, and poles. This approach is fast, highly precise, and low-cost, and the obtained data contain strong real-time features that rely on city roads, which have become a useful 3D data source for collecting and updating city geoinformatic data [3][4][5]. The data sampling method of land-based vehicle-borne 3D laser scanning systems is suitable for collecting city geographic information; thus, these systems have rapidly become the primary means of city information collection, and they have been extensively applied in 3D city reconstruction, autonomous navigation through the urban environment, fly-through rendering, 3D street view, virtual tourism, urban mission planning, etc. [5,6]. Massive numbers of points from MLS are characterized by large volumes, independence, and a lack of structure and form information. As such, point cloud data cannot be directly used for 3D modeling. The building façade structure is crucial for 3D building modeling, automatic texture mapping, information annotation, and street view services. The fast and accurate extraction of building façade pieces is challenging for 3D reconstruction using point cloud data because of varying point densities, different viewing positions, scan geometry, and occluding objects [7,8].
In May 2007, Google first launched Google Street View (GSV) with 360° panoramic images [9], which provides a street view put together by collecting pictures of all buildings and roadsides while driving along every street in the world. A simplified 3D mesh is generated from mobile laser scanner data to model the facades with both the point cloud and panoramic images. Then, building facade information (Figure 1) is obtained as background space information, whereas the continuous panoramic image data of the street are used for display. GSV can generate spatial 3D effects by detecting the background space data based on the mouse position of the user and the changing direction, angle, and shape information of the mouse-detecting plane.
GSV provides online street view services for public users worldwide. One can navigate forward and backward, upward and downward, as well as zoom in and out, as shown in Figure 1 (Internet application of GSV). Once the user clicks a facade piece, the interface jumps and faces toward the nearest image view of that facade piece. GSV data can be traced back to the street view collection vehicle with panoramic imaging and 3D laser scanning. In addition, points of interest (POIs) can be tagged on the street view image supported by the façade pieces. The street view collection vehicle captures tremendous amounts of LIDAR data that cannot be released on the Internet. Extracting building facade pieces from 3D LIDAR data is the foundation for realizing the release of measurable, positionable, and jump-free panoramic images on the Internet [9][10][11].
Effectively extracting building facade structures from enormous point cloud data for 3D digital city automatic modeling and street view Internet applications is difficult [9][10][11]. Many researchers worldwide have extensively studied this problem. For airborne 3D LIDAR data, the original point cloud data are first classified, and building boundaries are subsequently confirmed based on the top surface data of the building. A popular issue in the study of LIDAR point cloud processing is the extraction of building roofs, boundaries, and footprints [12][13][14][15]. The detection of building outlines from airborne laser scanning (ALS) or aerial imagery is hampered by the fact that the building outlines in these datasets are defined by the roof extension and not by the building walls as in cadastral maps [16].
3D point cloud data from MLS provide much more detailed information about building facades, which is more helpful for façade extraction and 3D building modeling. In contrast, large data volumes, redundancy, and occlusion affect the efficiency and automation of object extraction from the MLS point cloud. Point cloud data are difficult to separate into ground data and non-ground data on the basis of elevation features for extracting boundaries of non-ground buildings [17][18][19]. Wang Jan et al. proposed a method for extracting building height from the building top [20]. Huang Lei et al. proposed a "horizontal point reference frame" feature extraction algorithm based on horizontal point information in scanning data, addressing problems such as rejected noise points, immense data volumes, slow processing, and so on [21]. However, the restraint prerequisites of this approach are complex, and the requirements on the original data are high. Li performed threshold segmentation of point clouds by utilizing dense projection to extract the building geometry boundary [22]. This method is suitable for independent buildings but not for the abundant buildings beside the road that are scanned by vehicle-borne LIDAR, because they are affected by multiple targets. Lu et al. proposed a building gridding extraction method based on scanning dense projections from vehicle-borne point cloud data containing dense information and on the feature differences of various city objects. In this method, a 3D dispersed point cloud was converted into a feature image through point cloud projection to extract 3D features [23]. However, point cloud projection is mainly a top-view projection, which does not take full advantage of the vehicle-borne side-view scanning of LIDAR features; thus, the 3D features cannot be effectively extracted. Both Yang et al. [24] and Martin et al. [19] presented novel methods for the automated footprint extraction of building facades from mobile LIDAR point clouds. These approaches mainly focus on the footprint of building façades. Aijazi et al. [14] presented a novel method that automatically detects different window shapes in 3D LIDAR point clouds obtained from MLS in the urban environment. Window-level building façades for street view services have a larger data volume, which affects the efficiency of street view services.
Mathematical morphology can quantitatively describe content in terms of shape and size for geometric object extraction, such as roads, curbs, building façades, etc. [25][26][27][28]. Hernández and Marcotegui [27] presented a morphological segmentation of building façade images based on texture information. Rodríguez-Cuenca et al. [28] proposed a robust approach to segment facades from 3D MLS point cloud projection images based on a segmentation algorithm that uses morphological operations to determine the location of street boundaries for straight and curved road extraction from MLS datasets. Serna et al. [29,30] proposed a robust approach to segment facades from 3D MLS point clouds using morphological attribute-based operators. The processing is implemented with the elevation image for visualization or evaluation purposes. Morphological processing can be an effective approach for point cloud segmentation, extraction, and classification [27][28][29][30][31][32][33][34].
This paper aims to develop a methodology for identifying and extracting building façade pieces from MLS data for street view and 3D modeling applications. A point cloud projection algorithm is presented together with high-accuracy orientation parameters from the position and orientation system (POS) of the MLS, which can convert large volumes of point cloud data to a raster image. A feature extraction and simplification approach based on morphological filtering with point cloud projection images is presented. This approach can obtain the building facade features in image space. Thereafter, an inverse transformation of the point cloud projection is designed to convert the building facade features from image space to 3D space. A facade plane detection algorithm restricted by the building facade features is implemented to reconstruct precise building façade pieces. Experimental results show that various kinds of building facades can be rapidly and effectively extracted using the method, and the geometrical precision is at the same level as the spatial resolution of the MLS point cloud.

Technical Framework of the Proposed Approach
MLS mainly obtains point cloud data with facade information from the side view. Topological relationships can be found among high-precision POS data, city roads, and buildings along the road. The point cloud data of buildings can be used as complex data for city facade extraction. A building facade extraction approach is proposed based on morphological filtering with point clouds from vehicle-borne LIDAR projection images. Vehicle-borne LIDAR point cloud data and the corresponding high-precision POS data were used to perform the conversion from 3D LIDAR point cloud data to raster images. Then, the morphological filtering image processing method was applied to the point cloud raster image for building facade extraction. Based on the transformation relationship between the point cloud raster image and 3D space, the building facade features in image space were transformed into the 3D point cloud space. Thereafter, the 3D building facade structure was reconstructed with different heights by restraining the facade features and plane detection in the 3D point cloud space. The flow of the algorithm is shown in Figure 2.


Point Cloud Projection Based on POS
The amount of vehicle-borne 3D LIDAR point cloud data is enormous, and directly extracting features in the 3D point cloud space would result in complex 3D space computation. Therefore, in view of the requirements of feature analysis, the 3D LIDAR point cloud can be projected into 2D images [35,36], and image feature analysis algorithms can be used for feature extraction from the LIDAR point cloud. The vehicle-borne LIDAR scanning system possesses high-precision POS data, whereas the building facades beside the road mainly follow the road direction. Moreover, the POS data represent the trajectory along which the vehicle-borne system sampled. Therefore, the trajectory data obtained from the POS can serve as the reference, and the LIDAR point cloud data can be projected onto a projection plane parallel to the road or heading direction to generate point cloud projection images of the sides of the road.
The point cloud facade projection principles based on POS are shown in Figure 3, where points A, B, and C are three points of the POS trajectory. In projecting the point cloud data around the AB segment, point A is selected as the origin of the coordinate system, the straight line AB as the X axis, and the vertical upward direction as the Z axis. The Y axis can be determined by the right-hand rule, and the XZ plane is the projection image plane. Similarly, the BC segment projection plane can be constructed using the preceding rules.
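The construction of the segment-local projection frame described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names are ours, and the re-orthogonalization of the Z axis is an assumption for the case of a non-level trajectory segment.

```python
import numpy as np

def projection_frame(A, B):
    """Local frame for POS segment AB: origin at A, X axis along AB,
    Z axis vertical up, Y axis by the right-hand rule. The XZ plane
    is the projection image plane."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    x = B - A
    x /= np.linalg.norm(x)
    z = np.array([0.0, 0.0, 1.0])   # vertical upward direction
    y = np.cross(z, x)              # right-hand rule: x cross y = z
    y /= np.linalg.norm(y)
    z = np.cross(x, y)              # re-orthogonalize Z (assumption)
    return A, np.vstack([x, y, z])  # origin and rotation matrix (rows)

def to_local(points, origin, R):
    """World -> segment-local coordinates; (x, z) index the projection
    image plane, y is the signed offset from that plane."""
    return (np.asarray(points, float) - origin) @ R.T
```

A point expressed in this local frame can then be rasterized by binning its (x, z) coordinates.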
Step 1: Calculate the normal vector L_i from each LIDAR point to the nearest projection plane.
Step 2: Examine whether the LIDAR points belong to the left or right facades. When the direction of L_i × AB is upward, the point belongs to the left side of the road; when the direction of L_i × AB is downward, the point belongs to the right side of the road.
Step 3: Projection calculation. Use the foot of the normal vector of each LIDAR point p_i on the nearest projection plane as the corresponding image point. The grey value of the LIDAR point can be calculated based on Equation (1):

g_i = 255 × |L_i| / L_max (1)

where |L_i| is the length of the normal vector L_i, and L_max is the maximum length of the normal vectors of all spaced points corresponding to the POS line.
Step 4: Divide the projection plane into grids based on the spatial resolution of the point cloud. If more than one point projects onto a grid cell, use the maximum grey value as the grey value of the grid cell.
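Steps 1-4 can be sketched roughly as follows for one POS segment AB. This is an illustrative implementation under stated assumptions: the linear 0-255 scaling of the plane distance stands in for Equation (1), and `rasterize_segment` and its grid keying are our own names, not the paper's.

```python
import numpy as np

def rasterize_segment(points, A, B, grid=0.5):
    """Sketch of Steps 1-4: left/right side test via the cross product,
    gray-coding of the distance to the projection plane (assumed linear
    scaling to 0-255), and max-gray rule per grid cell."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    P = np.asarray(points, float)
    ab2d = (B - A)[:2]
    d2d = P[:, :2] - A[:2]
    # 2D cross product sign: +1 -> left of AB, -1 -> right of AB
    side = np.sign(ab2d[0] * d2d[:, 1] - ab2d[1] * d2d[:, 0])
    # distance of each point to the vertical plane through AB
    n = np.array([-ab2d[1], ab2d[0]]) / np.linalg.norm(ab2d)
    dist = np.abs(d2d @ n)
    gray = (np.uint8(255 * dist / dist.max())
            if dist.max() > 0 else np.zeros(len(P), np.uint8))
    # image coordinates: along-track column u and height row v
    u = np.floor((d2d @ (ab2d / np.linalg.norm(ab2d))) / grid).astype(int)
    v = np.floor((P[:, 2] - A[2]) / grid).astype(int)
    img = {}
    for s, ui, vi, g in zip(side, u, v, gray):
        key = (int(s), int(ui), int(vi))
        img[key] = max(img.get(key, 0), int(g))  # max-gray rule (Step 4)
    return img
```

The sparse dictionary keyed by (side, column, row) is a sketch convenience; a dense left image and right image per segment would serve equally well.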
Figure 4 shows the facade projection images of both sides of the road. These images are obtained from the vehicle-borne LIDAR point cloud data of a street block based on POS data. Figure 4a is the top-view projection image of the street block, and Figure 4b is the facade projection result. When the turning angle between two neighboring projection planes in the POS trajectory is not large, the two projection images can be mosaicked into one image to avoid segmenting a building at the corner into two images.

Façade Extraction from Point Cloud Projection Image after Morphological Filtering
Building façade features are constructed from outlines and structures. Morphological image-processing algorithms can be utilized on point cloud projection images. By using morphological opening and closing operations, the basic shapes of building boundaries in projection images can be retained, and irrelevant structures can be deleted. The following features can be used to describe the façade point cloud projection images of buildings: ① There is a relationship between each pixel's gray value and the distance of the LIDAR point to the projection plane: the larger the gray value, the farther the point is from the projection plane.
② As the chosen projection planes are a group of continuous façades of different buildings, an abrupt change may appear along the direction of the connection position of two façades. This manifests in the images as missing pixel information in one or more columns.
The algorithm flow of façade feature extraction from point cloud projection images based on morphological filtering is shown in Figure 5.

The concrete algorithm flow is as follows:
Step 1: Noise filtering of the projection image. In the projection image of the point cloud, the pixel gray value represents the projection distance. Since the gray value distribution is known, the average value f̄ and the standard deviation σ can be calculated. Pixels whose difference from the average value is more than three times the standard deviation are taken as noise, as in Equation (2):

|f(x, y) − f̄| > 3σ (2)
where f(x, y) is the gray value at image position (x, y), f̄ is the average gray value, and N is the number of points in the cloud. The standard deviation σ can be calculated as in Equation (3):

σ = √( (1/N) Σ_{x=1}^{Row} Σ_{y=1}^{Col} (f(x, y) − f̄)² ) (3)

where Row is the number of rows of the projection image and Col is the number of columns of the projection image.
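The 3σ noise filter of Step 1 can be sketched as follows. This is a minimal sketch; computing the mean and standard deviation over only the occupied (non-zero) pixels is an assumption of ours, since empty cells carry no projection distance.

```python
import numpy as np

def filter_noise(img):
    """Step 1 sketch: pixels deviating more than 3*sigma from the mean
    gray value are zeroed as noise. Statistics are taken over occupied
    pixels only (assumption of this sketch)."""
    img = np.asarray(img, float)
    occupied = img > 0                  # cells that received a point
    mean = img[occupied].mean()
    sigma = img[occupied].std()
    noise = occupied & (np.abs(img - mean) > 3 * sigma)
    out = img.copy()
    out[noise] = 0                      # remove noisy projections
    return out
```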
Step 2: Binarization of the projection image. Set the pixels whose gray values are not equal to 0 to 255 in the projection image to binarize it. Morphological filtering of a binary image is faster than that of the original gray image, and the binary image simplifies the follow-on processing. Figures 6 and 7 show the original point cloud projection image and the image after binarization segmentation, respectively.
Step 3: Morphological filtering. According to the projection image traits, a 5 × 5 rectangular structure element b can be used to perform dilation twice at first and then erosion twice. The operation f • b is defined as in Equation (4):

f • b = (((f ⊕ b) ⊕ b) ⊖ b) ⊖ b (4)

where ⊕ represents the dilation operation and ⊖ denotes the erosion operation.
The purpose of using a large structure element and performing the dilation calculation first is to compensate for the gray values lost at façade connection positions. Figure 8 illustrates the effective compensation for the missing gray positions.
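The double dilation followed by double erosion of Step 3 can be sketched on a binary image as follows. This is a self-contained illustration with a hand-rolled 5 × 5 square element; padding the erosion border with ones (so the image edge is not eaten away) is a choice of this sketch, not stated in the paper.

```python
import numpy as np

def dilate(img, k=5):
    """Binary dilation with a k x k square structure element."""
    pad = k // 2
    p = np.pad(img, pad)                        # zero padding
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(img, k=5):
    """Binary erosion with a k x k square structure element."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=1)     # ones: keep image border
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def close_gaps(img):
    """Equation (4): dilate twice, then erode twice, bridging the
    missing columns at facade connection positions."""
    return erode(erode(dilate(dilate(img))))
```

Applied to a binarized projection image with a few missing columns between two facades, `close_gaps` fills the gap while leaving the filled regions intact.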
Step 4: Boundary line extraction. Use the Laplacian operator to extract the boundary; the results are shown in Figure 9.
Step 5: Segmentation line calculation. After the dilation and erosion calculations on the point cloud projection image, buildings that were not originally connected may be joined into a whole. Therefore, building segmentation lines must be extracted. The extracted building boundaries can then be segmented into several sub-polygons that better represent the real buildings.
Calculate the sum of the absolute values of the discrete approximations of the gradient, t(n), for each pixel column n sequentially, as in Equation (5):

t(n) = Σ_{m=0}^{H−1} |f(m, n) − f(m, n − 1)|, 0 < n < W (5)

where H is the height of the projection image and W is the width of the projection image.
Because of projection deformation and scanning resolution, building boundaries that should be vertical in space may be projected onto neighboring columns rather than a single column. A window of L columns is therefore taken to accumulate the gradient of Equation (5), where L is calculated as in Equation (6):

L = dF / dGrid (6)

where dF is the projection deformation. We take the spatial resolution as the maximum projection deformation; the projection deformation was less than 0.5 m in our experiments based on the spatial resolution of the point cloud. dGrid is the grid size of the point cloud projection.
In each window of L columns, the largest gradient t_max^l is calculated as the possible position of a segmentation line, as in Equation (7):

t_max^l = max{t(j − L), …, t(j + L)} (7)
where l represents the column with the largest gradient in the window [j − L, j + L]. Then we reset t(n) as in Equation (8):

t'(n) = t(n) if n = l, and t'(n) = 0 otherwise, for n ∈ [j − L, j + L] (8)

where t'(n) is the reset gradient within the projection deformation window L. Building width is generally larger than 5 m. Therefore, a new window of S columns is taken for the determination of segmentation lines, as in Equation (9):

S = hW / dGrid (9)
where hW is the building width; in this paper, hW is taken as 5 m. Then the maximum gradient t_max^s in window S is considered as the possible position of a segmentation line, as in Equation (10):

t_max^s = max{t'(j − S), …, t'(j + S)} (10)
where s represents the column of the largest gradient in the window [j − S, j + S], i.e., the possible position of the segmentation line, defined analogously to l in Equation (7). The gradients are reset accordingly, as in Equation (11):

t''(n) = t'(n) if n = s, and t''(n) = 0 otherwise, for n ∈ [j − S, j + S] (11)
According to house structure traits, the borders of chimneys and windows that are vertical near the left and right boundaries may also be mistakenly taken as segmentation lines. Therefore, the existence of a second local maximum (denoted as column n′) inside the (−S, S) interval must be investigated. If it exists, the maximum in the interval is taken as a non-segmentation line, as in Equation (12):

t''(n) = 0 if a second local maximum t'(n′) exists in (n − S, n + S) (12)
Then, take column t (n) = 0 as a segmentation line, as shown in Figure 10.
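The segmentation-line search of Step 5 can be sketched as follows. This is a minimal illustration assuming a binary projection image `img`; the window sizes follow Equations (6) and (9), and the default parameter values (0.5 m grid, 5 m building width) match those stated in the text, but the function itself is a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def segmentation_lines(img, d_grid=0.5, dF=0.5, hW=5.0):
    """Find candidate building segmentation columns in a binary
    projection image of shape (H, W). Sketch of Step 5; names
    follow Equations (5)-(12)."""
    # Equation (5): column-wise sum of absolute horizontal gradients
    grad = np.abs(np.diff(img.astype(float), axis=1))   # H x (W-1)
    t = grad.sum(axis=0)                                # t(n)
    # Equations (6) and (9): window sizes in columns
    L = max(1, int(np.ceil(dF / d_grid)))
    S = max(1, int(np.ceil(hW / d_grid)))
    # Equations (7)-(8): keep only the local maximum in each L-window
    t1 = np.zeros_like(t)
    for j in range(len(t)):
        lo, hi = max(0, j - L), min(len(t), j + L + 1)
        if t[j] > 0 and t[j] == t[lo:hi].max():
            t1[j] = t[j]
    # Equations (10)-(12): within each S-window, keep only the
    # strongest candidate; secondary maxima (chimney and window
    # borders) are reset to zero as non-segmentation lines
    t2 = np.zeros_like(t1)
    for j in np.flatnonzero(t1):
        lo, hi = max(0, j - S), min(len(t1), j + S + 1)
        if t1[j] == t1[lo:hi].max():
            t2[j] = t1[j]
    return np.flatnonzero(t2)   # columns with t''(n) != 0
```

On a synthetic image with two separated façade blocks, the returned columns coincide with the four vertical block edges.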
Step 6: Sub-polygon segmentation. According to the segmentation lines from Step 5, divide the boundary graph extracted in Step 3 into independent sub-polygons of the buildings, as shown in Figure 10.
Step 7: Simplify the sub-polygons. The building façades in 3D space can be obtained through the inverse projection of the sub-polygons from Step 6. In this way, the shapes of the façades closely match the real buildings. However, these façades are relatively complex, which results in a tremendous computing load; complex façades also reduce the efficiency of spatial queries on façade pieces during street view browsing by online end users. To simplify each polygon, we stretch its two segmentation lines to the top height of the polygon and then connect the stretched segmentation lines, converting the irregular polygon into a rectangle. The simplification results are shown in Figure 11.
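Step 7's conversion of an irregular sub-polygon into a rectangle can be sketched as below; the (x, z) vertex representation and the helper name are hypothetical:

```python
import numpy as np

def simplify_to_rectangle(polygon):
    """Replace an irregular facade sub-polygon with its bounding
    rectangle: the two vertical segmentation lines are stretched to
    the polygon's top height and connected (sketch of Step 7).
    `polygon` is an (N, 2) array of (x, z) vertices in the
    projection plane."""
    poly = np.asarray(polygon, dtype=float)
    x_left, x_right = poly[:, 0].min(), poly[:, 0].max()   # segmentation lines
    z_bottom, z_top = poly[:, 1].min(), poly[:, 1].max()   # ground / top height
    # Rectangle vertices, counter-clockwise from bottom-left
    return np.array([[x_left,  z_bottom],
                     [x_right, z_bottom],
                     [x_right, z_top],
                     [x_left,  z_top]])
```

For example, a gabled outline such as [(0, 0), (4, 0), (4, 3), (2, 5), (0, 3)] collapses to the 4 × 5 rectangle spanning its extremes.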

Building Façade Pieces Reconstruction in 3D Point Cloud Space
The 2D building façade features determined in the projection space can be taken as constraints to detect and reconstruct planes in the 3D point cloud space. The 3D building façade structures can be precisely reconstructed on this basis through the following steps: Step 1: Bounding rectangle calculation of the building façade. The bounding rectangles simplified from the 2D building sub-polygons extracted in the point cloud projection image space are taken as constraints. The inverse projection transformation is conducted according to the point cloud projection references to obtain the bounding rectangle of the building façade object corresponding to each plane-feature rectangle.
In Figure 13, the green rectangle is the bounding rectangle obtained from the inverse projection transformation using the feature rectangle in the projection plane. The blue points are the point cloud data inside the bounding rectangle, which are used as the calculation data for fitting the space plane in this district.
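A hypothetical sketch of the inverse projection in Step 1: the origin/axis parameterization of the projection plane (`origin`, `u`, `v`, grid size `d_grid`) is an assumed interface, since the paper derives these references from the POS trajectory:

```python
import numpy as np

def image_rect_to_3d(rect_rc, origin, u, v, d_grid=0.5):
    """Map a rectangle given in projection-image coordinates
    (row, col) back to 3D corner points. `origin` is the 3D point of
    pixel (0, 0); `u` and `v` are unit vectors spanning the
    projection plane (along-track and vertical). These references
    come from the point cloud projection step."""
    origin, u, v = (np.asarray(a, dtype=float) for a in (origin, u, v))
    corners = []
    for r, c in rect_rc:
        # Each pixel offset scales by the grid size along its axis
        corners.append(origin + c * d_grid * u + r * d_grid * v)
    return np.array(corners)
```

With a 0.5 m grid, an image rectangle 10 columns wide and 8 rows tall maps back to a 5 m × 4 m façade rectangle in 3D.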
Step 2: The point cloud data within each building object's bounding rectangle are extracted as the calculation data for space plane fitting.
Step 3: Use an iterative robust plane fitting method to delete outliers in the point cloud data and obtain stable fitting results.
Define the plane of the building façade as in Equation (13) [26]:

ax + by + cz + d = 0 (13)

where a, b, c, and d are the plane parameters, and P(x_i, y_i, z_i) is the coordinate of any point in the cloud, whose distance to the plane is d_i = |ax_i + by_i + cz_i + d|. The best-fitting plane is resolved based on the least squares principle: every candidate plane with parameters a, b, c satisfying a² + b² + c² = 1 is used to calculate the residual Σd_i², and the parameters with the minimum residual Σd_i² are taken as the best plane, which is where the building façade is located. Solving the least squares problem is an iterative process; the iteration stops when the difference between the Σd_i² of two adjacent iterations is less than 0.1 m. The plane with the red boundary in Figure 13 is the fitting result of the building façade at this position.
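The iterative fit of Steps 2 and 3 can be sketched as follows. The SVD-based least-squares plane (the normal is the right-singular vector with the smallest singular value, which automatically satisfies a² + b² + c² = 1) and the 2σ outlier-rejection rule are illustrative choices, not necessarily the authors' exact implementation:

```python
import numpy as np

def fit_facade_plane(points, outlier_factor=2.0, tol=0.1, max_iter=20):
    """Iteratively fit the plane a*x + b*y + c*z + d = 0 (Equation (13))
    with a^2 + b^2 + c^2 = 1, discarding outliers until the residual
    sum changes by less than `tol` between iterations."""
    pts = np.asarray(points, dtype=float)
    prev_residual = np.inf
    for _ in range(max_iter):
        centroid = pts.mean(axis=0)
        # SVD of the centered points: the last right-singular vector
        # is the least-squares unit normal (a, b, c)
        _, _, vt = np.linalg.svd(pts - centroid)
        a, b, c = vt[-1]
        d = -np.dot(vt[-1], centroid)
        dist = np.abs(pts @ vt[-1] + d)        # point-to-plane distances d_i
        residual = float((dist ** 2).sum())    # sum of d_i^2
        if abs(prev_residual - residual) < tol:
            break
        prev_residual = residual
        # Robust step: drop points farther than outlier_factor * std
        keep = dist <= outlier_factor * dist.std() + 1e-12
        if keep.sum() >= 3:
            pts = pts[keep]
    return (a, b, c, d)
```

For points lying on a horizontal plane, the fit recovers a normal of (0, 0, ±1) with d = 0.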

Datasets
Experimental data were obtained from a vehicle-borne LIDAR 3D scanning system supported by the Wuhan University 985 platform. This MLS comprises three scanners (Sick® LMS 511, Düsseldorf, Germany), one panoramic camera (Ladybug® 3, Richmond, BC, Canada), two binocular stereo vision CCD cameras, and a POS device (Leador® LDA03, Wuhan, China), as shown in Figure 14a. The experiment district is a 3 km² business street area in Wuhan, a typical urban area with buildings of different shapes and heights. The road length is about 1.8 km. Figure 14b shows all the point cloud data in this district. The dataset includes not only city buildings and infrastructure but also non-fixed mobile objects, such as moving cars and pedestrians. The average spacing of the LIDAR points is approximately 0.5 m within a distance of 30 m.

Result Analysis
The building façades beside the road were extracted using the proposed method. Figure 16 shows the building façade structures extracted in the point cloud projection image space; as shown in Figure 16, the extracted structures overlay the original point cloud closely. The façades extracted along the roads of the whole district were examined and found to satisfy the requirements of street view browsing with façade pieces.
Figure 17 shows the registration result of a 3D building façade structure and the panoramic image with georeferenced matching, which demonstrates the validity of the method.
The simple 3D model of a building can be rapidly constructed by the proposed approach. Based on the point cloud, the georeferenced panoramic images, and the extracted 3D building façades, the 3D models were inversely projected into the panoramic image space, and the texture images were then cropped directly from the panoramic images and mapped onto the 3D building façades, as shown in Figure 18.

To analyze the structure extraction precision of the building façades, 15 feature points, including building tops and bottoms, were evenly selected from the original LIDAR point cloud data. Their 3D coordinates and the corresponding point coordinates of the building façades automatically extracted by the proposed method were compared for the precision analysis. The error statistics are shown in Table 1. As shown in Table 1, the extraction precision of the proposed method is about 0.7 m in plane, including the point selection error of the feature points from the building façade structure. Considering that the distance between a building and a road is usually more than 30 m, the point cloud resolution is approximately 0.5 m. This accuracy is similar to the precision of interactive manual façade structure feature extraction based on the 3D LIDAR point cloud. The façade extraction error is about 2 pixels in both the point cloud and the street view image, which meets the requirements of the street view service.

Discussion
Building façades have many applications in 3D smart city modeling, and 3D modeling has different levels of detail for different applications. Although the proposed approach is a building façade extraction method, its aim is to use the extracted façade pieces for online street view services. This requires that the building façade pieces match the panoramic images with a good visual effect. Thus, morphological dilation and erosion processing were adopted to transform the complex building façades into façade pieces, which reduces the data volume for online street view services. The complexity of the façades is determined by the morphological processing. The proposed approach targets building structure modeling; further study should address the morphological processing of windows, doors, and platforms to obtain higher accuracy and more detailed information. The structuring elements and the number of morphological processing iterations affect both the accuracy and the detail of the façade extraction results [37,38].
Another important issue is the density of the point cloud from MLS. Although morphological processing affects the accuracy and detail of the façade extraction, density is a crucial factor. The proposed building façade extraction approach targets street view applications; as such, the position precision of the façade pieces is adequate for scene jumping, POI tagging, and street view browsing. We used point cloud data from MLS with a resolution of 0.3 m. The density of the point cloud should be improved to the centimeter level for more detailed building structure modeling; however, doing so would reduce the speed of data processing [39,40].

Conclusions
A 3D LIDAR point cloud segmentation and side-view projection method based on high-precision POS data was proposed in this paper. The point cloud projection transformation effectively enhanced the building façade features. On this basis, the side-view projection image features and the building façade traits in the projection image were analyzed. A building façade extraction method based on morphological filtering was proposed, and the precise reconstruction of 3D building façades was ultimately realized through the feature transformation from 2D polygons to 3D space polygons. This method is feasible for most city buildings, demands less computational workload than processing the whole point cloud, and requires only driving along roads and recording data. The 3D façade pieces of buildings are much smaller in data volume than the 3D LIDAR point cloud; therefore, simple 3D models can be rapidly reconstructed. The method can also be employed in the release and application of Internet street view images. The automation of building façade extraction in complex road sections (e.g., crossroads) should be explored further in the future.

Figure 3 .
Figure 3. Point cloud facade projection based on position and orientation system (POS) trajectory.

Figure 4 .
Figure 4. Facade projection results based on position and orientation system (POS) trajectory. (a) Point cloud projection image from top view; (b) Facade projection image.

Figure 5 .
Figure 5. Façade feature extraction algorithm with point cloud projection image.

Figure 7 .
Figure 7. Binary image of the point cloud projection image.

Figures 6
Figures 6 and 7 illustrate that the features of the building façade become apparent and beneficial for further façade extraction through image processing after transforming the 3D point cloud into a projection image. Step 3: Morphological filtering. According to the projection image traits, a 5 × 5 rectangle structuring element is adopted for the dilation and erosion processing.
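A minimal numpy-only sketch of the dilation-then-erosion (closing) step on the binary projection image with a 5 × 5 rectangular structuring element; the zero-padded border handling is an assumption, not taken from the paper:

```python
import numpy as np

def binary_close(img, k=5):
    """Morphological closing (dilation then erosion) of a binary image
    with a k x k rectangular structuring element. Dilation bridges
    small gaps between nearby facade pixels; the following erosion
    shrinks the dilated regions back."""
    pad = k // 2

    def dilate(a):
        p = np.pad(a, pad, constant_values=0)
        out = np.zeros_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = p[i:i + k, j:j + k].max()
        return out

    def erode(a):
        p = np.pad(a, pad, constant_values=0)
        out = np.zeros_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = p[i:i + k, j:j + k].min()
        return out

    return erode(dilate(img))
```

On a binary image with two façade strips separated by a narrow gap, the closing fills the gap so the disconnected regions merge, which is exactly why segmentation lines must then be recovered in Step 5.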

Figure 8 .
Figure 8. Projection image after dilation and erosion processing.

Figure 9 .
Figure 9. Boundary extraction result of the building.

Figure 10 .
Figure 10. Result of segmentation line calculation.

Figures 15 and 16 respectively show the 3D building façade extraction results beside the road and the overlay image of extracted building façade and point cloud.

Figure 15 .
Figure 15. Spatial façade of the building beside the pedestrian street in the experiment district.

Figure 16 .
Figure 16. Overlaid image of extracted building façade and point cloud.

Figure 17 .
Figure 17. Overlaid result of street view image and façade pieces.

Figure 18 .
Figure 18. Rapid creation of a simple building model based on street view images and LIDAR data.

Table 1 .
Precision statistics of building façade pieces extraction.