Remote Sens. 2018, 10(8), 1320; https://doi.org/10.3390/rs10081320

Article
Window Detection from UAS-Derived Photogrammetric Point Cloud Employing Density-Based Filtering and Perceptual Organization
1 Faculty of Geodesy and Geomatics Engineering, K.N. Toosi University of Technology, 19667-15433 Tehran, Iran
2 Institute of Geodesy and Photogrammetry, Technical University of Braunschweig, 38106 Braunschweig, Germany
3 Faculty of Geomatics, Computer Science and Mathematics, University of Applied Sciences, 70174 Stuttgart, Germany
* Author to whom correspondence should be addressed.
Received: 19 July 2018 / Accepted: 15 August 2018 / Published: 20 August 2018

Abstract

Point clouds with ever-increasing volume are regular data in 3D city modelling, of which building reconstruction is a significant part. The photogrammetric point cloud, generated from UAS (Unmanned Aerial System) imagery, is a novel type of data for building reconstruction. Its positive characteristics, alongside its challenging qualities, motivate this line of research. In this paper, patch-wise detection of the points of window frames on facades and roofs is undertaken using this kind of data. A density-based multi-scale filter is devised in the feature space of normal vectors to globally handle the high volume of data and to detect edges. Color information is employed on the downsized data to remove the inner clutter of the building. Perceptual organization directs the approach, via grouping and the Gestalt principles, to segment the filtered point cloud and to later detect window patches. The evaluation of the approach shows a completeness of 95% and 92%, respectively, as well as a correctness of 95% and 96%, respectively, for the detection of rectangular and partially curved window frames in two large, heterogeneous, cluttered datasets. Moreover, most intrusions and protrusions cannot mislead the window detection approach. Several doors with glass parts and a number of parallel parts of scaffolding are mistaken for windows by the large-scale object detection approach, because their patterns resemble those of window frames. Sensitivity analysis of the input parameters demonstrates that the filter functionality depends on the radius of density calculation in the feature space. Furthermore, successfully employing the Gestalt principles in the detection of window frames is influenced by the chosen width of the window partitioning.
Keywords:
heterogeneous image-derived point cloud; filtering; perceptual organization; window extraction; UAV (Unmanned Aerial Vehicle); edge detection; big datasets; normal vectors; clutter

1. Introduction

Three-dimensional (3D) building models with a high level of detail play an important role in various applications, like building information modelling (BIM) [1], emergency management [2], augmented reality [3], and construction site monitoring [4]. Point clouds from active and passive sensors, which have various densities, are two data sources for urban modelling [5]. 3D building reconstruction from dense point clouds normally entails special pattern detection for the identification of different features of the building. Hence, spatial data mining is a cardinal concept in modelling the details of the building using large spatial datasets through object detection [6].
UAVs are one of the state-of-the-art technologies used to gather extensive spatial data from the built environment [7]. The UAV is known under various names, such as Unmanned Aerial System (UAS), drone, and Remotely Piloted Aerial System (RPAS). A UAS is composed of a set of complementary technologies that fulfil a specific task; some of its main components are the UAV itself, a ground control station, navigation sensors, and imaging sensors [8]. Therefore, the term UAV is used when we refer to the platform. There has been a clear global rise in the use of this off-the-shelf equipment since 2005, especially in civil applications [8]. UAVs can be considered a tool for short- and close-range data acquisition over small areas [9], which leads to images with small GSDs. These data have the potential for fast, low-cost updates of reconstructed 3D city models [10] and for large-scale spatial data [11]. The combination of UAS imagery with computer vision and photogrammetric techniques generates high-accuracy point clouds, even with consumer-grade, small-format cameras [8]. Multi-view dense image matching (DIM), a technique that computes a depth value for each pixel of an image, can generate dense image-derived point clouds from multi-view imagery. In general, the analysis of high-density point clouds poses challenges for complex real-world buildings. Furthermore, DIM point clouds have high variations in point density [12]. In addition, this type of data suffers from clutter, outliers, and noise, which stem from reflective surfaces, movement of the platform, textureless objects, shadowed areas, occlusions, and unsuitable viewing angles [12]. Nonetheless, DIM point clouds surpass laser scanning point clouds in terms of density [13]. Moreover, laser scanners cannot produce accurate points on edges. Additionally, variations in the materials of objects' surfaces may cause serious errors in laser scanning data [14].
In contrast, image matching can exploit edges, as areas with strong gradients, and surface variations, as texture, to compute points.
Processing the high-density, non-uniform point clouds of DIM is time consuming and a major challenge for most spatial analyses [15]. Massive spatial data imposes scalability requirements on any approach that must accommodate complex processing of large datasets [16]. Furthermore, large differences in density raise the uncertainty and, consequently, the difficulty of analyzing this type of data [17]. Indeed, volume, inconsistency, and quality are the significant issues that should be considered for this kind of data. These concepts affirm the importance of spatial data mining for object detection from UAS-based image-derived point clouds, which convey large-scale detailed data.
Facades are important features in 3D models of buildings. One of their constituent elements is windows, which play a role in different applications, such as the thermal inspection of buildings. Moreover, the ratio of window to wall area is a determining parameter in the energy assessment of buildings [18], which is why window detection is a significant factor in calculating the energy consumption of buildings. In addition, the number of floors can be determined via the rows of windows. Furthermore, windows should be modelled to build a realistic visualization of a building [4]. Windows belong to the openings subclass of the building module in the City Geography Markup Language (CityGML) standard. This standard defines the 3D building model in five levels of detail (LoD) [19]. LoD2 includes roof structures, and LoD3 involves detailed architecture [20]. Openings, which appear in LoD3, enhance the building model semantically. Nonetheless, dormers, which are modelled in LoD2, comprise windows too.
Utilizing UAVs to acquire oblique images covering facades from top to bottom, which include most of the windows of a building, gives them priority over other methods of data capture, such as mobile/terrestrial laser scanning. The photogrammetric point cloud generated from UAV imagery benefits from ultra-high density. This poses a difficulty for processing, but it provides dense points on the object, even on edges. Thus, given proper visibility, point clouds of well-textured surfaces captured under stable lighting can be employed to generate detailed 3D building models [21]. For this purpose, the scene should satisfy the required LoD [21], and the viewing angle and image overlap should be determined properly. On the other hand, convergent image acquisition for large-scale projects and non-flat objects is still critical [9]. Therefore, the purpose of this research is to develop an approach that copes with both the capabilities and the challenges of this type of point cloud for large-scale window detection. This paper introduces a method that applies this type of data to building blocks, together with color information, to identify various rectangular and curved window frames.
Our main innovations are summarized as follows. First, our proposed multi-scale filter, based on local density in the feature space, can globally handle the scale issue for various structures in unorganized point clouds with varying density, while maintaining the data accuracy in edge areas. Second, employing the Gestalt principles in the detection of window patches demonstrates a novel application of human perception in 3D spatial data mining. The next section reviews several related studies to date.

2. Related Works

High-density point clouds comprise a deluge of data that can be utilized in modelling details of the building. Therefore, object detection is a basic step in building reconstruction and has been well-established by many researchers. In working with big datasets, the approach should be able to deal with a huge mass of data effectively. Decreasing the volume of data is one feasible procedure to handle massive datasets.
Filtering is one of the relevant issues and has been widely addressed in different research studies. In [22], a comprehensive analysis of various filters is presented. Most filters have been designed to remove outliers and noise; in addition, preserving the intended objects is a point that ought to be taken into consideration. Meanwhile, filters decrease the volume of the data, so that it can be handled with ease, and expedite the process of identifying the target points. One type is the neighborhood-based filter, which uses similarity measures between neighbouring points. One such filter, which can carry out the aforementioned tasks successfully, is defined in [23]. It eliminates sparse outliers employing the relative deviation of the local neighborhood and takes a clustering-based approach to remove small clusters of outliers. Another successful filter for removing outliers and noise while preserving features is described in [24], in which a density-based method is applied. A particle swarm optimization approach is adopted to approximate the optimal bandwidth of the kernel density in order to achieve robustness; mean-shift clustering is then utilized to remove outliers via thresholding. Wenzel et al. [25] develop a multi-cloud filter that preserves the locally densest clouds. It splits the tree structure until each node holds just one point from each point cloud; therefore, it can remove noise. Additionally, this filter omits outliers by enforcing a certain redundancy.
One of the proposed methods for multi-scale point cloud filtering is addressed in [26]. In this research, a multi-scale operator is implemented via the difference of normal vectors (DoN) at two scales. This method is designed for large unorganized point clouds to generate scale-salient clusters. Edges of different objects, such as buildings, cars, windows, and trees, are extracted from LiDAR data, to some extent, by defining vector fields with different radii. Normal vectors at different scales estimate the surfaces of objects of different sizes. The important matter in this method is setting the radii, which depends on the objects in question. A set of ground-truth point clouds is employed to tackle this matter by maximizing the DoN for the target classes and minimizing it for the other classes. Another approach that uses a multi-scale filter is discussed in [27]. A multi-scale morphological filter with different shapes is utilized to extract small objects, like cars and trees, along with big objects, such as buildings. The size of the filter changes exponentially and can exceed that of the biggest object in the experimental area. In this method, which separates ground and non-ground points, the size of the filter window is decisive. In [28], a multi-scale operator based on the Laplace-Beltrami operator is proposed to detect objects in an unorganized point cloud. It adopts a Gaussian kernel and defines an operator invariant to transformation and translation in order to detect regions of interest in the point cloud. However, these regions do not include edges.
Windows attract attention in the reconstruction of detailed building models. The detection of this object from aerial or close-range images is pursued in different studies, such as [29,30,31]. Furthermore, windows are recognizable in both outdoor and indoor data. There is a variety of research on window detection from point clouds. However, the majority of the utilized data are generated by laser scanning systems, including mobile mapping, terrestrial, and trolley-based platforms [32]. Methods based on deep learning are of particular interest for object detection tasks. Nonetheless, the need for a massive training dataset is a challenge for learning a model via this technique, and there are too few publicly available benchmark datasets suitable for building classification and reconstruction using point clouds. Roofn3D is a new training dataset for the classification of different roof types and the semantic segmentation of buildings [33]. To the best of our knowledge, there is still no available training dataset suitable for window detection in 3D. The following is a summary of several studies on window detection that utilized laser scanning data. In [4], indoor points behind the façade planes, which arise from reflected laser pulses, are employed to detect the centers of windows in sparse airborne laser scanning point clouds. Peaks of correlation functions and repetitive structures are the two main features used to detect window points. In [34], points on the window frames are detected by slicing the point cloud of the façade horizontally and vertically; points on the window frames are then picked by a length threshold in the slices that have more clusters of points. In [35], points on the frames are detected by thresholding a likelihood function. Points on the right and left edges, alongside points on the upper and lower edges, are separated via their relative positions, so that window patches are identified if they have points on all four edges.
In [36], a hole-based approach is applied to detect and reconstruct windows. A TIN is generated, in which long TIN edges result from the holes. The points of each hole are grouped together, and minimum bounding boxes are generated to reconstruct the windows. Aijazi et al. [37] present a super-voxel segmentation method for window detection from mobile terrestrial laser scanning. Points on facades are projected onto a plane parallel to the façade. A watertight boundary of the projected facade is generated, and point inversion is conducted for the points inside it in order to segment the holes. Windows are detected using these segments, along with a number of geometrical features, such as height and width.
One method for object detection that examines the problem space via an object-based approach is perceptual organization. The Gestalt principles are suggested for perceptual organization by Gestalt psychology, which studies the human perception of the visual environment [38]. According to the Gestaltists, humans tend to organize patterns and group individual elements into larger objects, based on particular rules. These rules can consist of symmetry, proximity, similarity, and continuity [38]. In [39], the Gestalt principles are investigated from the viewpoint of robotic vision. In this regard, objects are detected using a saliency model based on symmetry. Moreover, similarity in color and position, alongside proximity in depth, are employed for superpixel segmentation. In addition, continuation, proximity, symmetry, color uniqueness, and parallelism are utilized to evaluate the quality of segmentation. In [40], the relationships between surface patches, which are extracted from Gestalt rules, are employed to group surfaces in RGB-D data. These rules are defined on the basis of similarity of color, size, and texture, and are learned by an SVM. Tutzauer et al. [41] present a study on the human perception of building categories. They summarize some of the research on two-dimensional (2D) geometric building structures, which aims to recognize, generalize, and generate an abstraction of the entire building, and they use the human perception of 3D geometric objects in 3D building abstraction. According to its authors, that work addresses the quantification of human perception in 3D for the first time. Furthermore, Xu et al. [42] present one of the first research efforts in this area: a strategy for the segmentation of two types of point clouds via voxel and supervoxel structures. Graph-based clustering is applied to laser scanning and photogrammetric point clouds using the perceptual grouping laws of proximity, similarity, and continuity. The outcomes demonstrate successful segmentation of facades, fences, windows, and the ground, and indicate that voxel graph-based segmentation yields better results with a photogrammetric point cloud.
Considering the literature, filtering-based object detection from point clouds can extract points on edge areas only to some extent and requires improvement for window detection purposes. Hence, we first devise a density-based multi-scale filter to downsize the input datasets and globally extract the points on edge areas. The designed filter should preserve the data accuracy [25]. When employing unorganized point clouds, the spatial position of the input data is of foremost importance in discovering their coherence without the benefit of any additional information [43]. Therefore, the process of filtering should be position-invariant. Afterwards, inner clutter is discarded via color information, and the 3D points are clustered using the grouping concept of perceptual organization. Next, the Gestalt principles are tailored to identify 3D points on window frames and to recognize window patches. Finally, the research experiments are reported, evaluated, and discussed.

3. Materials and Methods

We exploit data comprising two real-world large, inhomogeneous point clouds of two building blocks located in Baden-Wuerttemberg, Germany. They are composed of 20 and 53 million points, respectively. These point clouds, which are spread over areas of 520 and 1240 m2, respectively, consist of various windows on the roofs and walls, alongside non-window structures that can misdirect the approach (Figure 1). The images were captured using an uncalibrated Sony camera with a focal length of 35 mm, mounted on a Falcon octocopter UAV, which flew entirely around the target buildings and performed nadir and oblique imaging. The average GSD of the images is around 1 cm/pixel. The average end lap and side lap of the images are 80% and 70%, respectively. The average density on walls is 20 points/dm2. Twenty-two ground control points (GCPs) were used for geo-referencing and model calculation. The images were processed with Agisoft PhotoScan to generate sparse point clouds via structure from motion (SfM) and then dense point clouds using multi-view stereo matching (MVS). The average horizontal and vertical accuracies on the 16 control points are 0.5 and 1.2 cm, respectively; those on the six check points are 0.7 and 1.5 cm, respectively. The accuracy assessment of the GCPs was performed via the root-mean-square error.
Different kinds of windows, including casement windows, are taken into consideration (Figure 2). They are located on 10 wall faces and seven roof faces.

3.1. Filtering

Useful spatial data usually come with unsuitable data, such as clutter, noise, and outliers. Filtering is one of the possible methods to tackle this matter. It can facilitate data mining via downsizing the data and eliminating undesirable data.
Normal vectors are important characteristics of a surface, since they estimate the inclination of the underlying surface. One method of normal vector computation approximates the local plane tangent to the surface; this method employs the points directly to generate normal vectors. The directions of the local normal vectors on a real-world surface follow its changes, so that normal vectors near edges become distorted. A normal vector on a planar surface is theoretically surrounded by parallel normal vectors (Figure 3); therefore, these normal vectors have an equal number of parallel normal vectors within their neighbourhood, defined by a specific radius.
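The tangent-plane approximation is commonly implemented via principal component analysis of each point's neighbourhood: the eigenvector of the covariance matrix with the smallest eigenvalue approximates the surface normal. A minimal Python sketch (the paper's own implementation is in MATLAB; the sample points here are hypothetical) could look like this:

```python
import numpy as np

def estimate_normal(points):
    """Estimate the normal of the local tangent plane by PCA:
    the eigenvector of the neighbourhood covariance matrix with
    the smallest eigenvalue approximates the surface normal."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # smallest-eigenvalue eigenvector

# Hypothetical points sampled from the plane z = 0
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                [1, 1, 0], [0.5, 0.5, 0]], dtype=float)
n = estimate_normal(pts)
print(np.abs(n))  # ~ [0, 0, 1] up to sign
```

Near an edge, the neighbourhood spans two surfaces, so the estimated normal is a blend of both orientations; this is the distortion the filter exploits.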
One representation of a normal vector uses directional cosines (Equation (1)), where $\mathbf{N}$ is a normal vector whose three components are defined through the three directional angles $a$, $b$, and $c$ (Figure 4):

$\mathbf{N} = (N_x, N_y, N_z), \qquad N_x = \cos a, \quad N_y = \cos b, \quad N_z = \cos c \tag{1}$
If the normal vectors are depicted by directional cosines, each point on the sphere (Figure 5a) represents a normal vector, and each colorful area on the sphere shows a planar surface of the building. In addition, in the feature space of normal vectors (Figure 5a), the local surface densities of the edge areas (in blue in Figure 5b) are lower than those of other areas, because edges are narrow areas with normal vectors of different directions. However, other areas with similar characteristics exist in the image-derived point cloud on account of heterogeneity and clutter; indeed, pseudo edges bring the same effect as real edges. This fact is the basis of the proposed filter for detecting points on edges and window frames. In Figure 5c, the clustering of the first dataset is generated by density grouping, as demonstrated in the histogram in Figure 5b. These clusters conform to the planar faces of the building.
Different planar patches have various normal vectors and are discriminated in the feature space (Figure 5a), in the histogram of the local density (Figure 5b), and on the point cloud of the building (Figure 5c) with matching colors. In addition, points on the break lines of the roofs, footprints, and walls, as well as beside the windows, the scaffold, and the walls of dormers, are recognizable in blue (Figure 5c). In order to discriminate these areas, a threshold has to be determined.
Setting a threshold in the feature space of a heterogeneous point cloud is a complicated matter: although the feature space is more uniform, it inherits high variations. Hence, we utilize the ratio of densities as a computationally efficient parameter. In the space of local normal vectors, the local surface densities $\rho_1$ and $\rho_2$ around a point within two different radii $R_1$ and $R_2$ are related by Equation (2). Employing two radii to examine the surface around each point imposes a scale parameter on the filtering approach.
$\dfrac{\rho_1}{\rho_2} = \dfrac{n_{v_1}}{n_{v_2}} \left( \dfrac{R_2}{R_1} \right)^2 \tag{2}$
where $n_{v_1}$ and $n_{v_2}$ are the numbers of points in the feature space within $R_1$ and $R_2$, respectively. In Figure 6a, the histogram of $\rho_1/\rho_2$ is depicted; in Figure 6b, the point cloud of the building is labelled correspondingly.
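A minimal Python sketch of the two-scaled density ratio of Equation (2), applied to a hypothetical toy feature space of unit normals (the radii 0.15 and 1 follow the values reported later for $(R_1, R_2)$), might look like:

```python
import numpy as np

def two_scale_density_ratio(normals, i, r1, r2):
    """Two-scaled surface density ratio rho1/rho2 (Equation (2)) for
    point i in the feature space of unit normal vectors: counts of
    neighbours within radii r1 < r2, scaled by the area ratio."""
    d = np.linalg.norm(normals - normals[i], axis=1)
    nv1 = np.count_nonzero(d < r1)
    nv2 = np.count_nonzero(d < r2)
    return (nv1 / nv2) * (r2 / r1) ** 2

# Hypothetical feature space: many nearly parallel normals (a planar
# patch) plus a few divergent ones (an edge region).
rng = np.random.default_rng(0)
plane = np.tile([0.0, 0.0, 1.0], (50, 1)) + 0.01 * rng.standard_normal((50, 3))
edge = rng.standard_normal((10, 3))
normals = np.vstack([plane, edge])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
r = two_scale_density_ratio(normals, 0, 0.15, 1.0)
print(r)
```

A point on a planar patch keeps most of its feature-space neighbours at both radii, so the ratio is large; near an edge the small radius captures far fewer consistent normals and the ratio drops.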
In Figure 3, the local normal vectors around two points, one near and one far from an edge, are depicted schematically. The directions of the vectors around the point far from the edge are similar. If $\rho_{12} = \rho_1/\rho_2$ is defined as the two-scaled density, the $\rho_{12}$ values of such points are remarkably close. Nevertheless, the directions of the local normal vectors around the point near the edge vary, especially inside the bigger circle; hence, the $\rho_{12}$ values of these points are diverse. Therefore, the changes of $\rho_{12}$ are high on break lines and on real/pseudo edges.
Considering this fact, the rate of change of $\rho_{12}$ over two points $x$ and $i$, $RoC\,\rho_{12}(x,i)$, is calculated through Equation (3), in which $d_{x,i}$ is the Euclidean distance between the two points. Next, the magnitude of this parameter (the change of the two-scaled surface density of a point) over the neighborhood with radius $r$, $\nabla\rho_{12}(i,r)$, is computed via Equation (4).
$RoC\,\rho_{12}(x,i) = \left( \rho_{12}^{\,x} - \rho_{12}^{\,i} \right) / \, d_{x,i} \tag{3}$

$\nabla\rho_{12}(i,r) = \max_{x \in C_r^i} RoC\,\rho_{12}(x,i), \qquad C_r^i = \{\, x \mid d_{x,i} < r \,\} \tag{4}$
This feature is computed for $r = R_1$ using a parallel processing server with two Intel E5-2690 processors with 2.6 GHz and 12 cores, and 256 GB of DDR4 RAM. Processes are carried out concurrently at the data level, utilizing the parallel computing capabilities of MATLAB. The computation takes 8.4 h for the first dataset and 46.2 h for the second. By thinning out the points with gradients near 0 (Figure 6c), the points with high gradients, which are mostly located on break lines and edges, remain. Figure 6d displays the outcome of the proposed global edge detection method.
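Equations (3) and (4) can be sketched in Python as follows (the 1D strip of points and the step in $\rho_{12}$ are hypothetical, chosen so the gradient peaks at a simulated break line; the maximum is taken over absolute rates of change):

```python
import numpy as np

def grad_rho12(points, rho12, r):
    """For each point i, the magnitude of change of the two-scaled
    density rho12 over its r-neighbourhood (Equations (3) and (4)):
    max over neighbours x of |rho12[x] - rho12[i]| / d(x, i)."""
    n = len(points)
    grad = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        mask = (d < r) & (d > 0)          # exclude the point itself
        if mask.any():
            grad[i] = np.max(np.abs(rho12[mask] - rho12[i]) / d[mask])
    return grad

# Hypothetical strip of 10 points along x; rho12 is flat except for a
# jump in the middle, mimicking a break line.
pts = np.column_stack([np.arange(10, dtype=float), np.zeros(10), np.zeros(10)])
rho = np.where(np.arange(10) < 5, 1.0, 5.0)
g = grad_rho12(pts, rho, 1.5)
print(g)  # peaks at the two points straddling the jump
```

Thresholding `g` near zero then retains exactly the points adjacent to the simulated break line, which is the thinning step described above.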

Sensitivity Analysis of Filtering to the Radius of Density Computation in the Feature Space

The support radius (R) is a determining parameter in the computation of density in the feature space, and it affects the subsequent steps of the algorithm; this is why the setting of this parameter is analyzed here. Generally, there are two major approaches to estimating normal vectors: employing a fixed radius or a fixed number of neighbouring points to compute the local tangent surface, which is the basis of local normal vector computation.
In order to recognize the local geometry around each point, the density in the feature space needs to be computed within a fixed radius, especially in unorganized heterogeneous point clouds. This is because using different radii generates local surfaces at different scales: big radii reveal big structures, while small radii suppress them and indicate fine structures. This is in contrast to some studies, such as [44], in which radii were computed based on Shannon entropy minimization to raise the distinctiveness of the features.
In Figure 7, the effect of employing varied radii on the scale of segmentation in the feature space is demonstrated. In Figure 7a, using a big radius leads to big clusters and under-segmentation, which cannot match the inter-class responses, while in Figure 7b, a small radius segments the point cloud into very small clusters and cannot arrange the proper intra-class responses. Consequently, neither can discriminate the clusters on the cross sections completely. Figure 5a displays the segmentation with a radius of 0.15, which produces a segment for each face of the building.
In Figure 8, the impact of using different sets of radii for the computation of $\rho_{12}$ on the filtered patches of the point cloud is depicted. In Figure 8a, there is still plenty of clutter and many patches on the faces. In Figure 8b, plenty of pseudo edges appear. In Figure 8c, only a few points from different patches are kept, and most of the points are discarded. In Figure 6d, the windows and major break lines of the building are retained using $(R_1, R_2)$ values of (0.15, 1).
The results of the sensitivity analysis confirm that noise and clutter are included in the isolated segments if the radii are too small. This matter is of importance, especially for photogrammetric point clouds, because these data usually suffer from clutter in depth [45]. In the current data, there is considerable clutter behind the windows, on some parts of the walls near big windows, and on the walls of dormers; hence, a small radius cannot remove it. If the radii are too big, only sparse patches from subsections of the building, such as windows and walls, remain.

3.2. Window Detection

The filtered point cloud mostly consists of the ridges and edges of the roof, walls, doors, windows, stairs, and dormers. A photogrammetric point cloud benefits from color. This non-geometrical feature is usually represented in RGB space; however, color values in this space suffer from high correlation. Therefore, they are transformed into the HSI color space. Afterwards, the algorithm undergoes a two-step color-based selection process and then identifies window patches.
The points generated from what is inside the building are referred to as the voyeur effect [46]. The histogram of intensity (I), namely the amount of light of each color, is created (Figure 9a). The voyeur points, which are located behind glass or in intrusions, occupy the lowest part of the histogram (Figure 9a,b); therefore, these points are removed by thresholding.
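The intensity thresholding step can be sketched as follows; the colors and the threshold value are hypothetical, and only the intensity channel I = (R + G + B)/3 of the HSI transform is shown:

```python
import numpy as np

def remove_voyeur_points(rgb, threshold):
    """Keep points whose HSI intensity I = (R + G + B) / 3 exceeds a
    threshold; dark 'voyeur' points seen through glass fall in the
    lowest part of the intensity histogram and are dropped."""
    intensity = rgb.mean(axis=1)
    return intensity >= threshold

# Hypothetical colors in [0, 1]: two dark interior points,
# two bright facade points.
rgb = np.array([[0.05, 0.05, 0.10], [0.10, 0.08, 0.06],
                [0.70, 0.60, 0.50], [0.90, 0.85, 0.80]])
keep = remove_voyeur_points(rgb, threshold=0.2)
print(keep)  # [False False  True  True]
```

In practice the threshold would be read off the low end of the intensity histogram of Figure 9a rather than fixed a priori.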

3.2.1. Perceptual Organization

Perceptual organization concerns the fact that humans perceive the world in the form of objects. According to this concept, individual elements build parts which, in turn, are grouped into bigger objects. The two main types of organization are grouping and shape. Grouping creates segregated parts within a holistic percept [47]. In this study, the law of proximity is applied for perceptual grouping; it states that close elements probably belong to the same group. In addition, the similarity in the sizes of groups is utilized, in that elements in similar-sized groups probably belong together. Therefore, thresholds for the minimum distance between points and for the size of the bounding boxes of patches are determined. In this regard, the average density and the sizes of the windows are investigated. Each patch consists of a group of points that is perceived as a separate whole. In Figure 10, some of the extracted patches are demonstrated.
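The law of proximity can be approximated by single-linkage grouping: points closer than a distance threshold join the same patch. A Python sketch with hypothetical points and threshold (the paper derives its thresholds from the average density and window sizes):

```python
from collections import deque
import numpy as np

def group_by_proximity(points, max_dist):
    """Gestalt law of proximity as single-linkage grouping: points
    closer than max_dist share a patch label (breadth-first flood fill)."""
    n = len(points)
    labels = np.full(n, -1)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        queue = deque([seed])
        labels[seed] = current
        while queue:
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.flatnonzero((d < max_dist) & (labels == -1)):
                labels[j] = current
                queue.append(j)
        current += 1
    return labels

# Two hypothetical clusters far apart
pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.2, 0.1, 0],
                [5, 5, 0], [5.1, 5, 0]], dtype=float)
labels = group_by_proximity(pts, max_dist=0.5)
print(labels)
```

A bounding-box size check per label would then implement the similarity-of-size rule, discarding groups far larger or smaller than a typical window.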
Another associated concept in perceptual organization is shape. It is the result of a global perceptual process that usually identifies rules to give each patch a form [47]. The Gestalt principles are employed to define the rules of the shape detection problem and are taken into consideration to detect window patches. Window patches contain gaps, and their structures are similarly composed of consecutive sub-segments of points and gaps. Hence, the differences in the number of points between these consecutive sub-segments are high. Following this pattern, Inequality (5) holds, where $i$ and $j$ index a sub-segment in a window patch and in a non-window patch, respectively. This inequality expresses the big difference between the point-count differentials of window and non-window patches.
$\mathrm{diff}_{\mathrm{window\ patch}}(i, i+1) \gg \mathrm{diff}_{\mathrm{non\text{-}window\ patch}}(j, j+1) \tag{5}$
Each patch is divided into several sub-segments using a partitioning moving box, as seen in Figure 11a. The width of the box defines the number of sub-segments, or steps, for each patch. Afterwards, the differences in the number of points in consecutive sub-segments are computed and displayed in the diagram of Figure 11b.
In this diagram, each patch is segmented into 10 parts and displayed as a separate zigzag line. Except for the last two zigzag lines, each line possesses at least one big difference, or leap. Apart from these two lines, which belong to a wall and a door, the differences between the maximum and minimum of the other lines are large, or the lines have several large leaps. Therefore, patches with repetitive big differences or leaps can be windows. However, this depends on the size of the moving box used to divide the patches.
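The moving-box partitioning and the differential point counts can be sketched as follows, with a hypothetical window-like patch (two dense frame columns separated by a gap) and a uniform wall-like patch:

```python
import numpy as np

def subsegment_differences(xs, n_steps):
    """Partition a patch into n_steps sub-segments along its horizontal
    extent and return the differences in point counts between consecutive
    sub-segments (the 'zigzag' signal of Figure 11b)."""
    edges = np.linspace(xs.min(), xs.max() + 1e-9, n_steps + 1)
    counts, _ = np.histogram(xs, bins=edges)
    return np.diff(counts)

# Hypothetical window-like patch: dense frame columns with a sparse gap
window_xs = np.concatenate([np.full(30, 0.1), np.full(2, 0.5), np.full(30, 0.9)])
# Hypothetical wall-like patch: points spread uniformly
wall_xs = np.random.default_rng(1).uniform(0.0, 1.0, 62)

d_win = subsegment_differences(window_xs, 3)
print(d_win)                               # large leaps at the gap
print(subsegment_differences(wall_xs, 3))  # small differences
```

The window patch produces large alternating leaps in the differential signal, while the wall patch stays flat; thresholding these leaps is what separates the two classes of zigzag lines.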

3.2.2. Sensitivity Analysis of Window Detection Approach to the Size of Moving Box

In order to evaluate the influence of the size of the moving box on the results of window detection, a sensitivity analysis is carried out for a number of window and non-window patches. The computed input parameter is then used for the rest of the data in the experiments section. In the diagrams of Figure 12, the differential number of points for consecutive sub-segments of the patches employed for Figure 11b is depicted for three different numbers of steps.
With the adoption of the criteria proposed for the assessment of object detection in [48], the detection approach can be evaluated in terms of correctness and completeness. These criteria are computed via Equations (6) and (7), in which a true positive (TP) is a window patch that is detected as a window by our method, a false positive (FP) is a non-window patch that is detected as a window, and a false negative (FN) is a window patch that is not detected as a window. The numbers of patches are denoted by |·|.
correctness = TP / (TP + FP)   (6)
completeness = TP / (TP + FN)   (7)
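Equations (6) and (7) translate directly into code; as a check, the counts below are those reported for the first dataset in Table 2 (TP = 59, FP = 3, FN = 3).

```python
def correctness(tp, fp):
    # Equation (6): fraction of detections that are real windows
    return tp / (tp + fp)

def completeness(tp, fn):
    # Equation (7): fraction of real windows that were detected
    return tp / (tp + fn)

# counts reported for the first dataset (Table 2): TP = 59, FP = 3, FN = 3
print(round(100 * correctness(59, 3)), round(100 * completeness(59, 3)))  # 95 95
```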
The results of the evaluation are listed in Table 1. The completeness results demonstrate the sensitivity of the algorithm to the width of the moving box and reveal that increasing the number of steps does not necessarily improve detection. On window patches with low density, or on windows with shutters, narrow partitioning leaves only a few points in each sub-segment; consequently, their differentials approach those of non-window patches. According to Table 1, 15 steps is the best choice for partitioning and is used for partitioning the remaining patches.

4. Experiments and Results

The method is applied to the second dataset, a high-density heterogeneous point cloud of a complex building block that includes three buildings with an L-shaped junction. The feature space of local normal vectors and the clustering of the building via local density in the feature space are demonstrated in Figure 13.
In Figure 14a,b, the histogram of ρ12 and the corresponding point labelling are demonstrated for radii of 0.15 and 1.5 m. In this histogram, most of the clutter caused by the voyeur effect gathers in the separate red bin. Thresholding ρ12 (Figure 14c) leads to the filtered point cloud, which mainly contains building edges (Figure 14d). These belong chiefly to ridges, intersections of walls, window frames, stairs, chimneys, dormers, and items on the ground.
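This excerpt does not spell out the exact definition of ρ12, so the following is only a plausible sketch, assuming ρ12 is the ratio of local densities of the unit normal vectors computed at the two radii R1 < R2 on the unit sphere. Under that reading, edge points mix the normals of adjacent facets and thus fall into sparse regions of the feature space, giving low ratios; the quantile threshold stands in for the histogram thresholding of the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def feature_space_density(normals, radius):
    """Surface density of each unit normal among its neighbours on the
    unit sphere (the paper's feature space of directional cosines)."""
    counts = cKDTree(normals).query_ball_point(normals, radius,
                                               return_length=True)
    return np.asarray(counts) / (np.pi * radius ** 2)

def edge_mask(normals, r1, r2, quantile=0.2):
    """Assumed rho_12 = density at r1 / density at r2; the lowest ratios
    are kept as candidate edge points."""
    rho = feature_space_density(normals, r1) / feature_space_density(normals, r2)
    return rho <= np.quantile(rho, quantile)

def unit(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(2)
roof = unit([0, 0, 1] + rng.normal(0, 0.02, (50, 3)))  # one planar facet
wall = unit([1, 0, 0] + rng.normal(0, 0.02, (50, 3)))  # another facet
t = np.linspace(np.radians(10), np.radians(80), 20)    # crease normals
edge = np.column_stack([np.sin(t), np.zeros_like(t), np.cos(t)])
mask = edge_mask(np.vstack([roof, wall, edge]), r1=0.1, r2=1.0)
print(mask[100:].all())  # True: all crease normals are flagged as edges
```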
The outcome of the filtering consists of edge and non-edge areas. The non-edge areas comprise the remaining clutter inside the building, some parts of towers, installations, and small patches of the roof and walls. They exist mostly because the heterogeneity of the point cloud produces pseudo-edge areas. According to Figure 15, the intensity of the filtered point cloud is highest on the frames and lowest on the points behind the glass. Therefore, the points of the clutter areas inside the building are removed by thresholding.
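A minimal sketch of this intensity thresholding follows. Two assumptions are made here that the paper does not state: intensity is taken as a luminance computed from the RGB columns with the standard luma weights, and the threshold is a fixed quantile rather than a value read from the histogram of Figure 15.

```python
import numpy as np

def remove_interior_clutter(cloud_xyzrgb, low_q=0.2):
    """Drop the darkest points: intensity here is a luminance computed
    from the RGB columns; the clutter seen through the glass images
    darker than the facade and the window frames."""
    rgb = cloud_xyzrgb[:, 3:6]
    intensity = rgb @ np.array([0.299, 0.587, 0.114])  # standard luma weights
    return cloud_xyzrgb[intensity >= np.quantile(intensity, low_q)]

rng = np.random.default_rng(4)
facade = np.hstack([rng.random((80, 3)), np.full((80, 3), 200.0)])   # bright
interior = np.hstack([rng.random((20, 3)), np.full((20, 3), 30.0)])  # dark
print(len(remove_interior_clutter(np.vstack([facade, interior]))))  # 80
```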
Afterwards, the remaining points are analyzed according to Section 3.2.1. In Figure 16, the output of the grouping process and some of the segregated patches are displayed.
Employing Inequality (5), window detection is carried out with 15 steps for the partitioning process. Patches with consecutive big leaps in the diagrams (Figure 17), or with big differences between maximum and minimum, are identified as windows.
To evaluate the window detection approach, the bounding boxes of the windows are determined on the original point clouds as the ground truth. We prefer a pessimistic assessment, considering the ability of the proposed approach in large-scale window detection. Therefore, if at least 70% of the points of a detected patch are located inside one of the reference bounding boxes, the patch is counted as a TP. If less than 50% of the points of a detected patch are placed inside a reference bounding box, or it has no correspondence in the ground truth, the patch is counted as an FP. If a bounding box does not contain at least 50% of the points of any detected patch, or it has no correspondence among the patches, it is counted as an FN. The evaluation results for both datasets are presented in Table 2.
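The pessimistic matching rules above can be sketched as follows; the synthetic patches and boxes are purely illustrative. Note that a patch whose best overlap lies between 50% and 70% is "partly detected" and counts neither as TP nor as FP, while still marking its box as found.

```python
import numpy as np

def fraction_inside(patch, lo, hi):
    """Fraction of the patch's points inside an axis-aligned box."""
    return np.all((patch >= lo) & (patch <= hi), axis=1).mean()

def evaluate(patches, boxes, tp_frac=0.7, fp_frac=0.5):
    """Pessimistic matching: TP if >= 70% of a patch lies in some reference
    box, FP if no box holds even 50% of it; a box never covered at >= 50%
    by any patch is an FN."""
    tp = fp = 0
    matched = set()
    for patch in patches:
        fracs = [fraction_inside(patch, lo, hi) for lo, hi in boxes]
        best = int(np.argmax(fracs))
        if fracs[best] >= tp_frac:
            tp += 1
        elif fracs[best] < fp_frac:
            fp += 1
        if fracs[best] >= fp_frac:
            matched.add(best)
    return tp, fp, len(boxes) - len(matched)

rng = np.random.default_rng(5)
boxes = [(np.zeros(3), np.ones(3)), (np.full(3, 5.0), np.full(3, 6.0))]
inside = rng.uniform(0.1, 0.9, (40, 3))   # fully inside box 0 -> TP
stray = rng.uniform(10.0, 11.0, (40, 3))  # matches nothing    -> FP
print(evaluate([inside, stray], boxes))   # (1, 1, 1): box 1 missed -> FN
```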
In Figure 18, the results of the method are displayed; the extracted patches inside the bounding boxes illustrate the outcome.
In the first dataset, patches of a door and of the scaffolding were wrongly detected as windows. In addition, three windows near the scaffolding were only partly detected (less than 70% of their points). In the second dataset, patches belonging to doors and walls were falsely detected. Furthermore, eight windows were partly detected, and three windows near the tower, close to the installation and to big holes, were not detected.

Comparison and Discussion

We examine the proposed approach by comparison with another multi-scale filtering method, proposed in [26] for the extraction of edges in unorganized point clouds. The critical issue in this method is the selection of proper radii that reflect the structure of the surfaces at a small and a large scale. In addition, the tolerance of the DoN should be determined suitably to detect the intended edges. In order to detect window frames in different parts of the building, several sets of radii are set empirically, according to Figure 19.
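For reference, the DoN operator of [26] can be sketched in a few lines: estimate a PCA normal per point at a small and a large support radius and take half the norm of their difference; planar areas score near zero, while points near a crease score high. This is a simplified re-implementation for illustration, not the PCL code used in the comparison.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_normals(points, radius):
    """PCA normal per point from its radius neighbourhood, flipped to +z."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, nbrs in enumerate(tree.query_ball_point(points, radius)):
        q = points[nbrs] - points[nbrs].mean(axis=0)
        w, v = np.linalg.eigh(q.T @ q)   # smallest eigenvector = normal
        normals[i] = v[:, 0] if v[2, 0] >= 0 else -v[:, 0]
    return normals

def don_magnitude(points, r_small, r_large):
    """|n_small - n_large| / 2: near zero on planes, large near creases."""
    diff = pca_normals(points, r_small) - pca_normals(points, r_large)
    return np.linalg.norm(diff, axis=1) / 2

# an L-shaped scene: a floor plane meeting a wall plane
g = np.arange(0.0, 2.01, 0.1)
floor = np.array([[x, y, 0.0] for x in g for y in g])
wall = np.array([[0.0, y, z] for y in g for z in g[1:]])
cloud = np.vstack([floor, wall])
don = don_magnitude(cloud, 0.15, 0.6)
far = np.argmin(np.linalg.norm(cloud - [2.0, 1.0, 0.0], axis=1))
near = np.argmin(np.linalg.norm(cloud - [0.2, 1.0, 0.0], axis=1))
print(don[far] < 0.05, don[near] > 0.05)  # True True: the crease stands out
```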
After applying this filter using the PCL, the statistical outlier removal (SOR) filter [49] is employed to remove noise. The intensity is then utilized to remove the clutter inside the buildings, following the approach in Section 3.2. Afterwards, the different patches in the point clouds are segregated employing the grouping method of Section 3.2.1. Since the DoN method does not detect points on the middle bars of the frames of most windows, the proposed method based on the Gestalt principles cannot be applied here. Therefore, in order to remove patches of major building edges, such as ridges and edges of walls, thresholding of the ratio of the length to the width of the bounding boxes is applied. However, non-window patches still remain; they are in fact pseudo-edges caused by the high variations of point cloud density. A hole detection method in 2D space is suggested to discard these patches: in each bounding box, the points are projected onto the biggest face. The projected points, which lie on the x-z or y-z plane, surround a big hole if they belong to a window (Figure 20). The plane is therefore gridded, and squares that contain fewer points than a threshold are counted as empty. If the area of all connected empty squares exceeds a threshold, they form a hole and, consequently, the patch belongs to a window. This threshold is set by considering the area of the glass parts of different windows.
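The grid-based hole test can be sketched as follows; the cell size and the area threshold are placeholders, and the flood fill uses 4-connectivity, which the paper does not specify.

```python
import numpy as np

def has_hole(points_2d, cell=0.1, min_hole_area=0.04):
    """Grid the projected patch; cells without points are 'empty'; the
    patch holds a hole if one connected component of empty cells covers
    at least min_hole_area (m^2)."""
    lo, hi = points_2d.min(axis=0), points_2d.max(axis=0)
    nx, ny = np.maximum(1, np.ceil((hi - lo) / cell).astype(int))
    idx = np.minimum(((points_2d - lo) / cell).astype(int), [nx - 1, ny - 1])
    counts = np.zeros((nx, ny), int)
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
    empty, seen, best = counts == 0, np.zeros((nx, ny), bool), 0
    for sx in range(nx):
        for sy in range(ny):
            if empty[sx, sy] and not seen[sx, sy]:
                stack, size = [(sx, sy)], 0
                seen[sx, sy] = True
                while stack:                      # flood fill, 4-connected
                    x, y = stack.pop()
                    size += 1
                    for u, v in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                        if 0 <= u < nx and 0 <= v < ny and empty[u, v] and not seen[u, v]:
                            seen[u, v] = True
                            stack.append((u, v))
                best = max(best, size)
    return best * cell * cell >= min_hole_area

g = np.linspace(0.0, 1.0, 21)
grid = np.array([[x, y] for x in g for y in g])
border = (grid[:, 0] < 0.2) | (grid[:, 0] > 0.8) | (grid[:, 1] < 0.2) | (grid[:, 1] > 0.8)
frame, slab = grid[border], grid                  # hollow frame vs. full wall
print(has_hole(frame), has_hole(slab))  # True False
```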
The evaluation results of window detection via DoN, thresholding and hole detection methods are listed in Table 3.
In the first dataset, six shutters and dormers were wrongly detected as windows. In addition, 18 windows on dormers and partly curved big windows were not detected. In the second dataset, 12 patches of doors, shutters, dormers, and a tower were incorrectly detected as windows. Moreover, 22 windows on dormers and 17 windows on walls, plus three partly curved big windows, were not detected.
Alongside these results, a by-product of the DoN method is the partial detection of circular windows (Figure 21), which is beyond the scope of our method; the relative size of this type of window is small.
The filtered point clouds demonstrate 95% and 96% correctness, and 95% and 92% completeness, respectively. Hence, the proposed method greatly alleviates the clutter effect and the local changes of the point cloud density. However, structures like the remaining parts of the scaffolding in dataset 1, with their pattern of parallelism, and doors with glass parts are mistaken by the proposed method. The DoN method, by contrast, cannot detect most of the middle window bars and reaches only 71% and 72% completeness for the two datasets, respectively. It therefore runs into difficulties with the heterogeneous photogrammetric point cloud, which suffers from clutter and holes; the missed detection of big windows near the installation and near big holes confirms this. In addition, it depends strongly on the determination of the radii, so that some shutters and dormers, neighbouring structures of similar size to the windows, are wrongly detected as windows. It can, however, detect the edges of circular windows. This dependence poses a problem for this method, especially when processing various structures of similar size.

5. Conclusions

In this study, patches of window frames are detected within high-density cluttered photogrammetric point clouds generated from UAS imagery. The proposed approach introduces a density-based multi-scale filter in the feature space of local normal vectors to downsize the data and to globally detect edges in heterogeneous unorganized point clouds. Afterwards, intensity is computed from the color information to remove the voyeur effect. Next, perceptual organization is tailored and utilized for grouping, and window patches are then detected via the Gestalt principles. The proposed method is compared with a multi-scale DoN filter.
The processed point clouds retained 8% and 7% of the points of the original datasets 1 and 2, respectively, after applying the proposed method. Implementing the presented filter via parallel processing facilitates spatial data mining for the detection of window patches at similar ratios of reduction in volume. The outcome of utilizing perceptual organization on the filtered point clouds displays 95% and 96% correctness, and 95% and 92% completeness, respectively, in rectangular and partially curved window detection. The presented approach copes successfully with the clutter effect and with local changes of the point cloud density. However, some structures with a pattern of parallelism, or with glass parts, impair its performance.
Analyzing different radii for computing ρ12 in the feature space shows the sensitivity of the proposed filtering method to this parameter. Determining a proper radius depends on the variations of the point cloud density and can be analyzed in ongoing research. Perceptual organization acts successfully in window detection. In addition, based on the sensitivity analysis of the size of the moving box, the determination of a proper number of steps for partitioning window patches via the Gestalt principles is of great importance; the analysis demonstrates that smaller steps do not necessarily result in better partitioning. Thanks to the Gestalt principles, the proposed multi-scale filtering method achieves satisfying results, exploiting human perception even on most middle bars of window frames. Thus, the method retains high correctness and completeness, despite a pessimistically defined TP, in the proposed large-scale window detection. Furthermore, windows are correctly discriminated from intrusions such as doors (except for some that include glass parts) and from protrusions like installations, stairs, and towers, apart from parallel parts of scaffolding.
In future work, we intend to utilize big data concepts for the analysis of photogrammetric point clouds with ultra-high densities. On account of the high densities of the input datasets, big data platforms can be quite advantageous in handling this demanding type of data. Furthermore, employing more features extracted from the color information is one possible research direction for developing the object detection approach. Given the availability of a proper 3D training dataset, deep learning could also be employed to automatically recognize windows.

Author Contributions

The concept and methodology of the research were created by S.M. and M.J.V.Z. Formal analysis, implementation, validation, and writing of the original draft were performed by S.M.; M.H. carried out data acquisition and reviewed the manuscript; M.J.V.Z. supervised the research and reviewed the manuscript; M.M. improved the conceptualization and reviewed the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors thank H. Arefi from the School of Surveying and Geospatial Engineering, University of Tehran for insightful suggestions. The authors also appreciate helpful advice of M. Gerke from the Institute of Geodesy and Photogrammetry, Technische Universität Braunschweig. The authors acknowledge support by the German Research Foundation (DFG) and the Open Access Publication Funds of the Technische Universität Braunschweig.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tang, P.; Huber, D.; Akinci, B.; Lipman, R.; Lytle, A. Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques. Autom. Constr. 2010, 19, 829–843. [Google Scholar] [CrossRef]
  2. Gerhard, G.; Kolbe, T.H.; Claus Nagel, K.H. OGC City Geography Markup Language (CityGML) encoding Standard. 2012. Available online: http://www.opengis.net/spec/citygml/2.0 (accessed on 18 August 2018).
  3. Portalés, C.; Lerma, J.L.; Navarro, S. Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments. ISPRS J. Photogramm. Remote Sens. 2010, 65, 134–142. [Google Scholar] [CrossRef]
  4. Tuttas, S.; Stilla, U. Window Detection in Sparse Point Clouds Using Indoor Points. Available online: https://pdfs.semanticscholar.org/dc55/5028da62f79ba52b236a492966bdb9485df4.pdf (accessed on 18 August 2018).
  5. Baghani, A.; Valadan Zoej, M.J.; Mokhtarzade, M. Automatic hierarchical registration of aerial and terrestrial image-based point clouds. Eur. J. Remote Sens. 2018, 51, 436–456. [Google Scholar] [CrossRef]
  6. Aljumaily, H.; Laefer, D.F.; Asce, M.; Cuadra, D. Urban Point Cloud Mining Based on Density Clustering and MapReduce. J. Comput. Civ. Eng. 2017, 31, 1–11. [Google Scholar] [CrossRef]
  7. Malihi, S.; Valadan Zoej, J.M.; Hahn, M. Large-Scale Accurate Reconstruction of Buildings Employing Point Clouds Generated from UAV Imagery. Remote Sens. 2018, 10. [Google Scholar] [CrossRef]
  8. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  9. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  10. Jarzabek-Rychard, M.; Karpina, M. Quality Analysis on 3D Building Models Reconstructed from UAV Imagery. Available online: https://pdfs.semanticscholar.org/ef03/e60c3e321876734f77ac68f1c7f168b70992.pdf (accessed on 18 August 2018).
  11. Haala, N.; Cramer, M.; Rothermel, M. Quality of 3D Point Clouds from Highly Overlapping Uav Imagery. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-1/W2, 183–188. [Google Scholar] [CrossRef]
  12. Malihi, S.; Valadan Zoej, M.J.; Hahn, M.; Mokhtarzade, M.; Arefi, H. 3D building reconstruction using dense photogrammetric point cloud. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (IAPRS); ISPRS: Prague, Czech Republic, 2016; Volume XLI-B3, pp. 71–74. [Google Scholar]
  13. Nex, F.; Gerke, M. Photogrammetric DSM denoising. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (IAPRS); ISPRS: Zurich, Switzerland, 2014; Volume XL-3. [Google Scholar]
  14. Boehler, W.; Bordas Vicent, M.; Marbs, A. Investigating Laser Scanner Accuracy. Available online: http://dev.cyark.org/temp/i3mainzresults300305.pdf (accessed on 18 August 2018).
  15. Richter, R.; Döllner, J. Concepts and techniques for integration, analysis and visualization of massive 3D point clouds. Comput. Environ. Urban Syst. 2014, 45, 114–124. [Google Scholar] [CrossRef]
  16. Aljumaily, H.; Laefer, D.F.; Cuadra, D. Big-Data Approach for Three-Dimensional Building Extraction from Aerial Laser Scanning. J. Comput. Civ. Eng. 2016, 30. [Google Scholar] [CrossRef]
  17. Chen, C.C.; Chen, M.S. HiClus: Highly scalable density-based clustering with heterogeneous cloud. Procedia Comput. Sci. 2015, 53, 149–157. [Google Scholar] [CrossRef]
  18. Tuttas, S.; Stilla, U. Reconstruction of Rectangular Windows in Multi-Looking Oblique View Als Data. In IAPRS; ISPRS: Melbourne, Australia, 2012; Volume I-3, pp. 317–322. [Google Scholar]
  19. Löwner, M.; Benner, J.; Gröger, G.; Häfele, K. New Concepts for Structuring 3D City Models—An Extended Level of Detail Concept for CityGML Buildings. Available online: https://link.springer.com/chapter/10.1007/978-3-642-39646-5_34 (accessed on 18 August 2018).
  20. Arefi, H.; Engels, J.; Hahn, M.; Mayer, H. Levels of Detail in 3D Building Reconstruction from LiDAR Data. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.182.7121 (accessed on 18 August 2018).
  21. Daftry, S.; Hoppe, C.; Bischof, H. Building with drones: Accurate 3D facade reconstruction using MAVs. In Proceedings of the IEEE International Conference on Robotics and Biomimetics, Zhuhai, China, 6–9 December 2015. [Google Scholar]
  22. Han, X.; Jin, J.S.; Wang, M.; Jiang, W.; Gao, L.; Xiao, L. A review of algorithms for filtering the 3D point cloud. Signal Process. Image Commun. 2017, 57, 103–112. [Google Scholar] [CrossRef]
  23. Wang, J.; Yu, Z.; Zhu, W.; Cao, J. Feature-Preserving Surface Reconstruction From Unoriented. In Computer Graphics Forum; Wiley Online Library: Oxford, UK, 2013; Volume 32, pp. 164–176. [Google Scholar]
  24. Zaman, F.; Wong, Y.P.; Ng, B.Y. Density-based Denoising of Point Cloud. In 9th International Conference on Robotic, Vision, Signal Processing and Power Applications. Lecture Notes in Electrical Engineering; Springer: Singapore, 2017. [Google Scholar]
  25. Wenzel, K.; Rothermel, M.; Fritsch, D.; Haala, N. Filtering of Point Clouds from Photogrammetric Surface Reconstruction. Available online: https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-5/615/2014/isprsarchives-XL-5-615-2014.pdf (accessed on 18 August 2018).
  26. Ioannou, Y.; Taati, B.; Harrap, R.; Greenspan, M. Difference of normals as a multi-scale operator in unorganized point clouds. In Proceedings of the 2nd International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, Zurich, Switzerland, 13–15 October 2012. [Google Scholar]
  27. Wang, Q.; Wu, L.; Xu, Z.; Tang, H.; Wang, R.; Fashuai, L. A Progressive Morphological Filter for PointCloud Extraction from UAV Images. Available online: https://ieeexplore.ieee.org/abstract/document/6946860/ (accessed on 18 August 2018).
  28. Unnikrishnan, R. Statistical Approaches to Multi-Scale Point Cloud Processing; Carnegie Mellon University Press: Pittsburgh, PA, USA, 2008. [Google Scholar]
  29. Meixner, P.; Leberl, F. 3-Dimensional building details from aerial photography for Internet maps. Remote Sens. 2011, 3, 721–751. [Google Scholar] [CrossRef]
  30. Recky, M.; Leberl, F. Windows Detection Using K-means in CIE-Lab Color Space. In 20th International Conference on Pattern Recognition; IEEE: Istanbul, Turkey, 2010; pp. 356–359. [Google Scholar]
  31. Tylecek, R.; Sara, R. Stochastic Recognition of Regular Structures in Facade Images. IPSJ Trans. Comput. Vis. Appl. 2012, 4, 63–70. [Google Scholar] [CrossRef]
  32. Maboudi, M.; Bánhidi, D.; Gerke, M. Evaluation of Indoor Mobile Mapping Systems. Available online: https://www.researchgate.net/publication/321709273_Evaluation_of_indoor_mobile_mapping_systems (accessed on 18 August 2018).
  33. Wichmann, A.; Agoub, A.; Kada, M. Roofn3D: Deep Learning Training Data for 3D Building Reconstruction. Available online: https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-2/1191/2018/isprs-archives-XLII-2-1191-2018.pdf (accessed on 18 August 2018).
  34. Zolanvari, S.M.I.; Laefer, D.F. Slicing Method for curved façade and window extraction from point clouds. ISPRS J. Photogramm. Remote Sens. 2016, 119, 334–346. [Google Scholar] [CrossRef]
  35. Wang, Q.; Yan, L.; Zhang, L.; Ai, H.; Lin, X. A Semantic Modelling Framework-Based Method for Building Reconstruction from Point Clouds. Remote Sens. 2016, 8, 1–23. [Google Scholar] [CrossRef]
  36. Pu, S.; Vosselman, G. Knowledge based reconstruction of building models from terrestrial laser scanning data. ISPRS J. Photogramm. Remote Sens. 2009, 64, 575–584. [Google Scholar] [CrossRef]
  37. Aijazi, A.; Checchin, P.; Trassoudaine, L. Automatic Detection and Feature Estimation of Windows for Refining Building Facades in 3D Urban Point Clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 1–8. [Google Scholar] [CrossRef]
  38. Klavdianos, P.; Mansouri, A.; Meriaudeau, F. Gestalt-inspired features extraction for object category recognition. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013. [Google Scholar]
  39. Kootstra, G.; Bergstr, N.; Kragic, D. Gestalt Principles for Attention and Segmentation in Natural and Artificial Vision Systems. In Proceedings of the ICRA 2011 Workshop on Semantic Perception, Mapping and Exploration (SPME), Shanghai, China, 9 May 2011. [Google Scholar]
  40. Richtsfeld, A.; Zillich, M.; Vincze, M. Object Detection for Robotic Applications Using Perceptual Organization in 3D. Available online: https://link.springer.com/article/10.1007/s13218-014-0339-7#citeas (accessed on 18 August 2018).
  41. Tutzauer, P.; Becker, S.; Fritsch, D.; Niese, T.; Deussen, O. A Study of the Human Comprehension of Building Categories Based on Different 3D Building Representations. Available online: https://www.ingentaconnect.com/content/schweiz/pfg/2016/00002016/f0020005/art00005 (accessed on 18 August 2018).
  42. Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U. Voxel- and Graph-based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 43–50. [Google Scholar] [CrossRef]
  43. Remondino, F.; El-Hakim, S. Image-Based 3D Modelling: A Review. Photogram. Rec. 2006, 21, 269–291. [Google Scholar] [CrossRef]
  44. Weinmann, M.; Jutzi, B.; Hinz, S.; Mallet, C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J. Photogramm. Remote Sens. 2015, 105, 286–304. [Google Scholar] [CrossRef]
  45. Tutzauer, P.; Haala, N. Façade Reconstruction Using Geometric and Radiometric Point Cloud Information. Available online: https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-3-W2/247/2015/isprsarchives-XL-3-W2-247-2015.pdf (accessed on 18 August 2018).
  46. Tuttas, S.; Stilla, U. Reconstruction of Façades in Point Clouds From Multi-Aspect Oblique Als. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 2, 91–96. [Google Scholar] [CrossRef]
  47. Pinna, B. New Gestalt principles of perceptual organization: an extension from grouping to shape and meaning. Gestalt Theory 2010, 32, 11–78. [Google Scholar]
  48. Rutzinger, M.; Rottensteiner, F.; Pfeifer, N. A Comparison of Evaluation Techniques for Building Extraction From Airborne Laser Scanning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2009, 2, 11–20. [Google Scholar] [CrossRef]
  49. Rusu, R.B. Point Cloud Library. Available online: http://pointclouds.org/documentation/tutorials/statistical_outlier.php (accessed on 18 August 2018).
Figure 1. Study areas of (a) dataset 1 and (b) dataset 2, which include (c) various windows, scaffolding, doors, and installations.
Figure 2. The different window types considered in this study.
Figure 3. Local normal vectors around the points far from the edge and near the edge.
Figure 4. Directional angles of the normal vector.
Figure 5. Basis of the proposed filter in the normal vector space. (a) Directional cosines of normal vectors on the unit sphere. (b) Surface density of local normal vectors. (c) The labelled point cloud of the building (in m).
Figure 6. Filtering the point cloud. (a) Histogram of ρ12. (b) Labelling the point cloud (in m). (c) Thresholding the histogram of ρ12(i, R1). (d) The filtered point cloud (in m).
Figure 7. Segmentation in the feature space for R of (a) 0.5, (b) 0.015.
Figure 8. The remaining points using (R1, R2) of (a) (0.015, 0.2), (b) (0.15, 0.8), and (c) (0.1, 0.5). Units in figures are m.
Figure 9. Employing intensity to remove the voyeur effect. (a) The intensity histogram; (b) highlighted intensities on the point cloud, with a magnification of the voyeur effect.
Figure 10. Some magnified segregated patches.
Figure 11. Utilizing the Gestalt principles in window detection. (a) Partitioning a window via moving a box. (b) Differential number of points for the consecutive sub-segments, which belong to different window and non-window patches (10 steps).
Figure 12. Differential number of points for consecutive sub-segments of window and non-window patches employing (a) 5 steps, (b) 15 steps, and (c) 40 steps.
Figure 13. The normal vector space. (a) Directional cosines of normal vectors on the unit sphere. (b) Local surface density in feature space. (c) The labelled point cloud of the building.
Figure 14. Filtering the point cloud. (a) Histogram of ρ12. (b) Point labelling according to the histogram of ρ12. (c) Thresholding the histogram of ρ12(i, R1). (d) The filtered point cloud.
Figure 15. Applying intensity thresholding. (a) Intensity histogram and (b) the highlighted corresponding intensity on the point cloud.
Figure 16. A number of the magnified segregated patches.
Figure 17. Differential number of points of consecutive sub-segments belonging to different patches of (a) building 1 and (b) building 2, for 15 steps.
Figure 18. The extracted patches inside the bounding boxes.
Figure 19. The extracted window patches via the difference of normals (DoN) method on (a) dataset 1 with radii of (0.2, 2) m, (0.4, 3) m, and (0.6, 6) m, and (b) dataset 2 with radii of (0.1, 1) m, (0.2, 2) m, (0.25, 1.5) m, and (0.8, 7) m.
Figure 20. Hole detection in window patches on (y-z) plane. Units are in m.
Figure 21. Circular window detection by DoN method.
Table 1. Correctness and completeness of window detection approach utilizing different sizes of partitioning (steps).
Partitioning | Correctness (%) | Completeness (%)
5 steps | 87.5 | 78
10 steps | 100 | 78
15 steps | 100 | 100
40 steps | 100 | 89
Table 2. Correctness and Completeness of Window detection via Our Method.
Dataset | TP | FP | FN | Correctness (%) | Completeness (%)
1st dataset | 59 | 3 | 3 | 95 | 95
2nd dataset | 141 | 5 | 11 | 96 | 92
Table 3. Correctness and Completeness of Window Detection via DoN Method.
Dataset | TP | FP | FN | Correctness (%) | Completeness (%)
1st dataset | 44 | 6 | 18 | 88 | 71
2nd dataset | 110 | 12 | 42 | 90 | 72

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).