Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods

This paper presents an automated and effective method for detecting 3D edges and tracing feature lines from 3D-point clouds. The method, named Analysis of Geometric Properties of Neighborhoods (AGPN), includes two main steps: edge detection and feature line tracing. In the edge detection step, AGPN analyzes the geometric properties of each query point's neighborhood, and then combines RANdom SAmple Consensus (RANSAC) and an angular gap metric to detect edges. In the feature line tracing step, feature lines are traced from the detected edges by a hybrid method based on region growing and model fitting. Our approach is experimentally validated on complex man-made objects and large-scale urban scenes with millions of points. Comparative studies with state-of-the-art methods demonstrate that our method achieves promising, reliable, and high performance in detecting edges and tracing feature lines in 3D-point clouds. Moreover, AGPN is insensitive to the point density of the input data.


Problem Statement
Feature extraction in 2D-images, one of the most important topics in the fields of image analysis and computer vision, has been studied for years [1]. Edges and feature lines are considered important features in various urban scenes covering a vast number of man-made objects. Generally, once edges are detected, a further step is performed to trace feature lines from the detected edges. For edge detection in images, edges have been well defined, for example as "the boundary element between two regions of different homogeneous luminance" [2] or "large or sudden changes in some image attribute, usually the brightness" [3]. An extensive review of the established edge detection methods can be found in the literature [1]. Apart from the surveyed methods, many outstanding methods have been proposed, such as the revised Canny operator [4] and the Edison operator [5].
In recent years, benefiting from advances in sensor technology for both airborne and ground-based laser scanning, dense 3D-point clouds have become increasingly common [6]. Therefore, edge detection and feature line extraction in 3D-point clouds have become a novel research topic. However, in the fields of remote sensing and photogrammetry, the existing approaches have just scratched the surface, and the definition of 3D edges has not yet been settled, even though feature line extraction has long been a major issue in 3D city modeling.
In addition, the established edge detection methods defined for images cannot be applied directly to 3D-point clouds. The main reasons are given below: (1) The data representation is different. An image is considered a matrix, whereas a 3D-point cloud is an unorganized and irregularly distributed [7] scattered point set. (2) The type of information presented is different. An image contains implicit spatial information and abundant spectral information. Comparatively, a 3D-point cloud contains explicit spatial information and, at times, the reflected intensity [8]. (3) The spatial neighborhood is different. An image is arranged in a grid-like pattern, and the neighborhood of a pixel can easily be determined. However, a 3D-point cloud is unorganized, and the neighborhood of a point is more complex than that of a pixel in an image. Generally, in 3D-point clouds, there are three types of neighborhoods: the spherical neighborhood, the cylindrical neighborhood, and the k-closest-neighbors-based neighborhood [9]. The three types of neighborhoods are based on different search methods, and changing the search method alters the neighborhood correspondingly.
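To make the three neighborhood types concrete, the following sketch builds each of them with SciPy's cKDTree on a synthetic random cloud; the point data, radius r, and k are illustrative choices, not values from the paper. The cylindrical neighborhood is obtained by searching the points projected onto the xy-plane.

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic cloud: 1000 random points in the unit cube (illustrative only).
rng = np.random.default_rng(0)
points = rng.random((1000, 3))
tree = cKDTree(points)

query = np.array([0.5, 0.5, 0.5])
r, k = 0.1, 20

# (1) Spherical neighborhood: all points within 3D radius r of the query.
spherical = tree.query_ball_point(query, r)

# (2) Cylindrical neighborhood: radius r around the vertical axis through the
#     query, unbounded in z; implemented as a 2D search on projected points.
tree2d = cKDTree(points[:, :2])
cylindrical = tree2d.query_ball_point(query[:2], r)

# (3) k-closest-neighbors neighborhood: the k nearest points to the query.
_, knn = tree.query(query, k=k)

print(len(spherical), len(cylindrical), len(knn))
```

Note how the same query point yields three different index sets; this is exactly the sensitivity to the search method noted above.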
To address the aforementioned problems, an automated and effective method is proposed to detect edges and trace feature lines from 3D-point clouds. Prior to presenting our method, the definition of 3D edges is given herein. Traditionally, 2D edges in an image are defined as the following two types [10]: (1) Gray level edges, which are often associated with abrupt changes in average gray level.
(2) Texture edges, which are the abrupt "coarseness" changes between adjacent regions containing the same texture at different scales, or the abrupt "directionality" changes between the directional textures in adjacent regions.
The definition of 2D edges has then been extended in the literature [1-4,11]. Specifically, the literature [1] defines 2D edges as one-dimensional discontinuities in the intensity surface of the underlying scene. However, intensity or spectral information cannot describe the complete geometric properties of 3D-point clouds. According to the definitions in images and the characteristics of 3D-point clouds, we define 3D edges as 3D discontinuities of the geometric properties in the underlying 3D-scene.
Mathematically, we define 3D edges as the following two types (see Figure 1): (1) Boundary elements, which are often associated with an abrupt angular gap in the shape formed by their neighborhoods. The details are presented in Section 2.2. Boundary elements are the edges belonging to roof contours, façade outlines, height jump lines [12], and other types of surface contours. Specifically, the surface is a 3D-plane or a curved surface. (2) Fold edges, which are the abrupt "directionality" changes between the normal directions of adjacent surfaces. Generally, two intersecting curved or planar surfaces exist in the neighborhood of a fold edge. The details are presented in Section 2.2. Fold edges are the edges belonging to plane intersection lines [13], sharp feature lines [14], breaklines [15], and other types of intersections between different surfaces.
Edge detection in 3D-point clouds is similar to that in 2D image processing. The usual aim of edge detection is to locate edges belonging to the boundaries of objects of interest [3]. Most edge detection techniques consist of two stages [11]: (1) converting an original image into features; and (2) assigning points to edges or non-edges. Similarly, our proposed edge detection procedure first computes the angular gap feature for all the points in a 3D-point cloud, and then assigns the points to edges or non-edges. Sometimes, there is a further stage called edge linking, in which the detected edges are examined once again, for instance to obtain closed contours [11]. For 3D-point clouds, the edge linking procedure may relate to a special application such as line segment extraction or building reconstruction. In this paper, we present a feature line tracing procedure for linking the detected edges into feature lines, such as boundaries and intersection lines. The feature lines may be straight or curved but must be smooth. To detect the defined 3D edges and trace the feature lines from 3D-point clouds, we propose the Analysis of Geometric Properties of Neighborhoods (AGPN) method. AGPN first analyzes the geometric properties of each query point's neighborhood in the spatial domain, and then detects 3D edges based on RANSAC and the angular gap metric. Finally, to trace feature lines from the 3D edges, a hybrid method based on region growing and model fitting is proposed. With the proposed AGPN method, we can detect the two defined types of 3D edges, boundary elements and fold edges, from 3D-point clouds directly, without extra image processing or point cloud preprocessing (e.g., segmentation or object recognition).
The above-noted approaches can detect edges on regular feature lines, such as plane intersection lines [13] or breaklines [15,16], and roof or wall outlines [15,17], in buildings or piecewise planar objects. We mainly review the aforementioned direct methods herein, because the indirect methods may lose spatial geometric information when a 3D-point cloud is converted into an image. To extract roof boundaries or façade outlines, a boundary estimation method is required in the direct methods. Initially, 3D-point clouds without a high point density brought challenges to this task [18]. The literature [12] extracts roof boundaries, named height jump lines, by using segmented ground plans. With the improvement of the point density of 3D-point clouds, two groups of boundary estimation methods have been proposed. The first group of methods [17,19,20] extracts boundaries based on triangulation. In this case, a Triangular Irregular Network (TIN) for the planar segments is generated first; long TIN edges then appear only at the outer boundary (segment outlines) or inner boundary (holes), and boundary points are simply the end points of the long TIN edges. The second group of methods [21,22] extracts boundaries based on the convex hull algorithm. In this case, the convex hull algorithm is modified by local convex hull testing to deal with complex boundaries; the points that are close enough to the query local convex boundary are then picked up and treated as boundary points. In comparison with boundary extraction, the extraction of plane intersection lines is more convenient in the field of photogrammetry. After all the planar segments in a 3D-point cloud have been determined, the topologic relations among the planar segments are computed and represented by an adjacency matrix [15]; all pairs of adjacent planar segments are then intersected to extract plane intersection lines [12,13,15,23], and thus, the edges on the intersection lines can be easily detected.
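The long-TIN-edge idea from the first group of boundary estimators can be sketched in a few lines. The example below is a minimal illustration on a synthetic planar segment with a hole, using SciPy's Delaunay triangulation as the TIN; the grid and the length threshold are assumptions for the demo, not parameters from the cited methods.

```python
import numpy as np
from scipy.spatial import Delaunay

# Synthetic planar segment: a unit grid with a square hole punched out.
xs, ys = np.meshgrid(np.arange(20.0), np.arange(20.0))
pts = np.c_[xs.ravel(), ys.ravel()]
pts = pts[~((np.abs(pts[:, 0] - 10) < 3) & (np.abs(pts[:, 1] - 10) < 3))]

tin = Delaunay(pts)  # the TIN of the segment

# Collect the unique edges of the TIN.
edges = set()
for a, b, c in tin.simplices:
    for i, j in ((a, b), (b, c), (c, a)):
        edges.add((min(i, j), max(i, j)))

# Long TIN edges only occur where the surface is not densely sampled; in this
# dense, convex example they concentrate around the hole, and their end points
# are taken as boundary points.
threshold = 1.5  # grid spacing is 1.0, so interior edges are at most ~1.42
boundary = {i for e in edges
            if np.linalg.norm(pts[e[0]] - pts[e[1]]) > threshold
            for i in e}

print(len(boundary))
```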
The aforementioned methods can, however, detect edges and extract feature lines only in regular buildings or piecewise planar objects. Furthermore, the literature [6] reviews the drawbacks of the work in [13], some of which are common to state-of-the-art methods using 3D laser scanning data. The drawbacks include: (1) these methods are incapable of fitting a small and narrow plane in a noisy point cloud; and (2) these methods may generate unexpected lines at non-planar surfaces when the data become complex.
Some approaches [14,36-40] employ surface meshes or point-based surfaces, which can detect 3D edges or extract feature lines from some irregular objects and more complex surfaces. However, these methods have only been applied to small-scale 3D-point clouds with a single object. In particular, an angular gap based method [39,40] has been proposed, whose variations are widely used in building reconstruction [41], fan clouds [42], and the Point Cloud Library (PCL) [43]. In this paper, the angular gap based method is utilized as a criterion in our proposed method.

Overview
To detect 3D edges and trace feature lines, a two-step strategy based on AGPN is illustrated in Figure 2. In the diagram shown in Figure 2, the upper part, marked by blue dotted lines, is the first step, and the lower part, marked by red dotted lines, is the second step. The first step, detailed in Section 2.2, detects or locates edges in 3D-point clouds based on RANSAC, normal optimization, and angular gap computation. The second step, detailed in Section 2.3, traces feature lines from the detected edges based on neighborhood refinement and growing criterion determination. The pseudo code of our proposed AGPN method and its parameters are shown in Appendix A.

Geometric Property Analysis
The inherent property of being an edge lies in the local neighborhood of a point rather than in the point itself [44]. Based on this principle, we design an algorithm that determines whether a point is an edge by analyzing the geometric properties of the point's neighborhood. In the neighborhood of a boundary element, there is only one curved or planar surface. In the neighborhood of a fold edge, there are two or more intersecting surfaces. We first present the flowchart of the edge detection step, and then explain why the defined edges can be detected.
The flowchart of the edge detection step in our proposed AGPN is shown in Figure 3. Let o denote an unlabeled point. Our method first searches the nearest neighbor point set P of o based on distance. Next, the point set P is fitted to a local plane, pl, by the RANSAC algorithm, and then P is divided into inliers (on the fitted plane pl) and outliers. The point o will be labeled as non-edge if it does not belong to the inliers. Otherwise, o and the inliers are connected to construct a number of spatial vectors, from which the angular gap is calculated based on the optimized normal detailed in Section 2.2.4. If a substantial angular gap exists between the constructed spatial vectors on pl, o will be labeled as edge. Otherwise, o will be labeled as non-edge. Edges will be detected after all the points in a 3D-point cloud are labeled.
It is noteworthy that we use the angular gap method rather than a modified convex hull boundary detection algorithm, which is widely found in the literature, because the inliers may be only a subset of a surface plane model. Moreover, a kd-tree is used to determine the nearest neighbor point set P of each point.
As shown in Figure 3, the plane model calculated by the RANSAC algorithm fits a local surface in the nearest neighbor point set P of the unlabeled point o (see Figure 4). When o is on a plane outline or a plane intersection line, the plane model is impeccable. When o is on a curved surface boundary or at the intersection of different curved surfaces, the plane model is also the most direct and effective geometric model for approximating the local smooth surface, because the neighborhood of o is a small local area of the surface.
Based on the plane model fitted by the RANSAC algorithm, the nearest neighbor point set P is divided into inliers and outliers (see Figure 4). The inliers are on the fitted plane and the outliers are off it. By the RANSAC algorithm, the best plane model can be found in P. If a point is a boundary element, the inliers are on the fitted plane, and the outliers are noise. If a point is a fold edge, the inliers are on the fitted plane lying on one of the intersecting surfaces in the point's neighborhood (see Figure 4).
The edge detection procedure in AGPN can detect both boundary elements and fold edges. The detection of the two types of edges is detailed in Sections 2.2.2 and 2.2.3, respectively.
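As a concrete illustration of this flow, the sketch below labels a point as edge or non-edge: k nearest neighbors via a kd-tree, a RANSAC plane fit, an inlier test for the query point, and a maximal angular gap test. The RANSAC routine, the thresholds, and the demo grid are simplified assumptions, not the paper's exact pseudo code (which is given in the appendices).

```python
import numpy as np
from scipy.spatial import cKDTree

def ransac_plane(pts, dist_thresh=0.05, iters=200, seed=1):
    """Fit a plane with RANSAC; return (unit_normal, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_n, best_mask = None, np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        s = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(s[1] - s[0], s[2] - s[0])
        if np.linalg.norm(n) < 1e-12:        # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        mask = np.abs((pts - s[0]) @ n) < dist_thresh
        if mask.sum() > best_mask.sum():
            best_n, best_mask = n, mask
    return best_n, best_mask

def max_angular_gap(o, inliers, n):
    """Largest circular gap between directions of o->p_i projected onto pl."""
    v = inliers - o
    v = v - np.outer(v @ n, n)               # project the vectors onto the plane
    v = v[np.linalg.norm(v, axis=1) > 1e-9]  # drop the zero vector at o itself
    e1 = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(e1) < 1e-6:
        e1 = np.cross(n, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    ang = np.sort(np.arctan2(v @ e2, v @ e1))
    return float(np.max(np.diff(np.r_[ang, ang[0] + 2.0 * np.pi])))

def label_point(cloud, tree, idx, k=30, gap_thresh=np.pi / 2):
    """'edge' or 'non-edge' for cloud[idx], following the Figure 3 flow."""
    _, nbr = tree.query(cloud[idx], k=k)     # nbr[0] is the query point itself
    n, mask = ransac_plane(cloud[nbr])
    if n is None or not mask[0]:             # o is an outlier of pl -> non-edge
        return "non-edge"
    gap = max_angular_gap(cloud[idx], cloud[nbr][mask], n)
    return "edge" if gap > gap_thresh else "non-edge"

# Demo: a flat 10 x 10 grid; a point on the rim is an edge, a central one is not.
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
cloud = np.c_[xs.ravel(), ys.ravel(), np.zeros(100)]
tree = cKDTree(cloud)
rim = int(np.where((cloud[:, 0] == 0) & (cloud[:, 1] == 5))[0][0])
mid = int(np.where((cloud[:, 0] == 5) & (cloud[:, 1] == 5))[0][0])
rim_label, mid_label = label_point(cloud, tree, rim), label_point(cloud, tree, mid)
print(rim_label, mid_label)
```

The rim point sees neighbors only on one side, so its projected vectors leave a gap of about 180 degrees, well above the threshold; the central point's neighbors surround it and no substantial gap appears.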


Boundary Element Detection
As shown in Figure 5, there is only one surface in the neighborhood of an unlabeled point o. K points depicted in blue and red are the nearest neighbors of o, obtained using a kd-tree. Points p_i (i = 1...N_r) rendered in red are inliers extracted by the RANSAC algorithm. These inliers are on the fitted plane pl of the local surface.

If o is on a surface boundary, there will be a substantial angular gap G_θ (see Figure 5b) between the vectors op_i (i = 1...N_r) on the local plane pl. If o is an interior point (see Figure 5a), the distribution of the angles between the vectors op_i (i = 1...N_r) will be consecutive, and there will certainly be no substantial angular gap.
Notably, o will be labeled as noise or an isolated point if it is an outlier of the local plane pl.

Fold Edge Detection
As shown in Figure 6, the neighborhood of an unlabeled point o includes two intersecting surfaces. K points depicted in blue and red are the nearest neighbors of o. Points p_i (i = 1...N_r) rendered in red are inliers extracted by the RANSAC algorithm. The extracted inliers are on the fitted local plane pl, which lies on one of the intersecting surfaces.
If o is a fold edge point, there will be a substantial angular gap G_θ (see Figure 6b) between the vectors op_i (i = 1...N_r) on the local plane pl. The local plane pl is on one of the intersecting surfaces. If o is an interior point on one of the intersecting surfaces (see Figure 6a), the distribution of the angles between the vectors op_i (i = 1...N_r) will be consecutive, and there will certainly be no substantial angular gap. Figure 6c shows a special case in which the point densities of the intersecting surfaces are considerably different: one of the intersecting surfaces contains sparse points, while the other includes much denser points. If o is an interior point on the surface with sparse points, the inliers extracted by the RANSAC algorithm will be on the other surface with dense points.
Then, once the angular gaps are computed from these inliers, there will be a substantial angular gap, resulting in o being mislabeled as an edge. Fortunately, o is an outlier of the fitted local plane pl, and therefore it can be rejected by determining whether or not it is an inlier of the fitted local plane pl. In addition, if the number of inliers is less than three, the inliers will not be able to fit a plane. In this case, o will be labeled as noise or a local extreme, such as the vertex of a circular cone.


Normal Optimization
Generally, the normal of a 3D point is computed from a covariance matrix created from the nearest neighbors of the 3D point [45]; this normal is called the PCA-Normal herein. The PCA-Normal is the normal of the tangent plane at a 3D point. In our method, the normal is required to be orthogonal to the local fitted plane pl. When we detect boundary elements, only one surface exists in the neighborhood, and pl is on that surface. When we detect fold edges, two or more intersecting surfaces exist, and pl is on one of the intersecting surfaces. However, the PCA-Normal is orthogonal to the tangent plane, and hence it cannot meet this requirement.
The reason is that the PCA algorithm cannot single out any of the intersecting planes in a complex neighborhood. Fortunately, the RANSAC algorithm can solve this problem, and deal with noise as well. As shown in Figure 4, the RANSAC algorithm can detect a proper plane in either of the two complex neighborhoods.
In this study, we fit a local plane in the neighborhood to estimate the normal n of an unlabeled point o. The procedure first searches for the neighbors of o. Next, to eliminate the influence of outliers and noise, a local plane pl is fitted using the RANSAC algorithm. Then, n is taken as the normal of the local fitted plane pl. The normal n used in this study is named the RANSAC-Normal.
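The difference between the two normals can be seen numerically. In the sketch below (synthetic data, a simplified RANSAC, and illustrative thresholds, not the paper's implementation), the neighborhood of a fold-edge point spans two perpendicular planes: the PCA-Normal comes out as a roughly 45-degree blend of the two true normals, while the RANSAC-Normal locks onto one plane.

```python
import numpy as np

rng = np.random.default_rng(0)

# Neighborhood of a fold-edge point: two perpendicular planes meeting at the
# y-axis, z = 0 for x >= 0 (normal (0,0,1)) and x = 0 for z >= 0 (normal (1,0,0)).
g = np.linspace(0.0, 1.0, 15)
plane_a = np.array([[x, y, 0.0] for x in g for y in g])
plane_b = np.array([[0.0, y, z] for z in g for y in g])
nbrs = np.vstack([plane_a, plane_b])

# PCA-Normal: eigenvector of the neighborhood covariance with smallest eigenvalue.
centered = nbrs - nbrs.mean(axis=0)
pca_normal = np.linalg.svd(centered, full_matrices=False)[2][-1]

# RANSAC-Normal: normal of the plane with the most inliers, i.e., ONE surface.
best_n, best_count = None, -1
for _ in range(200):
    s = nbrs[rng.choice(len(nbrs), 3, replace=False)]
    n = np.cross(s[1] - s[0], s[2] - s[0])
    if np.linalg.norm(n) < 1e-12:
        continue
    n = n / np.linalg.norm(n)
    count = int((np.abs((nbrs - s[0]) @ n) < 0.01).sum())
    if count > best_count:
        best_n, best_count = n, count

print(np.round(np.abs(pca_normal), 3))  # ~[0.707, 0, 0.707]: a blend of both
print(np.round(np.abs(best_n), 3))      # ~[0,0,1] or [1,0,0]: one surface
```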

Angular Gap Computation
To compute the angular gap, G_θ, a spatial coordinate frame is constructed based on the local plane pl and its normal vector n. The axes of this frame are composed of two perpendicular vectors on pl and the normal vector n. The angular gap G_θ is computed by Equations (1)-(4), where op_i (i = 1...N_r) is the vector connecting the unlabeled point o to an inlier p_i extracted by the RANSAC algorithm. The distribution of G_θ is shown in Figure 7. The pseudo code of the edge detection step in AGPN and its parameters are shown in Appendix B.
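Since Equations (1)-(4) are not reproduced here, the following sketch shows one plausible way to realize the computation: build two in-plane axes e1, e2 orthogonal to n, measure each vector op_i by its atan2 angle in that frame, sort the angles, and take the largest circular gap including the wrap-around. The axis construction and the half-disc example are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def angular_gap(o, inliers, n):
    """G_theta: largest circular gap between the directions of the vectors
    o->p_i, measured in a frame of two in-plane axes e1, e2 plus the normal n."""
    e1 = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(e1) < 1e-6:           # n parallel to the x-axis: reseed
        e1 = np.cross(n, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)                    # e1, e2, n span the spatial frame
    v = inliers - o
    theta = np.sort(np.arctan2(v @ e2, v @ e1))
    gaps = np.diff(np.r_[theta, theta[0] + 2.0 * np.pi])  # incl. wrap-around
    return float(gaps.max())

# Example: o on the straight rim of a half-disc lying in the plane z = 0.
o = np.zeros(3)
n = np.array([0.0, 0.0, 1.0])
a = np.linspace(0.0, np.pi, 19)             # inliers only on one side of o
inliers = np.c_[np.cos(a), np.sin(a), np.zeros(a.size)]
gap_deg = np.degrees(angular_gap(o, inliers, n))
print(gap_deg)                              # ~180: a substantial angular gap
```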

Feature Line Tracing
Once edges have been detected, a further step traces the detected edges into segments. Each segment is a point list in which all the points belong to the same feature line. The proposed feature line tracing method connects edges with similar principal directions and splits edges with abrupt directionality changes. Thus, the traced feature lines may be curved or straight but must be smooth.
The proposed feature line tracing is a hybrid method based on region growing and model fitting. The model-fitting part estimates 3D line parameters in each point's neighborhood with the RANSAC algorithm; the directional parameters of the fitted 3D line define the principal direction of the edge point. The region-growing part clusters the detected edges into segments based on the two redefined growing criteria below, which concern the refined neighborhood and the principal direction.
The proposed feature line tracing procedure includes two essential steps: neighborhood refinement and growing criterion definition.

Neighborhood Refinement
Compared with an image, a 3D-point cloud is an unorganized, scattered point set, which makes neighborhood searching for edge points challenging. Our method first obtains the nearest neighbors of a query point using the kd-tree algorithm. Next, a straight-line model is fitted by the RANSAC algorithm, dividing the nearest neighbors into inliers and outliers. If the inliers contain the query point, they form the refined nearest neighborhood; otherwise, the outliers are processed iteratively by the RANSAC algorithm until the updated inliers contain the query point. In this way, an adaptive neighborhood is built for each query point.
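The iterative refinement can be sketched as follows. This is a NumPy illustration under assumed parameter values; the paper's kd-tree neighbor search is replaced here by an explicit neighbor array, and only the iterative RANSAC line-fitting loop is shown.

```python
import numpy as np

def refine_neighborhood(query_idx, pts, dist_thr=0.01, n_iter=100, seed=0):
    """Adaptive neighborhood of an edge point: RANSAC repeatedly fits a 3D
    line to the remaining neighbors; the first inlier set containing the
    query point is returned as the refined neighborhood."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(pts, float)
    remaining = np.arange(len(pts))          # indices not yet explained
    while len(remaining) >= 2:
        sub = pts[remaining]
        best = None
        for _ in range(n_iter):
            i, j = rng.choice(len(sub), 2, replace=False)
            d = sub[j] - sub[i]
            if np.linalg.norm(d) < 1e-12:
                continue                     # coincident sample points
            d = d / np.linalg.norm(d)
            off = sub - sub[i]
            # point-to-line distance: rejection of (p - p0) from direction d
            dist = np.linalg.norm(off - np.outer(off @ d, d), axis=1)
            inl = dist < dist_thr
            if best is None or inl.sum() > best.sum():
                best = inl
        if best is None or best.sum() < 2:
            return remaining                 # no line structure left
        if query_idx in remaining[best]:
            return remaining[best]           # refined neighborhood found
        remaining = remaining[~best]         # iterate on the outliers
    return remaining
```

When two lines cross inside the neighborhood, the first RANSAC pass typically explains the dominant line; a query point lying on the weaker line is then matched in a later pass, over the outliers only.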

Growing Criterion Definition
Two growing criteria given by [46] are redefined in this study:
a. Proximity of points. Only points that are near one of the points in the current segment can be added to the stack of the segment. For feature line tracing, this proximity of edge points is implemented by the aforementioned neighborhood refinement.
b. Smooth direction vector field. Only points whose principal direction is similar to that of the current tracing segment can be added to the stack of the segment. In this paper, a line model is first fitted to the refined neighborhood by the RANSAC algorithm, and the principal direction of the current point is then defined as the direction of the fitted line.
In addition, a larger proportion of inliers among the nearest neighbors implies a higher possibility of the presence of a feature line. Because the region-growing procedure is irreversible, edge points with greater linearity should be grown first to ensure a better tracing result. Therefore, edge points are first sorted by their inlier proportions and then grown in that order.
The general region-growing procedure for linear feature segmentation is sensitive to the size of the local neighborhoods and to the locations of the seed points. The refined neighborhood and the sorting of edge points described above overcome this problem. It is validated that the proposed feature line tracing procedure can distinguish spatially adjacent, collinear/coaxial lines (see Figure 8). Three parameters are used in the feature line tracing step: the number of nearest neighbors K 2 for the kd-tree algorithm, the distance threshold d 2 r for the RANSAC line model estimation, and the smooth direction threshold sm_thr. The traced feature lines are shown in Figure 8b.
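The tracing step can be sketched as follows. This is an illustrative NumPy reconstruction under assumed inputs: each edge point comes with a precomputed principal direction and linearity score, and a brute-force radius search stands in for the paper's kd-tree.

```python
import numpy as np

def trace_feature_lines(pts, dirs, scores, radius=0.15, sm_thr=0.2):
    """Hybrid tracing sketch: edge points are sorted by linearity score
    (RANSAC inlier proportion) and region-grown; a neighbor joins the
    current segment only if it lies within `radius` and its principal
    direction differs by less than `sm_thr` (radians)."""
    pts = np.asarray(pts, float)
    dirs = np.asarray(dirs, float)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    labels = np.full(len(pts), -1)
    order = np.argsort(-np.asarray(scores))   # most linear seeds first
    seg = 0
    for seed in order:
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = seg
        while stack:
            cur = stack.pop()
            near = np.linalg.norm(pts - pts[cur], axis=1) < radius
            for nb in np.flatnonzero(near):
                if labels[nb] != -1:
                    continue
                # angle between principal directions (sign-agnostic)
                ang = np.arccos(min(1.0, abs(dirs[nb] @ dirs[cur])))
                if ang < sm_thr:
                    labels[nb] = seg
                    stack.append(nb)
        seg += 1
    return labels
```

Two lines crossing at a point stay in separate segments even though their points are spatially adjacent, because the direction criterion blocks growth across the intersection.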


Experiments and Analysis
The proposed algorithms were implemented in C++ using the PCL. There are five parameters in the proposed AGPN (detailed in Appendix A). The point spacing of the input data affects the performance of AGPN the most; the performance is also affected by the distance thresholds d 1 r and d 2 r, which are related to the point spacing. Therefore, we describe the testing data in Section 3.1 and discuss parameter tuning in Section 3.3. The point spacing is measured with the open-source software CloudCompare [47].
To quantitatively evaluate the performance of AGPN, four measures are defined in Section 3.2. We further compare the proposed AGPN with two representative algorithms: the boundary estimation method provided in the PCL and the edge point clustering algorithm presented in the literature [13]. The comparative studies are presented in Section 3.7.

Testing Data
The open datasets available from the homepage of 3D ToolKit [48] are employed. The datasets were recorded by a Riegl VZ-400 scanner in the Bremen city center. The point density of a single 3D scan is uneven.
From the open datasets, we selected two testing sites, i.e., Site 1 and Site 2. The two sites contain a large number of complex man-made objects. Table 1 shows their detailed information, including the number of points, maximum point spacing, minimum point spacing, and the parameters used in our experiments.
The point spacings in the two testing sites are quite different. In Site 1, the point density is uneven: near the scanner, the average point spacing is 0.001 m, while in areas away from the scanner it is 0.15 m; the point density varies continuously with the distance to the scanner. In Site 2, except for the window areas, the average point spacing is 0.005 m. In the window areas, the point density decreases rapidly, and the average point spacing grows to 0.01 m; here, the point density varies piecewise.

Evaluation Metrics
To test the performance of the proposed AGPN, we quantitatively evaluate the results of the edge detection step and the feature line tracing step by four measures: p dc (correctness rate of the edge detection step), p mj (mislabeled rate of the edge detection step), p dct (correctness rate of the feature line tracing step), and p mjt (mislabeled rate of the feature line tracing step). To compute p dc and p mj for evaluating the 3D edge results, we count the number of feature lines to which the detected edges belong; a feature line may be curved or straight but must be smooth. The four measures are defined in terms of the following counts: N dc, the number of true-positive feature lines contained in the detected edges; N tc, the number of correctly traced feature lines in the feature line tracing step; N gc, the total number of feature lines in the ground truth; N mj, the number of mislabeled feature lines contained in the detected edges; and N mjt, the number of incorrectly traced feature lines. Larger values of p dc and p dct, and smaller values of p mj and p mjt, are more desirable.
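Because the displayed equations did not survive extraction, the sketch below shows one plausible reading of the four measures, with every count normalized by the total number of ground-truth feature lines N gc; treat the exact denominators as an assumption.

```python
def tracing_metrics(n_dc, n_tc, n_gc, n_mj, n_mjt):
    """One plausible reading of the four measures (the displayed equations
    are garbled in the source): every count is divided by N_gc, the total
    number of ground-truth feature lines. The denominators are assumed."""
    p_dc = n_dc / n_gc     # correctness rate, edge detection step
    p_mj = n_mj / n_gc     # mislabeled rate, edge detection step
    p_dct = n_tc / n_gc    # correctness rate, feature line tracing step
    p_mjt = n_mjt / n_gc   # mislabeled rate, feature line tracing step
    return p_dc, p_mj, p_dct, p_mjt
```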

Parameter Tuning
One of the major strengths of the proposed AGPN is that the 3D edges extracted in the edge detection step are sufficiently subtle with the default value (π/2) of the angular gap threshold (G θ). Table 1 lists the parameters used in the proposed AGPN and their empirical values. We kept K 1 = 200 and K 2 = 15 for Site 1 and Site 2, because both K 1 and K 2 have little influence on the quality of the results.
In addition, because the feature line tracing parameters mainly depend on the requirements of users, we only analyze their influence on the application of 3D line segment extraction. Under this condition, to ensure sufficient quality of the result, sm_thr is set to 0.2.
We selected a number of subsets in Site 1 with the same point spacing (0.01 m) to analyze the sensitivity of d 1 r and d 2 r. As shown in Figure 9, the x-axis denotes d 1 r and d 2 r, and the y-axis depicts the average values of the correctness and mislabeled rates. When the values of d 1 r and d 2 r are close to the point spacing of 0.01 m, both the edge detection step and the feature line tracing step achieve good results. If we set d 1 r smaller than the point spacing, the mislabeled rate p mj rises; although the correctness rate p dc also rises, its increase is much smaller than that of p mj. According to this figure, if the point spacing of the input data is uniform, a reasonable configuration sets d 1 r and d 2 r equal to the point spacing. Otherwise, the parameters can be set according to the variation tendency of the point density. For example, if the point density varies piecewise, we set d 1 r smaller than the average point spacing in the edge detection step so that all edges are detected; we then trace all the edges in the feature line tracing step and filter out traced segments with a small number of points.
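The tuning rule above can be condensed into a small helper. This is an illustrative sketch, not the paper's code: the reduction factor of 0.5 for the piecewise case is our own assumption, since the paper only says to set d 1 r below the average spacing.

```python
import statistics

def choose_distance_thresholds(spacings, piecewise=False):
    """Illustrative helper (not from the paper): with uniform point spacing,
    set d1r = d2r equal to the spacing; with piecewise-varying density, set
    d1r below the average spacing so no edge is missed (the factor 0.5 is
    an assumption), then filter short traced segments afterwards."""
    avg = statistics.mean(spacings)
    if not piecewise:
        return avg, avg        # d1r, d2r equal the point spacing
    return 0.5 * avg, avg      # d1r deliberately below the average spacing
```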

Normal Estimation
To demonstrate the feasibility of our RANSAC-Normal, we compare it with the PCA-Normal. The comparison is shown in Figure 10. When a point is a fold edge, there are two intersecting surfaces orthogonal to each other in its neighborhood. The PCA-Normal (see Figure 10a) is orthogonal to the tangent plane rather than to one of the intersecting planes, while the RANSAC-Normal (see Figure 10b) is orthogonal to one of the intersecting planes. If we construct a coordinate frame based on the computed RANSAC-Normal, the other two axes of the frame lie in one of the intersecting planes. Therefore, the RANSAC-Normal meets the requirement of our method (see Section 2.2.4).

Influence of Point Density
The point density of the input data mainly determines the level of detail of a complex object, such as a chimney, step, or window. To analyze the influence of the input density on the results of the edge detection step and the feature line tracing step, we down-sampled a point cloud of a window (see Figure 11a) to different point densities.
As shown in Figure 11c, the horizontal axis denotes the number of points in the down-sampled data. The light-blue bar represents the ratio of the feature lines still contained in the down-sampled data to all feature lines in the original window data (denoted by r 1). When the data is down-sampled to different point densities, the value of r 1 decreases with decreasing point density. The ratio of feature lines in the detected edges to all feature lines (denoted by r 2) and the ratio of traced feature lines to all feature lines (denoted by r 3) stay close to r 1. For example, when the data is down-sampled to 6673 points, only sixty percent of the original feature lines are contained; the values of r 1, r 2, and r 3 are all close to 0.6, and the correctness rates of both the edge detection step and the feature line tracing step are close to 1.0. This indicates that as long as a feature line is contained in the data, our method can detect its edges and trace the feature line segment. Therefore, the experimental results demonstrate that the correctness rates of the edge detection step and the feature line tracing step are not influenced by the point density.

Results
With the discussed values of the parameters used in the proposed AGPN, the overall performance is evaluated on the aforementioned two testing sites.
To test the capability of the proposed AGPN in detecting the types of edges defined in Section 1.1, a small scene in Site 2 is selected. The selected scene and its details are shown in Figure 12a. Four surfaces exist in this scene: one curved surface rendered in red and three planar surfaces rendered in blue, green, and yellow, respectively. There are fold edges belonging to intersection curves [49] and intersection lines [13], and boundary elements belonging to plane outlines. Three parts of the edges are marked by white rectangles. Specifically, the fold edges belonging to an intersection curve are the intersections of the red curved surface and the green planar surface. The results of our proposed AGPN are shown in Figure 12b,c. Figure 12b shows the detected edges; all the defined edges are detected in the scene of interest. Figure 12c shows the traced feature line segments rendered in different colors. Specifically, the purple and orange segments correspond to the two intersection curves of the scene, and the other segments correspond to intersection lines and outlines.

In Site 1, the point density varies continuously with the distance to the scanner; therefore, we set the thresholds d 1 r and d 2 r to the average point spacing of 0.01 m. In Site 2, the point density varies piecewise, i.e., the point spacing in window areas (0.01 m) is much larger than that in non-window areas (0.005 m). According to the discussion in Section 3.3, we set the thresholds d 1 r and d 2 r to 0.005 m (the average point spacing in non-window areas). The parameters and their values used in the proposed algorithms are listed in Table 1. To clearly demonstrate the performance of the proposed method, the 3D edge detection results are overlaid on the original point cloud.
The results of Sites 1 and 2 are shown in Figures 13 and 14, and the accuracy evaluation results are listed in Table 2. In Table 2, the correctness rates p dc and p dct of Site 2 are higher than those of Site 1. The reason is that the variation range of the point spacing in Site 1 is larger than that in Site 2; according to the analysis in Section 3.3, when the values of d 1 r and d 2 r deviate from the point spacing, the correctness rates decrease. We can also find that the mislabeled rate p mj of Site 1 is lower than that of Site 2. A close-up visual inspection shows that there are many windows in the Site 2 data, and the misjudgments mainly arise in the window areas. The reason is that the point spacings in the window areas are much larger than those in non-window areas; moreover, the difference in point spacing between window and non-window areas in Site 2 is larger than that in Site 1. According to the analysis in Section 3.3, when the distance threshold is smaller than the point spacing, misjudgments arise. Furthermore, the tendencies of the correctness rate p dc and the mislabeled rate p mj (in Figure 9a) validate that the mislabeled rates of the edge detection step in Table 2 are reasonable.
In Figures 13d-f and 14d-f, all the defined types of 3D edges are detected by our proposed method, and some details of the results for Sites 1 and 2 are presented. Most of the edges in the window areas or on walls with textures belong to intersection curves or intersection lines. Notably, most of the intersecting planes or curved surfaces are narrow, which makes them difficult to extract with segmentation or clustering procedures for 3D-point clouds.

Comparative Studies
Our comparison does not include all types of existing methods. This paper compares the proposed AGPN to "direct methods", because AGPN processes 3D-point clouds directly. The proposed AGPN consists of two steps, i.e., an edge detection step and a feature line tracing step, and we compare the existing methods with the two steps individually.
A boundary estimation method is provided in the PCL; it uses an angle criterion similar to that of the literature [41]. Comparative results of the boundary estimation method and our edge detection method are shown in Figure 15. Three subsets were selected from the testing datasets. The comparative results show that the proposed edge detection method can detect all types of edges irrespective of how complex the details of the objects are, whereas the boundary estimation method in the PCL is incapable of detecting fold edges. Table 3 lists the comparative accuracies of our edge detection method and the boundary estimation method.

The reason is that the boundary estimation method detects 3D edges using only an angle criterion, which is insufficient for detecting fold edges in unorganized point clouds. Besides, the boundary estimation method detects surface boundaries using the PCA-Normal, which ignores the influence of outliers and noise. Therefore, the boundary estimation method is incapable of dealing with an edge that has a complex neighborhood.
An edge point clustering method has been proposed in the literature [13]. Comparative results of our feature line tracing method and the algorithm of [13] are shown in Figure 15, where different traced feature line segments are rendered in different colors, and the comparative accuracy results are listed in Table 3. It can be seen that our feature line tracing method performs very well, and its tracing quality is clearly better than that of the algorithm of [13].
The reason is that the method of [13] determines the principal direction of each edge point by analyzing a self-correlation matrix constructed from the nearest neighbors, ignoring the outliers among them. When several feature lines intersect, the neighborhood no longer has a linear structure, so the edges in the intersecting area are missed.

Conclusions
In this paper, we propose an automated and effective method named AGPN, which detects 3D edges and traces feature lines in 3D-point clouds. There are two main steps in AGPN: edge detection and feature line tracing. In the first step, AGPN detects edges in the 3D-point cloud: it first analyzes the geometric properties of each query point's neighborhood and then combines the RANSAC algorithm with the angular gap criterion to label the query point as edge or non-edge. In the second step, AGPN traces feature lines from the detected 3D edges with a hybrid method based on region growing and model fitting; here, we refine the neighborhood of each query point and redefine two growing criteria to overcome the uncertainties of the region-growing procedure.
The contributions of the proposed AGPN method include: (1) no image processing or point cloud preprocessing, such as segmentation or object recognition, is needed, which reduces the complexity of 3D-point cloud processing; (2) the edge detection step of AGPN detects all the defined types of 3D edges and is insensitive to noise; (3) the feature line tracing step of AGPN distinguishes spatially adjacent, collinear/coaxial lines in complex neighborhoods.
The experimental results show that AGPN detects all the defined types of edges irrespective of how complex the details of the objects are, and that the feature line tracing step can trace feature lines in a complex neighborhood containing several intersecting or parallel lines. In comparison with state-of-the-art methods, the proposed AGPN obtains superior results both qualitatively and quantitatively. Moreover, we analyze the uncertainties of the results from two aspects, i.e., parameter tuning and the influence of point density. In the parameter tuning analysis, we present the variation tendencies of the correctness and mislabeled rates under different parameters, and achieve a good result by setting the parameters according to the point spacing of the input data and its variation tendency. In the analysis of the influence of input density, the experimental results demonstrate that the two steps of our method are not influenced by the input density, although the level of detail of a complex object mainly depends on it.
However, a limitation remains: over-segmentation arises when the parameters of the feature line tracing procedure are too strict or when there is a gap in a long feature line. In future work, we will attempt to solve this problem. In addition, it would be interesting to apply the proposed algorithms to object recognition in large-scale 3D-point clouds.

Figure 1. The definition of two types of edges.

Figure 3. Flowchart for the edge detection step in AGPN.

Figure 4. Local plane (rendered in red) fitted by the RANSAC algorithm in the nearest neighbor point set P. (a,b) show two types of neighbor point sets, respectively. There are three planes in (a) and two planes in (b).

Figure 5. Distribution of the nearest neighbors of an unlabeled point on a surface: (a) the neighborhood of an interior point; (b) the neighborhood of a point on a boundary.

Figure 6. Distribution of the nearest neighbors of an unlabeled point on a surface intersecting structure: (a) the neighborhood of an interior point on one of the intersecting surfaces; (b) the neighborhood of a fold edge point; (c) the neighborhood of a point on the two intersecting surfaces with different point densities.

The axes of this frame are composed of two perpendicular vectors →u and →v lying on the plane pl and the normal vector →n. The angular gap Gθ is computed by Equations (1)–(4).
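Equations (1)–(4) are not reproduced in this excerpt, so the sketch below shows one plausible realization of the angular gap Gθ under the stated frame (→u, →v, →n): each neighbor is projected into the (u, v) plane, the polar angles about the query point are sorted, and Gθ is the largest gap between consecutive angles, closing the circle at 2π. The axis construction and all names are our assumptions.

```python
import numpy as np

def angular_gap(query, neighbors, normal):
    """Largest angular gap (radians) of the projected neighborhood."""
    n = normal / np.linalg.norm(normal)
    # Build two perpendicular in-plane axes u, v from an arbitrary reference.
    ref = np.array([0.0, 0.0, 1.0])
    if abs(n @ ref) > 0.9:          # normal nearly parallel to z: use x instead
        ref = np.array([1.0, 0.0, 0.0])
    u = np.cross(n, ref)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    rel = neighbors - query
    # Polar angle of each neighbor in the (u, v) frame, sorted.
    angles = np.sort(np.arctan2(rel @ v, rel @ u))
    # Gaps between consecutive angles, wrapping around the full circle.
    gaps = np.diff(np.concatenate([angles, [angles[0] + 2.0 * np.pi]]))
    return gaps.max()
```

Intuitively, an interior point surrounded on all sides yields a small Gθ, while a point on a boundary or a boundary edge leaves a large empty sector, so Gθ approaches π or more.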

Figure 7. Distribution of Gθ; each point is colored according to its value of Gθ.

Figure 8. Feature line tracing: (a) feature line segments generated by the region growing method; (b) feature line segments generated by the proposed feature line tracing method. The traced segments are marked by different colors.

Figure 9. Average values of correctness and mislabeled rates for different values of d1r and d2r: (a) results of the edge detection step with different values of d1r; and (b) results of the feature line tracing step with different values of d2r.

Figure 10. Normal estimation of the neighborhood with two intersecting planes: (a) PCA-Normal →n; (b) RANSAC-Normal →n estimated by our method.

The in-plane axes →u and →v of the coordinate frame are located in one of the intersecting planes. Therefore, the RANSAC-Normal meets the requirement of our method (see Section 2.2.4). (Remote Sens. 2016, 8, 710)
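The contrast between PCA-Normal and RANSAC-Normal can be illustrated with a small sketch: PCA takes the smallest-eigenvalue direction of the neighborhood covariance, which blends the two intersecting planes, whereas the RANSAC normal is that of the dominant fitted plane. The RANSAC loop, its parameters, and the function names below are our own assumptions, not the authors' code.

```python
import numpy as np

def pca_normal(pts):
    """PCA normal: eigenvector of the covariance with the smallest eigenvalue."""
    c = pts - pts.mean(axis=0)
    _, vecs = np.linalg.eigh(c.T @ c)   # eigenvalues sorted ascending
    return vecs[:, 0]

def ransac_normal(pts, n_iter=200, thresh=0.02, seed=0):
    """Normal of the dominant plane found by random 3-point sampling."""
    rng = np.random.default_rng(seed)
    best_n, best_count = None, -1
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                    # degenerate (collinear) sample
        n /= np.linalg.norm(n)
        count = np.count_nonzero(np.abs((pts - p0) @ n) < thresh)
        if count > best_count:
            best_count, best_n = count, n
    return best_n
```

On a neighborhood spanning two intersecting planes, the RANSAC normal aligns with the larger plane, which is what the angular gap frame requires.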

Figure 11. (a) Original input data without down-sampling; (b) the input data down-sampled to 6673 points; (c) correctness rates under different densities.

Figure 12. The results of a small area: (a) the four surfaces and three kinds of edges in this area; (b) the edges detected by our method; (c) the feature line segments traced by our method.

Figure 13. Results of Site 1: (a) edge detection result overlaid on the original input data; (b) edges; (c) traced feature line segments depicted in different colors; (d–f) details of edges and traced feature line segments, demarcated by different colored outlines corresponding to (a).

Figure 14. Results of Site 2: (a) edge detection result overlaid on the original input data; (b) edges; (c) traced feature line segments depicted in different colors; (d–f) details of edges and traced feature line segments, demarcated by different colored outlines corresponding to (a).

Figure 15. Comparison of the results of AGPN and existing methods: (a) original input data; (b) edges detected by our edge detection method; (c) edges detected by PCL, where the results shown are the best obtained after testing ten sets of parameters; (d) feature line segments traced by our method; (e) feature line segments traced by the method in the literature [13].

Table 1. Data description and parameter settings for the two testing sites.

Table 2. Accuracy results of the proposed algorithms.