Review

Review of Automatic Feature Extraction from High-Resolution Optical Sensor Data for UAV-Based Cadastral Mapping

Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Enschede 7500 AE, The Netherlands
* Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(8), 689; https://doi.org/10.3390/rs8080689
Received: 30 June 2016 / Revised: 3 August 2016 / Accepted: 11 August 2016 / Published: 22 August 2016
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Abstract:
Unmanned Aerial Vehicles (UAVs) have emerged as a rapid, low-cost and flexible acquisition system that appears feasible for application in cadastral mapping: high-resolution imagery, acquired using UAVs, enables a new approach for defining property boundaries. However, UAV-derived data are arguably not exploited to their full potential: based on UAV data, cadastral boundaries are visually detected and manually digitized. A workflow that automatically extracts boundary features from UAV data could increase the pace of current mapping procedures. This review introduces a workflow considered applicable for automated boundary delineation from UAV data. This is done by reviewing approaches for feature extraction from various application fields and synthesizing these into a hypothetical generalized cadastral workflow. The workflow consists of preprocessing, image segmentation, line extraction, contour generation and postprocessing. The review lists example methods per workflow step, including a description, a trial implementation, and a list of case studies applying the individual methods. Furthermore, accuracy assessment methods are outlined. The advantages and drawbacks of each approach are discussed in terms of their applicability to UAV data. This review can serve as a basis for future work on the implementation of the most suitable methods in a UAV-based cadastral mapping workflow.

Graphical Abstract

1. Introduction

Unmanned Aerial Vehicles (UAVs) have emerged as rapid, efficient, low-cost and flexible acquisition systems for remote sensing data [1]. The data acquired can be of high resolution and accuracy, ranging from the sub-meter level to a few centimeters [2,3]. A photogrammetric UAV workflow includes flight planning, image acquisition, usually camera calibration, image orientation and data processing, which can result in Digital Surface Models (DSMs), orthoimages and point clouds [4]. UAVs are described as a capable acquisition tool for remote sensing data, since they allow flexible maneuvering, high-resolution image capture, flying under clouds, easy launch and landing and fast data acquisition at low cost. Disadvantages include payload limitations, uncertain or restrictive airspace regulations, short battery-limited flight durations and the time-consuming processing of the large volumes of data gathered [5,6]. In addition, multiple factors that influence the accuracy of derived products require extensive consideration. These include the quality of the camera, the camera calibration, the number and location of ground control points and the choice of processing software [2,5]. UAVs have been employed in a variety of applications such as the documentation of archaeological sites and cultural heritage [7,8], vegetation monitoring for precision agriculture [9,10], traffic monitoring [11], disaster management [12,13] and 3D reconstruction [14].
Another emerging application field is UAV-based cadastral mapping. Cadastral maps are spatial representations of cadastre records, showing the extent, value and ownership of land [15]. They are intended to provide a precise description and identification of land parcels, which is crucial for a continuous and sustainable recording of land rights [16]. Furthermore, cadastral maps support land and property taxation, allow the development and monitoring of land markets, support urban planning and infrastructure development and allow the production of statistical data. Extensive reviews on concepts and purposes of cadastres in relation to land administration are given in [17,18]. UAVs are proposed as a new tool for fast and cheap spatial data production that enables the creation of cadastral maps. Within this field, UAVs facilitate land administration processes and contribute to securing land tenure [19]. They enable a new approach to establishing and updating cadastral maps that contributes to new concepts in land administration such as fit-for-purpose [20], pro-poor land administration [21] and responsible land administration [22].

1.1. Application of UAV-Based Cadastral Mapping

In the context of contemporary cadastral mapping, UAVs are increasingly demonstrated as tools able to generate accurate and georeferenced high-resolution imagery from which cadastral boundaries can be visually detected and manually digitized [23,24,25]. To support this manual digitization, existing parcel boundary lines might be automatically superimposed, which could facilitate and accelerate cadastral mapping [26]. With the exception of [1,27], cadastral mapping is not mentioned in review papers on application fields of UAVs [28,29,30]. This might be due to the small number of case studies within this field, the often highly prescribed legal regulations relating to cadastral surveys, and the novelty of UAVs in mapping generally. Nevertheless, all existing case studies underline the high potential of UAVs for cadastral mapping, in both urban and rural contexts and for developing and developed countries.
In developing countries, cadastral mapping contributes to the creation of formal systems for registering and safeguarding land rights. According to the World Bank and the International Federation of Surveyors (FIG), 75% of the world’s population does not have access to such systems. They further state that 90 countries lack land registration systems, while 50 countries are in the process of establishing such systems [20]. In these countries, cadastral mapping is often based on partly outdated maps or low-resolution satellite images, which might include areas covered by clouds. Numerous studies have investigated cadastral mapping based on orthoimages derived from satellite imagery [22,31,32,33,34,35,36,37] or aerial photography [38]. The definition of boundary lines is often conducted in a collaborative process among members of the communities, governments and aid organizations, which is referred to as “Community Mapping” [39], “Participatory Mapping” [22] or “Participatory GIS” [31]. Such outdated satellite images can be replaced by up-to-date high-resolution orthoimages derived from UAVs, as shown in case studies in Namibia [24] and Rwanda [23]. The latter case shows the utility of UAVs for partially updating existing cadastral maps.
In developed countries, the case studies focus on the conformity of the UAV data’s accuracy with local accuracy standards and requirements [40,41]. Furthermore, case studies tend to investigate possibilities of applying UAVs to make the cadastral production line more efficient and effective [42,43,44]. In the latter, manual boundary detection with all stakeholders is conducted in an office, eliminating the need to convene all stakeholders on the parcel. In developed countries, UAV data are frequently used to update small portions of existing cadastral maps rather than to create new ones. Airspace regulations are the factor that most limits the thorough use of UAVs. Currently, regulatory bodies face the challenge of aligning the economic, information and safety needs and demands connected to UAVs [30,45]. Once regulations are better aligned with societal needs, UAVs might be employed in further fields of land administration, including the monitoring of public infrastructure such as oil and gas pipelines, power lines, dikes, highways and railways [46]. Nowadays, some national mapping agencies in Europe integrate, but mainly investigate, the use of UAVs for cadastral mapping [45].
Overall, UAVs are employed to support land administration both in creating and in updating cadastral maps. Taken together, the case studies confirm that UAVs are suitable as an addition to conventional data acquisition methods for creating detailed cadastral maps, including overview images or 3D models [40,41,47]. The average geometrical precision is shown to be equal to or better than that of conventional terrestrial surveying methods [42]. UAVs will not substitute conventional approaches, since they are currently not suited to mapping large areas such as entire countries [48]. The employment of UAVs supports the economic feasibility of land administration and contributes to the accuracy and completeness of cadastral maps.

1.2. Boundary Delineation for UAV-Based Cadastral Mapping

In all case studies, cadastral boundaries are manually detected and digitized from orthoimages. This is realized either in an office with a small group of involved stakeholders, for one parcel, or in a community mapping approach for several parcels at once. All case studies lack an automatic approach to extract boundary features from the UAV data. An automatic or semi-automatic feature extraction process would facilitate cadastral mapping: manual feature extraction is generally regarded as time-consuming, so automation would bring substantial benefits [4]. The degree of automation can range from semi-automatic, including human interaction, to fully automatic. Due to the complexity of image understanding, fully automatic feature extraction often shows a certain error rate, so human interaction can hardly be excluded completely [49]. However, even a semi-automatic or partial extraction of boundary features would alter cadastral mapping with regard to cost and time. Jazayeri et al. state that UAV data have the potential to make automated object reconstruction and boundary extraction accurate and low-cost [50]. This is especially true for visible boundaries manifested physically by objects such as hedges, stone walls, large-scale monuments, walkways, ditches or fences, which often coincide with cadastral boundaries [51,52]. Such visible boundaries bear the potential to be automatically extracted from UAV data. To the best of the authors’ knowledge, no research has been done on expediting the cadastral mapping workflow through automatic boundary delineation from UAV data.

1.3. Objective and Organization of This Paper

The review is based on the assumption that image processing algorithms applied to high-resolution UAV data can be used to determine cadastral boundaries. Therefore, methods are reviewed that are deemed feasible for detecting and delineating cadastral boundaries. The review is intended to serve as a basis for future work on the implementation of the most suitable methods in a UAV-based cadastral mapping workflow. The degree of automation of the final workflow is left undetermined at this point. Due to the absence of work in this context, the scope of this review is extended to methods that could be used for UAV-based cadastral mapping, but that are currently applied (i) on different data sources or (ii) for different purposes.
(i)
UAV data include dense point clouds, from which DSMs are derived, as well as high-resolution imagery. Such products can similarly be derived from other high-resolution optical sensors. Therefore, methods based on other high-resolution optical sensor data, such as High-Resolution Satellite Imagery (HRSI) and aerial imagery, are equally considered in this review. Methods applied solely to 3D point clouds are excluded, whereas methods based on the derived DSM are considered. Methods that combine 3D point clouds with aerial or satellite imagery are considered with respect to the part based on the aerial or satellite imagery.
(ii)
The review includes methods that aim to extract features other than cadastral boundaries but that have similar characteristics, which are outlined in the next section. Suitable methods are not intended to extract the entirety of boundary features, since some boundaries are not visible to optical sensors.
This paper is structured as follows: firstly, the objects to be automatically extracted are defined and described. To this end, cadastral boundary concepts and common cadastral boundary characteristics are outlined. Secondly, methods deemed feasible for automatically detecting and extracting the previously outlined boundary features are listed. The methods are structured according to subsequently applicable workflow steps. Thereafter, representative methods are applied to an example UAV dataset to visualize their performance and applicability on UAV data. Thirdly, accuracy assessment methods are outlined. Finally, the methods are discussed in terms of the advantages and drawbacks observed in case studies and during the implementation of representative methods. In this review, the term “case studies” also covers studies on method development that are followed by application examples. The conclusion covers recommendations on suitable approaches for boundary delineation and issues to address in future work.

2. Review of Feature Extraction and Evaluation Methods

2.1. Cadastral Boundary Characteristics

In this paper, a cadastral boundary is defined as a dividing entity with a spatial reference that separates adjacent land plots. An overview of concepts and understandings of boundaries in different disciplines is given in [52]. Cadastral boundaries can be represented in two different ways: (i) in many cases, they are represented as line features that clearly demarcate the boundary’s spatial position; (ii) some approaches employ laminar features that represent a cadastral area without clear boundaries. The cadastral boundary is then defined implicitly, based on the outline or center of the area constituting the boundary [53]. This is beneficial for ecotones, which represent transitional zones between adjacent ecosystems, or for pastoralists who move across areas. In such cases, cadastral boundaries seek to handle overlapping access rights and to grant spatiotemporal mobility [54,55,56]. As shown, a cadastral boundary does not merely include spatial aspects, but aspects of time and scale as well [57,58].
Different approaches exist to categorize concepts of cadastral boundaries. The lines between the categories presented in the following can be understood as fuzzy; they are drawn to give a general overview, visualized in Figure 1. From a technical point of view, cadastral boundaries are divisible into two categories: (i) fixed boundaries, whose accurate spatial position has been recorded and agreed upon, and (ii) general boundaries, whose precise spatial position is left undetermined [59]. Both require surveying and documentation in cadastral mapping. Cadastral surveying techniques can be divided into (i) direct techniques, in which the accurate spatial position of a boundary is measured on the ground using theodolites, total stations and Global Navigation Satellite System (GNSS) receivers, and (ii) indirect techniques, in which remotely sensed data such as aerial or satellite imagery are applied and the spatial position of boundaries is derived from these data in a second step [32]. Fixed boundaries are commonly measured with direct techniques, which provide the required higher accuracy. Indirect techniques, including UAVs, are able to determine fixed boundaries only when the data are of high resolution. Indirect techniques are mostly applied to extract visible boundaries, which are determined by physical objects and coincide with the concept of general boundaries [51,52]. This review concentrates on methods that delineate general, i.e., visible, cadastral boundaries from high-resolution data acquired with indirect surveying techniques. The methods are intended to automatically extract boundary features and to be applicable to UAV data.
In order to understand which visible boundaries define the extents of land, the literature on 2D cadastral mapping based on indirect techniques was reviewed to identify common boundary characteristics. Both man-made and natural objects are found to define cadastral boundaries. Studies name buildings, hedges, fences, walls, roads, footpaths, pavement, open areas, crop type, shrubs, rivers, canals and water drainages as cadastral boundary features [24,31,32,34,42,60,61,62]. Trees are named as the most limiting factor, since they often obscure the view of the actual boundary [41,63]. No study summarizes the characteristics of detected cadastral boundaries, even though establishing a model that describes the general characteristics of the feature of interest is described as crucial for feature recognition [64]. Common to many approaches is the linearity of the extracted features. This might be due to the fact that some countries do not accept curved cadastral boundaries [33]. Even if a curved river marks the cadastral boundary, the boundary line is approximated by a polygon [32]. From the named features, the following characteristics can be derived: most features have a continuous and regular geometry expressed in long straight lines of limited curvature. Furthermore, features often share common spectral properties, such as similar values in color and texture. Moreover, boundary features are topologically connected and form a network of lines that surround land parcels of a certain (minimal) size and shape. Finally, boundaries might be indicated by a particular spatial distribution of other objects such as trees. In summary, features are detectable based on their geometry, spectral properties, topology and context.
This review focuses on methods that extract linear boundary features, since cadastral boundaries are commonly represented by straight lines, with exceptions outlined in [55,65]. Cadastral representations in 3D as described in [66] are excluded. With the employment of UAVs, not all cadastral boundaries are detectable: only those visible to an optical sensor, i.e., visible boundaries, can be extracted. This approach does not consider non-visible boundaries that are not marked by a physical object. These might be socially perceived boundaries or arbitrary boundaries originating from a continuous subdivision of land parcels. Figure 2 provides an overview of the visible boundary characteristics mentioned before and of commonly raised issues in terms of their detection.

2.2. Feature Extraction Methods

This section reviews methods that are able to detect and extract the above-mentioned boundary characteristics. The methods reviewed are either pixel-based or object-based. (i) Pixel-based approaches analyze single pixels, optionally taking into account the pixels’ context, which can be considered through moving windows or implicitly through modeling. These data-driven approaches are often employed when the object of interest is smaller than or similar in size to the spatial resolution. Example exceptions are modern convolutional neural networks (CNNs) [68], which are explained later. The lack of an explicit object topology is one drawback that might lead to inferior results, in particular for topographic mapping applications, compared to those of human vision [69]; (ii) Object-based approaches are employed to explicitly integrate knowledge of object appearance and topology into the object extraction process. Applying these approaches becomes possible once the spatial resolution is finer than the object of interest. In such cases, pixels with similar characteristics such as color, tone, texture, shape, context, shadow or semantics are grouped into objects. Such approaches are referred to as Object-Based Image Analysis (OBIA). They are considered model-driven, since knowledge about scene understanding is incorporated to structure the image content spatially and semantically. The grouping of pixels might also result in small groups of pixels, called superpixels. This approach, with corresponding methods explained in Section 2.2.2, could be seen as a third, in-between category, but is understood as object-based in this review [70,71,72].
Pixel-based approaches are often used to extract low-level features, which do not consider information about spatial relationships. Low-level features are extracted directly from the raw, possibly noisy pixels, with edge detection being the most prominent algorithm [73]. Object-based approaches are used to extract high-level features, which represent shapes in images that are detected invariant to illumination, translation, orientation and scale. High-level features are mostly extracted based on the information provided by low-level features [73]. High-level feature extraction, aimed at automated object detection and extraction, is currently achieved in a stepwise manner and is still an active research field [74]. Algorithms for high-level feature extraction often need to be interlinked into a processing workflow and do not lead to appropriate results when applied in isolation [70]. The relation of the described concepts is visualized in Figure 3. Both pixel-based and object-based approaches are applicable to UAV data. Pixel-based approaches can be applied to UAV data, or to a downsampled version of lower resolution. Due to the possible fine Ground Sample Distance (GSD) of 1–5 cm [6] of UAV data, object-based approaches seem preferable. Both approaches are included in this review, as the ability to discriminate and extract features is highly dependent on scale [75,76].
The reviewed methods are structured according to a sequence of commonly applied workflow steps for boundary delineation, as shown in Figure 4. The strategy of first identifying candidate regions, then detecting linear features, and finally connecting these appears to be a generic approach, as the following literature exemplifies: a review on linear feature extraction from imagery [64], a review on road detection [77] and case studies that aim to extract road networks from aerial imagery [78,79] and to delineate tree outlines from HRSI [80]. The first step, image segmentation, aims to divide an image into non-overlapping segments in order to identify candidate regions for further processing [81,82,83]. The second step, line extraction, detects edges. Edges are defined as a step change in the value of a low-level feature such as brightness or color; a collinear collection of such edges, aggregated on the basis of grouping criteria, is commonly defined as a line [84,85,86]. The third step, contour generation, connects lines to form a closed vectorized boundary line that surrounds an area defined through segmentation. These main steps can optionally be extended with pre- and postprocessing steps.
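Assuming a standard scientific Python stack (scikit-image; all method choices and parameter values below are illustrative stand-ins, not selections made by the reviewed studies), the three main workflow steps can be sketched as a skeleton:

```python
import numpy as np
from skimage import color, feature, measure, segmentation

def delineate_boundaries(orthoimage, n_segments=200):
    """Hypothetical skeleton of the generic three-step workflow.

    SLIC, Canny and contour tracing are illustrative stand-ins for the
    segmentation, line extraction and contour generation methods
    reviewed in the following sections.
    """
    # Step 1 - image segmentation: identify candidate regions.
    segments = segmentation.slic(orthoimage, n_segments=n_segments,
                                 compactness=10, start_label=0)
    # Step 2 - line extraction: detect edges on the grayscale image.
    gray = color.rgb2gray(orthoimage)
    edges = feature.canny(gray, sigma=2.0)
    # Step 3 - contour generation: trace closed, vectorized boundary
    # lines around the segments found in step 1.
    contours = []
    for label in np.unique(segments):
        mask = (segments == label).astype(float)
        contours.extend(measure.find_contours(mask, 0.5))
    return segments, edges, contours
```

Each stand-in can be swapped for any of the alternatives discussed below without changing the overall structure; pre- and postprocessing would wrap this skeleton.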
This review includes 37 case studies of unknown resolution and 52 case studies of various resolutions, most often below 5 m (Figure 5). The investigated case studies intend to detect features such as coastlines, agricultural field boundaries, road networks and buildings from aerial or satellite imagery, the latter mainly collected by the IKONOS or QuickBird satellites. A minority of studies is based on UAV data. The methods are often equally applicable to aerial and satellite imagery, as the data sources can have similar characteristics, such as the high resolution of the derived orthoimages [87].
In the following, each workflow step is explained in detail, including a table of example methods and of case studies that apply these methods. The tables represent possible approaches; various further methods exist. The most common strategies are covered, while specific adaptations derived from these are excluded in order to limit the extent of this survey. Overall, the survey of methods in this review is extensive, but it does not claim to be complete. The description and contextualization of most methods is based upon [88,89,90,91]. Due to the small number of case studies on linear feature extraction that employ high-resolution sensors of <0.5 m, one group in each table includes case studies on resolutions of up to 5 m, whereas the other includes the remaining case studies. In order to demonstrate the applicability of the methods to UAV imagery for boundary delineation, some representative methods were implemented. An orthoimage acquired with a fixed-wing UAV during a flight campaign in Namibia served as an exemplary dataset (Figure 6). It shows a rural residential housing area and has a GSD of 5 cm. The acquisition and processing of the images is described in [24]. In this exemplary dataset, cadastral boundaries are marked with fences and run along paths. In urban areas, cadastral parcels might be marked differently, e.g., through roof outlines. The proposed workflow would be similarly applicable in such areas, possibly detecting a larger number of cadastral boundaries due to a consistency in cadastral boundary objects and smaller parcels. However, a large number of cadastral boundaries might not be visible in urban areas, e.g., when running through buildings with the same roof. Therefore, a rural area is considered as the exemplary dataset.
As for the implementation, image processing libraries written in Python and Matlab were considered. For Python, this included the Scikit [92] and OpenCV modules [93]; the latter are equally available in C++. For Matlab, example code provided by MathWorks [94] and VLFeat [95] was adopted. The methods were implemented with different libraries and standard parameters. The visually most representative output was chosen for this review as an illustrative explanation of the discussed methods.

2.2.1. Preprocessing

Preprocessing steps might be applied in order to improve the output of the subsequent image segmentation and to facilitate the extraction of linear features. To this end, the image is processed to suppress noise and enhance image details. Preprocessing includes the adjustment of contrast and brightness and the application of smoothing filters to remove noise [96]. Two possible approaches that aim at noise removal and image enhancement are presented in the following; further approaches can be found in [97].
Anisotropic diffusion aims at reducing image noise while preserving significant parts of the image content (Figure 7, based on source code provided in [98]). This is done in an iterative process of applying an image filter until a sufficient degree of smoothing is obtained [98,99].
Wallis filter is an image filter method for detail enhancement through local contrast adjustment. The algorithm subdivides an image into non-overlapping windows of the same size to then adjust the contrast and minimize radiometric changes of each window [100].
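As a minimal illustration of the first approach, the Perona-Malik formulation of anisotropic diffusion can be sketched with basic array operations; the parameter values below are illustrative defaults, not values from [98,99]:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.1):
    """Perona-Malik anisotropic diffusion on a 2D grayscale image.

    kappa controls edge sensitivity, gamma the step size (stable for
    gamma <= 0.25 with a 4-neighborhood). Borders wrap via np.roll,
    which is adequate for a sketch.
    """
    img = img.astype(np.float64)
    for _ in range(n_iter):
        # Finite differences to the four neighbors.
        dn = np.roll(img, 1, axis=0) - img
        ds = np.roll(img, -1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # Edge-stopping function: diffusion is suppressed across
        # strong gradients, so significant edges are preserved.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        img = img + gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return img
```

Applied to a noisy image, the filter smooths homogeneous regions over the iterations while the step edges that later feed segmentation and line extraction remain sharp.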

2.2.2. Image Segmentation

This section describes methods that divide an image into non-overlapping segments that represent areas. The segments are detected based on homogeneity parameters or on the differentiation to neighboring regions [101]. In a non-ideal case, the image segmentation creates segments that cover more than one object of interest, or it subdivides the object of interest into several segments. These outcomes are referred to as undersegmentation and oversegmentation, respectively [101]. Various strategies exist to classify image segmentation, as shown in [102,103]. In this review, the methods are classified into (i) unsupervised or (ii) supervised approaches. Table 1 shows an exemplary selection of case studies that apply the methods described in the following.
(i)
Unsupervised approaches include methods in which segmentation parameters are defined that describe the color, texture, spectral homogeneity, size, shape, compactness and scale of image segments. The challenge lies in defining appropriate segmentation parameters for features varying in size, shape, scale and spatial location. Thereafter, the image is automatically segmented according to these parameters [90]. Popular approaches, often applied in the case studies investigated for this review, are described in the following and visualized in Figure 8. A list of further approaches can be found in [102].
Graph-based image segmentation is based on color and is able to preserve details in low-variability image regions while ignoring details in high-variability regions. The algorithm performs an agglomerative clustering of pixels as nodes on a graph such that each superpixel is the minimum spanning tree of the constituent pixels [104,105].
Simple Linear Iterative Clustering (SLIC) is an algorithm that adapts a k-means clustering approach to generate groups of pixels, called superpixels. The number of superpixels and their compactness can be adapted within the memory-efficient algorithm [106].
Watershed algorithm is an edge-based image segmentation method. It is also referred to as a contour filling method and applies a mathematical morphological approach. First, the algorithm transforms an image into a gradient image. The image is seen as a topographical surface, where grey values are treated as the elevation of the surface at each pixel’s location. Then, a flooding process starts in which water effuses from the minimum grey values. When the flooding from two minima converges, a boundary that separates the two identified segments is defined [101,102].
Wavelet transform analyses textures and patterns to detect local intensity variations and can be considered as a generalized combination of three other operations: Multi-resolution analysis, template matching and frequency domain analysis. The algorithm decomposes an image into a low frequency approximation image and a set of high frequency, spatially oriented detailed images [107].
(ii)
Supervised approaches often consist of methods from machine learning and pattern recognition. These can be performed by learning a classifier that captures the variation in object appearances and views from a training dataset. In the training dataset, object shape descriptors are defined and used to label the data. The classifier is then learned from a set of regions with object shape descriptors and their corresponding labels. The automation of machine learning approaches might be limited, since some classifiers need to be trained with samples that require manual labeling. The aim of training is to model the process of data generation such that the classifier can predict the output for unseen data. Various possibilities exist to select training sets and features [108] as well as to select a classifier [90,109]. In contrast to the unsupervised methods, these methods go beyond image segmentation, as they additionally add a semantic meaning to each segment. A selection of popular approaches that have been applied in case studies investigated for this review is described in the following. A list of further approaches can be found in [90].
Convolutional Neural Networks (CNNs) are inspired by biological processes, being made up of neurons that have learnable weights and biases. The algorithm creates multiple layers of small neuron collections which process parts of an image, referred to as receptive fields. Then, local connections and tied weights are analyzed to aggregate information from each receptive field [96].
Markov Random Fields (MRF) are a probabilistic approach based on graphical models. They are used to extract features based on spatial texture by classifying an image into a number of regions or classes. The image is modelled as a MRF and a maximum a posteriori probability approach is used for classification [110].
Support Vector Machines (SVM) consist of a supervised learning model with associated learning algorithms that support linear image classification into two or more categories through data modelling. Their advantages include a generalization capability, which concerns the ability to classify shapes that are not within the feature space used for training [111].
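A minimal scikit-learn sketch (synthetic two-band samples, not cadastral data) illustrates the training and generalization step; the class means and kernel settings here are arbitrary choices for the example:

```python
import numpy as np
from sklearn import svm

rng = np.random.default_rng(0)
# Synthetic two-band "pixel" samples: class 0 dark, class 1 bright.
X = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
               rng.normal(0.8, 0.05, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Train an SVM with a radial basis function kernel.
clf = svm.SVC(kernel='rbf', C=1.0, gamma='scale')
clf.fit(X, y)

# The trained model generalizes to samples not in the training set.
print(clf.predict([[0.15, 0.25], [0.9, 0.75]]))  # [0 1]
```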

2.2.3. Line Extraction

This section describes methods that detect and extract linear features. Table 2 shows an exemplary selection of case studies that apply the described methods, which are visualized in Figure 9. The figure shows that a large number of edges is detected on vegetation and building rooftops, while few edges are detected on paths.
Edge detection can be divided into (i) first and (ii) second order derivative based edge detection. An edge has the one-dimensional shape of a ramp, so calculating the derivative of the image can highlight its location. (i) First order derivative based methods locate edges at the maxima and minima of the first derivative of the image, i.e., at the highest rate of change between adjacent pixels. The most prominent representative is the Canny edge detector, which fulfills the criteria of good detection and localization quality while avoiding multiple responses. These criteria are combined into one optimization criterion and solved using the calculus of variations. The algorithm consists of Gaussian smoothing, gradient filtering, non-maximum suppression and hysteresis thresholding [161]. Further representatives based on first order derivatives are the Robert’s cross, Sobel, Kirsch and Prewitt operators. (ii) Second order derivative based methods locate edges at the zero crossings of the second derivative of the image. The most prominent representative is the Laplacian of Gaussian, which highlights regions of rapid intensity change. The algorithm applies a Gaussian smoothing filter, followed by a derivative operation [162,163].
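Both derivative orders can be illustrated with scipy's standard filters on a synthetic image (an illustrative sketch, not the implementations cited above):

```python
import numpy as np
from scipy import ndimage

# Toy image: a bright square on a dark background.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0

# (i) First-order: the gradient magnitude peaks at the edge itself.
gx = ndimage.sobel(img, axis=1)
gy = ndimage.sobel(img, axis=0)
grad_mag = np.hypot(gx, gy)

# (ii) Second-order: the Laplacian of Gaussian changes sign across
# the edge, so edges sit at its zero crossings.
log = ndimage.gaussian_laplace(img, sigma=1.0)

print(grad_mag.max() > 0, (log > 0).any() and (log < 0).any())
```

A full Canny pipeline would additionally apply non-maximum suppression and hysteresis thresholding to the gradient magnitude.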
Straight line extraction is mostly done with the Hough transform, a connected component analysis for line, circle and ellipse detection in a parameter space referred to as Hough space. Each candidate object point is transformed into Hough space, and clusters within that space represent the object to be detected. The standard Hough transform detects analytic curves, while a generalized Hough transform can be used to detect arbitrarily shaped templates [164]. As an alternative, the Line Segment Detector (LSD) algorithm can be applied. It uses the gradient orientation, which represents the local direction of the intensity values, together with the global context of the intensity variations to group pixels into line-support regions and to determine the location and properties of edges [84]. The method is applied for line extraction in [165,166]. The visualization in Figure 9 is based on source code provided in [166].
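A minimal numpy sketch of the standard Hough voting scheme (illustrative only; discretization steps are arbitrary choices) detects a diagonal line of edge pixels as a cluster in (rho, theta) space:

```python
import numpy as np

# Edge pixels lying on the diagonal line y = x.
points = [(i, i) for i in range(10)]

# Each point votes for every (theta, rho) pair satisfying
# rho = x*cos(theta) + y*sin(theta); collinear points vote for the
# same cell, forming a cluster in Hough space.
thetas = np.deg2rad(np.arange(0, 180))
diag = 20  # offset so negative rho values index into the accumulator
accumulator = np.zeros((2 * diag, len(thetas)), dtype=int)
for x, y in points:
    rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
    accumulator[rhos + diag, np.arange(len(thetas))] += 1

rho_idx, theta_idx = np.unravel_index(accumulator.argmax(), accumulator.shape)
# All 10 points fall into one cell whose normal direction is near 135 deg.
print(accumulator.max(), rho_idx - diag, np.rad2deg(thetas[theta_idx]))
```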

2.2.4. Contour Generation

This section describes methods that generate a vectorized and topologically connected network by connecting line segments. Table 3 shows an exemplary selection of case studies that apply the methods described in the following, which can be categorized into two groups:
(i)
A human operator outlines a small segment of the feature to be extracted. A line tracking algorithm then recursively predicts feature characteristics, verifies them with profile matching and updates the feature outline accordingly. The process continues until the profile matching fails. Perceptual grouping, explained in the following, can be used to group feature characteristics. Case studies that apply such line tracking algorithms can be found in [174,178,179].
(ii)
Instead of outlining a small segment of the feature to be extracted, the human operator can also provide a rough outline of the entire feature. An algorithm then applies a deformable template and refines this initial template to fit the contour of the feature to be extracted. Snakes, which are explained in the following, are an example of this procedure.
Perceptual grouping is the ability to impose structural organization on spatial data based on a set of principles, namely proximity, similarity, closure, continuation, symmetry, common regions and connectedness. If elements are close together, similar to one another, form a closed contour, or move in the same direction, they tend to be grouped perceptually. This allows fragmented line segments to be grouped into an optimized continuous contour [180]. Perceptual grouping is applied under various names such as line grouping, linking, merging or connection in the case studies listed in Table 3.
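A toy sketch of the proximity and continuation principles (hypothetical thresholds, not an implementation from the cited studies) chains fragmented collinear segments into one contour:

```python
import numpy as np

def group_segments(segments, max_gap=2.0, max_angle_deg=5.0):
    """Greedily chain fragmented line segments when they satisfy the
    proximity (small gap) and continuation (similar orientation)
    principles. Segments are ((x1, y1), (x2, y2)) tuples."""
    def angle(seg):
        (x1, y1), (x2, y2) = seg
        return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180

    segments = list(segments)
    merged = True
    while merged:
        merged = False
        for i in range(len(segments)):
            for j in range(i + 1, len(segments)):
                a, b = segments[i], segments[j]
                gap = np.hypot(a[1][0] - b[0][0], a[1][1] - b[0][1])
                d_angle = abs(angle(a) - angle(b))
                d_angle = min(d_angle, 180 - d_angle)
                if gap <= max_gap and d_angle <= max_angle_deg:
                    segments[i] = (a[0], b[1])  # chain a's start to b's end
                    del segments[j]
                    merged = True
                    break
            if merged:
                break
    return segments

# Three collinear fragments with small gaps, plus one unrelated segment.
frags = [((0, 0), (4, 0)), ((5, 0), (9, 0)), ((10, 0), (14, 0)),
         ((0, 5), (0, 9))]
print(group_segments(frags))  # [((0, 0), (14, 0)), ((0, 5), (0, 9))]
```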
Snakes, also referred to as active contours, are elastic curves that dynamically adapt a vector contour to a region of interest by applying energy minimization techniques that express geometric and photometric constraints. The active contour is a set of points that aims to continuously enclose the feature to be extracted [181]. They are listed here, even though they could also be applied in previous steps, such as image segmentation [112,117]. In this step, they are applied to refine the geometrical outline of extracted features [80,131,135].

2.2.5. Postprocessing

Postprocessing aims to improve the output of the delineated feature by optimizing its shape. Prominent approaches are explained in the following. Table 4 shows an exemplary selection of case studies that apply the described postprocessing methods, which are visualized in Figure 10.
The Douglas-Peucker algorithm simplifies a line by reducing the number of points in a curve that is approximated by a series of points [194].
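The algorithm can be sketched in a few lines (an illustrative recursive implementation; the tolerance value below is an arbitrary example):

```python
import numpy as np

def douglas_peucker(points, tol):
    """Simplify a polyline: keep the endpoints and recursively retain
    the point farthest from the start-end chord if it deviates more
    than tol."""
    points = np.asarray(points, dtype=float)
    start, end = points[0], points[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy)
    if norm == 0:
        dists = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:
        # Perpendicular distance of every point to the chord.
        dists = np.abs(dy * (points[:, 0] - start[0])
                       - dx * (points[:, 1] - start[1])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > tol:
        left = douglas_peucker(points[:idx + 1], tol)
        right = douglas_peucker(points[idx:], tol)
        return np.vstack([left[:-1], right])
    return np.array([start, end])

# A jagged boundary line: noise along two straight stretches, one corner.
line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 5.1), (5, 4.9), (6, 5)]
simplified = douglas_peucker(line, tol=0.5)
print(simplified)  # far fewer vertices, endpoints preserved
```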
Morphological operators are employed as a postprocessing step to smooth the contours of detected line features [195].
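For instance, a morphological closing (dilation followed by erosion) can bridge small gaps in a rasterized line; the following is a scipy-based sketch, not the implementation of [195]:

```python
import numpy as np
from scipy import ndimage

# Binary mask of a detected line with a one-pixel gap.
mask = np.zeros((7, 15), dtype=bool)
mask[3, :] = True
mask[3, 7] = False   # gap from imperfect extraction

# Closing with a horizontal 1x3 structuring element bridges the gap
# without thickening the line vertically.
closed = ndimage.binary_closing(mask, structure=np.ones((1, 3), dtype=bool))
print(mask[3, 7], closed[3, 7])  # False True
```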

2.3. Accuracy Assessment Methods

In the following, approaches that assess the accuracy of extracted linear features are described. Quantifying the accuracy requires reference data, against which a metric measuring the similarity between the result and the reference data is calculated. These methods are known as supervised discrepancy methods [201]. The reference data can be acquired through manual digitization of visually extractable linear features [64,202,203] or through their extraction from existing maps [146,199,204]. Some authors extend the assessment to aspects such as time, cost and energy savings and include further accuracy measures [64]. For methods intended to classify linear features, the accuracy assessment is extended to thematic aspects [205,206]. In such cases, the confusion matrix is calculated together with statistics derived from it, such as the user’s, producer’s and overall accuracy as well as the kappa coefficient [113,207,208,209]. The accuracy might also be evaluated based on both thematic and geometric aspects [210]. The geometric accuracy incorporates positional aspects, indicating errors in the object’s location and in its spatial extent. These components can be assessed with pixel-based and object-based measures. Pixel-based accuracy assessment has a rather quantitative character, is often used to assess geometric accuracy and is more standardized than object-based accuracy assessment. The latter has a rather qualitative character and is often used to assess classification quality [143]. The trend towards standardized pixel-based accuracy measures is manifested in efforts of the International Society for Photogrammetry and Remote Sensing (ISPRS), which publishes benchmark data to assess different methods in a uniform manner [211]. A comparison of both approaches shows that object-based approaches provide additional accuracy information compared to pixel-based approaches [207].
One example of such additional information is topological aspects, which can be assessed with an object-based approach as shown in [212]. Such approaches can be based on a fuzzy representation of the object’s boundary [213,214]. Ultimately, different aspects of feature extraction performance can be highlighted with a combination of pixel-based and object-based metrics [215].
The following measures, which can be applied both pixel-based and object-based, express planimetric accuracy. They are simple to implement and often applied when assessing feature extraction methods [203,207,215,216]:
The completeness measures the percentage of the reference data which is explained by the extracted data, i.e., the percentage of the reference data which could be extracted. The value ranges from 0 to 1, with 1 being the optimum value.
The correctness represents the percentage of correctly extracted data, i.e., the percentage of the extraction, which is in accordance with the reference data. The value ranges from 0 to 1, with 1 being the optimum value.
The redundancy represents the percentage to which the correct extraction is redundant, i.e., extracted features overlap themselves. The value ranges from 0 to 1, with 0 being the optimum value.
The Root-Mean Square (RMS) difference expresses the average distance between the matched extracted and the matched reference data, i.e., the geometrical accuracy potential of the extracted data. The optimum value for RMS is 0.
These planimetric measures count, per pixel or per object, the number of (i) True Positives (TP), where the extracted features match the reference data; (ii) False Positives (FP), where an extracted feature does not exist in the reference data; and (iii) False Negatives (FN), where features present in the reference data are not extracted. The measures described can be used to derive further quality measures. These include the quality, the rank distance, the branching factor, the mean detour factor and connectivity measures when assessing linear networks [171,203,217,218]. Further, the precision-recall curve, the F-measure, the intersection over union metric (IoU) and the average precision are derived to assess object detection methods [90]. Since they contain the information inherent in the initial measures, they are used if a single quality measure is desired.
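Assuming TP, FP and FN counts have already been obtained from a matching procedure, the core measures reduce to a few ratios (the counts below are illustrative numbers, not results from a case study):

```python
# Counts of true positives (TP), false positives (FP) and false
# negatives (FN), e.g., obtained via a buffer overlay of extracted
# and reference lines.
TP, FP, FN = 820, 90, 140

completeness = TP / (TP + FN)   # share of the reference that was extracted
correctness = TP / (TP + FP)    # share of the extraction that is correct
quality = TP / (TP + FP + FN)   # combined single-value measure

print(round(completeness, 3), round(correctness, 3), round(quality, 3))
# 0.854 0.901 0.781
```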
In the most common and simple approach, a buffer area of a specific width is calculated around linear features in the extracted data and the reference data. Comparing these areas then leads to the described accuracy measures. A buffer analysis can be performed either on a pixel-based or object-based representation of linear features. For a pixel-based representation, the buffer consists of a set of pixels within a specific distance from a set of pixels that represents the line. For an object-based representation, i.e., a vector representation, the buffer consists of a corridor of a specific width around the line [219]. The results of this approach strongly depend on the buffer width.
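A pixel-based buffer analysis can be sketched with a distance transform (an illustrative numpy/scipy sketch; here the one-pixel offset between extraction and reference stays within the chosen buffer width, so both measures come out perfect):

```python
import numpy as np
from scipy import ndimage

# Rasterized reference line and extracted line, offset by one pixel.
ref = np.zeros((10, 10), dtype=bool); ref[5, :] = True
ext = np.zeros((10, 10), dtype=bool); ext[6, :] = True

buffer_width = 1.5  # pixels

# Pixel-based buffer: all pixels within buffer_width of a line,
# obtained from the Euclidean distance transform of the background.
ref_buf = ndimage.distance_transform_edt(~ref) <= buffer_width
ext_buf = ndimage.distance_transform_edt(~ext) <= buffer_width

TP = np.count_nonzero(ext & ref_buf)    # extracted pixels near reference
FN = np.count_nonzero(ref & ~ext_buf)   # reference pixels missed
completeness = (np.count_nonzero(ref) - FN) / np.count_nonzero(ref)
correctness = TP / np.count_nonzero(ext)
print(completeness, correctness)  # 1.0 1.0
```

Shrinking `buffer_width` below the offset would drive both measures to zero, which illustrates how strongly the results depend on the chosen width.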

3. Discussion

In the following sections, the previously described feature extraction and evaluation methods are discussed and interpreted in the light of previous studies and in the context of UAV-based cadastral mapping. The former discussion is based on advantages and drawbacks reported in case studies, the latter on experience gained during the implementation of representative methods.

3.1. UAV-based Cadastral Mapping

As the investigated case studies on feature extraction are rarely UAV-based, their potential is discussed based on the studies described in the introduction. In cadastral mapping, UAVs bear advantages over conventional approaches, as they provide a tool for fast and cheap capture and visualization of spatial data. UAVs can increase the pace of a mapping procedure, as data can be captured and analyzed automatically. However, the full production chain and quality control of UAV data may involve substantial time investment. Compared to conventional image data used for cadastral mapping, UAV-derived orthoimages are more flexible in capture time, space and ground resolution. Furthermore, the orthoimages can be partially updated or reused in further applications. Due to the research gap in the field of UAV-based automatic feature extraction in the context of cadastral mapping, its feasibility cannot be assessed thoroughly. This review serves as a starting point to fill that gap and to provide the missing link between UAV-based cadastral mapping and automatic feature extraction.

3.2. Cadastral Boundary Characteristics

For all feature extraction approaches, it is necessary to define the characteristics of the feature to be extracted, as stressed in [64,220]. In this review, this has been done for boundary features regarding their geometry, spectral property, topology and context. The reviewed case studies, which extract linear features such as coastlines, agricultural field boundaries, road networks and buildings, rarely do this explicitly. One feature description approach similar to this review is employed in a study on the extraction of road networks [179], in which the feature description is extended to functional characteristics. Even with a thorough description of linear boundary features, methods that extract visible features are not capable of extracting the entirety of boundary features, which might include socially constructed boundaries that are not visible to an optical sensor [51,57]. The detection of such boundaries can be supported through model-driven approaches that generate a contour around land plots based on a priori knowledge.

3.3. Feature Extraction Methods

The methods reviewed in this study are grouped according to common workflow steps applied for linear feature extraction from high-resolution optical sensor data: (i) preprocessing; (ii) image segmentation; (iii) line extraction; (iv) contour generation and (v) postprocessing (Figure 4). A broad range of methods has been reviewed, which does not result in a recommendation of one specific method per workflow step, since further experimental work would be required to do this reliably. The most common methods listed in this review are applied to data with resolutions mostly below 5 m. Furthermore, the workflow steps are not necessarily independent, and sometimes the same method can be applied within different steps. Finding a non-redundant classification structure for all methods does not seem feasible, as stated by the authors of similar review papers on road extraction [221], object detection [90] and linear feature extraction [64]. Many case studies combine strategies from a variety of approaches, of which many example combinations are listed in [221]. The greatest variety can be found in case studies on road detection [222]. This application field appears to be the most prominent for linear feature extraction, demonstrated by the large number of case studies, their comprehensive review in [221], their extensive comparison on benchmark data [223] and the commonly applied accuracy assessment methods originating from this field [216].
In the following sections, advantages and drawbacks named in case studies that apply previously described methods are outlined. Furthermore, recommendations on their applicability for UAV-based boundary delineation are drawn, when possible.
(i)
Preprocessing steps that include image enhancement and filtering are often applied in case studies that use high-resolution data below 1 m [118,121,126]. This might be due to the large amount of detail in such images, which can be reduced with filtering techniques. Without such preprocessing, oversegmentation might result, as well as too many non-relevant edges obtained through edge detection. One drawback of such preprocessing steps is the need to set thresholds for image enhancement and filtering. Standard parameters might yield valuable results, but might also erase crucial image details. The need to select parameters hinders the automation of the entire workflow.
(ii)
Image segmentation is listed as a crucial first step for linear feature extraction in corresponding review papers [64,77,109]. Overall, image segments that are distinct in color, texture, spectral homogeneity, size, shape, compactness and scale are easier to delineate than segments that are inhomogeneous in these respects. The methods reviewed in this paper are classified into supervised and unsupervised approaches. More studies apply an unsupervised approach, which might be due to its higher degree of automation. The supervised approaches taken from machine learning suffer from their extensive input requirements, such as the definition of features with corresponding object descriptors, the labeling of objects, the training of a classifier and its application to test data [86,147]. Furthermore, the ability of machine learning approaches to classify an image into categories of different labels is not necessarily required within this workflow step, since the image only needs to be segmented. Machine learning approaches such as CNNs can also be employed in further workflow steps, i.e., for edge detection as shown in [224]. A combination of edge detection and image segmentation based on machine learning is proposed in [225]. A large number of case studies are based on SVMs [111]. SVMs are appealing due to their ability to generalize well from a limited amount and quality of training data, which is a common limitation in remote sensing. Mountrakis et al. found that SVMs can be trained on fewer training data compared to other approaches. However, they state that the selection of parameters such as the kernel size strongly affects the results and is frequently solved in a trial-and-error approach, which again limits the automation [111]. Furthermore, SVMs are not optimized for noise removal, which makes image preprocessing indispensable for high-resolution data.
Approaches such as the Bag-of-Words framework, as applied in [226], have the advantage of automating the feature selection and labeling, before applying a supervised learning algorithm. Further state-of-the-art approaches including AdaBoost and random forest are discussed in [90].
(iii)
Line extraction makes up the majority of case studies on linear feature extraction, with Canny edge detection being the most prominent approach. The Canny edge detector is capable of reducing noise, whereas second order derivatives such as the Laplacian of Gaussian, which respond to transitions in intensity, are sensitive to noise. Comparisons of edge detection approaches have shown that the Canny edge detector performs better than the Laplacian of Gaussian and first order derivatives such as the Robert’s cross, Sobel and Prewitt operators [162,163]. In terms of line extraction, the Hough transform is the most commonly used method. The LSD appears as an alternative that requires no parameter tuning while giving accurate results.
(iv)
Contour generation is not represented in all case studies, since it is not as essential for linear feature extraction as the two previous workflow steps. The exceptions are case studies on road network extraction, which name contour generation, especially based on snakes, as a crucial workflow step [77,221]. Snakes deliver the most accurate results when initialized with seed points close to the features to be extracted [193]. Furthermore, these methods require parameter tuning in terms of the energy field, which limits their automation [80]. Perceptual grouping is rarely applied in the case studies investigated, especially not in those based on high-resolution data.
(v)
Postprocessing is utilized more often than preprocessing. Morphological filtering in particular is applied in the majority of case studies. Admittedly, it is not always employed as a form of postprocessing to improve the final output, but equally during the workflow to smooth the result of a workflow step before further processing [73,120,121,126]. When applied at the end of the workflow in case studies on road extraction, morphological filtering is often combined with skeletons to extract the vectorized centerline of the road [79,125,142,196,198].

3.4. Accuracy Assessment Methods

There is no single optimal method for assessing or reporting accuracy. Many studies, especially those aiming at road extraction, use the described approach based on buffer analysis. Quackenbush states that most case studies focus on the extraction of a single feature, such as roads, which reduces the informative value of the resulting confusion matrix [64]. Furthermore, many studies report qualitative accuracy measures based on visual assessment. Those that provide more quantitative measures are often vague in describing their procedure [64]. Moreover, Foody states that quantitative measures are often misinterpreted and should be treated with care. He argues that standardized measures and reporting schemes would be supportive, but are unlikely given the range of application fields and disciplines [208]. The ISPRS benchmark tests are a recent effort in this direction [211].
Furthermore, mixed pixels and registration problems might lead to a gap between extracted data and reference data. This results in low accuracy measures even for accurately extracted features [208]. A prioritization of comprehensive coverage of land plots over spatial accuracy is manifested in the fit-for-purpose land administration strategy proposed in [20]. The accuracy measure should therefore not only be interpreted with care, but also be chosen with care, taking into account the context and aim of the study, as concluded in [208].

4. Conclusions

This review aimed to explore options for delineating boundaries in UAV-based cadastral mapping. First, an initial review of cadastral mapping based on high-resolution optical sensor data was conducted to document the recent state of the art. Then, cadastral boundary concepts and characteristics were summarized. Thereafter, an extensive review was completed on methods that extract and assess linear features with boundary characteristics. Since cadastral features encompass a variety of objects, the methods could also be applied to detect linear features in further application fields. The workflow steps proposed for boundary delineation comprise preprocessing, image segmentation, line extraction, contour generation and postprocessing. For each workflow step, the most popular methods were described and case studies that have proven their suitability were listed. The applicability of some representative methods to high-resolution UAV data was shown through their implementation on an exemplary UAV-derived orthoimage. In general, the workflow steps were supported in the majority of case studies and proved valid when applied to UAV data. Thereafter, the most common accuracy assessment approaches were described. Moreover, advantages and drawbacks of each method were outlined, resulting in recommendations on their application for UAV-based cadastral mapping.
In conclusion, this review serves as a basis for the subsequent implementation of the most suitable methods in a cadastral mapping workflow. Depending on the methods chosen and their implementation, different degrees of automation can be obtained. One option is a data-driven workflow that extracts visible boundaries, which then need to be processed by a human operator. Another option is a model-driven workflow that delineates boundaries based on knowledge about their characteristics. Future work should focus on the automation of a suitable workflow. Increasing the level of automation while reducing the required human input is also a central aim of ongoing research [64]. Due to the lack of robustness of automatic feature extraction, some authors favor semi-automatic approaches that combine the interpretation skills of a human operator with the measurement speed of a computer [178,227]. Semi-automatic approaches that include editing capabilities seem indispensable for cadastral mapping approaches that focus on the participation of local stakeholders and the integration of local knowledge [20]. When evaluating an entire boundary delineation workflow for cadastral mapping, the following points proposed in [74] can serve as a basis for evaluation: the workflow should correctly and completely extract all relevant boundaries, be simple to parameterize with a high degree of automation and a minimal need for interaction, have a low computational effort, include self-assessment to increase reliability and be robust against varying quality of input data. The varying quality of input data might result from the application of different UAV platforms and sensors. Their influence on the choice of optimal workflow steps for cadastral mapping could be investigated in future work. Furthermore, the potential of integrating information from UAV-derived 3D point clouds might be investigated to extend the proposed workflow.
Overall, this review contributes to the applicability of UAVs, which, according to Watts et al., have the potential to revolutionize remote sensing and its application fields to the same degree as the advent of Geographical Information Systems (GIS) did two decades ago [30]. For cadastral mapping, numerous studies have demonstrated the potential of UAVs, especially in terms of fast data capture and high accuracy [40]. UAV-based cadastral mapping could contribute to contemporary initiatives such as the United Nations’ sustainable development goals, as it allows a new economic, environmental and social approach to cadastral mapping [228]. With UAVs being able to rapidly map small areas, the cadastral map could be kept up to date at low cost in a sustainable way. These aspects, together with the possibility of creating a transparent and participatory mapping process, could contribute to another recent initiative, namely fit-for-purpose land administration published by the World Bank and the International Federation of Surveyors (FIG) [20].
Future work might concentrate on the integration of existing maps as a source of geometric and semantic information that is left undetected by automatic feature extraction. Existing maps are incorporated in the workflow of cadastral mapping to support manual digitization, as a basis for map updates or for accuracy assessment [23,25,26]. Their potential to support automatic feature extraction, as proposed for road extraction [199], is not yet exploited and hardly investigated [229]. As a further data source, smart sketchmaps that transfer hand-drawn maps into topologically and spatially corrected maps could be integrated in the feature extraction workflow [230]. This would allow the integration of local spatial knowledge and the delineation of socially perceived boundaries. Such boundaries are not visible to optical sensors and were excluded from this review. Furthermore, the boundary delineation methods could be enhanced to support the increasingly prominent area of 3D cadastral mapping of boundaries and buildings [66]. This would allow a detailed representation of complex interrelated titles and land uses [50]. Future development in UAV-based cadastral mapping can be expected, since the ISPRS lists UAVs as a key topic and stresses their potential for national mapping in its recent paper on trends and topics for future work [87]. Moreover, the European Union has acknowledged the use of UAV-derived orthoimages as a valid source for cadastral mapping and further applications [41].

Acknowledgments

This research was funded within its4land, which is part of the Horizon 2020 program of the European Union, project number 687828. We are grateful to the authors of [24] for providing the UAV orthoimage.

Author Contributions

Sophie Crommelinck collected and prepared the data for this review paper. Markus Gerke, Francesco Nex, Rohan Bennett, Michael Ying Yang and George Vosselman contributed to the analysis and interpretation of the data. Additionally, they supervised the process of defining the structure of this review and the selection of case studies. The manuscript was written by Sophie Crommelinck with contributions from Markus Gerke, Francesco Nex, Rohan Bennett, Michael Ying Yang and George Vosselman.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN: Convolutional Neural Networks
DSM: Digital Surface Model
FIG: International Federation of Surveyors
FN: False Negatives
FP: False Positives
GIS: Geographical Information System
GNSS: Global Navigation Satellite System
GSD: Ground Sample Distance
HRSI: High-Resolution Satellite Imagery
IoU: Intersection over Union
ISPRS: International Society for Photogrammetry and Remote Sensing
LSD: Line Segment Detector
MRF: Markov Random Field
OBIA: Object-Based Image Analysis
RMS: Root-Mean Square
SLIC: Simple Linear Iterative Clustering
SVM: Support Vector Machine
TP: True Positives
UAV: Unmanned Aerial Vehicle

References

  1. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  2. Harwin, S.; Lucieer, A. Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery. Remote Sens. 2012, 4, 1573–1599. [Google Scholar] [CrossRef]
  3. Gerke, M.; Przybilla, H.-J. Accuracy analysis of photogrammetric UAV image blocks: Influence of onboard RTK-GNSS and cross flight patterns. Photogramm. Fernerkund. Geoinf. 2016, 14, 17–30. [Google Scholar] [CrossRef]
  4. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  5. Tahar, K.N.; Ahmad, A. An evaluation on fixed wing and multi-rotor UAV images using photogrammetric image processing. Int. J. Comput. Electr. Autom. Control Inf. Eng. 2013, 7, 48–52. [Google Scholar]
  6. Toth, C.; Jóźków, G. Remote sensing platforms and sensors: A survey. ISPRS J. Photogramm. Remote Sens. 2016, 115, 22–36. [Google Scholar] [CrossRef]
  7. Eisenbeiss, H.; Sauerbier, M. Investigation of UAV systems and flight modes for photogrammetric applications. Photogramm. Rec. 2011, 26, 400–421. [Google Scholar] [CrossRef]
  8. Fernández-Hernandez, J.; González-Aguilera, D.; Rodríguez-Gonzálvez, P.; Mancera-Taboada, J. Image-based modelling from Unmanned Aerial Vehicle (UAV) photogrammetry: An effective, low-cost tool for archaeological applications. Archaeometry 2015, 57, 128–145. [Google Scholar] [CrossRef]
  9. Zhang, C.; Kovacs, J.M. The application of small unmanned aerial systems for precision agriculture: A review. Precis. Agric. 2012, 13, 693–712. [Google Scholar] [CrossRef]
  10. Berni, J.; Zarco-Tejada, P.; Suárez, L.; González-Dugo, V.; Fereres, E. Remote sensing of vegetation from UAV platforms using lightweight multispectral and thermal imaging sensors. Proc. ISPRS 2009, 38, 22–29. [Google Scholar]
  11. Puri, A.; Valavanis, K.; Kontitsis, M. Statistical profile generation for traffic monitoring using real-time UAV based video data. In Proceedings of the Mediterranean Conference on Control & Automation, Athens, Greece, 27–29 June 2007.
  12. Bendea, H.; Boccardo, P.; Dequal, S.; Giulio Tonolo, F.; Marenchino, D.; Piras, M. Low cost UAV for post-disaster assessment. Proc. ISPRS 2008, 37, 1373–1379. [Google Scholar]
  13. Chou, T.-Y.; Yeh, M.-L.; Chen, Y.C.; Chen, Y.H. Disaster monitoring and management by the unmanned aerial vehicle technology. Proc. ISPRS 2010, 35, 137–142. [Google Scholar]
  14. Irschara, A.; Kaufmann, V.; Klopschitz, M.; Bischof, H.; Leberl, F. Towards fully automatic photogrammetric reconstruction using digital images taken from UAVs. Proc. ISPRS 2010, 2010, 1–6. [Google Scholar]
  15. Binns, B.O.; Dale, P.F. Cadastral Surveys and Records of Rights in Land Administration. Available online: http://www.fao.org/docrep/006/v4860e/v4860e03.htm (accessed on 10 June 2016).
  16. Williamson, I.; Enemark, S.; Wallace, J.; Rajabifard, A. Land Administration for Sustainable Development; ESRI Press Academic: Redlands, CA, USA, 2010. [Google Scholar]
  17. Alemie, B.K.; Bennett, R.M.; Zevenbergen, J. Evolving urban cadastres in Ethiopia: The impacts on urban land governance. Land Use Policy 2015, 42, 695–705. [Google Scholar] [CrossRef]
18. United Nations. Land Administration in the UNECE Region: Development Trends and Main Principles; UNECE Information Service: Geneva, Switzerland, 2005. [Google Scholar]
  19. Kelm, K. UAVs revolutionise land administration. GIM Int. 2014, 28, 35–37. [Google Scholar]
  20. Enemark, S.; Bell, K.C.; Lemmen, C.; McLaren, R. Fit-For-Purpose Land Administration; International Federation of Surveyors: Frederiksberg, Denmark, 2014. [Google Scholar]
  21. Zevenbergen, J.; Augustinus, C.; Antonio, D.; Bennett, R. Pro-poor land administration: Principles for recording the land rights of the underrepresented. Land Use Policy 2013, 31, 595–604. [Google Scholar] [CrossRef]
  22. Zevenbergen, J.; De Vries, W.; Bennett, R.M. Advances in Responsible Land Administration; CRC Press: Padstow, UK, 2015. [Google Scholar]
  23. Maurice, M.J.; Koeva, M.N.; Gerke, M.; Nex, F.; Gevaert, C. A photogrammetric approach for map updating using UAV in Rwanda. In Proceedings of the GeoTechRwanda, Kigali, Rwanda, 18–20 November 2015.
  24. Mumbone, M.; Bennett, R.; Gerke, M.; Volkmann, W. Innovations in boundary mapping: Namibia, customary lands and UAVs. In Proceedings of the World Bank Conference on Land and Poverty, Washington, DC, USA, 23–27 March 2015.
  25. Volkmann, W.; Barnes, G. Virtual surveying: Mapping and modeling cadastral boundaries using Unmanned Aerial Systems (UAS). In Proceedings of the FIG Congress: Engaging the Challenges—Enhancing the Relevance, Kuala Lumpur, Malaysia, 16–21 June 2014; pp. 1–13.
  26. Barthel, K. Linking Land Policy, Geospatial Technology and Community Participation. Available online: http://thelandalliance.org/2015/06/early-lessons-learned-from-testing-uavs-for-geospatial-data-collection-and-participatory-mapping/ (accessed on 10 June 2016).
  27. Pajares, G. Overview and current status of remote sensing applications based on unmanned aerial vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–329. [Google Scholar] [CrossRef]
  28. Everaerts, J. The use of unmanned aerial vehicles (UAVs) for remote sensing and mapping. Proc. ISPRS 2008, 37, 1187–1192. [Google Scholar]
  29. Remondino, F.; Barazzetti, L.; Nex, F.; Scaioni, M.; Sarazzi, D. UAV photogrammetry for mapping and 3D modeling—Current status and future perspectives. Proc. ISPRS 2011, 38, C22. [Google Scholar] [CrossRef]
  30. Watts, A.C.; Ambrosia, V.G.; Hinkley, E.A. Unmanned aircraft systems in remote sensing and scientific research: Classification and considerations of use. Remote Sens. 2012, 4, 1671–1692. [Google Scholar] [CrossRef]
  31. Ali, Z.; Tuladhar, A.; Zevenbergen, J. An integrated approach for updating cadastral maps in Pakistan using satellite remote sensing data. Int. J. Appl. Earth Obs. Geoinform. 2012, 18, 386–398. [Google Scholar] [CrossRef]
32. Corlazzoli, M.; Fernandez, O. SPOT 5 cadastral validation project in Izabal, Guatemala. Proc. ISPRS 2004, 35, 291–296. [Google Scholar]
  33. Konecny, G. Cadastral mapping with earth observation technology. In Geospatial Technology for Earth Observation; Li, D., Shan, J., Gong, J., Eds.; Springer: Boston, MA, USA, 2010; pp. 397–409. [Google Scholar]
34. Ondulo, J.-D. High spatial resolution satellite imagery for PID improvement in Kenya. In Proceedings of the FIG Congress: Shaping the Change, Munich, Germany, 8–13 October 2006; pp. 1–9.
  35. Tuladhar, A. Spatial cadastral boundary concepts and uncertainty in parcel-based information systems. Proc. ISPRS 1996, 31, 890–893. [Google Scholar]
  36. Christodoulou, K.; Tsakiri-Strati, M. Combination of satellite image Pan IKONOS-2 with GPS in cadastral applications. In Proceedings of the Workshop on Spatial Information Management for Sustainable Real Estate Market, Athens, Greece, 28–30 May 2003.
37. Alkan, M.; Marangoz, M. Creating cadastral maps in rural and urban areas using high resolution satellite imagery. Appl. Geo-Inform. Soc. Environ. 2009, 2009, 89–95. [Google Scholar]
  38. Törhönen, M.P. Developing land administration in Cambodia. Comput. Environ. Urban Syst. 2001, 25, 407–428. [Google Scholar] [CrossRef]
39. Greenwood, F. Mapping in practice. In Drones and Aerial Observation; New America: New York, NY, USA, 2015; pp. 49–55. [Google Scholar]
40. Manyoky, M.; Theiler, P.; Steudler, D.; Eisenbeiss, H. Unmanned aerial vehicle in cadastral applications. Proc. ISPRS 2011, 38, 57. [Google Scholar] [CrossRef]
  41. Mesas-Carrascosa, F.J.; Notario-García, M.D.; de Larriva, J.E.M.; de la Orden, M.S.; Porras, A.G.-F. Validation of measurements of land plot area using UAV imagery. Int. J. Appl. Earth Obs. Geoinform. 2014, 33, 270–279. [Google Scholar] [CrossRef]
  42. Rijsdijk, M.; Hinsbergh, W.H.M.V.; Witteveen, W.; Buuren, G.H.M.; Schakelaar, G.A.; Poppinga, G.; Persie, M.V.; Ladiges, R. Unmanned aerial systems in the process of juridical verification of cadastral border. Proc. ISPRS 2013, 61, 325–331. [Google Scholar] [CrossRef]
  43. Van Hinsberg, W.; Rijsdijk, M.; Witteveen, W. UAS for cadastral applications: Testing suitability for boundary identification in urban areas. GIM Int. 2013, 27, 17–21. [Google Scholar]
  44. Cunningham, K.; Walker, G.; Stahlke, E.; Wilson, R. Cadastral audit and assessments using unmanned aerial systems. Proc. ISPRS 2011, 2011, 1–4. [Google Scholar] [CrossRef]
  45. Cramer, M.; Bovet, S.; Gültlinger, M.; Honkavaara, E.; McGill, A.; Rijsdijk, M.; Tabor, M.; Tournadre, V. On the use of RPAS in national mapping—The EuroSDR point of view. Proc. ISPRS 2013, 11, 93–99. [Google Scholar] [CrossRef]
  46. Haarbrink, R. UAS for geo-information: Current status and perspectives. Proc. ISPRS 2011, 35, 1–6. [Google Scholar] [CrossRef]
  47. Eyndt, T.; Volkmann, W. UAS as a tool for surveyors: From tripods and trucks to virtual surveying. GIM Int. 2013, 27, 20–25. [Google Scholar]
  48. Barnes, G.; Volkmann, W. High-Resolution mapping with unmanned aerial systems. Surv. Land Inf. Sci. 2015, 74, 5–13. [Google Scholar]
  49. Heipke, C.; Woodsford, P.A.; Gerke, M. Updating geospatial databases from images. In ISPRS Congress Book; Taylor & Francis Group: London, UK, 2008; pp. 355–362. [Google Scholar]
  50. Jazayeri, I.; Rajabifard, A.; Kalantari, M. A geometric and semantic evaluation of 3D data sourcing methods for land and property information. Land Use Policy 2014, 36, 219–230. [Google Scholar] [CrossRef]
51. Zevenbergen, J.; Bennett, R. The visible boundary: More than just a line between coordinates. In Proceedings of the GeoTechRwanda, Kigali, Rwanda, 18–20 November 2015.
  52. Bennett, R.; Kitchingman, A.; Leach, J. On the nature and utility of natural boundaries for land and marine administration. Land Use Policy 2010, 27, 772–779. [Google Scholar] [CrossRef]
  53. Smith, B. On drawing lines on a map. In Spatial Information Theory: A Theoretical Basis for GIS; Springer: Berlin/Heidelberg, Germany, 1995; pp. 475–484. [Google Scholar]
  54. Lengoiboni, M.; Bregt, A.K.; van der Molen, P. Pastoralism within land administration in Kenya—The missing link. Land Use Policy 2010, 27, 579–588. [Google Scholar] [CrossRef]
  55. Fortin, M.-J.; Olson, R.; Ferson, S.; Iverson, L.; Hunsaker, C.; Edwards, G.; Levine, D.; Butera, K.; Klemas, V. Issues related to the detection of boundaries. Landsc. Ecol. 2000, 15, 453–466. [Google Scholar] [CrossRef]
  56. Fortin, M.-J.; Drapeau, P. Delineation of ecological boundaries: Comparison of approaches and significance tests. Oikos 1995, 72, 323–332. [Google Scholar] [CrossRef]
  57. Richardson, K.A.; Lissack, M.R. On the status of boundaries, both natural and organizational: A complex systems perspective. J. Complex. Issues Organ. Manag. Emerg. 2001, 3, 32–49. [Google Scholar] [CrossRef]
  58. Fagan, W.F.; Fortin, M.-J.; Soykan, C. Integrating edge detection and dynamic modeling in quantitative analyses of ecological boundaries. BioScience 2003, 53, 730–738. [Google Scholar] [CrossRef]
  59. Dale, P.; McLaughlin, J. Land Administration; University Press: Oxford, UK, 1999. [Google Scholar]
  60. Cay, T.; Corumluoglu, O.; Iscan, F. A study on productivity of satellite images in the planning phase of land consolidation projects. Proc. ISPRS 2004, 32, 1–6. [Google Scholar]
  61. Barnes, G.; Moyer, D.D.; Gjata, G. Evaluating the effectiveness of alternative approaches to the surveying and mapping of cadastral parcels in Albania. Comput. Environ. Urban Syst. 1994, 18, 123–131. [Google Scholar] [CrossRef]
62. Lemmen, C.; Zevenbergen, J.A.; Lengoiboni, M.; Deininger, K.; Burns, T. First experiences with high resolution imagery based adjudication approach for social tenure domain model in Ethiopia. In Proceedings of the FIG-World Bank Conference, Washington, DC, USA, 2009.
  63. Rao, S.; Sharma, J.; Rajashekar, S.; Rao, D.; Arepalli, A.; Arora, V.; Singh, R.; Kanaparthi, M. Assessing usefulness of High-Resolution Satellite Imagery (HRSI) for re-survey of cadastral maps. Proc. ISPRS 2014, 2, 133–143. [Google Scholar] [CrossRef]
  64. Quackenbush, L.J. A review of techniques for extracting linear features from imagery. Photogramm. Eng. Remote Sens. 2004, 70, 1383–1392. [Google Scholar] [CrossRef]
  65. Edwards, G.; Lowell, K.E. Modeling uncertainty in photointerpreted boundaries. Photogramm. Eng. Remote Sens. 1996, 62, 377–390. [Google Scholar]
  66. Aien, A.; Kalantari, M.; Rajabifard, A.; Williamson, I.; Bennett, R. Advanced principles of 3D cadastral data modelling. In Proceedings of the 2nd International Workshop on 3D Cadastres, Delft, The Netherlands, 16–18 November 2011.
  67. Ali, Z.; Ahmed, S. Extracting parcel boundaries from satellite imagery for a land information system. In Proceedings of the International Conference on Recent Advances in Space Technologies (RAST), Istanbul, Turkey, 12–14 June 2013.
  68. Lin, G.; Shen, C.; Reid, I. Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation. Available online: http://arxiv.org/abs/1504.01013 (accessed on 12 August 2016).
  69. Hay, G.J.; Blaschke, T.; Marceau, D.J.; Bouchard, A. A comparison of three image-object methods for the multiscale analysis of landscape structure. ISPRS J. Photogramm. Remote Sens. 2003, 57, 327–345. [Google Scholar] [CrossRef]
  70. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  71. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Queiroz Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic object-based image analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [PubMed]
  72. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258. [Google Scholar] [CrossRef]
  73. Babawuro, U.; Beiji, Z. Satellite imagery cadastral features extractions using image processing algorithms: A viable option for cadastral science. Int. J. Comput. Sci. Issues 2012, 9, 30–38. [Google Scholar]
  74. Luhmann, T.; Robson, S.; Kyle, S.; Harley, I. Close Range Photogrammetry: Principles, Methods and Applications; Whittles: Dunbeath, UK, 2006. [Google Scholar]
  75. Selvarajan, S.; Tat, C.W. Extraction of man-made features from remote sensing imageries by data fusion techniques. In Proceedings of the Asian Conference on Remote Sensing, Singapore, 5–9 November 2001.
  76. Wang, J.; Zhang, Q. Applicability of a gradient profile algorithm for road network extraction—Sensor, resolution and background considerations. Can. J. Remote Sens. 2000, 26, 428–439. [Google Scholar] [CrossRef]
77. Singh, P.P.; Garg, R.D. Road detection from remote sensing images using impervious surface characteristics: Review and implication. Proc. ISPRS 2014, 40, 955–959. [Google Scholar] [CrossRef]
  78. Trinder, J.C.; Wang, Y. Automatic road extraction from aerial images. Digit. Signal Process. 1998, 8, 215–224. [Google Scholar] [CrossRef]
  79. Jin, H.; Feng, Y.; Li, B. Road network extraction with new vectorization and pruning from high-resolution RS images. In Proceedings of the International Conference on Image and Vision Computing (IVCNZ), Christchurch, New Zealand, 26–28 November 2008; pp. 1–6.
  80. Wolf, B.-M.; Heipke, C. Automatic extraction and delineation of single trees from remote sensing data. Mach. Vis. Appl. 2007, 18, 317–330. [Google Scholar] [CrossRef]
  81. Haralick, R.M.; Shapiro, L.G. Image segmentation techniques. Comput. Vis. Graph. Image Process. 1985, 29, 100–132. [Google Scholar] [CrossRef]
  82. Pal, N.R.; Pal, S.K. A review on image segmentation techniques. Pattern Recognit. 1993, 26, 1277–1294. [Google Scholar] [CrossRef]
83. Sonka, M.; Hlavac, V.; Boyle, R. Image Processing, Analysis, and Machine Vision; Chapman & Hall: London, UK, 2014. [Google Scholar]
  84. Burns, J.B.; Hanson, A.R.; Riseman, E.M. Extracting straight lines. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 425–455. [Google Scholar] [CrossRef]
  85. Sharifi, M.; Fathy, M.; Mahmoudi, M.T. A classified and comparative study of edge detection algorithms. In Proceedings of the International Conference on Information Technology: Coding and Computing, Las Vegas, NV, USA, 8–10 April 2002.
  86. Martin, D.R.; Fowlkes, C.C.; Malik, J. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 530–549. [Google Scholar] [CrossRef] [PubMed]
  87. Chen, J.; Dowman, I.; Li, S.; Li, Z.; Madden, M.; Mills, J.; Paparoditis, N.; Rottensteiner, F.; Sester, M.; Toth, C.; et al. Information from imagery: ISPRS scientific vision and research agenda. ISPRS J. Photogramm. Remote Sens. 2016, 115, 3–21. [Google Scholar] [CrossRef]
  88. Nixon, M. Feature Extraction & Image Processing; Elsevier: Oxford, UK, 2008. [Google Scholar]
  89. Petrou, M.; Petrou, C. Image Processing: The Fundamentals, 2nd ed.; John Wiley & Sons: West Sussex, UK, 2010. [Google Scholar]
  90. Cheng, G.; Han, J. A survey on object detection in optical remote sensing images. ISPRS J. Photogramm. Remote Sens. 2016, 117, 11–28. [Google Scholar] [CrossRef]
  91. Steger, C.; Ulrich, M.; Wiedemann, C. Machine Vision Algorithms and Applications; Wiley-VCH: Weinheim, Germany, 2008. [Google Scholar]
92. Scikit-Image. Available online: http://www.scikit-image.org (accessed on 10 June 2016).
  93. OpenCV. Available online: http://www.opencv.org (accessed on 10 June 2016).
  94. MathWorks. Available online: http://www.mathworks.com (accessed on 10 June 2016).
  95. VLFeat. Available online: http://www.vlfeat.org (accessed on 10 June 2016).
  96. Egmont-Petersen, M.; de Ridder, D.; Handels, H. Image processing with neural networks—A review. Pattern Recognit. 2002, 35, 2279–2301. [Google Scholar] [CrossRef]
  97. Buades, A.; Coll, B.; Morel, J.-M. A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 2005, 4, 490–530. [Google Scholar] [CrossRef]
98. Perona, P.; Malik, J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 629–639. [Google Scholar] [CrossRef]
  99. Black, M.J.; Sapiro, G.; Marimont, D.H.; Heeger, D. Robust anisotropic diffusion. IEEE Trans. Image Process. 1998, 7, 421–432. [Google Scholar] [CrossRef] [PubMed]
  100. Tan, D. Image enhancement based on adaptive median filter and Wallis filter. In Proceedings of the National Conference on Electrical, Electronics and Computer Engineering, Xi’an, China, 12–13 December 2015.
  101. Schiewe, J. Segmentation of high-resolution remotely sensed data-concepts, applications and problems. Proc. ISPRS 2002, 34, 380–385. [Google Scholar]
  102. Dey, V.; Zhang, Y.; Zhong, M. A review on image segmentation techniques with remote sensing perspective. Proc. ISPRS 2010, 38, 31–42. [Google Scholar]
  103. Cheng, H.-D.; Jiang, X.; Sun, Y.; Wang, J. Color image segmentation: Advances and prospects. Pattern Recognit. 2001, 34, 2259–2281. [Google Scholar] [CrossRef]
  104. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient graph-based image segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
105. Estrada, F.J.; Jepson, A.D. Benchmarking image segmentation algorithms. Int. J. Comput. Vis. 2009, 85, 167–181. [Google Scholar]
  106. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Susstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [PubMed]
  107. Ishida, T.; Itagaki, S.; Sasaki, Y.; Ando, H. Application of wavelet transform for extracting edges of paddy fields from remotely sensed images. Int. J. Remote Sens. 2004, 25, 347–357. [Google Scholar] [CrossRef]
  108. Ma, L.; Cheng, L.; Li, M.; Liu, Y.; Ma, X. Training set size, scale, and features in geographic object-based image analysis of very high resolution unmanned aerial vehicle imagery. ISPRS J. Photogramm. Remote Sens. 2015, 102, 14–27. [Google Scholar] [CrossRef]
  109. Liu, Y.; Li, M.; Mao, L.; Xu, F.; Huang, S. Review of remotely sensed imagery classification patterns based on object-oriented image analysis. Chin. Geogr. Sci. 2006, 16, 282–288. [Google Scholar] [CrossRef]
  110. Blaschke, T.; Lang, S.; Lorup, E.; Strobl, J.; Zeil, P. Object-oriented image processing in an integrated GIS/remote sensing environment and perspectives for environmental applications. Environ. Inf. Plan. Politics Pub. 2000, 2, 555–570. [Google Scholar]
  111. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  112. Vakilian, A.A.; Vakilian, K.A. A new satellite image segmentation enhancement technique for weak image boundaries. Ann. Fac. Eng. Hunedoara 2012, 10, 239–243. [Google Scholar]
  113. Radoux, J.; Defourny, P. A quantitative assessment of boundaries in automated forest stand delineation using very high resolution imagery. Remote Sens. Environ. 2007, 110, 468–475. [Google Scholar] [CrossRef]
  114. Mueller, M.; Segl, K.; Kaufmann, H. Edge-and region-based segmentation technique for the extraction of large, man-made objects in high-resolution satellite imagery. Pattern Recognit. 2004, 37, 1619–1628. [Google Scholar] [CrossRef]
  115. Wang, Y.; Li, X.; Zhang, L.; Zhang, W. Automatic road extraction of urban area from high spatial resolution remotely sensed imagery. Proc. ISPRS 2008, 86, 1–25. [Google Scholar]
  116. Kumar, M.; Singh, R.; Raju, P.; Krishnamurthy, Y. Road network extraction from high resolution multispectral satellite imagery based on object oriented techniques. Proc. ISPRS 2014, 2, 107–110. [Google Scholar] [CrossRef]
  117. Butenuth, M. Segmentation of imagery using network snakes. Photogramm. Fernerkund. Geoinf. 2007, 2007, 1–7. [Google Scholar]
118. Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Segmentation of UAV-based images incorporating 3D point cloud information. Proc. ISPRS 2015, 40, 261–268. [Google Scholar] [CrossRef]
  119. Fernandez Galarreta, J.; Kerle, N.; Gerke, M. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning. Nat. Hazards Earth Syst. Sci. 2015, 15, 1087–1101. [Google Scholar] [CrossRef]
  120. Grigillo, D.; Kanjir, U. Urban object extraction from digital surface model and digital aerial images. Proc. ISPRS 2012, 22, 215–220. [Google Scholar] [CrossRef]
  121. Awad, M.M. A morphological model for extracting road networks from high-resolution satellite images. J. Eng. 2013, 2013, 1–9. [Google Scholar] [CrossRef]
  122. Ünsalan, C.; Boyer, K.L. A system to detect houses and residential street networks in multispectral satellite images. Comput. Vis. Image Underst. 2005, 98, 423–461. [Google Scholar] [CrossRef]
  123. Sohn, G.; Dowman, I. Building extraction using Lidar DEMs and IKONOS images. Proc. ISPRS 2003, 34, 37–43. [Google Scholar]
  124. Mena, J.B. Automatic vectorization of segmented road networks by geometrical and topological analysis of high resolution binary images. Knowl. Based Syst. 2006, 19, 704–718. [Google Scholar] [CrossRef]
  125. Mena, J.B.; Malpica, J.A. An automatic method for road extraction in rural and semi-urban areas starting from high resolution satellite imagery. Pattern Recognit. Lett. 2005, 26, 1201–1220. [Google Scholar] [CrossRef]
  126. Jin, X.; Davis, C.H. An integrated system for automatic road mapping from high-resolution multi-spectral satellite imagery by information fusion. Inf. Fusion 2005, 6, 257–273. [Google Scholar] [CrossRef]
  127. Chen, T.; Wang, J.; Zhang, K. A wavelet transform based method for road extraction from high-resolution remotely sensed data. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Toronto, ON, Canada, 24–28 June 2002.
  128. Liow, Y.-T.; Pavlidis, T. Use of shadows for extracting buildings in aerial images. Comput. Vis. Graph. Image Process. 1990, 49, 242–277. [Google Scholar] [CrossRef]
  129. Liu, H.; Jezek, K. Automated extraction of coastline from satellite imagery by integrating Canny edge detection and locally adaptive thresholding methods. Int. J. Remote Sens. 2004, 25, 937–958. [Google Scholar] [CrossRef]
  130. Wiedemann, C.; Heipke, C.; Mayer, H.; Hinz, S. Automatic extraction and evaluation of road networks from MOMS-2P imagery. Proc. ISPRS 1998, 32, 285–291. [Google Scholar]
  131. Tiwari, P.S.; Pande, H.; Kumar, M.; Dadhwal, V.K. Potential of IRS P-6 LISS IV for agriculture field boundary delineation. J. Appl. Remote Sens. 2009, 3, 1–9. [Google Scholar] [CrossRef]
  132. Karathanassi, V.; Iossifidis, C.; Rokos, D. A texture-based classification method for classifying built areas according to their density. Int. J. Remote Sens. 2000, 21, 1807–1823. [Google Scholar] [CrossRef]
  133. Qiaoping, Z.; Couloigner, I. Automatic road change detection and GIS updating from high spatial remotely-sensed imagery. Geo-Spat. Inf. Sci. 2004, 7, 89–95. [Google Scholar] [CrossRef]
  134. Sharma, O.; Mioc, D.; Anton, F. Polygon feature extraction from satellite imagery based on colour image segmentation and medial axis. Proc. ISPRS 2008, 37, 235–240. [Google Scholar]
  135. Butenuth, M.; Straub, B.-M.; Heipke, C. Automatic extraction of field boundaries from aerial imagery. In Proceedings of the KDNet Symposium on Knowledge-Based Services for the Public Sector, Bonn, Germany, 3–4 June 2004.
  136. Stoica, R.; Descombes, X.; Zerubia, J. A Gibbs point process for road extraction from remotely sensed images. Int. J. Comput. Vis. 2004, 57, 121–136. [Google Scholar] [CrossRef]
  137. Mokhtarzade, M.; Zoej, M.V.; Ebadi, H. Automatic road extraction from high resolution satellite images using neural networks, texture analysis, fuzzy clustering and genetic algorithms. Proc. ISPRS 2008, 37, 549–556. [Google Scholar]
  138. Zhang, C.; Baltsavias, E.; Gruen, A. Knowledge-based image analysis for 3D road reconstruction. Asian J. Geoinform. 2001, 1, 3–14. [Google Scholar]
  139. Shao, Y.; Guo, B.; Hu, X.; Di, L. Application of a fast linear feature detector to road extraction from remotely sensed imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 626–631. [Google Scholar] [CrossRef]
  140. Ding, X.; Kang, W.; Cui, J.; Ao, L. Automatic extraction of road network from aerial images. In Proceedings of the International Symposium on Systems and Control in Aerospace and Astronautics (ISSCAA), Harbin, China, 19–21 January 2006.
  141. Udomhunsakul, S.; Kozaitis, S.P.; Sritheeravirojana, U. Semi-automatic road extraction from aerial images. Proc. SPIE 2004, 5239, 26–32. [Google Scholar]
  142. Amini, J.; Saradjian, M.; Blais, J.; Lucas, C.; Azizi, A. Automatic road-side extraction from large scale imagemaps. Int. J. Appl. Earth Obs. Geoinform. 2002, 4, 95–107. [Google Scholar] [CrossRef]
  143. Drăguţ, L.; Blaschke, T. Automated classification of landform elements using object-based image analysis. Geomorphology 2006, 81, 330–344. [Google Scholar] [CrossRef]
  144. Saeedi, P.; Zwick, H. Automatic building detection in aerial and satellite images. In Proceedings of the 10th International Conference on Control, Automation, Robotics and Vision, Hanoi, Vietnam, 17–20 December 2008.
  145. Song, Z.; Pan, C.; Yang, Q. A region-based approach to building detection in densely build-up high resolution satellite image. In Proceedings of the IEEE International Conference on Image Processing, Atlanta, GA, USA, 8–11 October 2006.
  146. Song, M.; Civco, D. Road extraction using SVM and image segmentation. Photogramm. Eng. Remote Sens. 2004, 70, 1365–1371. [Google Scholar] [CrossRef]
  147. Momm, H.; Gunter, B.; Easson, G. Improved feature extraction from high-resolution remotely sensed imagery using object geometry. Proc. SPIE 2010. [Google Scholar] [CrossRef]
  148. Chaudhuri, D.; Kushwaha, N.; Samal, A. Semi-automated road detection from high resolution satellite images by directional morphological enhancement and segmentation techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 1538–1544. [Google Scholar] [CrossRef]
  149. Hofmann, P. Detecting buildings and roads from IKONOS data using additional elevation information. GeoBIT/GIS 2001, 6, 28–33. [Google Scholar]
  150. Hofmann, P. Detecting urban features from IKONOS data using an object-oriented approach. In Proceedings of the First Annual Conference of the Remote Sensing & Photogrammetry Society, Nottingham, UK, 12–14 September 2001.
  151. Yager, N.; Sowmya, A. Support vector machines for road extraction from remotely sensed images. In Computer Analysis of Images and Patterns; Springer: Berlin/Heidelberg, Germany, 2003; pp. 285–292. [Google Scholar]
  152. Wang, Y.; Tian, Y.; Tai, X.; Shu, L. Extraction of main urban roads from high resolution satellite images by machine learning. In Computer Vision-ACCV 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 236–245. [Google Scholar]
  153. Zhao, B.; Zhong, Y.; Zhang, L. A spectral–structural bag-of-features scene classifier for very high spatial resolution remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2016, 116, 73–85. [Google Scholar] [CrossRef]
  154. Gerke, M.; Xiao, J. Fusion of airborne laserscanning point clouds and images for supervised and unsupervised scene classification. ISPRS J. Photogramm. Remote Sens. 2014, 87, 78–92. [Google Scholar] [CrossRef]
  155. Gerke, M.; Xiao, J. Supervised and unsupervised MRF based 3D scene classification in multiple view airborne oblique images. Proc. ISPRS 2013, 2, 25–30. [Google Scholar] [CrossRef]
  156. Guindon, B. Application of spatial reasoning methods to the extraction of roads from high resolution satellite imagery. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Seattle, WA, USA, 6–10 July 1998.
  157. Rydberg, A.; Borgefors, G. Integrated method for boundary delineation of agricultural fields in multispectral satellite images. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2514–2520. [Google Scholar] [CrossRef]
  158. Mokhtarzade, M.; Ebadi, H.; Valadan Zoej, M. Optimization of road detection from high-resolution satellite images using texture parameters in neural network classifiers. Can. J. Remote Sens. 2007, 33, 481–491. [Google Scholar] [CrossRef]
  159. Mokhtarzade, M.; Zoej, M.V. Road detection from high-resolution satellite images using artificial neural networks. Int. J. Appl. Earth Obs. Geoinform. 2007, 9, 32–40. [Google Scholar] [CrossRef]
  160. Zheng, Y.-J. Feature extraction and image segmentation using self-organizing networks. Mach. Vis. Appl. 1995, 8, 262–274. [Google Scholar] [CrossRef]
  161. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698. [Google Scholar] [CrossRef] [PubMed]
  162. Juneja, M.; Sandhu, P.S. Performance evaluation of edge detection techniques for images in spatial domain. Int. J. Comput. Theory Eng. 2009, 1, 614. [Google Scholar] [CrossRef]
  163. Shrivakshan, G.T.; Chandrasekar, C. A comparison of various edge detection techniques used in image processing. Int. J. Comput. Sci. Issues 2012, 9, 269–276. [Google Scholar]
164. Hough, P.V. Method and Means for Recognizing Complex Patterns. U.S. Patent 3,069,654, 18 December 1962. [Google Scholar]
  165. Wu, J.; Jie, S.; Yao, W.; Stilla, U. Building boundary improvement for true orthophoto generation by fusing airborne LiDAR data. In Proceedings of the Joint Urban Remote Sensing Event (JURSE), Munich, Germany, 11–13 April 2011.
166. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.-M.; Randall, G. LSD: A line segment detector. Image Process. On Line 2012, 2, 35–55. [Google Scholar] [CrossRef]
  167. Babawuro, U.; Beiji, Z. Satellite imagery quality evaluation using image quality metrics for quantitative cadastral analysis. Int. J. Comput. Appl. Eng. Sci. 2011, 1, 391–395. [Google Scholar]
  168. Turker, M.; Kok, E.H. Field-based sub-boundary extraction from remote sensing imagery using perceptual grouping. ISPRS J. Photogramm. Remote Sens. 2013, 79, 106–121. [Google Scholar] [CrossRef]
  169. Hu, J.; You, S.; Neumann, U. Integrating LiDAR, aerial image and ground images for complete urban building modeling. In Proceedings of the IEEE-International Symposium on 3D Data Processing, Visualization, and Transmission, Chapel Hill, NC, USA, 14–16 June 2006.
  170. Hu, J.; You, S.; Neumann, U.; Park, K.K. Building modeling from LiDAR and aerial imagery. In Proceedings of the ASPRS, Denver, CO, USA, 23–28 May 2004.
  171. Liu, Z.; Cui, S.; Yan, Q. Building extraction from high resolution satellite imagery based on multi-scale image segmentation and model matching. In Proceedings of the International Workshop on Earth Observation and Remote Sensing Applications, Beijing, China, 30 June–2 July 2008.
  172. Bartl, R.; Petrou, M.; Christmas, W.J.; Palmer, P. Automatic registration of cadastral maps and Landsat TM images. Proc. SPIE 1996. [Google Scholar] [CrossRef]
  173. Wang, Z.; Liu, W. Building extraction from high resolution imagery based on multi-scale object oriented classification and probabilistic Hough transform. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Seoul, Korea, 25–29 July 2005.
  174. Park, S.-R.; Kim, T. Semi-automatic road extraction algorithm from IKONOS images using template matching. In Proceedings of the Asian Conference on Remote Sensing, Singapore, 5–9 November 2001.
  175. Goshtasby, A.; Shyu, H.-L. Edge detection by curve fitting. Image Vis. Comput. 1995, 13, 169–177. [Google Scholar] [CrossRef]
  176. Venkateswar, V.; Chellappa, R. Extraction of straight lines in aerial images. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 1111–1114. [Google Scholar] [CrossRef]
  177. Lu, H.; Aggarwal, J.K. Applying perceptual organization to the detection of man-made objects in non-urban scenes. Pattern Recognit. 1992, 25, 835–853. [Google Scholar] [CrossRef]
178. Torre, M.; Radeva, P. Agricultural field extraction from aerial images using a region competition algorithm. Proc. ISPRS 2000, 33, 889–896. [Google Scholar]
  179. Vosselman, G.; de Knecht, J. Road tracing by profile matching and Kalman filtering. In Automatic Extraction of Man-Made Objects from Aerial and Space Images; Gruen, A., Kuebler, O., Agouris, P., Eds.; Springer: Boston, MA, USA, 1995; pp. 265–274. [Google Scholar]
  180. Sarkar, S.; Boyer, K.L. Perceptual organization in computer vision: A review and a proposal for a classificatory structure. IEEE Trans. Syst. Man Cybern. 1993, 23, 382–399. [Google Scholar] [CrossRef]
  181. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
182. Hasegawa, H. A semi-automatic road extraction method for ALOS satellite imagery. Proc. ISPRS 2004, 35, 303–306. [Google Scholar]
  183. Montesinos, P.; Alquier, L. Perceptual organization of thin networks with active contour functions applied to medical and aerial images. In Proceedings of the International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996.
  184. Yang, J.; Wang, R. Classified road detection from satellite images based on perceptual organization. Int. J. Remote Sens. 2007, 28, 4653–4669. [Google Scholar] [CrossRef]
  185. Mohan, R.; Nevatia, R. Using perceptual organization to extract 3D structures. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 1121–1139. [Google Scholar] [CrossRef]
  186. Jaynes, C.O.; Stolle, F.; Collins, R.T. Task driven perceptual organization for extraction of rooftop polygons. In Proceedings of the Second IEEE Workshop on Applications of Computer Vision, Sarasota, FL, USA, 5–7 December 1994.
  187. Lin, C.; Huertas, A.; Nevatia, R. Detection of buildings using perceptual grouping and shadows. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994.
  188. Noronha, S.; Nevatia, R. Detection and description of buildings from multiple aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 17–19 June 1997.
189. Yang, M.Y.; Rosenhahn, B. Superpixel cut for figure-ground image segmentation. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016.
  190. Mayer, H.; Laptev, I.; Baumgartner, A. Multi-scale and snakes for automatic road extraction. In Computer Vision-ECCV1998; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
  191. Gruen, A.; Li, H. Semi-automatic linear feature extraction by dynamic programming and LSB-snakes. Photogramm. Eng. Remote Sens. 1997, 63, 985–994. [Google Scholar]
  192. Laptev, I.; Mayer, H.; Lindeberg, T.; Eckstein, W.; Steger, C.; Baumgartner, A. Automatic extraction of roads from aerial images based on scale space and snakes. Mach. Vis. Appl. 2000, 12, 23–31. [Google Scholar] [CrossRef]
  193. Agouris, P.; Gyftakis, S.; Stefanidis, A. Dynamic node distribution in adaptive snakes for road extraction. In Proceedings of the Vision Interface, Ottawa, ON, Canada, 7–9 June 2001.
  194. Saalfeld, A. Topologically consistent line simplification with the Douglas-Peucker algorithm. Cartogr. Geogr. Inf. Sci. 1999, 26, 7–18. [Google Scholar] [CrossRef]
  195. Dong, P. Implementation of mathematical morphological operations for spatial data processing. Comput. Geosci. 1997, 23, 103–107. [Google Scholar] [CrossRef]
  196. Guo, X.; Dean, D.; Denman, S.; Fookes, C.; Sridharan, S. Evaluating automatic road detection across a large aerial imagery collection. In Proceedings of the International Conference on Digital Image Computing Techniques and Applications, Noosa, Australia, 6–8 December 2011.
  197. Heipke, C.; Englisch, A.; Speer, T.; Stier, S.; Kutka, R. Semiautomatic extraction of roads from aerial images. Proc. SPIE 1994. [Google Scholar] [CrossRef]
  198. Amini, J.; Lucas, C.; Saradjian, M.; Azizi, A.; Sadeghian, S. Fuzzy logic system for road identification using IKONOS images. Photogramm. Rec. 2002, 17, 493–503. [Google Scholar] [CrossRef]
  199. Ziems, M.; Gerke, M.; Heipke, C. Automatic road extraction from remote sensing imagery incorporating prior information and colour segmentation. Proc. ISPRS 2007, 36, 141–147. [Google Scholar]
  200. Mohammadzadeh, A.; Tavakoli, A.; Zoej, M.V. Automatic linear feature extraction of Iranian roads from high resolution multi-spectral satellite imagery. Proc. ISPRS 2004, 20, 764–768. [Google Scholar]
  201. Corcoran, P.; Winstanley, A.; Mooney, P. Segmentation performance evaluation for object-based remotely sensed image analysis. Int. J. Remote Sens. 2010, 31, 617–645. [Google Scholar] [CrossRef]
  202. Baumgartner, A.; Steger, C.; Mayer, H.; Eckstein, W.; Ebner, H. Automatic road extraction based on multi-scale, grouping, and context. Photogramm. Eng. Remote Sens. 1999, 65, 777–786. [Google Scholar]
  203. Wiedemann, C. External evaluation of road networks. Proc. ISPRS 2003, 34, 93–98. [Google Scholar]
  204. Overby, J.; Bodum, L.; Kjems, E.; Iisoe, P. Automatic 3D building reconstruction from airborne laser scanning and cadastral data using Hough transform. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 296–301. [Google Scholar]
  205. Radoux, J.; Bogaert, P.; Fasbender, D.; Defourny, P. Thematic accuracy assessment of geographic object-based image classification. Int. J. Geogr. Inf. Sci. 2011, 25, 895–911. [Google Scholar] [CrossRef]
  206. Stehman, S.V. Selecting and interpreting measures of thematic classification accuracy. Remote Sens. Environ. 1997, 62, 77–89. [Google Scholar] [CrossRef]
  207. Zhan, Q.; Molenaar, M.; Tempfli, K.; Shi, W. Quality assessment for geo-spatial objects derived from remotely sensed data. Int. J. Remote Sens. 2005, 26, 2953–2974. [Google Scholar] [CrossRef]
  208. Foody, G.M. Status of land cover classification accuracy assessment. Remote Sens. Environ. 2002, 80, 185–201. [Google Scholar] [CrossRef]
  209. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; CRC Press: Boca Raton, FL, USA, 2008. [Google Scholar]
  210. Lizarazo, I. Accuracy assessment of object-based image classification: Another STEP. Int. J. Remote Sens. 2014, 35, 6135–6156. [Google Scholar] [CrossRef]
  211. Rottensteiner, F.; Sohn, G.; Gerke, M.; Wegner, J.D.; Breitkopf, U.; Jung, J. Results of the ISPRS benchmark on urban object detection and 3D building reconstruction. ISPRS J. Photogramm. Remote Sens. 2014, 93, 256–271. [Google Scholar] [CrossRef]
  212. Winter, S. Uncertain topological relations between imprecise regions. Int. J. Geogr. Inf. Sci. 2000, 14, 411–430. [Google Scholar] [CrossRef]
  213. Clementini, E.; Di Felice, P. An algebraic model for spatial objects with indeterminate boundaries. In Geographic Objects with Indeterminate Boundaries; Burrough, P.A., Frank, A., Eds.; Taylor & Francis: London, UK, 1996; pp. 155–169. [Google Scholar]
  214. Worboys, M. Imprecision in finite resolution spatial data. GeoInformatica 1998, 2, 257–279. [Google Scholar] [CrossRef]
  215. Rutzinger, M.; Rottensteiner, F.; Pfeifer, N. A comparison of evaluation techniques for building extraction from airborne laser scanning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2009, 2, 11–20. [Google Scholar] [CrossRef]
  216. Heipke, C.; Mayer, H.; Wiedemann, C.; Jamet, O. Evaluation of automatic road extraction. Proc. ISPRS 1997, 32, 151–160. [Google Scholar]
  217. Wiedemann, C.; Ebner, H. Automatic completion and evaluation of road networks. Proc. ISPRS 2000, 33, 979–986. [Google Scholar]
  218. Harvey, W.A. Performance evaluation for road extraction. Bull. Soc. Fr. Photogramm. Télédétec. 1999, 153, 79–87. [Google Scholar]
  219. Shi, W.; Cheung, C.K.; Zhu, C. Modelling error propagation in vector-based buffer analysis. Int. J. Geogr. Inf. Sci. 2003, 17, 251–271. [Google Scholar] [CrossRef]
  220. Suetens, P.; Fua, P.; Hanson, A.J. Computational strategies for object recognition. ACM Comput. Surv. (CSUR) 1992, 24, 5–62. [Google Scholar] [CrossRef]
  221. Mena, J.B. State of the art on automatic road extraction for GIS update: A novel classification. Pattern Recognit. Lett. 2003, 24, 3037–3058. [Google Scholar] [CrossRef]
  222. Tien, D.; Jia, W. Automatic road extraction from aerial images: A contemporary survey. In Proceedings of the International Conference in IT and Applications (ICITA), Harbin, China, 15–18 January 2007.
  223. Mayer, H.; Hinz, S.; Bacher, U.; Baltsavias, E. A test of automatic road extraction approaches. Proc. ISPRS 2006, 36, 209–214. [Google Scholar]
224. Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015.
225. Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 898–916. [Google Scholar] [CrossRef] [PubMed]
  226. Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Identification of structurally damaged areas in airborne oblique images using a visual-Bag-of-Words approach. Remote Sens. 2016. [Google Scholar] [CrossRef]
  227. Gülch, E.; Müller, H.; Hahn, M. Semi-automatic object extraction—Lessons learned. Proc. ISPRS 2004, 34, 488–493. [Google Scholar]
  228. Hecht, A.D.; Fiksel, J.; Fulton, S.C.; Yosie, T.F.; Hawkins, N.C.; Leuenberger, H.; Golden, J.S.; Lovejoy, T.E. Creating the future we want. Sustain. Sci. Pract. Policy 2012, 8, 62–75. [Google Scholar]
  229. Baltsavias, E.P. Object extraction and revision by image analysis using existing geodata and knowledge: Current status and steps towards operational systems. ISPRS J. Photogramm. Remote Sens. 2004, 58, 129–151. [Google Scholar] [CrossRef]
  230. Schwering, A.; Wang, J.; Chipofya, M.; Jan, S.; Li, R.; Broelemann, K. SketchMapia: Qualitative representations for the alignment of sketch and metric maps. Spat. Cognit. Comput. 2014, 14, 220–254. [Google Scholar] [CrossRef]
Figure 1. Overview of cadastral surveying techniques and cadastral boundary concepts that contextualize the scope of this review paper. The lines between the categories are fuzzy and should not be read as exclusive; they are drawn to give a general overview.
Figure 2. Characteristics of cadastral boundaries extracted from high-resolution optical remote sensors. The cadastral boundaries are derived based on (a) roads, fences and edges of agricultural fields [48]; (b) fences and hedges [24]; (c,d) crop types [41]; (e) adjacent vegetation [63] and (f) roads, foot paths, water drainage, open areas and scrubs [67]. (d) Shows the case of a nonlinear irregular boundary shape. The cadastral boundaries in (e) and (f) are often obscured by tree canopy. Cadastral boundaries in (ad) are derived from UAV data; in (e) and (f) from HRSI. All of the boundaries are manually extracted and digitized.
Figure 3. Pixel-based and object-based feature extraction approaches aim to derive low-level and high-level features from images. Object-based approaches may incorporate information from low-level features for high-level feature extraction.
Figure 4. Sequence of commonly applied workflow steps for detecting and extracting linear features, used to structure the methods reviewed.
Figure 5. Spatial resolution of data used in the case studies. The figure shows the 52 case studies in which the spatial resolution was known. For case studies that use datasets of multiple resolutions, the median resolution is used. A further 37 case studies, for which the spatial resolution was unknown, are not represented in the histogram.
Figure 6. UAV-derived orthoimage showing a rural residential housing area in Namibia, used as an exemplary dataset to implement representative feature extraction methods.
Figure 7. (a) Subset of the original UAV orthoimage converted to greyscale; (b) Anisotropic diffusion applied to the greyscale UAV image to reduce noise. After filtering, the image appears smoothed, with sharp contours removed, as can be observed at the rooftop and tree contours.
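The anisotropic diffusion shown in Figure 7b follows the Perona-Malik scheme: the conduction coefficient falls off where the local gradient is large, so noise in homogeneous regions is smoothed while stronger edges survive longer. The following NumPy sketch illustrates the idea only; the parameter values are assumptions and not those used to produce the figure.

```python
import numpy as np

def anisotropic_diffusion(image, n_iter=10, kappa=0.1, gamma=0.2):
    """Perona-Malik diffusion: iteratively smooths homogeneous regions while
    preserving strong edges, because the conduction coefficient
    g(d) = exp(-(d / kappa)^2) drops where the local difference d is large."""
    img = image.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four direct neighbours (zero flux at the borders)
        dn = np.zeros_like(img); dn[1:, :] = img[:-1, :] - img[1:, :]
        ds = np.zeros_like(img); ds[:-1, :] = img[1:, :] - img[:-1, :]
        de = np.zeros_like(img); de[:, :-1] = img[:, 1:] - img[:, :-1]
        dw = np.zeros_like(img); dw[:, 1:] = img[:, :-1] - img[:, 1:]
        # Each neighbour contributes a flux weighted by its conduction value;
        # gamma <= 0.25 keeps the explicit scheme stable for 4 neighbours
        img += gamma * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return img
```

With a large `kappa` the filter degenerates to ordinary (isotropic) diffusion; with a small `kappa` almost nothing diffuses across contrast edges, which is the behaviour visible at the rooftop contours in the figure.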
Figure 8. Image segmentation applied to the original UAV orthoimage: (a) Graph-based segmentation; (b) SLIC segmentation and (c) Watershed segmentation. The label matrices are converted to colors for visualization purposes. The input parameters are tuned to obtain a comparable number of segments from each segmentation approach. Nevertheless, the approaches result in differently located and shaped segments.
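The SLIC result in Figure 8b rests on a simple core idea: k-means clustering over combined colour and spatial coordinates, initialized on a regular grid. The greyscale sketch below is a deliberately simplified illustration of that idea, not the implementation used for the figure; full SLIC restricts the assignment search to a local window around each centre for speed.

```python
import numpy as np

def slic_superpixels(image, n_segments=16, compactness=10.0, n_iter=5):
    """Simplified SLIC: k-means in (intensity, row, column) feature space,
    initialized on a regular grid. `image` is a 2D greyscale array."""
    h, w = image.shape
    step = int(np.sqrt(h * w / n_segments))  # approximate superpixel side length
    rows = np.arange(step // 2, h, step)
    cols = np.arange(step // 2, w, step)
    centres = np.array([[image[r, c], r, c] for r in rows for c in cols],
                       dtype=float)
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([image.ravel(), yy.ravel(), xx.ravel()], axis=1).astype(float)
    scale = compactness / step  # trades off spatial vs. intensity proximity
    for _ in range(n_iter):
        # Distance of every pixel to every centre (fine for small images)
        d_int = (feats[:, None, 0] - centres[None, :, 0]) ** 2
        d_xy = ((feats[:, None, 1:] - centres[None, :, 1:]) ** 2).sum(-1)
        labels = np.argmin(d_int + (scale ** 2) * d_xy, axis=1)
        # Move each centre to the mean of its assigned pixels
        for k in range(len(centres)):
            members = feats[labels == k]
            if len(members):
                centres[k] = members.mean(axis=0)
    return labels.reshape(h, w)
```

A higher `compactness` yields more regular, grid-like segments; a lower value lets segments follow image boundaries more closely, which matters when segment borders are meant to coincide with visible cadastral boundaries.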
Figure 9. Edge detection applied to the greyscale UAV orthoimage based on (a) Canny edge detection and (b) the Laplacian of Gaussian. The output is a binary image in which one value represents edges (green) and the other the background (black); (c) shows the line segment detector applied to and superimposed on the original UAV orthoimage.
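The Laplacian of Gaussian result in Figure 9b combines smoothing and second-derivative filtering in one kernel; edges are then taken at the zero crossings of the filter response. A self-contained NumPy sketch of that pipeline follows (the kernel size heuristic and threshold are assumptions for illustration, not the settings used for the figure).

```python
import numpy as np

def log_kernel(sigma, size=None):
    """Laplacian-of-Gaussian kernel; zero-sum so flat regions respond with 0."""
    if size is None:
        size = int(2 * np.ceil(3 * sigma) + 1)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def convolve2d(image, kernel):
    """Naive 'same'-size convolution with zero padding (symmetric kernel)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + image.shape[0], j:j + image.shape[1]]
    return out

def edges_log(image, sigma=1.0, thresh=0.05):
    """Binary edge map: zero crossings of the LoG response whose magnitude
    exceeds a fraction of the maximum response (suppresses weak crossings)."""
    resp = convolve2d(image.astype(float), log_kernel(sigma))
    sign = resp > 0
    zc = np.zeros_like(sign)
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]   # vertical sign change
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]   # horizontal sign change
    return zc & (np.abs(resp) > thresh * np.abs(resp).max())
```

Larger `sigma` values respond to coarser structures, which is why scale selection matters when the target features range from narrow fences to wide roads.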
Figure 10. (a) Douglas-Peucker simplification (red) of the contour generated with snakes (green). The simplified contour approximates the fence that marks the cadastral boundary better than the snake contour does; (b) Binary image derived from Canny edge detection as shown in Figure 9a. The image serves as a basis for morphological closing, shown in (c). Through dilation followed by erosion, edge pixels (green) belonging to one class in (b) are connected to larger regions in (c).
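The Douglas-Peucker simplification used in Figure 10a keeps a vertex only if it deviates from the chord between the current segment endpoints by more than a tolerance epsilon, recursing on both halves otherwise. A compact pure-Python sketch of the recursion (tolerance values in the usage are arbitrary):

```python
def douglas_peucker(points, epsilon):
    """Simplify a polyline of (x, y) tuples: drop vertices closer than
    `epsilon` to the chord between the segment endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance of p to the chord (or to the start point
        # if the chord has zero length)
        px, py = p
        num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
        return num / den if den else ((px - x1) ** 2 + (py - y1) ** 2) ** 0.5

    dists = [dist(p) for p in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[imax - 1] > epsilon:
        # Farthest vertex is significant: split there and recurse
        left = douglas_peucker(points[:imax + 1], epsilon)
        right = douglas_peucker(points[imax:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```

For cadastral use, epsilon is typically chosen relative to the ground sampling distance, so that jagged snake contours collapse to the straight fence lines visible in the figure.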
Table 1. Case study examples for image segmentation methods.
| Image Segmentation Method | Resolution < 5 m | Resolution > 5 m | Unknown Resolution |
| --- | --- | --- | --- |
| Unsupervised | [79,80,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128] | [107,129,130,131,132,133] | [125,134,135,136,137,138,139,140,141,142,143,144,145] |
| Supervised | [72,75,108,146,147,148,149,150,151,152,153,154,155] | [156,157] | [86,137,138,158,159,160] |
Table 2. Case study examples for line extraction methods.
| Line Extraction Method | Resolution < 5 m | Resolution > 5 m | Unknown Resolution |
| --- | --- | --- | --- |
| Canny edge detection | [75,121,151,167] | [129,168] | [138,144,169,170] |
| Hough transform | [73,120,126,171] | [172] | [140,169,173] |
| Line segment detector | [128,165,171] | | [144,145,174,175,176,177] |
Table 3. Case study examples for contour generation methods.
| Contour Generation Method | Resolution < 5 m | Resolution > 5 m | Unknown Resolution |
| --- | --- | --- | --- |
| Perceptual grouping | [113,115,128,148,182] | [168] | [141,142,144,145,157,160,177,183,184,185,186,187,188,189] |
| Snakes | [80,112,117,190,191,192,193] | | [131,135,178] |
Table 4. Case study examples for postprocessing methods.
| Postprocessing Method | Resolution < 5 m | Resolution > 5 m | Unknown Resolution |
| --- | --- | --- | --- |
| Douglas-Peucker algorithm | [72,79,171,182,196] | [133,168] | [197] |
| Morphological operators | [73,75,79,115,116,120,121,126,128,146,148,196,198,199] | [132] | [124,125,142,144,200] |
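Morphological operators, the most frequently cited postprocessing method in Table 4, bridge small gaps in binary edge maps: closing is a dilation followed by an erosion with the same structuring element, as illustrated in Figure 10b,c. A minimal NumPy sketch for square structuring elements (an illustration under assumed border handling, not a production implementation):

```python
import numpy as np

def binary_dilation(mask, k=3):
    """Dilation with a k x k square structuring element (zero padding)."""
    h, w = mask.shape
    p = k // 2
    padded = np.pad(mask.astype(bool), p)
    out = np.zeros((h, w), dtype=bool)
    for i in range(k):
        for j in range(k):
            out |= padded[i:i + h, j:j + w]
    return out

def binary_erosion(mask, k=3):
    """Erosion via duality: erode(A) = NOT dilate(NOT A)."""
    return ~binary_dilation(~mask.astype(bool), k)

def binary_closing(mask, k=3):
    """Closing = dilation followed by erosion; connects edge pixels
    separated by gaps smaller than the structuring element."""
    return binary_erosion(binary_dilation(mask, k), k)
```

The structuring element size controls which gaps are bridged: too small and fragmented boundary evidence stays disconnected, too large and unrelated edges merge into one region.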

Share and Cite

Crommelinck, S.; Bennett, R.; Gerke, M.; Nex, F.; Yang, M.Y.; Vosselman, G. Review of Automatic Feature Extraction from High-Resolution Optical Sensor Data for UAV-Based Cadastral Mapping. Remote Sens. 2016, 8, 689. https://doi.org/10.3390/rs8080689