Article

Automated Mapping of Typical Cropland Strips in the North China Plain Using Small Unmanned Aircraft Systems (sUAS) Photogrammetry

1 Institute of Land Reclamation and Ecological Restoration, Department of Surveying and Land Use, China University of Mining and Technology-Beijing, Beijing 100083, China
2 Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061, USA
3 Department of Forest Resources and Environmental Conservation, Virginia Tech, Blacksburg, VA 24061, USA
4 School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(20), 2343; https://doi.org/10.3390/rs11202343
Submission received: 12 May 2019 / Revised: 16 July 2019 / Accepted: 16 July 2019 / Published: 10 October 2019
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract

Accurate mapping of agricultural fields is needed for many purposes, including irrigation decisions and cadastral management. This paper is concerned with the automated mapping of cropland strips that are common in the North China Plain. These strips are commonly 3–8 m in width and 50–300 m in length, and are separated by small ridges that assist with irrigation. Conventional surveying methods are labor-intensive and time-consuming for this application, and only limited performance is possible with very high resolution satellite images. Small Unmanned Aircraft System (sUAS) images could provide an alternative approach to ridge detection and strip mapping. This paper presents a novel method for detecting cropland strips, utilizing centimeter spatial resolution imagery captured by sUAS flying at low altitude (60 m). Using digital surface models (DSM) and ortho-rectified imagery from sUAS data, this method extracts candidate ridge locations by surface roughness segmentation in combination with geometric constraints. This method then exploits vegetation removal and morphological operations to refine candidate ridge elements, leading to polyline-based representations of cropland strip boundaries. This procedure has been tested using sUAS data from four typical cropland plots located approximately 60 km west of Jinan, China. The plots contained early winter wheat. The results indicated an ability to detect ridges with comparatively high recall and precision (96.8% and 95.4%, respectively). Cropland strips were extracted with over 98.9% agreement relative to ground truth, with kappa coefficients over 97.4%. To our knowledge, this method is the first to attempt cropland strip mapping using centimeter spatial resolution sUAS images. These results have demonstrated that sUAS mapping is a viable approach for data collection to assist in agricultural land management in the North China Plain.

1. Introduction

Cropland strips are long and narrow agricultural parcels that are common in parts of China and India [1]. In the North China Plain (NCP), these strips are typically 3–8 m in width and 50–300 m in length [2]. As shown in Figure 1, cropland strips are typically separated by ridges that mark ownership or management boundaries and aid in irrigation [3]. The ridges are commonly 30–40 cm in width and 10–20 cm in height. These dimensions aid in efficient use of water during flood irrigation, which has been adopted for more than 90% of cropland in this region [4]. Water is pumped electrically from a nearby well or ditch and transported to cropland strips by hoses. The elongated shapes of the cropland strips, combined with raised ridges between the strips, guide the flow of irrigation water very efficiently.
Widespread use of cropland strips in the NCP is a result of policies that were instituted in China in 1979 to stimulate agricultural productivity [5]. Now, there is a strong need to develop maps of cropland strips to guide irrigation decisions and to support cadastral management. One reason for this need is the dramatic shift in population from rural to urban locations. According to the World Bank (https://data.worldbank.org/), the share of China's population living in urban areas increased from 17.9% in 1979 to 56.7% in 2016. Another reason for obtaining such maps is the relatively low amount of arable land per capita in China: 0.086 ha per capita in 2016, as compared to 0.471 ha per capita in the United States.
Sustainable agriculture requires field-specific information to support management decisions, including irrigation planning [6] and fertilization management [7]. Obtaining such information is still a challenge in regions that are dominated by small agricultural parcels. Current mapping methods primarily rely on field surveying and manual digitization using traditional instruments, which is time-consuming. Satellite imagery generally lacks the spatial resolution needed to identify such small parcels. The emergence of low-altitude small unmanned aircraft system (sUAS) imaging technology offers great potential for mapping and assessing small agricultural parcels.
The objective of this paper is to present an automated method for mapping typical cropland strips in the NCP using sUAS photogrammetry. Specifically, this study has investigated the degree to which cropland ridges and strips in the NCP can be identified from centimeter ground sampling distance (GSD) images acquired by a sUAS. An accuracy assessment is presented, and the results are compared with manual measurements. The proposed automated method focuses on detecting small ridge candidates, and linking them automatically to identify enclosed cropland strips. To the best of our knowledge, this is the first research on the topic of cropland strip mapping using centimeter-spatial-resolution sUAS images.

2. Background

Cropland strip mapping is essential to strip-specific management and cadastral mapping of farmland in the NCP. A cropland strip is a fundamental cultivated unit with an exclusive right of land use, bounded by a pair of adjacent ridges. Ridge detection is therefore particularly important to strip mapping. Moreover, cropland ridges have an elongated structure and obvious elevation differences from their surroundings. The linear nature of cropland ridges is similar to that of roads, which are easily recognized in images. Thus, road detection methods have strong potential for identifying ridges.
Coarse-scale cropland mapping has been studied using multi-temporal satellite images, such as MODIS [8], Landsat [9,10] and Sentinel [11]. Fine-scale mapping could be supported by different types of earth observation, such as ground-based observation, spaceborne imaging, or airborne imaging [12]. Fine-scale cropland mapping is often done by manual digitization following field surveys with traditional surveying equipment, such as total stations and real-time kinematic global positioning systems (RTK-GPS). These methods tend to be labor-intensive, time-consuming, and subjective, and adverse weather, such as rainfall or snow, extends the surveying period. Although very high resolution (VHR) satellite images have many agricultural applications, they have been used to only limited effect in identifying such small ridges and strips (see Figure 2). Currently, the highest resolution of commercial satellite images is the 0.31 m of WorldView-3, which still does not enable accurate detection of cropland ridges. Moreover, civil satellites acquiring VHR images generally have a long revisit cycle and relatively narrow swaths, and acquisition is easily blocked on rainy or cloudy days, so the best acquisition window for identifying ridges is easily missed. Taken together, these factors make it difficult to precisely extract farm parcels using VHR images. Piloted airborne imaging could provide decimeter GSD products [13], but requires proximate air strips and cumbersome administrative procedures. As such, the non-UAS methods mentioned above have clear limitations for this use case.
Mapping using sUAS photogrammetry has been implemented in many fields in the past few years. The rapid development of UAV technology, coupled with lightweight centimeter spatial resolution sensors, is enabling the acquisition of extremely high resolution images with flexible acquisition times. For example, the DJI Phantom 4 Pro (Shenzhen, China) carrying a 20.48-million-pixel optical camera is one of the most popular drone-sensor combinations in recent studies [14]. Robust and accessible algorithms have also been continuously improved to provide high-quality image products, including orthophotos, digital surface models (DSMs), and 3D point clouds.
DEMs derived using structure from motion (SfM) algorithms have demonstrated decimeter-scale vertical accuracy comparable to terrestrial laser scanning (TLS), even in regions of complicated topography [15,16]. Many studies have illustrated the potential for extracting small objects and detecting subtle spatial heterogeneity. These include applications such as cadastral mapping [17], plant density counts [18], tobacco plant detection [19], vine canopy segmentation [20], landslide scarp recognition [21], water stress detection [22], soil erosion quantification [23], and characterization of gravel size distributions [24]. Millimeter spatial resolution aerial images have even been collected to evaluate pavement distress conditions [25]. sUAS imaging has thus become an important earth observation technology that is accessible to general users. It also poses great challenges to image processing, analysis, and applications because of large data volumes and immature or poorly applied approaches [26].
Based on the literature review, previous studies on cropland strip mapping using sUAS images are lacking; this presents a research gap. Centimeter GSD sUAS images could provide an alternative approach to ridge identification and strip mapping. This study focuses on the utility of centimeter GSD images obtained from a sUAS for automated, fine-scale cropland strip mapping in the NCP.

3. Methodology

The method is presented in four parts: data preparation (Section 3.1), ridge detection (Section 3.2), strip mapping (Section 3.3), and accuracy assessment (Section 3.4). The pipeline of the method is displayed in Figure 3. The details of each step are presented in the following subsections.

3.1. Study Site and Dataset Preparation

3.1.1. Site Description

The site is located west of Jinan, Shandong Province (112°45′0″–122°48′0″E, 32°0′0″–40°24′0″N). This region of the NCP has four distinctive seasons and a typical temperate monsoon climate, and is an important agricultural zone in China (Figure 4). It is about 44,000 km2 in area and 50 m above sea level on average. The NCP is an alluvial plain developed by the intermittent flooding of the Huang-Huai-Hai rivers, and cultivated farmland accounts for 85% of the area [27]. The main cropping system is winter wheat and summer corn [28]. The NCP produces more than 75% and 32% of Chinese wheat and corn, respectively [29].

3.1.2. Small UAS and Image Acquisition

A sUAS kit was deployed during data acquisition in the field. A DJI Matrice 100 with a consumer-grade digital camera (the Zenmuse X3) mounted on a three-axis gimbal was used. The main specifications are listed in Table 1 and detailed information can be found on the official website (https://www.dji.com/).
The sUAS data were acquired at the early growing stage of winter wheat (November 2016). Flight altitudes above ground level (AGL) were set at 60 m, 100 m and 150 m (see Table 2). The flight trajectory was designed in advance with a front overlap of 80% and a side overlap of 60%. The camera captured a nadir image every three seconds, using shutter-speed priority with automatic adjustment of ISO gain. Sequential images were stored in Joint Photographic Experts Group (JPEG) format on a memory card. UAV-embedded global navigation satellite system (GNSS) and inertial measurement unit (IMU) equipment provided position and attitude information with relatively low precision [14].
To achieve accurately georeferenced results after 3D reconstruction, 13 ground control points (GCPs) were distributed as evenly as possible on the site and marked as crosses 10 cm wide and 1 m long using lime. The central coordinate of each GCP was obtained using a GNSS receiver (South Survey GALAXY G1; real-time kinematic surveying with a typical accuracy of 0.008 m + 1 ppm horizontally and 0.015 m + 1 ppm vertically). The study site required multiple flights because of the limited flight time per battery charge (about 25 min).

3.1.3. Dataset Preparation

Pix4D mapper (3.0.17, Pix4D, Lausanne, Switzerland) was used to process the sequential images with surveyed GCPs to obtain the DSMs and georeferenced orthophotos. The specific steps and processing parameters can be found in Table A1 of Appendix A.
The first dataset consisted of four plots on which winter wheat was being grown. This dataset was used to develop the automated strip mapping method, and the plots were selected for features such as ridge length, crop coverage, and topographic gradient. These specific parameters and detailed statistics can be seen in Figure 5 and Table 3.
The second dataset was selected to explore the effects of spatial resolution on strip mapping. The first four plots were resampled into 10 different spatial resolutions ranging from 3 to 12 cm using nearest neighbor sampling, resulting in 40 test images.
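For illustration, this resampling might be sketched as follows; this is a minimal sketch assuming single-band numpy arrays and scipy, not the exact tooling used in the study:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_gsd(image: np.ndarray, gsd_in: float, gsd_out: float) -> np.ndarray:
    """Resample from gsd_in to gsd_out (m/px); order=0 is nearest neighbour."""
    return zoom(image, gsd_in / gsd_out, order=0)

# Example: degrade a 2.5 cm band to the ten test resolutions (3-12 cm).
# resampled = [resample_to_gsd(band, 0.025, g / 100) for g in range(3, 13)]
```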
The third dataset was prepared as ancillary data to verify the extraction accuracy obtained with the second dataset. It includes the orthophotos and DSMs produced from sUAS images acquired at altitudes of 100 m and 150 m AGL.

3.1.4. Validation Data Collection

VHR orthophotos enable visual discrimination of inter-strip ridges. Validation data were acquired by heads-up digitizing in geographic information system (GIS) software. All ridges were digitized as polylines as accurately as possible, and the strip outlines were then formed by connecting adjacent polylines in each plot. These spatial reference data were used for accuracy assessment, and are shown in Figure 6.

3.2. Ridge Detection

Four steps were conducted to detect cropland ridges, including initial extraction using threshold segmentation of surface roughness (Section 3.2.1), ridge filtering using shape index (Section 3.2.2), ridge cleaning by removing impacts of vegetation coverage (Section 3.2.3), and ridge smoothing using morphological operation (Section 3.2.4).

3.2.1. Initial Extraction Using Threshold Segmentation of Surface Roughness

Elevation profiles of cropland in the plain area show regular peaks at the ridges (see Figure 7). The interior of a strip is relatively flat, while the edges of the strip are rougher. Surface roughness, reflecting the irregularity of a topographic surface [30], is therefore larger for the ridges than for the crop area between two adjacent ridges. This characteristic allows automated ridge detection from VHR sUAS imagery. As illustrated in Figure 5m–p, the surface roughness of cropland in the plain area has an approximately Gaussian distribution, giving it advantages over ground elevation or slope for segmentation.
Surface roughness is obtained by calculating DSM deviations within a moving rectangular window. The window size is set to the ridge width (0.35 m). To automate segmentation, the threshold of surface roughness is set to the mean roughness plus half of its standard deviation. Ridge binarization (Figure 8b) by this threshold keeps each ridge continuous and its edges smooth. The resulting binary image is denoted $f_1$:
$$
f_1 = \begin{cases} 1, & f(x,y) \ge T_1 & \text{(ridges)} \\ 0, & f(x,y) < T_1 & \text{(non-ridges)} \end{cases}
$$

where $f(x,y)$ is the surface roughness value at pixel $(x,y)$, and $T_1$ is the determined threshold.
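For illustration, this step might be implemented as follows (a minimal sketch with hypothetical function names; the 13 px window corresponds to the 0.35 m ridge width at the 2.5 cm GSD noted in Section 3.2.4):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def surface_roughness(dsm: np.ndarray, window: int = 13) -> np.ndarray:
    """Local standard deviation of the DSM within a moving square window."""
    mean = uniform_filter(dsm, size=window)
    mean_sq = uniform_filter(dsm * dsm, size=window)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def segment_ridges(dsm: np.ndarray, window: int = 13) -> np.ndarray:
    """Binary ridge candidates f1: roughness >= T1 = mean + 0.5 * std."""
    rough = surface_roughness(dsm, window)
    t1 = rough.mean() + 0.5 * rough.std()
    return (rough >= t1).astype(np.uint8)
```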

3.2.2. Ridge Filtering Using Shape Index

Pixel shape index (PSI) [31] is introduced to depict the spatial information around each central pixel. The PSIs are calculated from the binary image of surface roughness. Four PSIs (Table 4) are used to refine the ridges from $f_1$: object area, perimeter of the minimum enclosing rectangle (MER), major axis length, and area of the MER. Image holes are filled, using eight-connected neighborhood connectivity, to remove noise. The results of each step can be seen in Figure 9:
$$
\begin{aligned}
S_1 &= S_0 > \mathrm{mean}(\text{shape area of } S_0)\\
S_2 &= S_1 > \mathrm{mean}(\text{MER perimeter of } S_1)\\
S_3 &= S_2 > \mathrm{mean}(\text{major axis length of } S_2)\\
S_4 &= S_3 > \mathrm{mean}(\text{MER area of } S_3)
\end{aligned}
$$

where $S_0$ is the binary image after threshold segmentation of surface roughness, and $S_1$ through $S_4$ are the binary images after the first through fourth filtering steps, respectively:

$$
f_2 = \begin{cases} 1, & \text{in } S_4 \text{ after filtering} & \text{(ridges)} \\ 0, & \text{otherwise} & \text{(non-ridges)} \end{cases}
$$

where $f_2$ is the outcome of ridge filtering using the four shape indexes.
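A sketch of this sequential filtering using scikit-image region properties is given below; mapping the MER perimeter and area to axis-aligned bounding-box properties is our simplification, and all function names are illustrative:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.measure import label, regionprops

def filter_by(mask: np.ndarray, prop: str) -> np.ndarray:
    """Keep connected components whose `prop` exceeds the mean over components."""
    labeled = label(mask, connectivity=2)        # eight-connected labeling
    regions = regionprops(labeled)
    if not regions:
        return mask
    values = np.array([getattr(r, prop) for r in regions])
    keep = [r.label for r, v in zip(regions, values) if v > values.mean()]
    return np.isin(labeled, keep)

def shape_index_filter(f1: np.ndarray) -> np.ndarray:
    s = binary_fill_holes(f1.astype(bool))       # fill holes to remove noise
    s = filter_by(s, "area")                     # S1: object area
    s = filter_by(s, "perimeter")                # S2: approximates MER perimeter
    s = filter_by(s, "major_axis_length")        # S3: major axis length
    s = filter_by(s, "bbox_area")                # S4: approximates MER area
    return s.astype(np.uint8)                    # f2
```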

3.2.3. Ridge Cleaning by Removing Impacts of Vegetation Coverage

Vegetation prevents precise ridge detection; removing it yields the better ridge delineation in Figure 10b compared with the rough border in Figure 10a. Fortunately, sUAS photogrammetry also produces an orthophoto, which can be used to mask the vegetation coverage in an image. Vegetation segmentation mainly focuses on determining a segmentation threshold from a statistical histogram of an image color space characteristic or vegetation index [32].
The segmentation method is adopted from [33], which used the same camera as this paper, as it performed better than vegetation index segmentation with a global threshold [34]. The Hue histogram is extracted after converting the image from the red-green-blue (RGB) color space to hue-saturation-value (HSV). Next, the threshold is detected from a Gaussian fit to the filtered Hue histogram. Finally, a binary image is created using the detected threshold. The binary vegetation mask ($f_3$) is divided into vegetation (value 0) and non-vegetation (value 1):
$$
f_3 = \begin{cases} 1, & f(x,y) \le T_2 & \text{(non-vegetation)} \\ 0, & f(x,y) > T_2 & \text{(vegetation)} \end{cases}
$$

where $f(x,y)$ is the binary vegetation value at pixel $(x,y)$, and $T_2$ is equal to 0.
The result after the vegetation filter ($f_4$) is obtained by pixel-wise multiplication of the ridge candidates ($f_2$) and the vegetation mask ($f_3$). The result can be found in Figure 10:

$$
f_4 = f_2 \cdot f_3
$$
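The hue-based masking can be sketched as follows; the two-mode valley rule used here is a simplification of the Gaussian-fitting procedure of [33], and all names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.color import rgb2hsv

def hue_threshold(hue: np.ndarray, bins: int = 180) -> float:
    """Valley between the two dominant hue modes (soil vs. vegetation)."""
    hist, edges = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    smooth = gaussian_filter1d(hist.astype(float), sigma=3)
    order = np.argsort(smooth)[::-1]
    p1 = int(order[0])                                   # strongest mode
    p2 = int(next(i for i in order if abs(i - p1) > bins // 10))
    lo, hi = sorted((p1, p2))
    valley = lo + int(np.argmin(smooth[lo:hi + 1]))
    return float(edges[valley])

def vegetation_mask(rgb: np.ndarray) -> np.ndarray:
    """f3: 1 = non-vegetation (hue at or below threshold), 0 = vegetation."""
    hue = rgb2hsv(rgb)[..., 0]
    return (hue <= hue_threshold(hue)).astype(np.uint8)

# f4 = f2 * vegetation_mask(orthophoto): pixel-wise removal of vegetation.
```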

3.2.4. Ridge Smoothing Using Morphological Operation

To refine the delineation of cropland ridges, morphological operations are conducted by selecting adequate structural elements (SE) [35], including image opening using multi-directional structuring elements (MDSE), pixel area filtering to remove tiny objects, image closing using structuring element of line (SEL), and image thinning.
Image opening employs dilation after erosion, which diminishes small bright regions. An MDSE ($g_1$), with appropriate values for direction and length, allows the pixels associated with the main ridges to be retained while small artifacts are removed. Branches with only four directions and a small window have demonstrated a favorable balance [36] between computational efficiency and precision. The SE $g_1$ is constructed according to the following equation:
$$
g_1(x_i, y_i) = \begin{cases} y_i = x_i \tan(\alpha_i), & x_i = 0, \pm 1, \ldots, \pm \dfrac{(L-1)\cos(\alpha_i)}{2}, & \text{if } |\alpha_i| \le 45^{\circ} \\[4pt] x_i = y_i \cot(\alpha_i), & y_i = 0, \pm 1, \ldots, \pm \dfrac{(L-1)\sin(\alpha_i)}{2}, & \text{if } 45^{\circ} < |\alpha_i| \le 90^{\circ} \end{cases}
$$
where $g_1(x_i, y_i)$ is the pixel value of $g_1$ at pixel $(x_i, y_i)$, $\alpha_i$ is the $i$-th directional angle ranging from $-90^{\circ}$ to $90^{\circ}$, and $L$ denotes the window-size length of the structuring element. The angular interval was set to $45^{\circ}$ and $L$ to one ridge width (13 pixels at the original image resolution of 2.5 cm):
$$
f_5 = f_4 \circ g_1 = (f_4 \ominus g_1) \oplus g_1
$$

where $g_1$ denotes the multi-directional structuring element, and $\ominus$ and $\oplus$ denote morphological erosion and dilation, respectively.
Small artifacts were removed by filtering by object area:
$$
f_6 = \begin{cases} 1, & f_5 \ge T_3 & \text{(ridges)} \\ 0, & f_5 < T_3 & \text{(non-ridges)} \end{cases}
$$

where $T_3$ is the object-area filtering threshold, set to 1000 pixels.
Image closing uses erosion after dilation. An SEL ($g_2$) with a proper slope and length is adopted to connect patches within each ridge. To automate the operation, the median slope is used to avoid the effect of outliers, and the length is set to 0.5 times the pixel length of the major axis of the ridges in each plot:
$$
f_7 = f_6 \bullet g_2 = (f_6 \oplus g_2) \ominus g_2
$$

where $g_2$ denotes the structuring element of line.
The thinning algorithm of [37] is adopted, in which the skeleton of objects is generated by iteratively checking and removing contour pixels in a sequential manner. The skeleton of the detected ridges is obtained as a binary image ($f_8$). The results of each step can be seen in Figure 11.
The parameters of cropland ridge detection are summarized in Table 5.
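The smoothing chain might be sketched as follows with scikit-image; the four orientations and the 13 px length follow Table 5, while the union of directional openings and the fixed closing length are our assumptions (the method above derives the SEL slope and length per plot):

```python
import numpy as np
from skimage.morphology import closing, opening, remove_small_objects, thin

def line_se(length: int, angle_deg: float) -> np.ndarray:
    """Binary line-shaped structuring element through the window center."""
    half = length // 2
    se = np.zeros((length, length), dtype=bool)
    theta = np.deg2rad(angle_deg)
    for t in range(-half, half + 1):
        r = int(round(half - t * np.sin(theta)))
        c = int(round(half + t * np.cos(theta)))
        se[r, c] = True
    return se

def smooth_ridges(f4: np.ndarray, length: int = 13) -> np.ndarray:
    mask = f4.astype(bool)
    opened = np.zeros_like(mask)
    for angle in (0, 45, 90, 135):               # MDSE: four directions
        opened |= opening(mask, line_se(length, angle))
    cleaned = remove_small_objects(opened, min_size=1000)   # T3 = 1000 px
    closed = closing(cleaned, line_se(5 * length, 90))      # placeholder SEL
    return thin(closed)                                     # skeleton f8
```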

3.3. Cropland Strip Mapping

Each cropland strip is bounded by two adjacent ridges and is thus a polygon with two polyline boundaries. Therefore, the point set of each ridge is first detected (Section 3.3.1) and labeled (Section 3.3.2); the ridges are then drawn by connecting the detected points in order after point improvement (Section 3.3.3). Finally, strips are mapped by connecting adjacent ridges (Section 3.3.4).

3.3.1. Point Detection Using Hough Transform

The Hough Transform (HT) can be used to detect linear objects such as cropland ridges. The HT is the classical method for detecting straight lines [38], and was later improved to employ a polar coordinate space [39]; it can also be used to detect curves or other shapes [40]. The HT maps shapes from the image coordinate space into the Hough parameter space. Every straight line in the spatial domain has a corresponding point in Hough space, and the converse is also true. Generally, the HT requires three steps: (1) defining a parameter space, (2) voting and identifying peaks in the parameter space, and (3) extracting line segments from the intersection points ($\rho$–$\theta$) in HT space [41].
The detected angle ($\theta$) and the spacing along the $\rho$ axis are the key parameters in HT space. The line angle to be detected is set to the median angle of the ridge candidates with a range of $\pm 3^{\circ}$, which is a good tradeoff between computational efficiency and the resulting accuracy. The spacing ($\rho$), representing the resolution of the distance from the origin to the closest point on the line, is set to 0.25. The parameter space is then constructed as a matrix.
Peaks are a crucial factor in the parameter space and indicate extrema after accumulation. Every peak, as a curve intersection in HT space, corresponds to a line segment in image space. The number of peaks was set to 300 in this study. Finally, the peaks in HT space are transformed into line segments in coordinate space and recorded as a table of point pixel coordinate pairs. The point coordinates are then transformed from pixel space to projection space using the following equation. The detected result is shown in Figure 12a:
$$
\begin{aligned}
x &= x_0 + n_{px} \, r_0 \\
y &= y_0 - n_{py} \, r_0
\end{aligned}
$$

where $n_{px}$ and $n_{py}$ denote the pixel coordinates in the image, $r_0$ denotes the image resolution, $x$ and $y$ denote the corresponding projection coordinates, and $x_0$ and $y_0$ denote the projection coordinates of the upper-left corner of the image.
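As a sketch, the segment detection and coordinate transform might look as follows; we use scikit-image's probabilistic Hough variant as an accessible stand-in for the classical 300-peak extraction described above, and the threshold, length, and gap values are illustrative:

```python
import numpy as np
from skimage.transform import probabilistic_hough_line

def detect_segments(f8: np.ndarray, median_angle_deg: float):
    """Line segments restricted to the median ridge angle +/- 3 degrees."""
    theta = np.deg2rad(np.arange(median_angle_deg - 3.0,
                                 median_angle_deg + 3.25, 0.25))
    return probabilistic_hough_line(f8, threshold=10, line_length=50,
                                    line_gap=20, theta=theta)

def pixel_to_projection(n_px, n_py, x0, y0, r0):
    """Apply x = x0 + n_px * r0 and y = y0 - n_py * r0 (upper-left origin)."""
    return x0 + n_px * r0, y0 - n_py * r0
```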

3.3.2. Point Labeling and Sorting

Different point sets are labeled by their relation with the MER of each ridge, and sorted using their coordinates. This step can be seen in Figure 12b.
Coordinate rotation is performed around the centroid point so that the ridges run from north to south, using the following equation:
$$
\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}
$$

where $x$ and $y$ denote the coordinates before transformation, $x'$ and $y'$ denote the coordinates after transformation, and $\theta$ denotes the rotation angle.
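A minimal helper for this rotation (names are ours):

```python
import numpy as np

def rotate_about_centroid(points: np.ndarray, theta: float) -> np.ndarray:
    """Rotate an (n, 2) array of points by theta radians about its centroid."""
    c = points.mean(axis=0)
    r = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return (points - c) @ r.T + c
```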

3.3.3. Point Reduction and Improvement

Point reduction (denoted P1) preserves ridge shape with fewer points, considering Euclidean distance, normal angle, and curvature [42]. In this study, a ridge is a straight line consisting of many line segments. The tolerance distance between adjacent points is set to 3% of the ridge length. Two adjacent points in a given point set are iteratively replaced by their midpoint if the distance between them is smaller than the tolerance distance.
Point improvement involves three intermediate determinations: the central point of the MER minor axis (P2), the four corner points of each plot (P3), and the outliers among the endpoints (P4). Coordinate rotation is applied before comparing these points. The endpoint coordinate is added using the central point of the corresponding MER minor axis if it is close to the centroid. For the top and bottom of a given plot (where ridges begin and end), the median of the ridge endpoints is calculated, and the corner points (the endpoints of the outside ridges) are replaced by the median ridge endpoint for the corresponding side. An endpoint outlier is replaced by the intersection point between the corresponding ridge and the line segment formed by its two adjacent endpoints; the outlier tolerance is the ridge width (0.35 m). Subsequently, the point closest to a ridge endpoint on its side is removed if their distance is less than the point reduction tolerance adopted above (3% of ridge length). The result is shown in Figure 13.
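The distance-based reduction step can be sketched as follows (names are ours):

```python
import numpy as np

def reduce_points(points: np.ndarray, ridge_length: float) -> np.ndarray:
    """Merge adjacent vertices closer than 3% of the ridge length (P1)."""
    tol = 0.03 * ridge_length
    pts = [np.asarray(p, dtype=float) for p in points]
    merged = True
    while merged and len(pts) > 2:
        merged = False
        for i in range(len(pts) - 1):
            if np.linalg.norm(pts[i + 1] - pts[i]) < tol:
                pts[i] = (pts[i] + pts[i + 1]) / 2.0  # replace pair by midpoint
                del pts[i + 1]
                merged = True
                break
    return np.array(pts)
```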

3.3.4. Polyline (Ridge) Drawing and Polygon (Strip) Mapping

As discussed above, the point set is generated in a simple, ordered form. Each ridge polyline is drawn by linking one point to the next according to the sorted order and the label of the corresponding ridge. Finally, strip polygons are completed by connecting adjacent ridges in the counterclockwise direction.

3.4. Accuracy Assessment

The automated method and manual digitization were compared at both the ridge and strip levels. The evaluation methods are based on those used for road detection [43] for ridges and on cadastral standards for strips.

3.4.1. Accuracy Assessment of Ridges

Ridge accuracy is assessed, as in road detection, using completeness and correctness [43]. As shown in Figure 1 of Heipke et al. (1997) [43], completeness is the proportion of the reference data lying within a buffer around the extracted data, and is also called recall. Correctness is the proportion of the extracted data lying within a buffer around the reference data, and is also called precision.
Considering the width range of the ridges, we set the buffer width to 35 cm (17.5 cm on each side). Buffer zones are generated around both the automatically and the manually extracted ridges, and an overlay analysis is implemented between the buffer zones and the extracted ridges. A true positive (TP) occurs when an automatically extracted ridge falls within the buffer of the manually extracted ridges; otherwise it is a false positive (FP). A false negative (FN) occurs when the reference data do not fall within the buffer around the ridges extracted by the proposed method. The accuracy assessment equations are given below:
$$
\mathrm{Recall} = \frac{TP}{TP + FN}
$$

$$
\mathrm{Precision} = \frac{TP}{TP + FP}
$$

$$
\text{Length error ratio} = \frac{L_1 - L_0}{L_0}
$$

where TP denotes the total matched extracted data, FP denotes the total unmatched extracted data, FN denotes the total unmatched reference data, $L_1$ is the extracted length of a single ridge, and $L_0$ is the reference length of the corresponding ridge.
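These buffer-based measures can be sketched with shapely (a minimal sketch; geometries are assumed to be polylines in projected coordinates):

```python
from shapely.geometry import LineString
from shapely.ops import unary_union

def ridge_metrics(extracted, reference, buffer_m: float = 0.175):
    """Recall and precision from buffer overlay of extracted/reference ridges."""
    ext = unary_union([LineString(r) for r in extracted])
    ref = unary_union([LineString(r) for r in reference])
    tp = ext.intersection(ref.buffer(buffer_m)).length   # matched extraction
    fn = ref.length - ref.intersection(ext.buffer(buffer_m)).length
    recall = (ref.length - fn) / ref.length              # completeness
    precision = tp / ext.length                          # correctness
    return recall, precision
```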

3.4.2. Accuracy Assessment of Strips

Strip accuracy is estimated by comparing the extracted polygons against the reference polygon boundaries, obtained by manual extraction of strips. Average extraction accuracy (AEA) and the Kappa coefficient (KC) are used to assess strip accuracy [44]. AEA is computed as the average ratio of the area of each strip extracted by the proposed method to the corresponding reference area in each plot:
$$
AEA = \frac{1}{n} \sum_{i=1}^{n} \frac{A_i}{A_{ri}}
$$

where $n$ is the total number of strips, $A_i$ is the extracted area of the $i$-th strip, and $A_{ri}$ is the reference area of the $i$-th strip.
KC is determined from the confusion matrix using the following equation:
$$
KC = \frac{N \sum_{i=1}^{n} x_{ii} - \sum_{i=1}^{n} (x_{i+} \, x_{+i})}{N^2 - \sum_{i=1}^{n} (x_{i+} \, x_{+i})}
$$

where $n$ is the number of rows in the confusion matrix, $x_{ii}$ is the number of observations in row $i$ and column $i$, $x_{i+}$ and $x_{+i}$ are the marginal totals of row $i$ and column $i$, respectively, and $N$ is the total number of observations.
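Both metrics are straightforward to compute; a minimal sketch, with areas as floats and the confusion matrix as a square array of counts:

```python
import numpy as np

def aea(extracted_areas, reference_areas) -> float:
    """Average extraction accuracy: mean ratio of extracted to reference area."""
    ratios = np.asarray(extracted_areas) / np.asarray(reference_areas)
    return float(ratios.mean())

def kappa(cm: np.ndarray) -> float:
    """Kappa coefficient from a confusion matrix of observation counts."""
    n = cm.sum()
    observed = np.trace(cm)
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n
    return float((observed - expected) / (n - expected))
```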

4. Results

4.1. Improvement of Point Quality

Point reduction and improvement are important for extraction quality. Four steps were carried out to enhance the quality of ridge identification and strip extraction; the original dataset consists of the points detected by the Hough Transform (denoted P0). Recall and average extraction ratio were used to assess the accuracy of ridge and strip extraction, respectively. Point reduction and improvement increase the accuracy with which ridges are identified and thus, by extension, the accuracy of the extracted cropland strip areas (Table 6).

4.2. Accuracy of Ridge Detection

Ridges were accurately identified, with length error ratios ranging from −1.24% to −0.3%. As seen from Table 7 and Figure 14, the detected ridges are in good agreement with the corresponding manual extraction: recall was over 96.8% and precision over 95.4% for all plots. Similar or lower accuracies have been reported for comparable linear feature extraction tasks, such as detection of roads, landslide scarps, and subway tunnel cracks. Road detection recalls across several recent studies are 82% [45], 90% [46] and 93% [47], with precisions of 76% [45], 93% [46] and 95% [47]. Landslide scarp recognition using a surface roughness index [21] had a recall of 66% and a precision of 88%. Classification accuracy for subway tunnel crack detection [48] was over 90%.

4.3. Performance of Strip Extraction

The mapped strips are displayed in Figure 15. As shown in Table 8, the extraction ratio is high, ranging from 98.9% to 99.9%. The KC ranges from 97.4% to 99.9% with an average of 99.1%. There is no distinct difference among the KCs of the 85 strips across the four plots.

4.4. Effects of Spatial Resolution

The AEA of strips remains high with minimal bias (less than 1%). However, ridge detection accuracy worsens as the spatial resolution of the surface roughness is coarsened: recall decreases from over 90% to around 60% for all four plots (see Figure 16). A spatial resolution of 4–5 cm enables accurate ridge detection, and significant decreases were observed in the accuracy of cropland ridge detection and strip mapping once the spatial resolution exceeded 5 cm (Figure 16a–c). Note, however, that data acquired natively at coarser resolutions (as with the 4.2 and 6.5 cm DSMs) are necessary to truly assess the effects of resolution on mapping performance.

5. Discussion

5.1. Suitable Sites and Data Acquisition

This study was conducted in a simplified setting: small plots with a few cropland ridges and strips, cropped from a complete orthophoto derived from sUAS images. It is relatively straightforward to extract ridges and map strips under this ideal situation. As such, a priority for subsequent algorithm development is the ability to map cropland strips accurately in more heterogeneous landscapes. In this paper, ridge detection relies primarily on segmentation of surface roughness. Other potentially suitable sites are those with significant differences in terrain or geomorphology indexes amid an otherwise regular cropland distribution, such as rice cultivation in plain areas (Chiang Mai, Thailand: 18°55′25″N, 98°57′18″E) [49] and terrace landscapes in mountainous areas (Yuanyang, China: 23°6′47″N, 102°44′53.9″E [50] and Alpine regions, Italy [51]).
Images should be acquired in the early stages of crop growth, particularly before the elongation stage of wheat or corn. Otherwise, the ridges will be occluded as the crops grow, especially for those crops with large canopy cover, such as summer corn.

5.2. Accuracy: Ridges as Line Detection

Line detection from images remains an active topic in remote sensing, with detected features including roads [36], building edges [52], windthrown trees [53], ground cracks [54], etc. Line detection methods include heuristic reasoning, dynamic programming, statistical inference, and map matching [55,56,57]; knowledge-based [58] and morphology-based [59] approaches are also extensively used. In croplands, each strip, as a single polygon, is bounded by two neighboring ridges, which are not straight lines but polylines. The Hough Transform allows the detection of objects that have regular features or can be represented by mathematical expressions, such as lines, circles, and ellipses. Therefore, the proposed method could in principle be extended to other objects with regular shapes, such as areas under center pivot irrigation (Dalhart, TX, USA: 36°3′5″N, 102°27′43″W) [9], vineyards [20] and plastic-mulched farmland [60].

5.3. Accuracy: Cropland Strips as Regions vs. Cadastral Requirements

With respect to region mapping, this study performs well, with KCs ranging from 97.4% to 99.9% and a total extraction ratio over 98.9%. According to the Regulation Practice for the Right of Rural Land Contractual Management of China (NY/T 2537-2014), the mean point error in cadastral surveying should be lower than 0.25 m, 0.5 m, and 1.0 m at scales of 1:500, 1:1000, and 1:2000, respectively (a scale of 1:500 generally supports the investigation of residential land), and the area error is required to be less than 5%. In this study, the strip extraction ratio ranges from 98.9% to 99.9%. As such, the outlined protocol meets current cadastral mapping accuracy standards.

5.4. Impact of Spatial Resolution

A spatial resolution of 4–5 cm appears optimal (given the study constraints) for detecting the narrow ridges between cropland strips, enabling both high extraction accuracy and high computational efficiency. However, ridge width may vary from field to field and region to region. As such, different spatial resolutions may be necessary even in the NCP.

6. Conclusions

This study reports on an effort to automatically extract typical cropland strips from centimeter spatial resolution imagery captured at one point in time by a consumer-level digital camera mounted on a small UAS. Surface roughness was important for identifying small linear objects with distinct microtopography in plain areas. Typical cropland strips were well identified, with AEAs over 98.9% and KCs over 97.4%. Ridges were also well detected, with favorable recall (over 96.8%) and precision (over 95.4%). A spatial resolution of 4–5 cm worked well for extracting ridges and strips with the presented method. Cropland strips can thus be mapped at high accuracy using VHR images captured from sUAS in similar agricultural landscapes, especially in the North China Plain. In addition, this study demonstrates the great potential of VHR sUAS imagery for identifying small objects with high accuracy. Other research conducted in small fields could benefit from this flexible sUAS technique.
This automated method was developed and tested on cropped farmland images with elongated ridges, which is a relatively simple use case at the local scale. It could be extended to similar cases, such as plastic-mulched farmland. Actual cropland can be more complicated than the experimental plots in this research. Landscape- to regional-scale application (villages or even the whole NCP) will require dealing with more heterogeneity, including road, pond, or forest patches among the strips, or differing ridge lengths within a given plot. Larger regions could be divided into smaller patches with relatively consistent landscapes. Images acquired by diverse sUAS at different flight altitudes should be further explored for detecting cropland ridges and strips to verify the robustness of the proposed method. The thresholds mentioned in this paper should still be tested in other areas. More complex use cases should also be explored to enable gradual process improvement, with eventual potential contributions to smart farming or automated cadastral mapping.

Author Contributions

J.Z. and Y.Z. conceived and designed the experiments; J.Z. developed and tested the programming code with input from A.L.A. and R.H.W.; Y.Z. and S.T. helped with data acquisition and processing; A.L.A. and R.H.W. discussed the methodology and helped write the manuscript; Z.H. and R.H.W. modified the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 41771542), Key Projects in the National Science & Technology Pillar Program during the 12th Five-year Plan Period (Grant No. 2012BAC04B03). Funding for R.H.W.’s participation was also provided by the Virginia Agricultural Experiment Station and the McIntire-Stennis Program of NIFA, USDA (Project Number 1007054).

Acknowledgments

We would like to thank the following organization for providing a satellite image of WorldView-3 as an illustration: DigitalGlobe, Inc.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
sUAS: small unmanned aircraft systems
UAS: unmanned aircraft systems
UAV: unmanned aerial vehicles
DSM: digital surface model
NCP: North China Plain
GSD: ground sampling distance
RTK-GPS: real-time kinematic global positioning systems
VHR: very high resolution
DEM: digital elevation model
SfM: structure from motion
TLS: terrestrial laser scanning
ISO: international standards organization
CMOS: complementary metal–oxide–semiconductor
JPEG: joint photographic experts group
AGL: above ground level
GPS: global positioning systems
IMU: inertial measurement units
GCP: ground control point
GNSS: global navigation satellite system
RMSE: root mean squared error
GIS: geographic information system
PSI: pixel shape index
MER: minimum enclosing rectangle
RGB: red-green-blue
HSV: hue-saturation-value
SE: structural elements
MDSE: multi-directional structural elements
SEL: structuring element of line
HT: Hough transform
TP: true positive
FP: false positive
FN: false negative
AEA: average extraction accuracy
KC: Kappa coefficient
FVC: fractional vegetation coverage

Appendix A

Table A1. Specific steps and parameters of UAS image processing.

Step | Substep | Parameter | Setting
Initial processing | General | Keypoints image scale | Full
Initial processing | Matching | Matching image pairs | Aerial grid or corridor
Initial processing | Calibration | Targeted number of keypoints | Automatic
Initial processing | Calibration | Calibration method | Standard
Initial processing | Calibration | Internal parameters optimization | All
Initial processing | Calibration | External parameters optimization | All
Initial processing | Calibration | Rematch | Automatic
Point cloud and mesh | Point cloud | Image scale | 1/2, half image size
Point cloud and mesh | Point cloud | Point density | Optimal
Point cloud and mesh | Point cloud | Minimum number of matches | 3
Point cloud and mesh | 3D textured mesh | Mesh resolution | Medium
DSM, orthomosaic and index | DSM and orthomosaic | Resolution | Automatic
DSM, orthomosaic and index | DSM and orthomosaic | DSM filters | Use noise filtering
DSM, orthomosaic and index | DSM and orthomosaic | DSM filters | Use surface smoothing: Sharp

References

  1. Cui, Z.L.; Zhang, H.Y.; Chen, X.P.; Zhang, C.C.; Ma, W.Q.; Huang, C.D.; Zhang, W.F.; Mi, G.H.; Miao, Y.X.; Li, X.L.; et al. Pursuing sustainable productivity with millions of smallholder farmers. Nature 2018, 555, 363–366. [Google Scholar] [CrossRef]
  2. Zhang, W.F.; Cao, G.X.; Li, X.L.; Zhang, H.Y.; Wang, C.; Liu, Q.Q.; Chen, X.P.; Cui, Z.L.; Shen, J.B.; Jiang, R.F.; et al. Closing yield gaps in china by empowering smallholder farmers. Nature 2016, 537, 671–674. [Google Scholar] [CrossRef]
  3. Dong, Q.H.; Liu, J.; Wang, L.M.; Chen, Z.X.; Gallego, J. Estimating crop area at county level on the north china plain with an indirect sampling of segments and an adapted regression estimator. Sensors 2017, 17, 2638. [Google Scholar] [CrossRef]
  4. Gao, Y.; Yang, L.L.; Shen, X.J.; Li, X.Q.; Sun, J.S.; Duan, A.W.; Wu, L.S. Winter wheat with subsurface drip irrigation (SDI): Crop coefficients, water-use estimates, and effects of SDI on grain yield and water use efficiency. Agric. Water Manag. 2014, 146, 1–10. [Google Scholar] [CrossRef]
  5. Lin, J.Y. The household responsibility system in China’s agricultural reform: A theoretical and empirical study. Econ. Dev. Cult. Chang. 1988, 36, 199–224. [Google Scholar] [CrossRef]
  6. González-Dugo, V.; Zarco-Tejada, P.J.; Nicolás, E.; Nortes, P.A.; Alarcón, J.J.; Intrigliolo, D.S.; Fereres, E. Using high resolution UAV thermal imagery to assess the variability in the water status of five fruit tree species within a commercial orchard. Precis. Agric. 2013, 14, 660–678. [Google Scholar] [CrossRef]
  7. Vergara-Díaz, O.; Zaman-Allah, M.A.; Masuka, B.; Hornero, A.; Zarco-Tejada, P.; Prasanna, B.M.; Cairns, J.E.; Araus, J.L. A novel remote sensing approach for prediction of maize yield under different conditions of nitrogen fertilization. Front. Plant Sci. 2016, 7, 666. [Google Scholar] [CrossRef]
  8. Massey, R.; Sankey, T.T.; Congalton, G.R.; Yadav, K.; Thenkabail, S.P.; Ozdogan, M.; Meadore, A.J.S. MODIS phenology-derived, multi-year distribution of conterminous U.S. crop types. Remote Sens. Environ. 2017, 198, 490–503. [Google Scholar] [CrossRef]
  9. Yan, L.; Roy, D.P. Automated crop field extraction from multi-temporal Web Enabled Landsat Data. Remote Sens. Environ. 2014, 144, 42–64. [Google Scholar] [CrossRef] [Green Version]
  10. Graesser, J.; Ramankutty, N. Detection of cropland field parcels from Landsat imagery. Remote Sens. Environ. 2017, 201, 165–180. [Google Scholar] [CrossRef] [Green Version]
  11. Belgiu, M.; Ovidiu, C. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523. [Google Scholar] [CrossRef]
  12. Jilge, M.; Heiden, U.; Neumann, C.; Feilhauer, H. Gradients in urban material composition: A new concept to map cities with spaceborne imaging spectroscopy data. Remote Sens. Environ. 2019, 223, 179–193. [Google Scholar] [CrossRef]
  13. Padró, J.; Muñoz, F.; Planas, J.; Pons, X. Comparison of four UAV georeferencing methods for environmental monitoring purposes focusing on the combined use with airborne and satellite remote sensing platforms. Int. J. Appl. Earth Obs. Geoinf. 2019, 75, 130–140. [Google Scholar] [CrossRef]
  14. Ashapure, A.; Jung, J.; Yeom, J.; Chang, A.; Maeda, M.; Maeda, A.; Landivar, J. A novel framework to detect conventional tillage and no-tillage cropping system effect on cotton growth and development using multi-temporal UAS data. ISPRS J. Photogramm. Remote Sens. 2019, 152, 49–64. [Google Scholar] [CrossRef]
  15. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. Structure-from-Motion photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef]
  16. Cook, K.L. An evaluation of the effectiveness of low-cost UAVs and structure from motion for geomorphic change detection. Geomorphology 2017, 278, 195–208. [Google Scholar] [CrossRef]
  17. Crommelinck, S.; Bennett, R.; Gerke, M.; Nex, F.; Yang, M.; Vosselman, G. Review of automatic feature extraction from high-resolution optical sensor data for UAV-based cadastral mapping. Remote Sens. 2016, 8, 689. [Google Scholar] [CrossRef]
  18. Jin, X.L.; Liu, S.Y.; Baret, F.; Hemerl, M.; Comar, A. Estimates of plant density of wheat crops at emergence from very low altitude UAV imagery. Remote Sens. Environ. 2017, 198, 105–114. [Google Scholar] [CrossRef] [Green Version]
  19. Fan, Z.; Lu, J.W.; Gong, M.G.; Xie, H.H.; Goodman, E.D. Automatic tobacco plant detection in UAV images via deep neural networks. IEEE J. Select. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 876–887. [Google Scholar] [CrossRef]
  20. Poblete-Echeverría, C.; Olmedo, G.F.; Ingram, B.; Bardeen, M. Detection and segmentation of vine canopy in ultra-high spatial resolution RGB imagery obtained from unmanned aerial vehicle (UAV): A case study in a commercial vineyard. Remote Sens. 2017, 9, 268. [Google Scholar] [CrossRef]
  21. Al-Rawabdeh, A.; He, F.N.; Moussa, A.; El-Sheimy, N.; Habib, A. Using an unmanned aerial vehicle-based digital imaging system to derive a 3D point cloud for landslide scarp recognition. Remote Sens. 2016, 8, 95. [Google Scholar] [CrossRef]
  22. Zarco-Tejada, P.J.; González-Dugo, V.; Berni, J.A.J. Fluorescence, temperature and narrow-band indices acquired from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. Environ. 2012, 117, 322–337. [Google Scholar] [CrossRef]
  23. Pineux, N.; Lisein, J.; Swerts, G.; Bielders, C.L.; Lejeune, P.; Colinet, G.; Degré, A. Can DEM time series produced by UAV be used to quantify diffuse erosion in an agricultural watershed? Geomorphology 2017, 280, 122–136. [Google Scholar] [CrossRef]
  24. Mu, Y.; Wang, F.; Zheng, B.Y.; Guo, W.; Feng, Y.M. McGET: A rapid image-based method to determine the morphological characteristics of gravels on the Gobi desert surface. Geomorphology 2018, 304, 89–98. [Google Scholar] [CrossRef]
  25. Zhang, S.; Lippitt, C.; Bogus, S.; Neville, P. Characterizing pavement surface distress conditions with hyper-spatial resolution natural color aerial photography. Remote Sens. 2016, 8, 392. [Google Scholar] [CrossRef]
  26. Lippitt, C.; Zhang, S. The impact of small unmanned airborne platforms on passive optical remote sensing: A conceptual perspective. Int. J. Remote Sens. 2018, 39, 4852–4868. [Google Scholar] [CrossRef]
  27. Mo, X.G.; Chen, X.J.; Hu, S.; Liu, S.X.; Xia, J. Attributing regional trends of evapotranspiration and gross primary productivity with remote sensing: A case study in the North China Plain. Hydrol. Earth Syst. Sci. 2017, 21, 295. [Google Scholar] [CrossRef]
  28. Zhai, L.C.; Xu, P.; Zhang, Z.B.; Li, S.K.; Xie, R.Z.; Zhai, L.F.; Wei, B.H. Effects of deep vertical rotary tillage on dry matter accumulation and grain yield of summer maize in the Huang-Huai-Hai Plain of China. Soil Tillage Res. 2017, 170, 167–174. [Google Scholar] [CrossRef]
  29. Wang, Y.; Hu, C.; Dong, W.; Li, X.; Zhang, Y.; Qin, S.; Oenema, O. Carbon budget of a winter-wheat and summer-maize rotation cropland in the north china plain. Agric. Ecosyst. Environ. 2015, 206, 33–45. [Google Scholar] [CrossRef]
  30. Cevik, E.; Topal, T. GIS-based landslide susceptibility mapping for a problematic segment of the natural gas pipeline, Hendek (Turkey). Environ. Geol. 2003, 44, 949–962. [Google Scholar] [CrossRef]
  31. Zhang, L.P.; Huang, X.; Huang, B.; Li, P.X. A pixel shape index coupled with spectral information for classification of high spatial resolution remotely sensed imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2950–2961. [Google Scholar] [CrossRef]
  32. Rotz, J.D.; Abaye, A.O.; Wynne, R.H.; Rayburn, E.B.; Scaglia, G.; Phillips, R.D. Classification of digital photography for measuring productive ground cover. Rangeland Ecol. Manag. 2008, 61, 245–248. [Google Scholar] [CrossRef]
  33. Hassanein, M.; Lari, Z.; El-Sheimy, N. A new vegetation segmentation approach for cropped fields based on threshold detection from Hue histograms. Sensors 2018, 18, 1253. [Google Scholar] [CrossRef] [PubMed]
  34. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  35. Park, H.; Chin, R.T. Decomposition of arbitrarily shaped morphological structuring elements. IEEE Trans. Pattern Anal. 1995, 17, 2–15. [Google Scholar] [CrossRef] [Green Version]
  36. Chaudhuri, D.; Kushwaha, N.K.; Samal, A. Semi-automated road detection from high resolution satellite images by directional morphological enhancement and segmentation techniques. IEEE J. Select. Top. Appl. Earth Observ. Remote Sens. 2012, 5, 1538–1544. [Google Scholar] [CrossRef]
  37. Lam, L.; Lee, S.-W.; Suen, C.Y. Thinning methodologies: A comprehensive survey. IEEE Trans. Pattern Anal. 1992, 14, 869–885. [Google Scholar] [CrossRef]
  38. Hough, P.V.C. Method and Means for Recognizing Complex Patterns. U.S. Patent No. 3069654, 18 December 1962. [Google Scholar]
  39. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef]
  40. Sklansky, J. Image segmentation and feature extraction. IEEE Trans. Syst. Man Cybern. 1978, 8, 237–247. [Google Scholar] [CrossRef]
  41. Mukhopadhyay, P.; Chaudhuri, B.B. A survey of Hough Transform. Pattern Recogn. 2015, 48, 993–1010. [Google Scholar] [CrossRef]
  42. Ma, X.H.; Cripps, R.J. Shape preserving data reduction for 3D surface points. Comput. Aided Des. 2011, 43, 902–909. [Google Scholar] [CrossRef]
  43. Heipke, C.; Mayer, H.; Wiedemann, C.; Jamet, O. Evaluation of automatic road extraction. Int. Arch. Photogramm. Remote Sens. 1997, 32, 151–160. [Google Scholar]
  44. Zhao, H.; Chen, F.; Zhang, M.M. A systematic extraction approach for mapping glacial lakes in high mountain regions of Asia. IEEE J. Select. Top. Appl. Earth Observ. Remote Sens. 2018, 12, 1–12. [Google Scholar] [CrossRef]
  45. Tuncer, O. Fully automatic road network extraction from satellite images. IEEE Int. Conf. Recent Adv. Space Technol. 2007, 3, 708–714. [Google Scholar] [CrossRef]
  46. Das, S.; Mirnalinee, T.T.; Varghese, K. Use of salient features for the design of a multistage framework to extract roads from high-resolution multispectral satellite images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3906–3931. [Google Scholar] [CrossRef]
  47. Zhang, J.; Chen, L.; Wang, C.; Zhuo, L.; Tian, Q.; Liang, X. Road recognition from remote sensing imagery using incremental learning. IEEE Trans. Intell. Transp. 2017, 18, 2993–3005. [Google Scholar] [CrossRef]
  48. Zhang, W.Y.; Zhang, Z.J.; Qi, D.P.; Liu, Y. Automatic crack detection and classification method for subway tunnel safety monitoring. Sensors 2014, 14, 19307–19328. [Google Scholar] [CrossRef]
  49. Yodkhum, S.; Sampattagul, S.; Gheewala, S.H. Energy and environmental impact analysis of rice cultivation and straw management in northern Thailand. Environ. Sci. Pollut. Res. 2018, 25, 17654–17664. [Google Scholar] [CrossRef]
  50. Sun, Y.; Zhou, H.; Wall, G.; Wei, Y. Cognition of disaster risk in a tourism community: An agricultural heritage system perspective. J. Sustain. Tour. 2017, 25, 536–553. [Google Scholar] [CrossRef]
  51. Sofia, G.; Marinello, F.; Tarolli, P. A new landscape metric for the identification of terraced sites: The Slope Local Length of Auto-Correlation (SLLAC). ISPRS J. Photogramm. Remote Sens. 2014, 96, 123–133. [Google Scholar] [CrossRef]
  52. Wang, Q.; Yan, L.; Sun, Y.B.; Cui, X.M.; Mortimer, H.; Li, Y.Y. True orthophoto generation using line segment matches. Photogramm. Rec. 2018, 33, 113–130. [Google Scholar] [CrossRef]
  53. Duan, F.Z.; Wan, Y.C.; Deng, L. A novel approach for coarse-to-fine windthrown tree extraction based on unmanned aerial vehicle images. Remote Sens. 2017, 9, 306. [Google Scholar] [CrossRef]
  54. Omar, T.; Nehdi, M.L. Remote sensing of concrete bridge decks using unmanned aerial vehicle infrared thermography. Autom. Constr. 2017, 83, 360–371. [Google Scholar] [CrossRef]
  55. Song, M.; Civco, D. Road extraction using SVM and image segmentation. Photogramm. Eng. Remote Sens. 2004, 70, 1365–1371. [Google Scholar] [CrossRef]
  56. Gruen, A.; Li, H. Road extraction from aerial and satellite images by dynamic programming. ISPRS J. Photogramm. Remote Sens. 1995, 50, 11–20. [Google Scholar] [CrossRef]
  57. Xu, Y.Y.; Xie, Z.; Feng, Y.X.; Chen, Z.L. Road extraction from high-resolution remote sensing imagery using deep learning. Remote Sens. 2018, 10, 1461. [Google Scholar] [CrossRef]
  58. Yang, J.; Wang, R.S. Classified road detection from satellite images based on perceptual organization. Int. J. Remote Sens. 2007, 28, 4653–4669. [Google Scholar] [CrossRef]
  59. Valero, S.; Chanussot, J.; Benediktsson, J.A.; Talbot, H.; Waske, B. Advanced directional mathematical morphology for the detection of the road network in very high resolution remote sensing images. Pattern Recogn. Lett. 2010, 31, 1120–1127. [Google Scholar] [CrossRef] [Green Version]
  60. Lu, L.; Di, L.; Ye, Y. A decision-tree classifier for extracting transparent plastic-mulched landcover from Landsat-5 TM images. IEEE J. Select. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4548–4558. [Google Scholar] [CrossRef]
Figure 1. Typical cropland landscape of (a) aerial view and (b) field photo of actual strips and ridges.
Figure 2. Limited performance of satellite panchromatic image in cropland ridge identification: (a) WorldView-3 (0.31 m), (b) GF-2 (0.81 m). The bigger the digital number, the brighter the image pixel.
Figure 3. Pipeline of the methods used in this study.
Figure 4. (a) location of North China Plain (NCP) and study site; (b) location of four plots with a 32-bit panchromatic image from the GF-2 satellite (spatial resolution: 0.83 m) as background.
Figure 5. Plot images of orthophoto for Plot 1 (a), Plot 2 (b), Plot 3 (c), Plot 4 (d), DSM for Plot 1 (e), Plot 2 (f), Plot 3 (g), Plot 4 (h), elevation histogram for Plot 1 (i), Plot 2 (j), Plot 3 (k), Plot 4 (l), and surface roughness histogram for Plot 1 (m), Plot 2 (n), Plot 3 (o), Plot 4 (p).
Figure 6. Validation ridges obtained by manual digitization for Plot 1 (a), Plot 2 (b), Plot 3 (c), Plot 4 (d).
Figure 7. Typical profile of the DSM after processing sUAS images. (a) typical DSM data and example profile line; (b) typical orthophoto data and example profile line; (c) typical DSM profile with black circles marking ridge peaks.
Figure 8. Binary image before (a) and after (b) threshold segmentation. The location of the subset is indicated in the upper-left corner of (a). Ridge candidates are valued 1 (white) and non-ridge candidates 0 (black).
Figure 9. Results of the shape index filter. Ridge candidates are valued 1 (white) and non-ridge pixels 0 (black). (a) Binary image (f1), taken as S0, segmented from the surface roughness of Plot 1; (b) result (S1) after the first filtering, using the mean area of S0; (c) result (S2) after the second filtering, using the mean MER perimeter of S1; (d) result (S3) after the third filtering, using the mean major axis length of S2; (e) result (S4) after the fourth filtering, using the mean MER area of S3.
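The four above-the-mean filtering passes of Figure 9 can be sketched with connected-component shape indexes (Table 4). In this sketch the axis-aligned bounding box stands in for the minimum enclosing rectangle (MER), an implementation assumption:

```python
import numpy as np
from skimage.measure import label, regionprops

def _keep_above_mean(binary, index_fn):
    """Keep connected components whose shape index exceeds the mean index."""
    lab = label(binary)
    regions = regionprops(lab)
    vals = np.array([index_fn(r) for r in regions])
    keep = [r.label for r, v in zip(regions, vals) if v > vals.mean()]
    return np.isin(lab, keep)

def shape_index_filter(s0):
    """Successive filters S0 -> S4 as in Figure 9b-e."""
    bbox_hw = lambda r: (r.bbox[2] - r.bbox[0], r.bbox[3] - r.bbox[1])
    s1 = _keep_above_mean(s0, lambda r: r.area)                         # area
    s2 = _keep_above_mean(s1, lambda r: 2 * sum(bbox_hw(r)))            # MER perimeter
    s3 = _keep_above_mean(s2, lambda r: r.major_axis_length)            # major axis length
    s4 = _keep_above_mean(s3, lambda r: bbox_hw(r)[0] * bbox_hw(r)[1])  # MER area
    return s4
```

Because each pass recomputes the mean over only the surviving regions, small noise regions drop out progressively, which is the behavior visible across Figure 9b–e.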
Figure 10. Ridge extraction before (a) and after (b) removing vegetation effects in part of Plot 1. The overlays show the ridge candidates (red lines) and vegetation coverage (green). Significantly improved ridges are marked with black ellipses.
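A minimal sketch of the vegetation removal in Figure 10, substituting a fixed green hue interval for the threshold the method detects from the Hue histogram (Table 5); the interval below is an illustrative assumption:

```python
from skimage.color import rgb2hsv

def remove_vegetation(ridge_candidates, orthophoto_rgb, hue_range=(0.17, 0.45)):
    """Mask vegetation pixels out of the binary ridge-candidate image."""
    hue = rgb2hsv(orthophoto_rgb)[..., 0]                    # hue in [0, 1]
    vegetation = (hue >= hue_range[0]) & (hue <= hue_range[1])
    return ridge_candidates & ~vegetation                    # ridge mask operation
```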
Figure 11. Final results of the morphological operations. Ridge candidates are valued 1 (white) and non-ridge pixels 0 (black). (a) Binary image (f4) after removing vegetation coverage from Plot 1; (b) result (f5) of image opening with the MDSE; (c) result (f6) after removing regions smaller than 1000 pixels; (d) result (f7) of image closing with the SEL.
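The smoothing chain of Figure 11 maps onto standard binary morphology. In this sketch a single 45° diagonal line of 13 pixels stands in for the multidirectional structuring element (MDSE), and a horizontal line for the SEL; both substitutions are simplifying assumptions:

```python
import numpy as np
from skimage.morphology import (binary_opening, binary_closing,
                                remove_small_objects, thin)

def smooth_ridges(f4, closing_len):
    """f4 -> f7 -> thinned skeleton, following Figure 11 and Table 5.

    closing_len: half the mean major axis length of the surviving
    regions, computed upstream (Table 5, image closing).
    """
    se_line_45 = np.eye(13, dtype=bool)                  # 45-degree line, 13 px
    f5 = binary_opening(f4, se_line_45)                  # (b) opening
    f6 = remove_small_objects(f5, min_size=1000)         # (c) drop regions < 1000 px
    se_line = np.ones((1, max(1, int(closing_len))), dtype=bool)
    f7 = binary_closing(f6, se_line)                     # (d) closing
    return thin(f7)                                      # thin until convergence
```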
Figure 12. Point detection using the Hough transform, followed by point labeling and sorting. The binary image of ridge candidates is the background. (a) Result of point detection using the Hough transform for Plot 1; (b) point labeling and sorting.
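Endpoint detection as in Figure 12a can be sketched with a probabilistic Hough transform over the thinned ridge image; the length and gap tolerances below are illustrative assumptions, not the study's parameters:

```python
from skimage.transform import probabilistic_hough_line

def detect_ridge_points(ridge_binary, min_len_px=200, max_gap_px=20):
    """Fit line segments to ridge pixels and collect their endpoints
    for subsequent labeling and sorting (Figure 12b)."""
    segments = probabilistic_hough_line(ridge_binary,
                                        line_length=min_len_px,
                                        line_gap=max_gap_px)
    points = [pt for seg in segments for pt in seg]  # (x, y) endpoints
    return segments, points
```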
Figure 13. Results of the four point reduction and improvement steps for Plot 1. The binary image of ridge candidates is the background. (a) Overlaid results of point reduction (P1) and of the comparison with the central points of the MER minor-axis borders (P2); (b) endpoint improvement for corner parts (P3) and for the remaining endpoints (P4).
Figure 14. Ridge detection assessment results. True positives (TP) are black and false positives (FP) are red. (a) Plot 1; (b) Plot 2; (c) Plot 3; (d) Plot 4.
Figure 15. Extracted strips of the four plots, filled with different colors. (a) Plot 1; (b) Plot 2; (c) Plot 3; (d) Plot 4.
Figure 16. Impacts of image resolution on mapping performance: (a) recall and AEA; (b) length ratio error; (c) area ratio error. Typical images of a cropland ridge: (d) orthophoto and (e) DSM at different GSDs, with an actual extent of 2.1 m × 3.5 m.
Table 1. Specifications of the sUAS employed in this study.

| No. | UAV Specification | UAV Parameter | Digital Camera Specification | Digital Camera Parameter |
|---|---|---|---|---|
| 1 | Diagonal wheelbase | 650 mm | Focal length | 3.64 mm |
| 2 | Maximum takeoff weight | 3.6 kg | Weight of camera and gimbal | 247 g |
| 3 | Maximum payload | 1.0 kg | Sensor size | 6.17 × 4.55 mm |
| 4 | Maximum AGL | 500 m | Effective pixels | 12.4 megapixels |
| 5 | Hovering accuracy (P-mode with GPS) | Vertical: 0.5 m; horizontal: 2.5 m | Diagonal field of view | 94° |
| 6 | Battery capacity (TB48D) | 5700 mAh | Pixel size | 1.55 μm |
| 7 | Hovering time (with TB48D battery) | No payload: 28 min; 500 g payload: 20 min; 1 kg payload: 16 min | Sensor type | Complementary metal-oxide-semiconductor (CMOS) |
Table 2. Summary of the sUAS flight parameters for the study site.

| Flight Mission | AGL (m) | Date of Flight | Overlap (Front × Side) | Number of GCPs | Resolution (cm) |
|---|---|---|---|---|---|
| Mission 1 | 60 | 1 November 2016 | 80% × 60% | 13 | 2.5 |
| Mission 2 | 100 | 2 November 2016 | 80% × 60% | 13 | 4.2 |
| Mission 3 | 150 | 2 November 2016 | 80% × 60% | 13 | 6.5 |
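These resolutions are consistent with the nadir ground sample distance implied by the camera in Table 1, GSD = p × H / f, where p is the pixel size, H the flying height above ground, and f the focal length. As a check for Mission 1:

GSD = (1.55 μm × 60 m) / 3.64 mm ≈ 2.55 cm ≈ 2.5 cm

The same computation gives ≈ 4.3 cm at 100 m and ≈ 6.4 cm at 150 m; the small differences from the tabulated 4.2 cm and 6.5 cm plausibly reflect rounding and the actual flying heights achieved.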
Table 3. Specific parameters of the four plots.

| Parameter | Plot 1 | Plot 2 | Plot 3 | Plot 4 |
|---|---|---|---|---|
| Area (m²) | 17,466 | 24,743 | 6228 | 9447 |
| Number of cropland strips | 18 | 22 | 18 | 27 |
| Ridge width (m) | 0.33 | 0.41 | 0.36 | 0.39 |
| Strip length × width range (m) | 181.5 × (3.8–7.5) | 237.8 × (4.5–5.3) | 52.1 × (5.5–8.0) | 72.3 × (4.7–5.0) |
| Elevation, mean (min–max) (m) | 28.33 (27.72–28.93) | 28.35 (28.03–28.87) | 28.07 (27.69–28.43) | 28.31 (28.17–28.55) |
| Gradient, mean (min–max) | 2.797 (0–39.103) | 2.590 (0–35.815) | 2.639 (0–28.540) | 2.387 (0–35.963) |
| Surface roughness, mean (min–max) | 0.490 (0–1) | 0.495 (0–1) | 0.494 (0–1) | 0.495 (0–1) |
| Crop coverage condition | partly | scarcely | partly | scarcely |
Table 4. Details of pixel shape indexes (PSIs).

| Name | Unit | Concept |
|---|---|---|
| Area | pixel² | Actual number of pixels in the region |
| Area of MER | pixel² | Area of the smallest rectangle containing the region |
| Perimeter of MER | pixel | Perimeter of the smallest rectangle containing the region |
| Major axis length | pixel | Length of the major axis of the ellipse with the same normalized second central moments as the region |
| Minor axis length | pixel | Length of the minor axis of the ellipse with the same normalized second central moments as the region |
| Orientation | degree | Angle between the x-axis and the major axis of the ellipse that has the same second moments as the region |
Table 5. Parameter summary of cropland ridge detection.

| Step | Substep or Index | Threshold or Method | Section |
|---|---|---|---|
| Ridge segmentation | Surface roughness | > mean + half of standard deviation | Section 3.2.1 |
| Ridge filtering | Shape area | > mean | Section 3.2.2 |
| | MER perimeter | > mean | Section 3.2.2 |
| | Major axis length | > mean | Section 3.2.2 |
| | MER area | > mean | Section 3.2.2 |
| Ridge cleaning | Vegetation segmentation | Value detected from Hue histogram | Section 3.2.3 |
| | Ridge mask | Image operation | Section 3.2.3 |
| Ridge smoothing | Image opening | MDSE with angle 45° and length 13 pixels | Section 3.2.4 |
| | Pixel area filtering | > 1000 pixels | Section 3.2.4 |
| | Image closing | SEL with half of major axis length | Section 3.2.4 |
| | Image thinning | Infinite iterations | Section 3.2.4 |
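For orientation, the following sketch chains the Table 5 steps using the functions defined in the sketches after Figures 8, 9, 10, and 11; all of those names are ours rather than from the authors' code, so this is a plausible reconstruction, not the implementation:

```python
import numpy as np
from skimage.measure import label, regionprops

def map_ridge_candidates(roughness, orthophoto_rgb):
    """Ridge segmentation -> filtering -> cleaning -> smoothing (Table 5)."""
    f1 = segment_ridge_candidates(roughness)       # Section 3.2.1
    s4 = shape_index_filter(f1)                    # Section 3.2.2
    f4 = remove_vegetation(s4, orthophoto_rgb)     # Section 3.2.3
    # SEL length: half the mean major axis length of the surviving regions
    lengths = [r.major_axis_length for r in regionprops(label(f4))]
    closing_len = max(1, int(0.5 * np.mean(lengths)))
    return smooth_ridges(f4, closing_len)          # Section 3.2.4
```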
Table 6. Accuracy assessment of point reduction and improvement.

| Plot | Step | Detected Points | Mean Ridge Length (m) | Recall (%) | Total Strip Area (m²) | AEA (%) |
|---|---|---|---|---|---|---|
| 1 | P0 | 286 | 172.7 | 94.1 | 16,658 | 95.4 |
| 1 | P1 | 223 | 172.4 | 94.4 | 16,634 | 95.2 |
| 1 | P2 | 261 | 177.8 | 97.3 | 17,161 | 98.3 |
| 1 | P3 | 261 | 179.3 | 98.1 | 17,250 | 98.8 |
| 1 | P4 | 232 | 181.2 | 98.6 | 17,439 | 99.9 |
| 2 | P0 | 275 | 224.7 | 92.4 | 23,447 | 94.8 |
| 2 | P1 | 246 | 224.7 | 92.4 | 23,439 | 94.7 |
| 2 | P2 | 290 | 235.2 | 96.5 | 24,475 | 98.9 |
| 2 | P3 | 290 | 235.1 | 96.5 | 24,472 | 98.9 |
| 2 | P4 | 262 | 236.6 | 96.8 | 24,626 | 99.5 |
| 3 | P0 | 296 | 47.1 | 88.9 | 5565 | 90.2 |
| 3 | P1 | 92 | 39.9 | 76.2 | 4685 | 75.9 |
| 3 | P2 | 130 | 49.5 | 94.7 | 5889 | 95.5 |
| 3 | P3 | 130 | 49.8 | 95.4 | 5913 | 95.8 |
| 3 | P4 | 105 | 51.5 | 97.4 | 6106 | 99.0 |
| 4 | P0 | 292 | 64.7 | 91.5 | 8354 | 92.0 |
| 4 | P1 | 208 | 62.0 | 88.0 | 8031 | 88.5 |
| 4 | P2 | 264 | 67.6 | 95.3 | 8731 | 96.2 |
| 4 | P3 | 264 | 67.9 | 95.6 | 8748 | 96.4 |
| 4 | P4 | 222 | 69.6 | 97.1 | 8975 | 98.9 |
Table 7. Accuracy assessment of ridge detection.

| Plot | Average Extracted Length (m) | Average Actual Length (m) | Length Error (m) | Length Error Ratio (%) | Recall (%) | Precision (%) |
|---|---|---|---|---|---|---|
| 1 | 181.2 | 181.5 | −0.30 | −0.16 | 98.6 | 98.8 |
| 2 | 236.6 | 237.8 | −1.24 | −0.52 | 96.8 | 95.4 |
| 3 | 51.5 | 52.0 | −0.49 | −0.94 | 97.9 | 96.9 |
| 4 | 69.4 | 70.3 | −0.95 | −1.35 | 97.1 | 97.5 |
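The quantities in Table 7 follow the standard definitions; a minimal sketch, assuming per-ridge true/false positive labels as colored in Figure 14 and false negatives counted against the validation ridges of Figure 6:

```python
def ridge_detection_metrics(mean_extracted_len, mean_actual_len, tp, fp, fn):
    """Length error, length error ratio, recall, and precision (Table 7)."""
    length_error = mean_extracted_len - mean_actual_len          # m
    length_error_ratio = 100.0 * length_error / mean_actual_len  # %
    recall = 100.0 * tp / (tp + fn)                              # %
    precision = 100.0 * tp / (tp + fp)                           # %
    return length_error, length_error_ratio, recall, precision
```

For Plot 1, 100 × (181.2 − 181.5) / 181.5 ≈ −0.17%, consistent with the tabulated −0.16% to within rounding of the reported lengths.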
Table 8. Accuracy assessment of strip extraction.

| Plot | Automated Extracted Area (m²) | Total Reference Area (m²) | Total Area Error (m²) | Area Extraction Ratio (%) | KC Range (%) |
|---|---|---|---|---|---|
| 1 | 17,439 | 17,466 | −26.9 | 99.9 | 97.6–99.4 |
| 2 | 24,626 | 24,743 | −117.3 | 99.5 | 97.4–99.3 |
| 3 | 6106 | 6170 | −63.8 | 99.0 | 98.6–99.8 |
| 4 | 8975 | 9075 | −99.6 | 98.9 | 98.5–99.9 |
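A minimal sketch of how the Table 8 quantities can be computed from rasterized strip masks; Table 8 reports a kappa coefficient (KC) range across the strips of each plot, so collapsing to a single kappa over all pixels, as below, is a simplifying assumption:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def strip_agreement(extracted_mask, reference_mask):
    """Pixel-wise agreement between extracted and reference strip masks."""
    y_pred = extracted_mask.ravel().astype(int)
    y_true = reference_mask.ravel().astype(int)
    kappa = cohen_kappa_score(y_true, y_pred)
    extraction_ratio = 100.0 * extracted_mask.sum() / reference_mask.sum()
    return kappa, extraction_ratio
```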
