Review

A Survey of Farmland Boundary Extraction Technology Based on Remote Sensing Images

1 College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
2 School of Engineering, University of Lincoln, Lincoln LN6 7TS, UK
3 School of Mathematics and Statistics, Jiangsu Normal University, Xuzhou 221116, China
4 College of Engineering, Nanjing Agricultural University, Nanjing 210031, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(5), 1156; https://doi.org/10.3390/electronics12051156
Submission received: 20 January 2023 / Revised: 17 February 2023 / Accepted: 23 February 2023 / Published: 27 February 2023

Abstract: Farmland boundary information plays a key role in agricultural remote sensing, and it is of great importance to modern agriculture. In this review, we collected the relevant domestic and international research in this field, and we systematically assessed the farmland boundary extraction process, detection algorithms, and influencing factors. We first discuss the five parts of the extraction process: (1) image acquisition; (2) preprocessing; (3) detection algorithms; (4) postprocessing; (5) evaluation. Second, we discuss the detection algorithms, which can be divided into four types: (1) low-level feature extraction algorithms, which only consider the boundary features; (2) high-level feature extraction algorithms, which consider boundary information and other image information simultaneously; (3) visual hierarchy extraction algorithms, which simulate biological vision systems; (4) boundary object extraction algorithms, which extract boundaries by recognizing boundary objects. Each type can be subdivided into several subclasses. Third, we discuss the technical and natural factors that affect boundary extraction. Finally, we summarize the development history of this field and analyze the problems that remain, such as the lack of algorithms adapted to higher-resolution images, the lack of algorithms with good practical applicability, and the lack of a unified and effective evaluation index system.

1. Introduction

Timely and accurate agricultural land use information is an important prerequisite for the development of agricultural informatization and modernization, and it is also of great strategic relevance for macro-agricultural policy formulation, agricultural planning and management, agricultural resource protection, and sustainable development [1]. With its macroscopic, timely, economical, and nondestructive method of surface information observation, remote sensing has become one of the most effective technical means for cultivated land monitoring [2].
Farmland boundary information is an important part of agricultural land use information. Currently, there is no authoritative definition of farmland boundary information. In this paper, we define the real-world agricultural farmland boundary as the location where the crop type changes [3]. Mapped onto remote sensing imagery, farmland boundaries correspond to locations of feature discontinuities, such as in grayscale, color, or texture [4]. In specific studies, the farmland boundaries in remote sensing imagery take four forms: (1) threads in farmland imagery [3] (Figure 1a); (2) demarcation lines between farmlands [5] (Figure 1b); (3) boundary objects between farmlands [6] (Figure 1c); (4) the perimeter boundaries of farmland blocks [7] (Figure 1d). The second and fourth types differ as follows: in the second type, each edge is defined by the farmlands on both sides, whereas in the fourth type, the boundary ring is determined by the farmland it encloses.
Field boundary extraction based on remote sensing images refers to the vectorization and digitization of field boundaries from aerial and aerospace images [8]. The technology is divided into human-based manual digitization methods [9,10,11] and computer-based semiautomatic or fully automatic image segmentation methods. Manual digitization refers to the visual interpretation of high-spatial-resolution images by professionals, based on prior knowledge and the principle of pixel similarity, to extract complete and accurate field boundaries [12]. However, for plot boundary data covering a large area, manual digitization consumes considerable time and material resources; automated extraction methods are therefore preferable. In a narrow sense, boundary extraction technology means that the recognition process revolves around the goal of extracting boundaries, as with the various edge detection operators, which directly identify the dividing lines between objects in the image. The recognition results correspond to the boundary lines in the figure, but such methods do not investigate whether a boundary forms a closed shape or whether the original image contains noise that needs correcting during recognition. In a broad sense, boundary extraction technology refers to the various techniques that researchers use to identify boundaries, generally including preprocessing, postprocessing, and other processes to obtain more practical results; for example, image segmentation technology can divide an image into nonoverlapping closed objects, discarding meaningless noise points and edge lines. In this paper, we primarily discuss automatic field boundary recognition and extraction technology in the broad sense.
Many previous researchers have covered this field, and their work is mainly divided into reviews of image segmentation [13,14,15,16,17,18,19,20,21] and reviews of agricultural remote sensing [2,16,22]; however, relevant reviews of farmland boundary extraction based on remote sensing images are lacking. This paper differs from both types of review in that, due to space constraints, neither focuses closely on boundary extraction in agricultural remote sensing. Reviews of image segmentation are often general technical analyses of idealized scenarios [13,18,19,23,24], or their authors survey image segmentation applications across different fields [14,17]; few analyses address agriculture specifically. As for reviews of agricultural remote sensing [2,16], although the authors describe the applications of agricultural remote sensing in detail and also explain its monitoring technology, they do not examine the image segmentation problem in depth. In this paper, we make up for the shortcomings of these two types of review by discussing field boundary extraction technology based on agricultural remote sensing images in greater depth.
In this article, we describe the four parts of the extraction process (except for detection algorithms) in Section 2. The extraction process involves five parts: (1) image acquisition; (2) image preprocessing; (3) detection algorithms; (4) postprocessing; (5) evaluation. Because detection algorithms are the core of the boundary extraction process, we discuss these in Section 3, namely low-level feature extraction algorithms, high-level feature extraction algorithms, visual hierarchy extraction algorithms, and boundary object extraction algorithms. Then, in Section 4, we discuss the technical and natural factors that affect boundary extraction. Finally, in Section 5, we summarize the development process of farmland boundary recognition and extraction technology based on remote sensing images, and we analyze some of the existing problems.

2. Extraction Process

In general, the whole process of field boundary extraction based on remote sensing images can be divided into five parts: (1) image acquisition; (2) preprocessing; (3) detection algorithms; (4) postprocessing; (5) evaluation. We can divide the obtained remote sensing images into two categories: aerospace images (mainly from satellites) and aerial images (mainly from unmanned aerial vehicles). The directly obtained remote sensing images should undergo corresponding preprocessing to minimize image errors, filter image noise, and strengthen image features to meet the processing conditions of the detection algorithm. After that, due to some differences between the processing results of the detection algorithm and the target requirements, postprocessing is required. Finally, it is necessary to perform a systematic evaluation of the extraction process, especially the detection algorithm. The detection algorithm is the most critical part of the five parts, and it plays a decisive role in the recognition results. The preprocessing, postprocessing, and even the evaluation are selected according to the detection algorithm; thus, we elaborate on the detection algorithm in Section 3. We present a schematic diagram of the boundary extraction process in Figure 2.

2.1. Image Acquisition

Before 2015, images were mainly captured by satellites, with a small number captured by dedicated remote sensing aircraft [6,25]. The resolution was several meters or tens of meters, which made it difficult to accurately identify field boundaries, and the large geopositioning error of satellites [26] made accurate boundary extraction even harder. At the same time, some scholars believe [27] that low spatial resolution (tens of meters) is of little value for depicting field boundaries. High-resolution satellite images can generally reach the submeter level [11,28,29], which makes it convenient to accurately identify field boundaries. Meanwhile, UAVs are developing rapidly, and research increasingly relies on them. UAV imagery generally reaches the decimeter level, which not only allows the accurate identification of field boundaries but also creates the conditions for directly identifying boundary objects, such as footpaths, ditches, and roads. With the development of technology, future remote sensing images of field blocks will have even higher resolutions. The UAV imagery of many teams has already reached the centimeter level [25,30,31,32,33], and image resolution is no longer an obstacle to the extraction of field boundaries.
We counted the aerial remote sensing images and space remote sensing images relevant in the literature. We present an introduction to the satellites with which space remote sensing images have been obtained in Table 1.
As shown in Table 1, satellite remote sensing imagery comes primarily from large national government or commercial satellites. QuickBird imagery is the most popular, probably due to its highly competitive resolution. Official satellite remote sensing images are freely available in some countries; however, their resolutions are often lower than those of commercial satellites.
We present an introduction to the UAVs with which aerial remote sensing images have been obtained in Table 2.
As shown in Table 2, the resolution of UAV images is significantly higher than that of aerospace images. Another advantage of drone imagery is that the resolution can be adjusted by changing the height of the drone, which allows for more flexible resolution acquisition.
Satellite imagery and drone imagery each have their advantages. Satellite imagery covers a wide area, spans a wide range of wavelengths, and provides temporal sequences, which allow field boundary extraction based on different bands and time series [46,47,48]. Drones produce images with high resolution, and they can also capture multispectral images, similar to satellites [60]. Moreover, drone missions are more efficient and better targeted, and the equipment can be carried by the team, who can acquire images and modify the drone as needed. Because satellite imagery is not acquired in real time, drone photography is often required to capture recent field changes. At the same time, considering that UAV missions require the team to reach the target area and are affected by meteorological conditions and piloting skill, the potential cost can be high; thus, for tasks with low resolution requirements, satellite imagery should be considered.

2.2. Preprocessing

Preprocessing refers to the correction and organization of the obtained images to render them more accurate and reliable, and thus more amenable to processing by the detection algorithm. It can be divided into targeted preprocessing and general preprocessing.
Targeted preprocessing refers to preprocessing methods that are only used for specific types of detection algorithms and are difficult to use for others. Because of their high correlation with detection algorithms, in this article, we consider them to be part of the detection algorithms, and so we do not cover them in detail in this section.
General preprocessing refers to preprocessing whose content has a low correlation with any particular algorithm. It primarily involves correcting the image accuracy, improving contrast, fusing multiple images, and selecting the image size, and it can be freely combined with images and algorithms according to the preprocessing needs. For example, satellite remote sensing images may undergo radiometric calibration, atmospheric correction, image clipping, minimum noise fraction transformation, and other operations [3,11,29]. Aerial remote sensing images may undergo matching, stitching, radiometric correction, geometric correction, orthorectification, and other operations [6,61,62]. Multiband images may undergo band selection or image fusion [29,55,63]. Images of unsuitable size may undergo cropping [7], matching, stitching [6], downsampling, and chunking [30].
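As a minimal illustration of the size-oriented preprocessing steps mentioned above (downsampling and chunking), the following pure-Python sketch implements 2×2 mean downsampling and non-overlapping tiling; the toy image and tile size are our own assumptions, not taken from any cited work.

```python
def downsample_2x2(img):
    """Average each 2x2 block (simple mean downsampling)."""
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4
             for x in range(len(img[0]) // 2)]
            for y in range(len(img) // 2)]

def tile(img, size):
    """Split the image into non-overlapping size x size chunks (row-major)."""
    return [[row[x:x+size] for row in img[y:y+size]]
            for y in range(0, len(img), size)
            for x in range(0, len(img[0]), size)]

# Toy 4x4 grey-level image (illustrative values).
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
small = downsample_2x2(img)   # [[3.5, 5.5], [11.5, 13.5]]
chunks = tile(img, 2)         # four 2x2 tiles, row-major order
```

In practice, the chunk size would be chosen to match the input size expected by the detection algorithm, and the chunk results would later be merged in postprocessing.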
We present some of the common universal preprocessing methods in Table 3.
In addition to the common general preprocessing methods, there are some less common methods, as well as methods that fall between general and targeted preprocessing. Such methods tend to target relatively specific algorithm requirements but retain a certain degree of generality. Examples include color space transformation [65,68], projection transformation [55], feature extraction [69], the anisotropic diffusion algorithm [35,38], automatic rotation [70], superpixel clustering [56,68], resampling [33], and other processing methods. We do not describe these methods in detail in this article.
Although most preprocessing methods do not play a decisive role in the extraction results, suitable preprocessing methods can improve the accuracy of the detection algorithm to a certain extent and reduce the processing time.

2.3. Postprocessing

Postprocessing involves the processing and sorting of the results obtained by the detection algorithm and the optimization of the output form so that it meets the needs of the user; it generally does not affect the accuracy of the extraction. Postprocessing can be divided into algorithm-oriented and demand-oriented postprocessing, according to the specific algorithms and requirements that it serves. Common demand-oriented postprocessing includes merging the results of divided images [7], removing non-field objects [64], removing noise [44,64], etc.
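The noise-removal step mentioned above can be sketched as the removal of small connected components from a binary detection mask; the mask and minimum-area threshold below are illustrative assumptions.

```python
def remove_small_components(mask, min_area):
    """Keep only 4-connected components with at least min_area pixels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one 4-connected component.
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_area:   # keep only large components
                    for cy, cx in comp:
                        out[cy][cx] = 1
    return out

mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],   # the lone pixel at (1, 3) is speckle noise
        [0, 0, 0, 0]]
clean = remove_small_components(mask, min_area=3)
```

The same idea underlies morphological opening and other speckle filters; the area threshold plays the role of the structuring-element size.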
We present some of the more common requirements and the corresponding postprocessing methods in Table 4.

2.4. Evaluation

Researchers can use the evaluation stage to analyze the preceding stages and the final extraction results, to understand the advantages and shortcomings of a method, and to clarify the direction for improvement. Due to the diversity of detection algorithms, the evaluation indicators are also diverse. Early experiments mainly involved qualitative evaluations, which are difficult to use for accurate assessment and comparison. With the development of this field, many quantitative evaluations have emerged; however, unified quantitative analysis standards are still lacking. We discuss quantitative evaluation systems in this section.
Before evaluating the extraction results, the research team needs to obtain the actual boundary information, which is considered correct and is used for comparison with the predicted values (i.e., the field boundary information obtained by automated boundary extraction technology). Obtaining the actual plot boundary information through field investigation is undoubtedly a traditional but accurate method [10], and detailed information, such as the crop conditions and bare soil, can be gathered during the field investigation of the target plot [8], which is conducive to analyzing and improving the research; however, this method is time-consuming and costly. The direct manual annotation of images is a more common practice [57,64], achieving satisfactory accuracy while maintaining high efficiency. Another approach is to use data obtained by other scholars [37,43] or from national databases [5,47,48] as the standard to reduce the workload. However, the data of other researchers or organizations may not match those of a particular experiment, and the accuracy of these data is beyond the control of the investigator. Therefore, it is usually appropriate for researchers to manually mark the correct field boundaries in the imagery.
Evaluation can generally be divided into two categories: (1) evaluation of the core competencies; (2) evaluation of the related attributes. Evaluation of the core competencies refers to the relevant evaluation regarding the accuracy of the extraction, which is an important indicator of the extraction effect of the algorithm, and which can be subdivided into one of the following three systems: (1) a system that is based on the correct and false classification of the pixels within the boundary line; (2) a system that is based on the classification of the pixel deviation in the boundary line; (3) a system that is based on the overall classification accuracy rate. Evaluation of the relevant attributes can be used to further assess the practicability and generalization of the algorithm, including the algorithm complexity, operating efficiency, stability, robustness, completeness, extraction quality, etc.
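The first two core-competence systems can be illustrated with a small sketch: exact pixel-level precision/recall against a reference boundary mask, and a buffered recall that credits predictions lying within a chosen pixel deviation of the true boundary. The masks and tolerance below are invented for illustration.

```python
def precision_recall(pred, truth):
    """Exact pixel-level precision and recall between two sets of (y, x) pixels."""
    tp = len(pred & truth)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

def buffered_recall(pred, truth, d=1):
    """Fraction of truth pixels with a predicted pixel within Chebyshev distance d."""
    hit = 0
    for (y, x) in truth:
        if any((py, px) in pred
               for py in range(y - d, y + d + 1)
               for px in range(x - d, x + d + 1)):
            hit += 1
    return hit / len(truth)

truth = {(2, x) for x in range(5)}   # a horizontal reference boundary row
pred = {(3, x) for x in range(5)}    # prediction shifted one pixel down
p, r = precision_recall(pred, truth)   # exact overlap: both 0.0
br = buffered_recall(pred, truth, d=1) # within-1-pixel credit: 1.0
```

The contrast between the exact and buffered scores shows why deviation-tolerant measures are common for boundary evaluation: a geometrically excellent boundary can score zero under strict pixel matching.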
We present some of the commonly used quantitative evaluations of core competencies in Table 5 and evaluations of related attributes in Table 6.
The focus of the existing quantitative evaluations is on the evaluation of the core performance, such as the accuracy of the algorithm. The core performance is indeed the focus of boundary extraction algorithms; however, with the improvement in the algorithm accuracy, boundary extraction algorithms will gradually enter the stage of practicality. The evaluation of the relevant attributes can help researchers to make reasonable judgments on the practicality of the algorithm.
In addition to the abovementioned common quantitative evaluation systems, there are many other types of evaluation system, such as visual interpretation [34,38,53,74], rare evaluation indexes [10,41,64], self-created evaluation indexes [33,35,36,71], and boundary object evaluation systems. Visual interpretation is a subjective evaluation system that often appeared in the early literature but is no longer popular. Rare evaluation indexes are those that have not been widely accepted and used by other researchers. Self-created evaluation indexes are tailored to particular experimental designs, so the evaluation fits a single experiment well, but comparisons with similar experiments are difficult. The boundary object evaluation system is formulated according to the characteristics of the various attributes of the boundary objects. The evaluation ideas of the first three types of boundary extraction (low-level feature extraction algorithms, high-level feature extraction algorithms, and visual hierarchy extraction algorithms) differ from those of boundary object extraction; evaluation systems based on boundary object extraction are rare, so we do not comment on them further.

3. Detection Algorithms

There are many kinds of automatic agricultural field boundary extraction algorithms based on remote sensing images, but no reviews of them exist. Considering that the boundary extraction of agricultural remote sensing images can be regarded as a kind of image segmentation, and that there have been many reviews in the field of image segmentation, we can classify the algorithms by referring to image segmentation reviews.
In the early days, classification according to specific algorithm principles was a common standard in reviews of image segmentation. In their 1981 literature review, K. S. Fu and J. K. Mui [23] divided image segmentation algorithms into three categories: characteristic feature thresholding or clustering, edge detection, and region extraction. In 1994, Koschan, A. [20] divided them into four categories (pixel-based, edge-based, region-based, and physics-based) and further into 51 subcategories. In 2001, Cheng, H. D., Jiang, X. H., Sun, Y., et al. [21] supplemented the major segmentation categories with algorithms based on fuzzy-set theory and artificial neural networks, bringing the total to six. As the number of algorithms grows, paying too much attention to classification by specific algorithm principles produces increasingly complex classification results. Therefore, other classification criteria have emerged: according to the smallest image unit used by the algorithm, algorithms can be divided into pixel-based and object-oriented [16,19]; according to the driving type, into knowledge-driven and data-driven [15]; and, depending on whether the algorithm directly identifies the boundary or extends the recognition from the inside outward, into edge-based and region-based [14].
Based on how the algorithms consider image information, in this paper we divide them into the following: low-level feature extraction algorithms, which only consider the boundary features; high-level feature extraction algorithms, which consider the boundary and internal information simultaneously; visual hierarchy extraction algorithms, which simulate biological vision systems; and boundary object extraction algorithms, which extract boundary objects that can be regarded as boundary lines. We present a schematic table of the algorithm classifications, corresponding characteristics, and information sources in Table 7.

3.1. Low-Level Feature Extraction Algorithms

Low-level feature extraction algorithms utilize a boundary extraction method in the narrow sense, which means that they are algorithms that only consider features presented by the boundary on the image to identify the boundary of the extracted field. Low-level feature extraction algorithms often extract field boundaries based on the contrast between neighborhoods, including edge detection operators, frequency domain features, and other algorithms. This kind of algorithm is simple and clear, and it has high computing efficiency and low computing power consumption. It can accurately perform boundary recognition and extraction for field images in simple flat areas, and it has a good extraction effect in low-resolution images. We present the classification of low-level feature extraction algorithms in Figure 3.

3.1.1. Traditional Edge Detection Operators

Image segmentation algorithms based on traditional edge detection operators mainly locate the edges of a local area by using a differential operator to identify where the gray value changes abruptly in the image. The general principle is that pixel properties change suddenly across an edge; thus, the abrupt change marks the edge [75], which we can detect by differentiating the gray-level variation in the image, for example at the peak of the first derivative or the zero crossing of the second derivative. For this reason, traditional edge detection operators are also called gradient operators. These algorithms are easy to implement, can accurately reflect the contours of the field [42], and are effective at extracting typical simple linear features [76]; however, they pay too much attention to detail, have poor coherence, perform poorly on images with complex background textures, and are susceptible to noise interference. This makes it difficult to locate field edges, and edge information is omitted; thus, such algorithms often require corresponding postprocessing.
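To make the gradient-operator principle concrete, the following pure-Python sketch applies the Sobel operator to a tiny synthetic two-field image and thresholds the gradient magnitude; the image and the threshold value are illustrative assumptions.

```python
# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude |G| = sqrt(Gx^2 + Gy^2) for each interior pixel."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

# Two "fields" with different grey levels; the boundary runs between columns 2 and 3.
img = [[10] * 3 + [90] * 3 for _ in range(6)]
mag = sobel_magnitude(img)
edges = [[1 if v > 100 else 0 for v in row] for row in mag]
```

The thresholded magnitude fires on both sides of the grey-level step, which is why gradient operators produce thick, fragmented responses that usually need thinning and linking in postprocessing.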
The extraction effect of the simple traditional edge detection operators is not ideal. Chen Yizhe et al. used the Roberts and Sobel operators and found that, although they could detect the edges of the plots well, there was a large amount of noise that was not easily eliminated by filtering [74]. However, with appropriate improvements, the edge detection operator can improve the recognition quality of the field boundary. A. Rydberg et al. proposed a multispectral edge detection algorithm [49]. Barry Watkins et al. applied the Canny and Scharr operators to 28 images spanning four bands (RGB and NIR) and seven sampling dates, summing the results to generate a composite multitemporal edge layer [64]. Yu Wang et al. developed a combination of the ADSS and the Canny edge detection algorithm, replacing the traditional Gaussian scale space in the preprocessing step with an anisotropic diffusion scale space to better retain local characteristics. They then used a multiscale segmentation method for farmland boundary detection, which suppresses false edges at the coarse scale and provides spatially accurate results at the fine scale [38]. Pang Xinhua et al. added an edge closure step based on mathematical morphology after recognition with the Canny operator [40]. According to the qualitative evaluations, the improved edge detection algorithms perform better in flat-area field imagery. In fact, under ideal conditions, the boundary extraction accuracy is high. Ruoxian Li et al.'s proposed algorithm first uses edge operators for edge enhancement, then uses fixed-threshold segmentation combined with image segmentation algorithms based on void removal to perform region segmentation, and finally performs information extraction and vectorization; it achieves an average accuracy of 94.2% [66].

3.1.2. Frequency Domain Characterization Algorithm

Frequency domain feature algorithms mainly use the wavelet transform and related algorithms to identify regular patterns in the image and thereby extract boundary information. The principle is that pattern changes in an image follow certain regularities, and the information in the image can be decomposed into components of different frequencies through frequency domain transformation. Different types of filters are then used to retain the components of interest and discard the rest.
The wavelet transform is a common edge detection method [18]. Wavelet analysis evolved from Fourier analysis; it offers multiresolution analysis and has good approximation characteristics for one-dimensional signals [77]. Isida et al. discussed the identification of rice field edges based on a multiresolution wavelet transform algorithm, and they found that the algorithm is more suitable than difference-of-Gaussian (DoG) filters for recognizing SAR remote sensing images [44]. C. Y. Ji discussed a method of processing multitemporal remote sensing images based on the wavelet transform and achieved a certain field boundary extraction effect [50].
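The frequency-domain idea can be sketched with a single-level Haar analysis: the row-wise detail (high-pass) coefficients respond to vertical grey-level steps such as a field boundary. The toy image is an assumption, and a full wavelet decomposition would also transform the columns and iterate over scales.

```python
def haar_step(v):
    """One Haar analysis step on a length-2n vector: (approximation, detail)."""
    n = len(v) // 2
    approx = [(v[2*i] + v[2*i+1]) / 2 for i in range(n)]
    detail = [(v[2*i] - v[2*i+1]) / 2 for i in range(n)]
    return approx, detail

def horizontal_detail(img):
    """Row-wise Haar detail coefficients: large |value| marks a vertical edge."""
    return [haar_step(row)[1] for row in img]

# Toy image: left field at grey level 10, right field at 90.
img = [[10, 10, 10, 90]] * 4
detail = horizontal_detail(img)
# Every detail row is [0.0, -40.0]: the nonzero coefficient flags the grey-level step.
```

Thresholding the detail coefficients at each scale, then combining scales, is the basic mechanism behind multiresolution wavelet edge detection.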

3.1.3. Simple Threshold Method

The threshold method is one of the oldest edge detection methods. The basic idea is straightforward: classify pixels or objects as belonging to an edge by comparing them against thresholds. The threshold method can be used in conjunction with a variety of other algorithms. Chen Yizhe et al. identified the edges of farmland plots based on the adaptive threshold method, and they found that, although the threshold method is inferior to differential operators in boundary recognition, it has better noise immunity [74]. When the image grayscale ranges overlap or the grayscale differences are not obvious, it is difficult to obtain ideal results with the threshold method; thus, it cannot be directly applied to high-resolution images with large data volumes and high spatial variability.
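A minimal sketch of automatic threshold selection, using the iterative mean-based scheme (one simple form of the adaptive thresholding idea above); the pixel values are illustrative, and the scheme is a general technique rather than the specific method of any cited work.

```python
def iterative_threshold(pixels, eps=0.5):
    """Ridler-Calvard style: alternate between the two class means until T converges."""
    t = sum(pixels) / len(pixels)          # start at the global mean
    while True:
        low = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

# Two grey-level populations, e.g. field interior vs. a bright boundary object.
pixels = [10, 12, 11, 13, 88, 90, 91, 89]
t = iterative_threshold(pixels)
mask = [1 if p > t else 0 for p in pixels]
```

For well-separated populations the threshold settles between the two class means after one or two iterations; overlapping grey-level ranges are exactly the case where this breaks down, as noted above.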

3.1.4. Summary

In general, the actual performance of low-level feature extraction algorithms on simple farmland area images has been satisfactory. Recently, the traditional Canny edge detection, Hough transform, and Suzuki85 contour algorithms were used in the land boundary extraction algorithm developed by Rokgi Hong [72], and the accuracy and completeness of the boundary recognition were both about 80% in a Korean experiment on regular field boundary extraction. Mustafa Turker [43] proposed adding a perceptual grouping step after the Canny operator to identify field block boundaries. The perceptual grouping applies the Gestalt laws of proximity, continuity, symmetry, and closure. The author determined the field block sub-boundaries with the help of geometric shapes, and the overall matching accuracies were 76.2% and 82.6% for SPOT4 and SPOT5 images, respectively. However, such algorithms are generally only suitable for regular, flat plots and are difficult to generalize to complex areas.

3.2. High-Level Feature Extraction Algorithms

High-level feature extraction algorithms consider both the feature information of the farmland boundary and other feature information contained in the imagery, making full use of both, especially the latter. The other features contain rich feature information, such as the spectrum, space, texture, and topology, making up for the shortcomings of the low-level feature extraction algorithms, which use edge information alone. Therefore, some high-level feature extraction algorithms can form a completely closed rather than a fragmented and meaningless land block boundary. That is, compared with low-level feature extraction algorithms, high-level feature extraction algorithms include the influence of the surrounding information on the edge line (not just the information on the edge) on the edge detection results.
Modern high-level feature extraction algorithms are mostly based on image objects, or primitives, for classification [17], which overcomes the "salt and pepper effect" to which traditional pixel-based classification algorithms are prone in high-resolution imagery [71] (in high-resolution imagery, local spatial heterogeneity is higher, so pixels of the same feature class are more likely to be assigned to different categories, producing widespread speckle, or "salt and pepper", noise). High-level feature extraction algorithms are one of the mainstream technologies for high-resolution image extraction [14,78]. The principle is to classify image objects according to the similarity between adjacent cells, for example by merging and dividing cells, comparing spectral heterogeneity against a spectral heterogeneity threshold, and finally forming a target object composed of multiple homogeneous pixels [62]. These algorithms are able to detect and extract farmland boundaries in images of complex areas, such as mountainous areas and paddy fields.
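The merging principle described above can be sketched as seeded region growing with a homogeneity threshold on the running region mean; the seed, image, and threshold are illustrative assumptions, not a reconstruction of any cited algorithm's implementation.

```python
def region_grow(img, seed, max_dev):
    """Grow a region from seed, absorbing 4-connected neighbours whose grey
    value stays within max_dev of the current region mean."""
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]
    frontier = [seed]
    while frontier:
        y, x = frontier.pop()
        for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                mean = total / len(region)
                if abs(img[ny][nx] - mean) <= max_dev:   # homogeneity test
                    region.add((ny, nx))
                    total += img[ny][nx]
                    frontier.append((ny, nx))
    return region

# Two adjacent "fields" with a sharp grey-level jump between columns 1 and 2.
img = [[10, 12, 80, 82],
       [11, 13, 81, 83],
       [10, 12, 80, 82]]
field = region_grow(img, seed=(0, 0), max_dev=5)
# Growth stops at the jump, so the region is exactly the left field (6 pixels).
```

The object's outline then serves as the extracted field boundary; in object-based methods, the heterogeneity threshold plays the role of the segmentation scale discussed below.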
The first step of image segmentation is the most critical. By using various methods to finally determine the optimal segmentation scale, the field boundary can be more accurately extracted [4]. The determination of the optimal segmentation scale is mainly affected by the type of feature, the contrast between the feature and surrounding environment, the internal heterogeneity of the feature, and the image resolution [4,79]. We present the detection algorithms based on their segmentation methods in Figure 4.

3.2.1. Segmentation Based on Edge Detection Operators

With the help of edge detection operator segmentation, we can obtain an extremely high boundary extraction accuracy, and the boundary can reach the pixel level. Combined with the idea of region segmentation, we can obtain a better regional structure and, at the same time, overcome the defects of the missing and difficult-to-close boundaries of traditional edge detection. As early as 1995, Janssen et al. developed a three-stage strategy to extract the field sub-boundaries and crop types, which first integrates the results of the edge detection with the geometric data in the GIS, then classifies them based on the objects, and finally merges them based on the conditions; the strategy achieved a field geometry consistency of 87% [80]. Marina Mueller et al. developed an edge-information-guided region growth algorithm based on an edge detection algorithm to close the gaps within the edges and effectively identify large field boundaries [53]. By using a segmentation algorithm that combines gradient edge detectors with an unsupervised classification of iterative self-organizing data analysis (ISODATA), Rydberg et al. obtained highly consistent plot boundaries, with 83% of the boundary deviating by less than one pixel [4]. Zhang Yanling et al. first used mathematical morphological operators to segment a remote sensing image, then used the regional seed growth method to extract the complete farmland, and finally applied an opening operation to smooth, track, and refine the boundary to obtain the complete farmland boundary [34]. Liao Dan used the Sobel operator for multiscale segmentation and, based on the normalized vegetation index, spectrum, texture, and other information, conducted field boundary identification of paddy fields and dry fields in plain, hilly, and mountainous terrain areas. The field boundary recognition rate in the plain and hilly areas was about 95%, and it was 85% in the mountainous areas [55].
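To make the edge-operator stage concrete, here is a minimal pure-NumPy Sobel sketch (real pipelines would use optimized library implementations and combine the edge map with region growing, as in the studies above; the toy image and threshold are illustrative):

```python
import numpy as np

def sobel_edges(img, thresh):
    """Detect edge pixels as locations where the Sobel gradient
    magnitude exceeds `thresh` (border pixels are left as non-edges)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            gx[r, c] = np.sum(win * kx)
            gy[r, c] = np.sum(win * ky)
    mag = np.hypot(gx, gy)  # gradient magnitude
    return mag > thresh

# A vertical field boundary: dark plot on the left, bright on the right.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img, thresh=1.0)
```

The step edge between columns 2 and 3 produces strong horizontal gradients on both sides of the boundary, while the flat plot interiors produce none.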
In 2016, Yan et al. improved a complex farmland edge recognition algorithm, which had been developed over 14 years and used throughout the United States, achieving an overall per-pixel farmland classification accuracy of 92.7%, and overall farmland producer and user accuracies of 93.7% and 94.9%, respectively [47]. The algorithm consists of five steps: (1) generation of crop probability plots describing the farmland edge linearity and edge significance; (2) application of a geometric active contour, variable-subregion segmentation algorithm to the probability plots to obtain candidate farmland objects; (3) application of a watershed algorithm to decompose the connected candidate field objects into isolated fields; (4) the use of geometry-based algorithms to detect and correlate the fields; (5) the use of two-pixel dilation and single-pixel erosion morphology filters on the plots. However, because the algorithm is based on remote sensing images with a resolution of 30 m, its recognition accuracy needs to be improved.
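The final morphology step, a two-pixel dilation followed by a single-pixel erosion, can be sketched with simple binary morphology (an illustrative NumPy implementation, not the authors' code; the toy fragment pattern is hypothetical):

```python
import numpy as np

def binary_dilate(mask, iters=1):
    """4-connected binary dilation: grow the foreground by one
    pixel per iteration."""
    out = mask.copy()
    for _ in range(iters):
        shifted = out.copy()
        shifted[1:, :] |= out[:-1, :]
        shifted[:-1, :] |= out[1:, :]
        shifted[:, 1:] |= out[:, :-1]
        shifted[:, :-1] |= out[:, 1:]
        out = shifted
    return out

def binary_erode(mask, iters=1):
    """4-connected binary erosion via dilation of the background."""
    return ~binary_dilate(~mask, iters)

# Field fragments at columns 3, 5, 7 with one-pixel gaps at 4 and 6;
# dilating by two pixels and eroding by one fills the gaps.
row = np.zeros((1, 11), dtype=bool)
row[0, [3, 5, 7]] = True
closed = binary_erode(binary_dilate(row, 2), 1)
```

Because the dilation count exceeds the erosion count, the result is one pixel wider than a balanced morphological closing, mirroring the asymmetry described in step (5).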

3.2.2. Watershed Algorithm

The watershed algorithm is a mathematical morphological segmentation algorithm that is based on topology theory [81]. The idea is to simulate the process of water immersion. The pixel gray value in the image indicates the altitude of the point, the water starts from each local minimum point and flows around, and the water boundary that corresponds to the different minimum points forms a watershed. The watershed algorithm can obtain accurate boundary results for single-pixel localization; however, it is prone to over-segmentation. Because the pure watershed algorithm is first used to identify the field boundary and then to create the object area, in this paper, we treat it as an edge-based algorithm; however, in its later stages it acts as a region-based algorithm, with operations such as handling over-segmentation and merging regions. Whether the algorithm is an edge-based or region-based algorithm is still controversial [14,24,82,83,84,85]. However, algorithms that are based on mathematical morphology generally perform better than traditional edge detection algorithms, and the boundary recognition results of watershed algorithms are also better than those of traditional differential edge detection algorithms [86].
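The immersion idea can be illustrated on a 1-D grey profile (a toy sketch only; real applications use 2-D marker-controlled implementations such as those available in standard image processing libraries):

```python
import numpy as np

def watershed_1d(profile):
    """Toy immersion watershed on a 1-D grey profile: each pixel
    flows downhill to a local minimum; watershed lines are placed
    where the basin label changes."""
    n = len(profile)

    def descend(i):
        # Follow the steepest descent until a local minimum is reached.
        while True:
            left = profile[i - 1] if i > 0 else np.inf
            right = profile[i + 1] if i < n - 1 else np.inf
            lowest = min(left, profile[i], right)
            if lowest == profile[i]:
                return i  # local minimum: this basin's seed
            i = i - 1 if left == lowest else i + 1

    basins = [descend(i) for i in range(n)]
    # Watershed lines sit between pixels draining to different minima.
    lines = [i for i in range(1, n) if basins[i] != basins[i - 1]]
    return basins, lines

# Two catchment basins (minima at indices 1 and 5) separated by a
# ridge at index 3.
profile = np.array([2, 1, 2, 5, 2, 1, 2], dtype=float)
basins, lines = watershed_1d(profile)
```

The over-segmentation problem mentioned above arises because every local minimum, including minima caused by noise, seeds its own basin; marker-controlled variants suppress this by flooding only from chosen markers.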
In 2004, Butenuth et al. used a watershed-based algorithm that employed line detection algorithms and prior geographic information system information for the field boundary extraction operations [25]. In 2010, Deren Li et al. developed a watershed algorithm based on edge embedding markers, and the edge information detected by the edge detector with the embedded confidence could assist in the detection of weakly bounded objects and improve the accuracy of the object boundary positions. The algorithm finally obtained a pixel-level edge accuracy of about 77.6%, which indicated an excellent effect [36]. In 2014, Chen Tieqiao first used watershed segmentation and then a field clustering algorithm, and the extraction accuracy of the fields in hilly areas reached 73%, which was better than the results of the mean-shift segmentation [39]. Recently, Watkins B. et al. found that the watershed algorithm had better segmentation results than multithreshold segmentation and multiresolution segmentation algorithms. The overall accuracy (OA) of the multitemporal segmentation results combined with the watershed algorithm and Canny operator was 92.9% [64].

3.2.3. Other Segmentation Algorithms

In addition to the above two types of algorithms, there are other algorithms, which are presented in Table 8.

3.2.4. Summary

High-level feature extraction algorithms, which are rich in variety, are popular among farmland boundary extraction algorithms, among which segmentation algorithms combined with edge detection operators and watershed segmentation algorithms are the most popular. High-level feature extraction algorithms exhibit good extraction performance and can be used for a variety of terrains. However, many algorithms involve complex parameter selection, the optimal parameters differ across target regions, transferability is poor, and recognition accuracy in extremely complex regions is still not ideal. Only a few algorithms are currently being used instead of manual visual interpretation methods.

3.3. The Visual Hierarchy Extraction Algorithm

High-level feature boundary extraction algorithms have high recognition accuracy and a certain practical performance; however, they still have many shortcomings. For example, these algorithms lack robustness when extracting boundaries from complex images and generalize poorly across different images. Extracting contours from complex natural scenes is still difficult with the existing computer vision systems; however, human visual systems can quickly and accurately complete this task [90], and boundary detection algorithms that are based on vision principles are a potential breakthrough in contour extraction.
According to the principles of the visual receptive field, these boundary detection algorithms mix the boundary information and the internal information of the object in the receptive field at different scales to understand the visual cues, which offers certain advantages in the generalization of boundary detection in complex scenes. For example, Yang K. F. et al. proposed the DoG (difference of Gaussians) model derived from the classical–nonclassical receptive fields of visual nerve cells [90], George Azzopardi et al. [91] proposed the combination of receptive fields, and some scholars have studied the neural coding mode of the visual system with the help of convolutional neural network models [92]. In the field of farmland boundary extraction, the only visual-hierarchy-level extraction algorithms applied so far are convolutional network algorithms.
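The DoG receptive field model mentioned above can be written down directly (a generic difference-of-Gaussians kernel; the kernel size and sigmas below are illustrative, not the parameters used in [90]):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalised 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def dog_kernel(size, sigma_center, sigma_surround):
    """Difference of Gaussians: an excitatory centre minus an
    inhibitory surround, a classic model of the centre-surround
    visual receptive field."""
    return (gaussian_kernel(size, sigma_center)
            - gaussian_kernel(size, sigma_surround))

dog = dog_kernel(9, sigma_center=1.0, sigma_surround=2.0)
```

Convolving an image with this kernel responds strongly at grey-level transitions (boundaries) and weakly in homogeneous plot interiors, which is why such centre-surround filters serve as a biologically motivated edge detector.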
A convolutional network is a multilayer feedforward neural network model with powerful feature learning and feature expression capabilities, and it can be used alone or in combination for field boundary recognition, with good potential. A fully convolutional network (FCN) is made up of several processing layers, including convolution, pooling, dropout, batch normalization, and nonlinearities, which are trained end-to-end to extract semantic information. Khairiya Mudrik Masoud et al. discussed the use of a multiple-dilation FCN (MD-FCN) algorithm to predict field boundaries pixel by pixel in 10 m resolution remote sensing imagery. Test data from five TSs (tiles for testing the network) show an average precision of 0.66, an average recall of 0.61, and an average F-score of 0.63. The proposed super-resolution contour detection network (SRC-Net) can increase the image resolution from 10 m to 5 m and improve the classification effect [3]. By combining convolutional networks with other algorithms, high accuracy and robustness can be obtained. C. Persello et al. used a fully convolutional network (FCN) to generate fragmentation contours, then used the OWT-UCM process to obtain hierarchical segmentation, and finally used the SCG algorithm to merge regions at different levels; the gPb contour algorithm can be incorporated into the process when necessary. The accuracies of the algorithm within 10 pixels of the field boundary deviation in three experimental images were 0.72, 0.706, and 0.790 [5].
In addition to the FCN algorithm, the CNN algorithm can also be used to extract the farmland boundary. Xirong Li et al. proposed a deep boundary combination (DBC) algorithm. They first used a deep convolutional neural network (CNN) to obtain the edge probability map of the farmland image, then used the OWT-UCM algorithm to divide the edge probability map into a closed-boundary hierarchy tree, and finally selected an appropriate threshold (k) to obtain the hierarchy tree image. According to the experiments, the algorithm effectively extracted the field boundary, and the accuracy and integrity rates of the farmland boundaries detected in the two images were about 90%. According to the comparison, the correctness, completeness, and recognition quality of the algorithm are higher than those of the Canny-based recognition algorithm, and its correctness and quality are higher than those of the MCG-based algorithm [59].
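The core idea of mapping an image to a per-pixel edge probability map can be caricatured with a single hand-set convolution layer plus a sigmoid (a didactic stand-in only; MD-FCN, DBC, and similar networks learn many such layers end-to-end from labeled data):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            out[r, c] = np.sum(img[r:r + 3, c:c + 3] * kernel)
    return out

def edge_probability(img):
    """One hand-set conv layer plus a sigmoid: a stand-in for the
    final stage of an FCN that maps an image to per-pixel boundary
    probabilities. Real networks learn many such layers from data."""
    laplacian = np.array([[0, 1, 0],
                          [1, -4, 1],
                          [0, 1, 0]], dtype=float)
    response = conv2d(img, laplacian)
    return 1.0 / (1.0 + np.exp(-np.abs(response)))  # sigmoid of |response|

img = np.zeros((6, 6))
img[:, 3:] = 1.0                  # step edge between columns 2 and 3
prob = edge_probability(img)      # per-pixel boundary probability map
```

In this toy, edge pixels score above the flat-interior baseline of 0.5; a trained network would instead push interior probabilities toward 0 and boundary probabilities toward 1.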
Although convolutional neural network algorithms have produced good recognition results, several aspects hinder their generalization: training a well-performing model often takes a lot of time and computing power, and it requires a large number of labeled cultivated land sample datasets, which are difficult to obtain in a short time. In addition, convolutional neural network models usually lack interpretability, as they are only black-box simulations, which makes it difficult to interpret suitable features in the semantic sense.
At present, remote sensing field block boundary extraction algorithms based on neural network technology are still slightly immature, and the accuracy and applicability of the existing models need to be improved. In fact, many high-level feature extraction algorithms perform better than visual hierarchy extraction algorithms in terms of accuracy. Low-level feature extraction algorithms have no obvious accuracy advantage over visual hierarchy extraction algorithms, but their computational complexity is smaller. However, with the continuous development of artificial intelligence technology, this field is worthy of more in-depth exploration and will experience more substantial development [16].

3.4. Boundary Object Extraction Algorithm

The mainstream farmland boundary is defined as the connection line from one farmland plot to another, or from one farmland plot to a non-farmland plot [3]. However, there will be non-field boundary objects in the images, such as footpaths, roads, ditches, etc. Therefore, field boundary object extraction is another method of identification. In the field boundary object extraction method, the object of identification and extraction is the boundary object itself rather than the traditional agricultural field boundary (AFB). At the same time, the field boundary objects may help with the identification and extraction of the field boundaries. Both Song Jiantao and Chen Jie point out that the use of roads or footpaths between plots can improve the accuracy of farmland extraction [35].
At present, there is not much research in this area because the method requires higher-resolution imagery, generally at the submeter level. For example, the width of a footpath in a field is 2–4 pixels at a resolution of 0.3 m [6], and images with a 0.3 m resolution were extremely rare before 2015. In addition, field boundary objects other than roads have weak linear characteristics and appear against a complex natural background, so automatic extraction is difficult to achieve and the requirements are high. Moreover, field boundary object extraction may serve other needs, such as field road extraction, which is relevant to autonomous driving and has higher requirements. We will now briefly introduce the research status of the identification and extraction of common field boundary objects, such as footpaths, roads, and ditches.
The extraction of footpaths from regular plots in flat areas is more effective than from irregular or uneven areas. Zhang Mingjie studied the texture and edge characteristics of footpaths. For the edge features, Zhang Mingjie proposed an algorithm that first uses Canny edge detection, then postprocesses the edges by connecting breakpoints, and finally applies a high length threshold to extract the footpath; the comprehensive footpath extraction quality reached 96%. For the texture features, Zhang Mingjie manually selected seed points, constructed a directional texture template, calculated the feature values, and then searched for and fitted the candidate points of the footpath; the recognition accuracy of the footpath reached 98% [6]. Cai Daoqing et al. used the support vector machine algorithm on images shot by ground equipment to achieve the recognition of a paddy field area with an F1 value of 90.1% [68].
In terms of Tiankan extraction (Tiankan is a Chinese name. Tiankan refers to the bulge in the field that is higher than the field, which is mainly used for demarcation, water storage, and pedestrian walking. The geometry, texture, and grayscale features of Tiankans and footpaths are relatively similar.), the Tiankan can be approximated as a footpath with a width greater than 1 m, according to China’s “Technical regulation of the third nationwide land survey”. The width of a Tiankan is larger than those of the footpaths in fields, the identification is less difficult, and the relevant research is more in-depth; however, Tiankans are still more difficult to extract than roads [93]. Tang Lei segmented the images from the Gaofen-2 satellite (GF-2) through multiscale layering, and then extracted them with four characteristic indexes: the angular second-order moment, entropy, contrast, and correlation indexes. The extraction accuracy of the Tiankan was 68.83%, and the kappa coefficient was 0.61 [51]. Yang Yunhui studied the maximum likelihood method, support vector machine, and unsupervised algorithm and found that the rule-based object-oriented method was the best; the extraction accuracy of the Tiankan was 86.42% [60]. Based on the object-oriented algorithm, Liu Changjuan et al. identified Tiankans in mountainous and hilly areas according to the spectral, geometric, and textural features, and the coincidence rate between the measured and actual results was 94.74% [45]. Wang et al. used a similar algorithm to extract the Tiankans of terraces, and the overlap rates of the QuickBird images (resolution: 0.6 m) and SPOT5 images (resolution: 2.5 m) were 92.33% and 88.03%, respectively [41]. However, so far, manual delineation is still the most accurate method. Zhao Yandong et al. manually outlined a Tiankan based on high-resolution images (the result was the predicted value), and they obtained a correlation coefficient (R2) of 0.977 compared with the results measured in the field (the actual values) [10].
In terms of field road recognition, because most of the research is aimed at the automation of agricultural machinery, the images are ground-derived, and a good recognition effect has been achieved [65,94,95,96]. Few studies exist in which the authors use remote sensing images as the image sources; however, researchers (e.g., Ren et al. [52]) have achieved good results using GF-2-based remote sensing images to identify field roads, and the recognition rate has reached 90%.
Researchers have performed many studies on ditch identification; however, mainstream research has focused on identifying larger river ditches rather than the “hairy canals” (very fine little ditches) on farmlands. Water-filled ditches contrast substantially with the surrounding plots, which facilitates extraction. However, a “hairy canal” may be temporarily dry and is often covered by plants and weeds in the imagery, which results in unsatisfactory recognition accuracy. Han Wenting et al. used the object-oriented method to extract a field canal system, and the overall extraction accuracy was 77.8% [69]. Temesgen Gashaw et al. used Feature Analyst software, which allows users to select the width, linear feature shape attributes, input reflectance, and discrete, texture, and elevation bands, for the identification; the results contained a substantial number of false positives, and the classification accuracy for plots with a large number of ditches was only 59% [97]. Ayana E. K. used four steps (image segmentation, morphological processing, binarization, and Hough transform) to detect field trenches, obtaining an overall classification accuracy of 87.6% [98].
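The Hough transform step used in the last of these pipelines can be sketched as a (rho, theta) vote accumulator (a minimal illustrative implementation, not the cited author's code; the synthetic ditch map and grid sizes are hypothetical):

```python
import numpy as np

def hough_line(binary, n_theta=180):
    """Accumulate votes in (rho, theta) space for every foreground
    pixel; the strongest cell corresponds to the dominant line
    rho = x*cos(theta) + y*sin(theta)."""
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        for t, theta in enumerate(thetas):
            rho = int(round(x * np.cos(theta) + y * np.sin(theta)))
            acc[rho + diag, t] += 1  # one vote per (pixel, angle)
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r_idx - diag, thetas[t_idx]

# A straight vertical "ditch" at x = 4 in a 10x10 binary map.
binary = np.zeros((10, 10), dtype=bool)
binary[:, 4] = True
rho, theta = hough_line(binary)
```

Because all ten ditch pixels are collinear, they vote into the same accumulator cell, so the returned (rho, theta) line passes within half a pixel of every ditch pixel.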
Due to the variety of boundary objects and the substantial differences between the different types of boundary objects, the search for related studies is complex, and comparisons with these articles are vague. We present a summary of the above findings in Table 9.
The algorithms that identify boundary objects may have better generalization abilities than the mainstream boundary extraction algorithms. The extraction effect of the mainstream boundary extraction algorithms is affected by the crop species, which is not conducive to generalization across different crops. Many crops are planted in small areas and do not warrant the development of targeted extraction algorithms. Moreover, the traits of many crops differ considerably from common research objects, such as rice, wheat, and corn, which may produce unsatisfactory results with the existing extraction algorithms. Extraction algorithms that are based on boundary objects are instead mainly affected by the footpaths, roads, etc., themselves; as long as common footpaths and roads separate the crops, the crop type does not affect the extraction.
We use chrysanthemum boundaries as an example to illustrate the advantages of boundary object extraction in generalization. Chrysanthemums have important ornamental, medicinal [99], and cultural [100] values. There are many varieties of chrysanthemums, and there are more than 3000 in China alone [101], with several of the more important varieties (e.g., Hang Bai chrysanthemum) planted in areas of 1000 km2 [102]. For such planting areas, the efficiency of the manual boundary marking method is much lower than the efficiency of the automated boundary extraction method, and thus there is a demand for the latter. However, there are no reports on the automatic boundary extraction of chrysanthemum field blocks because it is not worth developing a special boundary extraction algorithm for areas of 1000 km2. Moreover, chrysanthemums are quite different from crops such as rice, wheat, and corn, and it is difficult to achieve good results in chrysanthemum extraction with the algorithms that are used for these crops. However, the boundary object extraction method is mainly affected by the type of boundary object rather than the chrysanthemum, and chrysanthemum boundary objects are mainly footpaths and roads, which is the ideal extraction type in the boundary object extraction method. The use of this method to extract chrysanthemum boundaries is promising.

3.5. Summary of Extraction Algorithms

The four types of automated boundary extraction algorithms have their own characteristics. Low-level feature boundary extraction algorithms extract the boundaries according to the boundary linear features of the objects in the image, which is the basic technology of boundary extraction and which we can use to effectively extract the boundaries of farmland images of flat and regular areas. High-level feature boundary extraction algorithms fuse the internal information of the object in the image so that the extraction of the boundary line is complete and closed, which is of practical importance. At the same time, the internal information of the object can assist in locating the boundary information of the object so that the algorithm can process more complex terrain images, such as terraces. The boundary detection algorithm based on the principle of vision performs boundary extraction based on visual features, and it is expected to make breakthroughs in the recognition accuracy of complex areas and the generalization of different regions. The boundary object extraction algorithm is a new idea in field boundary extraction, and it has a substantial amount of potential in terms of its generalization ability.
We present the extraction results of some of the algorithms, which will help readers to understand their accuracies, in Table 10.
Due to the different image sources, types of plots, geographical environments, crop types, and growth stages, the specific criteria for the experimental evaluations are inconsistent, and we could not simply rank the algorithms according to their results. In general, the accuracy of manual annotation is much higher than that of automated annotation, and the accuracy of high-level algorithms is not clearly superior to that of low-level algorithms, because low-level algorithms are used to extract the boundaries of more regular and flat fields, which is less difficult. Moreover, the early low-level algorithms lacked systematic quantitative evaluation standards; thus, the low-level feature extraction algorithms with quantitative evaluations are often the later, more mature algorithms. In contrast, even early high-level feature extraction algorithms had quantitative evaluation criteria, and the extraction accuracy of these early algorithms was low. Therefore, high-level feature extraction algorithms do not show a significant accuracy advantage over low-level feature extraction algorithms in such comparisons. However, high-level algorithms are the mainstream technology, and they have an overwhelming advantage over low-level algorithms in the processing of high-resolution images and images of complex terrains, which has been confirmed by researchers in studies that involve comparisons of different algorithms [59,64].
In terms of use, the advantages of low-level feature extraction algorithms are that they are relatively simple, mature, easy to understand, and have high computational efficiency. Low-level feature extraction algorithms work well for images with good contrast between the object and background and can maintain correspondence with the edge of the object. However, this type of algorithm does not work well for images with smooth transitions, low contrast, and high noise values, and is difficult to use on high-resolution imagery and the imagery of complex areas.
Compared with low-level feature extraction algorithms, high-level feature extraction algorithms are generally insensitive to noise, and they can use the boundary information and the internal information of objects simultaneously; thus, they can be applied to higher-resolution images, and they often achieve better accuracy than low-level feature extraction algorithms. However, the effectiveness of many of these algorithms depends on choosing suitable parameters, which is a challenge for non-specialists. Moreover, they are more complex and time-consuming, and they incur higher computational costs.
Biological visual hierarchy extraction algorithms have better generalization, which makes them suitable for more complex terrain areas. These algorithms have a development potential that cannot be ignored; however, interpreting the appropriate features and rules that are obtained by them is difficult. Sometimes, different image objects, such as water and shadows, have similar spectral values, which cause “semantic gap” problems. The most critical factor is that the biological vision hierarchy extraction algorithm requires a large number of datasets and calculations, as well as the constant adjustment of the parameters.
Due to the different terrains, resolutions, and requirements of the imagery, there is currently no general algorithm to recommend [18]. To facilitate readers in the selection of the appropriate algorithm according to the image content and resolution, we present a flowchart of the recommended algorithm category selection process as a reference in Figure 5. It is worth noting that when selecting an optimal algorithm, readers should consider not only algorithm accuracy but also computational complexity and other factors.
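As a purely illustrative companion to this selection process, the considerations discussed in this section (resolution, terrain complexity, dataset availability) might be encoded as simple rules; the thresholds and rule order below are placeholder assumptions of ours, not a reproduction of Figure 5:

```python
def recommend_category(resolution_m, complex_terrain, has_labeled_dataset):
    """Illustrative (hypothetical) decision rules distilled from the
    trade-offs in Section 3.5; the thresholds are placeholders, not
    the paper's Figure 5 flowchart."""
    if has_labeled_dataset and complex_terrain:
        # Convolutional approaches need large labeled datasets but
        # generalize best on complex scenes.
        return "visual hierarchy (convolutional) algorithm"
    if resolution_m <= 1.0 and not complex_terrain:
        # Submeter imagery makes boundary objects (footpaths, roads)
        # visible enough to extract directly.
        return "boundary object extraction"
    if complex_terrain or resolution_m <= 5.0:
        # Object-based methods handle high resolution and terraces.
        return "high-level feature extraction"
    # Coarse imagery of flat, regular plots: simple operators suffice.
    return "low-level feature extraction"
```

For example, coarse 30 m imagery of flat plots maps to the low-level category, while 2 m imagery of hilly terrain maps to the high-level category.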
For readers who have direct application needs and do not want to design their own algorithms, there are currently only a few software applications or products that can be put to use. We present some of the available algorithms that can be directly used in Table 11.

4. Influencing Factors

The influencing factors on the farmland boundary extraction effect can be divided into technical influencing factors and natural influencing factors. The technical factors include two categories, images and algorithms, which are factors that humans can design and change and which are also the focus of the current research. The natural factors, which include crops, climate, topography, etc., also have an impact on the extraction results. Researchers do not study the natural factors as much as the technical factors because the former are generally fixed in experiments and often cannot be selected at all. However, the influence of natural factors on the extraction process also needs to be studied because such research will help with the evaluation of the implementation difficulties of boundary extraction projects and the expected extraction effects, as well as helping with the design of the algorithm in a targeted manner. In this section, we analyze the influence of the technical and natural factors on the extraction results so that users with boundary extraction needs can reasonably obtain or purchase images, design or select algorithms, and evaluate in advance the resistance that the natural factors may produce.

4.1. Technical Factors

4.1.1. Remote Sensing Imagery

Several properties of the image affect the extraction results. In general, the image resolution, image band, and multiphase imagery all have impacts on the extraction results. Among them, the impact of the image resolution is the most critical. The improvement in the image resolution helps with the design and extraction of algorithms. The high image resolution not only makes the specific location of the identified boundary more accurate but also allows for the identification of subfield massifs (small plots inside large complete fields) [80]. In addition, the increase in image resolution has allowed for a decrease in extraction accuracy requirements to some extent [37]. For example, suppose that a boundary buffer of 2 m (i.e., the allowable horizontal offset distance between the boundary extracted by the algorithm and the real boundary) [73] corresponds to ten pixels with a resolution of 0.2 m, or four pixels with a resolution of 0.5 m; for higher-resolution images, the algorithm allows for more pixel shifts.
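The buffer example above is simple arithmetic and can be sketched for clarity (the function name is our own):

```python
def buffer_in_pixels(buffer_m, resolution_m):
    """Number of pixels corresponding to a boundary buffer, i.e. the
    allowable horizontal offset between the extracted boundary and
    the real boundary, expressed in pixels at a given resolution."""
    return buffer_m / resolution_m

# The example from the text: a 2 m buffer is ten pixels at 0.2 m
# resolution but only four pixels at 0.5 m resolution, so
# higher-resolution imagery tolerates more pixel shift.
tolerance_fine = buffer_in_pixels(2, 0.2)    # 0.2 m imagery
tolerance_coarse = buffer_in_pixels(2, 0.5)  # 0.5 m imagery
```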
In addition to image resolution, image bands and multitemporal imagery also have certain impacts on the extraction. All three factors involve tradeoffs in the selection of the shooting platform and sensor.

Choice of Platform

The early remote sensing images were mainly space images that relied on satellite platforms. Nowadays, there are an increasing number of aerial images that rely not only on UAV platforms but also on large aircraft platforms. In Figure 6, we present the trends in the numbers of the two types of remote sensing images over the years (the final count is small because the earlier counts each cover a five-year statistical range, whereas the last covers only 2021 to the present). Since 2015, there has been substantial development of aerial imaging platforms, which is mainly due to the rapidly increasing popularity of drones. Therefore, presently, the two mainstream platforms that can be selected are satellite platforms and unmanned aerial vehicle platforms.
Because the type is not stated for some images, the sum of the aerial and aerospace images is less than or equal to all the images.
Compared with space imaging platforms, the most important advantage of aerial imaging platforms is the resolution. We present the numbers of images of different resolutions of aerial imagery and aerospace imagery in the literature in Figure 7. The resolutions of the aerial images are generally at the decimeter level or lower, while only about one-third of the space images are at the submeter level.
Drones are a fast, low-cost, and flexible platform for image acquisition, especially when it comes to high-resolution images. In addition, users can change the sensor and altitude to obtain the band [33] and resolution [51,59] that they require to meet their individual needs. The disadvantages of drone platforms include payload limitations, short flight durations due to batteries, uncertain or restricted airspace rules, and the time-consuming collection of large amounts of data. The first two of these drawbacks may be overcome as technology advances, and the third may be resolved as the relevant provisions are improved. In general, the development of UAV platforms still has considerable potential, especially in terms of fast data capture and high accuracy. Drones are even thought to have the potential to revolutionize remote sensing and its application areas, just as the advent of geographic information systems (GISs) did two decades ago [103].
Satellites are a more traditional image acquisition platform. Although there are disadvantages in terms of image resolution, satellites can easily provide periodic image data and a wide range of images, which makes them suitable for researchers to develop multitemporal image algorithms [47], or to select the best [48] images for boundary extraction. In addition, the acquisition of satellite images does not require researchers to enter the target area to shoot; thus, it is more convenient to obtain satellite images for long-distance research areas.
We present the main features of the two platforms in Table 12.

Choice of Sensor

One of the effects of the development of sensor technology is the increased resolution of the images. The higher the spatial resolution of the images, the more accurate the extraction results [41]. However, for drone platforms, the effect of the sensors on the image resolution is not important.
Another impact of the development of sensor technology is that researchers can obtain image information in multiple bands. Early images often had only one band [70]; now, researchers use as many as 12 bands of image information [3]. Different bands may affect the performance of the algorithm. For example, Liu Xin [63] found that, when extracting edge information with the Canny edge operator applied to the three RGB channels, forest land and fields with crop cover are more suitable for segmentation in the G channel, while fields without crop cover are more suitable for processing in the R channel. In addition to the mainstream RGB visible-light bands, many researchers have used other bands, such as infrared [3,46,48], and the choice of band affects the extraction results. Different bands can also help with image preprocessing; the blue band, for example, provides an additional criterion for aerosol detection [46].
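The kind of per-band comparison reported by Liu Xin [63] can be illustrated with a minimal sketch. Here, a Sobel gradient stands in for the full Canny operator, the RGB image is synthetic, and the summed edge magnitude used as a band-selection score is our own simplifying assumption, not the method from [63]:

```python
import numpy as np

def sobel_magnitude(channel):
    """Gradient magnitude via 3 x 3 Sobel kernels (interior pixels only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = channel.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = channel[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

# Synthetic RGB image: a field edge visible only in the green channel.
img = np.zeros((16, 16, 3))
img[:, 8:, 1] = 1.0   # green step edge at column 8
img[:, :, 0] = 0.5    # flat red channel: no edge to detect

edge_strength = {band: sobel_magnitude(img[:, :, k]).sum()
                 for k, band in enumerate("RGB")}
best_band = max(edge_strength, key=edge_strength.get)
print(best_band)  # 'G' for this synthetic scene
```

For a real scene, the band with the strongest (or cleanest) edge response would similarly be preferred for segmentation.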
There is a lack of direct research on band selection in this area; however, multiband imagery tends to be advantageous over single-band imagery. For the sensors of satellite platforms, resolution is also an important factor to consider.

4.1.2. Algorithms

The algorithm is the core factor that affects farmland boundary extraction, and the mechanism by which it influences the extraction results is complicated. In terms of the fineness of the extraction, the algorithm needs to be adapted to higher-resolution images. In terms of image complexity, the algorithm needs to capture the features well. From the perspective of feasibility, the computational load of the algorithm needs to be within the tolerances of the hardware and the available time.

Adaptability to Image Resolution

High-resolution images often have a positive effect on extraction accuracy, as we explained in Section 4.1.1. However, higher resolution is not always better: a resolution that is too high may render the algorithm model unusable, because the performance of the same algorithm varies greatly across images of different resolutions. With the increase in the number of unmanned aerial vehicle platforms, the available image resolutions now span from centimeters to tens of meters, and researchers can, in a sense, select the resolution of the image arbitrarily. At this point, an algorithm's inability to adapt to the resolution becomes a limiting factor in image selection.
For satellite remote sensing images with meter-level or coarser resolutions, pixel-based algorithms are sufficient to obtain good extraction results [43]. However, for higher-resolution images, the “salt and pepper” effect occurs [71]. At this point, object-based algorithms can be used instead of pixel-based algorithms to fully mine the information of submeter- and even decimeter-level high-resolution images. The existing object-based algorithms are best suited to resolutions of 20 cm [33] or 30 cm [30]; no matching algorithmic approach yet exists for the centimeter-level images that drones can capture. In short, the rule of “the higher the resolution, the higher the recognition accuracy” does not hold at centimeter-level resolutions.
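As an illustration of why object- or region-level reasoning helps at high resolution, the sketch below applies a simple majority (mode) filter that removes isolated “salt and pepper” pixels from a two-field label map while leaving the field boundary in place. The filter, window size, and toy label map are our own illustrative assumptions, not a specific method from the cited studies:

```python
import numpy as np

def majority_filter(labels, k=3):
    """Replace each pixel label by the most frequent label in its
    k x k neighborhood - a minimal object-level smoothing of noisy
    pixel-based classifications."""
    pad = k // 2
    padded = np.pad(labels, pad, mode="edge")
    out = np.empty_like(labels)
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + k, j:j + k].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[i, j] = vals[counts.argmax()]
    return out

# Two fields split at column 5, with isolated "salt and pepper" errors.
labels = np.zeros((10, 10), dtype=int)
labels[:, 5:] = 1
labels[2, 2] = 1   # noise inside field 0
labels[7, 8] = 0   # noise inside field 1

clean = majority_filter(labels)
# Isolated errors are removed; the boundary between columns 4 and 5 survives.
```

Object-based algorithms go further by segmenting the image into regions before classification, but the same intuition applies: decisions are made over groups of pixels rather than pixels in isolation.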

Ability to Capture Complex Features

The algorithm completes the extraction task by mining various features in the image. For example, a farmland boundary has both gradient characteristics and linear features: traditional boundary detection operators detect the gradient features, while frequency-domain operators detect the linear features. The locations where these features are detected are the field boundaries. Each feature can be detected by multiple algorithms; for example, the gradient features can be detected with the Canny and Sobel boundary extraction operators. Generally, the stronger an algorithm's ability to extract features, and the wider the range of feature types it covers, the better its extraction of complex field boundaries. We present some common features and the recommended situations for their use in Table 13.

Cost of Computation

Researchers discussed techniques for reducing the cost of computation in the early literature [70]. However, with the development of hardware, most algorithms today are designed and selected without considering computation cost. We present the calculation speeds of some of the algorithms in the literature in Table 14.
The overall calculation speed is positively correlated with the year, most likely because of hardware development. Computers in the early literature had only 1 GB of memory [70], whereas many recent studies have used 8 GB [67,68]. CPUs have evolved similarly [67,70]. Programming languages, however, have not changed much: mainly C [70] or C++ [43,68,74] is used.
Hardware has improved to the point that computation cost need not be considered for regular-sized images and algorithms. Visual-feature-based approaches, however, require special attention to computation cost. For example, the MD-FCN method used by Khairiya Mudrik Masoud et al. requires 16 GB of random-access memory [3]; even with such a high hardware configuration, the processing time remains a major obstacle to the method's adoption.

4.2. Natural Influencing Factors

4.2.1. Crop Factors

Researchers discuss crop factors in terms of two aspects: crop type and crop growth state.
In terms of the crop type, there are currently no studies directly addressing the impact of the crop type on boundary extraction, and it is controversial whether the crop type has a non-negligible impact. Some scholars have pointed out that the crop type has little influence [50]; however, others have noted that the selection of some images [63] and the design of some algorithms [37] do consider the crop species, which is often correlated with the local topography and climate. Differences in images, algorithms, and evaluations between studies make them difficult to compare with each other. However, by summarizing the research objects, we can roughly enumerate the crops and the associated literature in Table 15. Each crop that appears in the table is included in more than two relevant studies, which shows that boundary extraction for these crops is feasible, or at least that relevant references are available in the literature.
In addition, sorghum [47], peanuts [64], pecans [64], licorice [47], sunflowers [46], grapes [70], and cotton [47] each appear once. In general, boundary extraction is feasible for the most common crops, and we have not found any crop species for which boundary extraction is difficult. A specific influence of the crop species on the algorithm may exist; however, no special attention is required when performing boundary extraction.
The growth stage of the crop has a substantial impact on the algorithm, and the effect is complex. For a texture-feature-based algorithm [70], for example, the texture features at the seedling stage are completely different from those at the mature stage, and the latter are more conducive to algorithm processing. However, from the perspective of imagery, vigorously growing crops obscure the field boundaries [55], which makes boundary extraction more difficult. At the same time, we cannot obtain ideal extraction results without vigorous crop growth: crops that do not completely cover the bare soil lead to missing separations [32], which is another difficulty of boundary extraction. Because different growth periods present different difficulties, there is no universal answer to the question of the best image shooting time; it depends mainly on the algorithm being used. A general response to this problem is to use multitemporal satellite imagery for repeat experiments. Researchers using drone imagery are advised to shoot when the crop is growing vigorously but does not yet obscure the field boundaries [55].

4.2.2. Topographic Factors

The topographic factors are important influencing factors. Flat areas are more conducive to automated plot boundary extraction than non-flat areas [46,55]; areas with a single terrain type are more conducive than areas combining many different terrains; and regularly shaped farmland is more conducive than irregularly shaped farmland [66]. The boundary extraction of special farmland types, such as paddy fields [58], irrigated fields [64], terraces [36], and valleys [56], is feasible. Because it is difficult to hold variables such as the algorithm type and image resolution exactly constant, these terrains are difficult to study by comparing the experiments of different researchers. In a small number of studies, the researchers focused on the flatness of the terrain; we present a comparison of the extraction results from these studies in Table 16.
As the table shows, topography has a huge influence on the extraction results. Plains and flat areas are the most suitable for automated boundary extraction. High-level feature extraction algorithms are affected by the terrain, while the effect of the terrain on low-level feature algorithms is smaller. At the same time, Li R. et al. [66] found that regular farmland greatly improves the extraction results compared with irregular farmland. Therefore, the most favorable topographic environment for boundary extraction is a regular field in a plain area.

4.2.3. Climatic Factors

For both aerial and aerospace imagery, climatic conditions are an important influencing factor. Li R. et al. [66] demonstrated that boundary extraction from images with cloud reflection is more challenging than from conventional farmland images. Harsh weather can severely interfere with the imagery, or even render it completely worthless. It is therefore recommended that researchers who use satellite imagery obtain multitemporal images and select a surface image that is as cloud-free and unpolluted as possible [48]. In addition, multispectral sensors can detect aerosols, water vapor, cirrus clouds, etc., which may help reduce the impact of harsh environments. In general, environments with little cloud cover, sufficient light, and no air pollution are the most suitable.

5. Discussion

The field of boundary extraction technology based on remote sensing images can be divided into three stages: technology exploration, technology development, and practical application, with blurred boundaries between the stages. In the exploration stage, constrained by the limited resolution that early sensors provided, researchers mainly studied low-level feature extraction algorithms; quantitative evaluation was lacking at this stage, and researchers could only extract regular plots. During the development period, with the emergence of new sensors represented by high-resolution remote sensing satellites and unmanned aerial vehicles, image resolution improved markedly, and researchers experimented with images of different spectra and imaging principles, which promoted the development of the algorithms; as a result, high-level feature extraction algorithms and comparatively scientific quantitative evaluation systems emerged. In the practical application period, further improvements in resolution and extraction accuracy, the emergence of application cases, and many registrations of related patents [61,104] reflect the practical, real-world needs of the algorithms. During this period, visual hierarchy extraction algorithms, which offer more versatility and the potential to extract complex areas, have become a new direction of algorithm development. Evaluation now covers not only extraction accuracy but also attributes relevant to practical requirements, such as operation time and stability. We present the overall process in Figure 8.
Field boundary extraction technology based on remote sensing images plays an increasingly important role in research on crop remote sensing and other related fields [43]. Specifically, in the agricultural field, we can use the extracted plot boundary information to compile farmland data products [64]; create cadastral maps, regional models, and farmland databases [105]; predict crop yields in the field [106]; quantify the water consumption of plots [107]; identify and classify crops [11,108]; derive wind erosion risk fields [109]; generate crop maps [8]; assist in the deployment of insecticidal lamp nodes [110]; and guide the automatic navigation of agricultural machinery [61]. For other related fields, the boundary information can assist in mapping relevant spatial distributions, and the scientific data products generated can be used for research and application in climate change, ecology, land management, environmental risk assessment, bioenergy, plant protection, water resource management, efficient fertilization, and other directions [2]. At the same time, research on boundary extraction technology can benefit the field of image processing, for example, in coping with the angular reflector effect [58], registering images [111], and improving resolution [3].
Although the field of automated farmland boundary recognition has developed substantially, we still cannot obtain all feature boundaries as accurately as visual interpretation can [105]. Especially for large-scale farmland identification, the generalization ability of the models and algorithms is weak, the stability of the accuracy is poor, and manual interpretation is still required; in China's third national land survey and geographical conditions monitoring project, for example, manual visual interpretation accounted for more than 90% of the total interpretation work [16]. The problems of generalization ability and accuracy in existing plot boundary acquisition technology substantially affect the practical application of plot classification, which needs to be further developed [16,22].
With the development of the technology and the expansion of its application fields, the demand for intelligent, refined, and precise remote sensing farmland monitoring has grown, placing higher requirements on farmland boundary extraction [16]. In 2016, Yan et al. [48] achieved field extraction across the United States based on 30 m resolution satellite imagery, a milestone in automated field boundary extraction. The field has developed from low-level feature extraction algorithms to high-level feature extraction algorithms, and it is now developing in the visual direction. Overall, the results have been fruitful. However, the following challenges and shortcomings remain in the boundary extraction of plots, which we hope future research will resolve.
Researchers need to study new algorithms to accommodate higher resolutions. The fineness and accuracy of plot boundary delineation are the most important aspects of farmland boundary information. With the rapid development of remote sensing technology, decimeter- and even centimeter-level high-resolution images have become common and have laid the image foundation for the fine extraction of field boundaries. However, the most suitable resolutions for existing algorithms are 20 cm [33] or 30 cm [30], which can no longer keep up with the pace of image resolution development [19]; thus, new recognition algorithms are required to match it. At present, there are four main ways to generate new algorithms: (1) the combination and optimization of existing algorithms, including different preprocessing methods, detection algorithms, and postprocessing methods; (2) the use of cadastral recognition algorithms based on UAV images for farmland boundary extraction [103], which has not yet been attempted, as well as algorithms based on ground-image boundary recognition [112]; (3) the integration of visual feature technology and the development of deep-learning technology, such as semantic-based algorithms, which have good potential, and the "data-model-knowledge" jointly driven model, which will be an important development direction for field boundary extraction [16]; (4) the exploration of new algorithmic ideas, for example, directly identifying field boundary objects, a research idea different from the traditional one that has also shown good generalization ability and recognition accuracy.
We need to resolve the practical application of the algorithms. At present, many algorithms perform excellently in identifying farmland boundaries in flat areas, especially algorithms that combine edge detection technology with area-based methods. However, most algorithms still face three difficulties on the way to practical use: (1) refinement, (2) automation, and (3) generalization. In terms of refinement, the minimum field size that existing algorithms can recognize is large. We present the accuracy of the minimum identifiable field blocks obtained in some studies in Table 17, which shows that the high accuracy of existing algorithms is based on the recognition of larger field blocks. In practical applications, however, large and small fields are mixed, and many fields go unrecognized because of their small areas or short side lengths. Algorithms need to lower the minimum size of the identifiable field plot.
In terms of automation, parameter acquisition requires manpower. Most algorithm parameters are obtained through trial and error, and they are applicable only to the corresponding experimental images; for remote sensing images of new areas, multiple repeated trials are required to obtain suitable parameters. In addition, some algorithms require significant manpower, such as for the manual selection of seed points [6], the manual delineation of target areas, or the manual labeling of datasets [67]. In terms of generalization ability, many researchers focus only on regular farmland in flat areas, and although they have achieved good identification results, mountainous and hilly terrain, irregular fields, different crop species, meteorological conditions, and mixed residential areas all affect the boundary extraction results; subsequent researchers could further address these aspects.
The existing evaluation indicators and systems are not unified or complete. Different scholars have used different evaluation methods for the same type of experiment, and the same evaluation method may appear under different names. For example, the quantity TP/(TP + FN) is referred to as the actual detection rate [30], producer accuracy [55,71], accuracy rate [68,113], PA [67], precision [5], and so on. Moreover, there is a lack of in-depth quantitative evaluation, such as confusion matrices, for the common causes of poor or inappropriate segmentation, such as over-segmentation, under-segmentation, and boundary shifts. The confusion matrix is a widely used evaluation system suitable for the in-depth and comprehensive evaluation of boundary extraction results in terms of pixel area. To measure the error between the extracted boundary and the actual boundary, there are two types of mature approaches from which to select: (1) evaluating the proportion of extracted pixels that coincide with actual boundary pixels within an allowed error range [4,36]; (2) calculating the distance from each point of the extracted boundary to the actual boundary line, which can be measured by the mean absolute error (MAE) [64]. The former is simple and direct and represents the recognition accuracy within the specified error; the latter better represents the overall deviation of the boundary, but the computation is large and the process more complicated.
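A minimal sketch of the two kinds of evaluation discussed above: pixel-level confusion-matrix rates, including TP/(TP + FN) (producer accuracy), and an MAE-style distance from each extracted boundary point to the nearest reference point. The function names and the toy boundary masks are our own illustrative choices:

```python
import numpy as np

def confusion_rates(pred, truth):
    """Producer accuracy TP/(TP+FN) and user accuracy TP/(TP+FP)
    for binary boundary masks."""
    tp = np.sum(pred & truth)
    fn = np.sum(~pred & truth)
    fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tp / (tp + fp)

def boundary_mae(pred, truth):
    """Mean distance from each extracted boundary pixel to the
    nearest reference boundary pixel."""
    t_pts = np.argwhere(truth)
    dists = [np.hypot(*(t_pts - p).T).min() for p in np.argwhere(pred)]
    return float(np.mean(dists))

# Reference boundary: column 5. The extraction misses one pixel
# and shifts another one column to the right.
truth = np.zeros((10, 10), dtype=bool); truth[:, 5] = True
pred = truth.copy(); pred[3, 5] = False; pred[0, 6] = True

producer, user = confusion_rates(pred, truth)  # 0.9, 0.9
mae = boundary_mae(pred, truth)                # 0.1: one pixel off by 1
```

The confusion-matrix rates answer "how many boundary pixels were found and how many detections were correct," while the MAE answers "how far, on average, is the extracted boundary from the true one," which is why the two measures complement each other.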

6. Conclusions

Farmland boundary information has important value in the agricultural field and has undergone long-term development. The extraction process of farmland boundary information can be divided into five parts: (1) image acquisition; (2) preprocessing; (3) detection algorithms; (4) postprocessing; (5) the evaluation of the boundary information extraction process, among which the detection algorithm is the most important. The detection algorithms can be divided into four types: (1) low-level feature extraction algorithms, which only consider the boundary features; (2) high-level feature extraction algorithms, which consider boundary information and other image information simultaneously; (3) visual hierarchy extraction algorithms, which simulate biological vision systems; (4) boundary object extraction algorithms, which directly recognize boundary objects. The design of a detection algorithm should consider its adaptability to images of different resolutions (adapting to higher-resolution images as much as possible), its ability to capture different features, and its computation cost. Topography and climate have a definite influence on the results of boundary extraction: flat, regular farmland with little cloud cover and plenty of light is the ideal scenario. Crop species and growth stages may also have potential effects. However, three important problems remain to be solved in this field: the lack of algorithms that can adapt to higher-resolution images, the lack of algorithms with good practical ability, and the lack of a unified and effective evaluation index system.

Author Contributions

Conceptualization, L.S. and X.W. (Xuying Wang); Writing—original draft preparation, X.W. (Xuying Wang); Writing—review and editing, R.H., X.W. (Xiaochan Wang) and T.G.; Supervision, L.S. and R.H.; Resources, X.W. (Xuying Wang), F.Y. and H.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62072248 and 62102170, in part by the High-end Foreign Experts Recruitment Plan of the Ministry of Science and Technology of China under Grant G2021145009L, in part by the Jiangsu Modern Agricultural Machinery Equipment and Technology Demonstration and Promotion Project under Grant NJ2021-11, and in part by the Jiangsu Agriculture Science and Technology Innovation Fund under Grant CX(21)306.

Data Availability Statement

Data are available upon request from researchers who meet the eligibility criteria. Kindly contact the second author privately through e-mail.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. See, L.; Fritz, S.; You, L.; Ramankutty, N.; Herrero, M.; Justice, C.; Becker-Reshef, I.; Thornton, P.; Erb, K.; Gong, P.; et al. Improved global cropland data as an essential ingredient for food security. Glob. Food Secur. 2015, 4, 37–45. [Google Scholar] [CrossRef]
  2. Chen, Z.; Ren, J.; Tang, H.; Shi, Y.; Leng, P.; Liu, J.; Wang, L.; Wu, W.; Yao, Y.; Hastua. Progress and prospect of agricultural remote sensing research. J. Remote Sens. 2016, 20, 748–767. [Google Scholar]
  3. Masoud, K.; Persello, C.; Tolpekin, V. Delineation of Agricultural Field Boundaries from Sentinel-2 Images Using a Novel Super-Resolution Contour Detector Based on Fully Convolutional Networks. Remote Sens. 2019, 12, 59. [Google Scholar] [CrossRef] [Green Version]
  4. Rydberg, A.; Borgefors, G. Integrated method for boundary delineation of agricultural fields in multispectral satellite images. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2514–2520. [Google Scholar] [CrossRef]
  5. Persello, C.; Tolpekin, V.A.; Bergado, J.R.; de By, R.A. Delineation of agricultural fields in smallholder farms from satellite images using fully convolutional networks and combinatorial grouping. Remote Sens. Environ. 2019, 231, 111253. [Google Scholar] [CrossRef] [PubMed]
  6. Zhang, M. Research on Identification and Extraction of Flat Area Boundary Based on Aerial Imagery. Master’s Thesis, China University of Mining and Technology, Beijing, China, 2015. [Google Scholar]
  7. Zheng, M.; Luo, Z.; Chen, W.; Guan, B.; Ma, H. Plot-boundary extraction of UAV agricultural applications based on improved Mean Shift. Hubei Agric. Sci. 2021, 60, 153–156. [Google Scholar] [CrossRef]
  8. Turker, M.; Arikan, M. Sequential masking classification of multi-temporal Landsat7 ETM+ images for field-based crop mapping in Karacabey, Turkey. Int. J. Remote Sens. 2005, 26, 3813–3830. [Google Scholar] [CrossRef]
  9. Addo, K.A. Urban and Peri-Urban Agriculture in Developing Countries Studied using Remote Sensing and In Situ Methods. Remote Sens. 2010, 2, 497–513. [Google Scholar] [CrossRef] [Green Version]
  10. Zhao, Y.; Wei, C.; Wang, G.; Gao, X. Discussion on the calculation method of land three-adjustment field coefficient based on high-resolution image. Mine Surv. 2019, 47, 82–86. [Google Scholar]
  11. Yang, Y.; Huang, Q.; Wu, W.; Luo, J.; Gao, L.; Dong, W.; Wu, T.; Hu, X. Geo-Parcel Based Crop Identification by Integrating High Spatial-Temporal Resolution Imagery from Multi-Source Satellite Data. Remote Sens. 2017, 9, 1298. [Google Scholar] [CrossRef] [Green Version]
  12. Han, Y. Research on Remote Sensing Classification Method of Crops for Plots. Master’s Thesis, University of Chinese Academy of Sciences (Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences), Beijing, China, 2018. [Google Scholar]
  13. Rahini, K.K.; Sudha, S.S. Review of Image Segmentation Techniques: A Survey. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2014, 4, 842–845. [Google Scholar]
  14. Hossain, M.D.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134. [Google Scholar] [CrossRef]
  15. Liu, J.; Mao, Z. A review of high spatial resolution remote sensing image segmentation methods. Remote Sens. Inf. 2009, 95–101. Available online: https://kns.cnki.net/kcms2/article/abstract?v=3uoqIhG8C44YLTlOAiTRKgchrJ08w1e75TZJapvoLK307TAp-qw9bu4fIW2-ILc608-q_ACFPMRYiasx1Eshc-BqpgvvOeSA&uniplatform=NZKPT (accessed on 10 January 2023).
  16. Gang, Z.; Wang, J.; Hua, L.; Duan, Z.; Xu, G. Review of remote sensing farmland monitoring status and methods. Guangxi Sci. 2022, 29, 1–12+211. [Google Scholar] [CrossRef]
  17. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef] [Green Version]
  18. Duan, R.; Li, Q.; Li, Y. A review of image edge detection methods. Opt. Technol. 2005, 31, 415–419. [Google Scholar]
  19. Fu, G. Review of high-resolution remote sensing image segmentation methods. West. Resour. 2016, 6, 135–137. [Google Scholar] [CrossRef]
  20. Koschan, A.; Bericht, T.; Fur, U. Colour Image Segmentation: A Survey. Hw3.arz.oeaw.ac.at 1994. Available online: https://www.researchgate.net/publication/2553859_Colour_Image_Segmentation_A_Survey (accessed on 10 January 2023).
  21. Cheng, H.D.; Jiang, X.H.; Sun, Y.; Wang, J. Color image segmentation: Advances and prospects. Pattern Recognit. 2001, 34, 2259–2281. [Google Scholar] [CrossRef]
  22. Han, Y.; Meng, J. Research progress on remote sensing classification of crops for plots. Remote Sens. Land Resour. 2019, 31, 1–9. [Google Scholar]
  23. Fu, K.S.; Mui, J.K. A survey on image segmentation. Pattern Recognit. 1981, 13, 3–16. [Google Scholar] [CrossRef]
  24. Carleer, A.; Debeir, O.; Wolff, E. Assessment of Very High Spatial Resolution Satellite Image Segmentations. Photogramm. Eng. Remote Sens. 2005, 71, 1285–1294. [Google Scholar] [CrossRef] [Green Version]
  25. Butenuth, M.; Straub, B.-M.; Heipke, C. Automatic Extraction of Field Boundaries from Aerial Imagery. In Proceedings of the KDNet Symposium on Knowledge-Based Services for the Public Sector, Bonn, Germany, 3–4 June 2004; Available online: https://www.ipi.uni-hannover.de/fileadmin/ipi/publications/butenuth.etal.kdnetsymposium04.bonn.pdf (accessed on 10 January 2023).
  26. Duveiller, G.; Defourny, P. A conceptual framework to define the spatial resolution requirements for agricultural monitoring using remote sensing. Remote Sens. Environ. 2010, 114, 2637–2650. [Google Scholar] [CrossRef]
  27. Valero, S.; Morin, D.; Inglada, J.; Sepulcre-Canto, G.; Arias, M.; Hagolle, O.; Dedieu, G.; Bontemps, S.; Defourny, P.; Koetz, B. Production of a Dynamic Cropland Mask by Processing Remote Sensing Image Series at High Temporal and Spatial Resolutions. Remote Sens. 2016, 8, 55. [Google Scholar] [CrossRef] [Green Version]
  28. Xie, G.; Huang, Q.H.; Yang, S.; Qin, Z.; Liu, L.; Deng, T. Extraction of citrus planting plots in hilly areas based on medium and high resolution images. South. J. Agric. Sci. 2021, 52, 3454–3462. [Google Scholar]
  29. Bai, C. Information Extraction from Terraces Based on Deep Learning and Its Influence on Soil Erosion Estimation. Master’s Thesis, Xi’an University of Science and Technology, Xi’an, China, 2021. [Google Scholar]
  30. Wu, H.; Lin, X.; Li, X.; Xu, X. Parcel boundary extraction of UAV remote sensing imagery for agricultural applications. Computer 2019, 39, 298–304. [Google Scholar]
  31. Huang, H.; Wu, B. Analysis of the relationship between feature size, object scale, and image resolution. Remote Sens. Technol. Appl. 2006, 3, 243–248. [Google Scholar]
  32. Zheng, Q.; Qiu, C.; Li, C.; Zhou, J.; Huai, H.; Wang, J.; Zhang, Q. Object-oriented UAV remote sensing image subfield block boundary extraction. Technol. Innov. 2022, 3, 1–3.
  33. Crommelinck, S.; Bennett, R.; Gerke, M.; Yang, M.Y.; Vosselman, G. Contour Detection for UAV-Based Cadastral Mapping. Remote Sens. 2017, 9, 171.
  34. Zhang, Y.; Feng, F.; Yan, H. High-resolution image farmland information extraction method. Geospat. Inf. 2010, 8, 78–80.
  35. Chen, J.; Chen, T.; Mei, X.; Shao, Q.; Deng, M. High-resolution remote sensing images based on optimal scale selection for hilly farmland extraction. Trans. Chin. Soc. Agric. Eng. 2014, 30, 99–107.
  36. Li, D.; Zhang, G.; Wu, Z.; Yi, L. An Edge Embedded Marker-Based Watershed Algorithm for High Spatial Resolution Remote Sensing Image Segmentation. IEEE Trans. Image Process. 2010, 19, 2781–2787.
  37. Ghaffarian, S.; Turker, M. An improved cluster-based snake model for automatic agricultural field boundary extraction from high spatial resolution imagery. Int. J. Remote Sens. 2019, 40, 1217–1247.
  38. Yu, W.; Meichen, F.; Li, W. A robust farmland edge detection method combining anisotropic diffusion smoothing and a Canny edge detector. In Proceedings of the 2015 IEEE International Conference on Progress in Informatics and Computing (PIC), Nanjing, China, 18–20 December 2015; pp. 296–300.
  39. Chen, T. Multi-Scale Extraction of Cultivated Land with High-Resolution Remote Sensing Images. Master’s Thesis, Central South University, Changsha, China, 2014.
  40. Pang, X.; Zhu, W.; Pan, Y.; Jia, B. Research on cultivated land plot extraction method based on high-resolution remote sensing imagery. Sci. Surv. Mapp. 2009, 34, 48–49+161.
  41. Wang, Q.; Liang, L. Calculation of field coefficient based on high-resolution remote sensing images. Mapp. Spat. Geogr. Inf. 2009, 32, 99–101.
  42. Usman, B.; Beiji, Z. Satellite Imagery Cadastral Features Extractions using Image Processing Algorithms: A Viable Option for Cadastral Science. Int. J. Comput. Sci. Issues IJCSI 2012, 9, 30–38.
  43. Turker, M.; Kok, E.H. Field-based sub-boundary extraction from remote sensing imagery using perceptual grouping. ISPRS J. Photogramm. Remote Sens. 2013, 79, 106–121.
  44. Ishida, T.; Itagaki, S.; Sasaki, Y.; Ando, H. Application of wavelet transform for extracting edges of paddy fields from remotely sensed images. Int. J. Remote Sens. 2004, 25, 347–357.
  45. Liu, C.; Yang, M.; Zhang, X. Feasibility study on calculating cultivated land yield coefficient using high-resolution remote sensing images. Land Resour. Guide 2007, 5, 48–50.
  46. Waldner, F.; Canto, G.S.; Defourny, P. Automated annual cropland mapping using knowledge-based temporal features. ISPRS J. Photogramm. Remote Sens. 2015, 110, 1–13.
  47. Yan, L.; Roy, D.P. Automated crop field extraction from multi-temporal Web Enabled Landsat Data. Remote Sens. Environ. 2014, 144, 42–64.
  48. Yan, L.; Roy, D.P. Conterminous United States crop field size quantification from multi-temporal Landsat data. Remote Sens. Environ. 2016, 172, 67–86.
  49. Rydberg, A.; Borgefors, G. Extracting multispectral edges in satellite images over agricultural fields. In Proceedings of the 10th International Conference on Image Analysis and Processing, Venice, Italy, 27–29 September 1999; pp. 786–791.
  50. Ji, C.Y. Delineating agricultural field boundaries from TM imagery using dyadic wavelet transforms. ISPRS J. Photogramm. Remote Sens. 1996, 51, 268–283.
  51. Tang, L. Automatic Extraction of Terraced Fields in Loess Hilly Area Based on GF-2 Image. Master’s Thesis, Gansu Agricultural University, Lanzhou, China, 2020.
  52. Ren, Y.; Pan, Y.; Liu, Y.; Tang, X.; Gao, B.; Gao, Y. Evaluation Method of Field Road Accessibility Based on GF-2 Satellite Imagines. In Proceedings of the 2018 7th International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Hangzhou, China, 6–9 August 2018; pp. 1–4. Available online: https://ieeexplore.ieee.org/abstract/document/8476060 (accessed on 10 January 2023).
  53. Mueller, M.; Segl, K.; Kaufmann, H. Edge- and region-based segmentation technique for the extraction of large, man-made objects in high-resolution satellite imagery. Pattern Recognit. 2004, 37, 1619–1628.
  54. Maghsoodi, L.; Ebadi, H.; Sahebi, M.R.; Kabolizadeh, M. Development of a Nonparametric Active Contour Model for Automatic Extraction of Farmland Boundaries from High-Resolution Satellite Imagery. J. Indian Soc. Remote Sens. 2019, 47, 295–306.
  55. Liao, D. Research on Field Boundary Extraction Method of Different Geomorphological Areas Based on High-Resolution Remote Sensing Imagery. Master’s Thesis, Sichuan Agricultural University, Ya’an, China, 2014.
  56. García-Pedrero, A.; Gonzalo-Martín, C.; Lillo-Saavedra, M. A machine learning approach for agricultural parcel delineation through agglomerative segmentation. Int. J. Remote Sens. 2017, 38, 1809–1819.
  57. Su, T.; Li, H.; Zhang, S.; Li, Y. Image segmentation using mean shift for extracting croplands from high-resolution remote sensing imagery. Remote Sens. Lett. 2015, 6, 952–961.
  58. Shao, Y.; Tan, Q.; Xiao, J.; Guo, H. Paddy field boundary extraction study on Radarsat SAR image. High-Tech Commun. 2002, 11, 26–29.
  59. Li, X.; Xu, X.; Yang, R.; Pu, F. DBC: Deep Boundaries Combination for Farmland Boundary Detection Based on UAV Imagery. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1464–1467.
  60. Yang, Y. Tiankan object-oriented extraction based on UAV imagery. Master’s Thesis, Shandong Agricultural University, Tai’an, China, 2020.
  61. Li, Y.; Zhang, Y. A Farmland Boundary Extraction Method Based on High-Resolution Remote Sensing Images. CN109146889B, 19 November 2021.
  62. Fan, L.; Cheng, Y.; Wang, L.; Liu, T.; Hu, F.; Feng, X. The planting area of winter wheat was extracted by object-oriented classification method based on multi-scale segmentation. China’s Agric. Resour. Zoning 2010, 31, 44–51.
  63. Liu, X. Research on Cultivated Land Information Extraction in Low-Hilly Area of Sichuan Province Based on UAV High-Resolution Remote Sensing Imagery. Master’s Thesis, Sichuan Agricultural University, Ya’an, China, 2017.
  64. Watkins, B.; van Niekerk, A. A comparison of object-based image analysis approaches for field boundary delineation using multi-temporal Sentinel-2 imagery. Comput. Electron. Agric. 2019, 158, 294–302.
  65. Wang, X.; Li, Y.; Liu, D.; Sun, H.; Huang, X. Virtual midline extraction method of field roads in hilly mountainous areas based on machine vision. J. Southwest Univ. Nat. Sci. Ed. 2018, 40, 162–169.
  66. Li, R.; Gao, K.; Dou, Z. A New Smoothing-Based Farmland Extraction Approach with Vectorization from Raster Remote Sensing Images. In Proceedings of the International Conference on Image and Graphics, Beijing, China, 23–25 August 2019; Springer: Cham, Switzerland, 2019; pp. 334–346.
  67. Song, J.; Li, D.; Guo, B. Semi-automatic extraction of parcel boundaries based on remote sensing imagery. Beijing Surv. Mapp. 2019, 33, 1171–1175.
  68. Cai, D.; Li, Y.; Qin, C.; Liu, C. Detection method of paddy field boundary support vector machine. Trans. Chin. Soc. Agric. Mach. 2019, 50, 22–27+109.
  69. Han, W.; Zhang, L.; Zhang, H.; Shi, Z.; Yuan, M.; Wang, Z. Extraction of distribution information of field canal system based on UAV remote sensing and object-oriented method. Trans. Chin. Soc. Agric. Mach. 2017, 48, 205–214.
  70. Da Costa, J.P.; Michelet, F.; Germain, C.; Lavialle, O.; Grenier, G. Delineation of vine parcels by segmentation of high resolution remote sensed images. Precis. Agric. 2007, 8, 95–110.
  71. Wang, H.; Li, Y.; Wu, X.; Li, Z. Object-oriented UAV imagery land use classification combined with spatial analysis. Surv. Mapp. Eng. 2018, 27, 57–61.
  72. Hong, R.; Park, J.; Jang, S.; Shin, H.; Kim, H.; Song, I. Development of a Parcel-Level Land Boundary Extraction Algorithm for Aerial Imagery of Regularly Arranged Agricultural Areas. Remote Sens. 2021, 13, 1167.
  73. Wassie, Y.A.; Koeva, M.N.; Bennett, R.M.; Lemmen, C.H.J. A procedure for semi-automated cadastral boundary feature extraction from high-resolution satellite imagery. J. Spat. Sci. 2018, 63, 75–92.
  74. Chen, Y.; Tang, X.; Peng, Y.; Xu, Y.; Li, C. Research on image segmentation technology of farmland plots. Trans. Chin. Soc. Agric. Mach. 2010, 41, 253–256.
  75. Shih, F.Y.; Cheng, S. Adaptive mathematical morphology for edge linking. Inf. Sci. 2004, 167, 9–21.
  76. Zhang, C.; Wang, Z.; Yang, J.; Zhu, D. Automatic extraction method of linear engineering features in farmland based on Canny operator. Trans. Chin. Soc. Agric. Mach. 2015, 46, 270–275.
  77. Jiao, Y.; Huang, B. Review of image denoising methods based on wavelet transform. Electron. Prod. 2015, 7, 55–56.
  78. Ming, D.; Li, J.; Wang, J.; Zhang, M. Scale parameter selection by spatial statistics for GeOBIA: Using mean-shift based multi-scale segmentation as an example. ISPRS J. Photogramm. Remote Sens. 2015, 106, 28–41.
  79. Zhang, X.; Du, S. Learning selfhood scales for urban land cover mapping with very-high-resolution satellite images. Remote Sens. Environ. 2016, 178, 172–190.
  80. Janssen, L.L.F.; Molenaar, M. Terrain objects, their dynamics and their monitoring by the integration of GIS and remote sensing. IEEE Trans. Geosci. Remote Sens. 1995, 33, 749–758.
  81. Vincent, L.; Soille, P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 583–598.
  82. Dey, V.; Zhang, Y.S.; Zhong, M. A Review on Image Segmentation Techniques with Remote Sensing Perspective. 2010. Available online: https://citeseerx.ist.psu.edu/pdf/72ea0209ea48769cbda2863c4295cafdaeac41d1 (accessed on 10 January 2023).
  83. Yang, J.; He, Y.; Caspersen, J. Region merging using local spectral angle thresholds: A more accurate method for hybrid segmentation of remote sensing images. Remote Sens. Environ. 2017, 190, 137–148.
  84. De Smet, P. Implementation and Analysis of an Optimized Rainfalling Watershed Algorithm. Proc. SPIE-Int. Soc. Opt. Eng. 2000, 3974, 759–766.
  85. Meinel, G.; Neubert, M. A Comparison of Segmentation Programs for High Resolution Remote Sensing Data. Int. Arch. Photogramm. Remote Sens. 2004, 35 Pt B, 1097–1105.
  86. Kaur, B.; Garg, A. Mathematical morphological edge detection for remote sensing images. In Proceedings of the 2011 3rd International Conference on Electronics Computer Technology, Kanyakumari, India, 8–10 April 2011; pp. 324–327.
  87. Cheng, J.; Li, L.; Luo, B.; Wang, S.; Liu, H. High-resolution remote sensing image segmentation based on improved RIU-LBP and SRM. EURASIP J. Wirel. Commun. Netw. 2013, 2013, 263.
  88. Arbeláez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour Detection and Hierarchical Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 898–916.
  89. Lin, Y.; Li, X.; Zhang, H.; Luan, Q. Fast local binary fitting optimization segmentation algorithm. Computer 2013, 33, 491–494.
  90. Yang, K.F.; Li, C.Y.; Li, Y.J. Multifeature-Based Surround Inhibition Improves Contour Detection in Natural Images. IEEE Trans. Image Process. 2014, 23, 5020–5032.
  91. Azzopardi, G.; Petkov, N. A CORF computational model of a simple cell that relies on LGN input outperforms the Gabor function model. Biol. Cybern. 2012, 106, 177–189.
  92. Zheng, Y.; Jia, S.; Yu, Z.; Liu, J.K.; Huang, T. Unraveling neural coding of dynamic natural visual scenes via convolutional recurrent neural networks. Patterns 2021, 2, 100350.
  93. Tang, L.; Zhou, B.; Ma, T.; Tian, J.; Zhang, F. Automatic extraction of terraced fields based on GF-2 images. People’s Yellow River 2021, 43, 116–119.
  94. Li, Y.; Xu, J.; Liu, D.; Yu, Y. Scene recognition of field roads in hilly mountainous areas based on improved void convolutional neural network. Trans. Chin. Soc. Agric. Eng. 2019, 35, 150–159.
  95. Zhu, S.; Xie, Z.; Xu, H.; Pan, J.; Ren, S. Field road detection method based on machine vision. Jiangsu J. Agric. Sci. 2013, 29, 1291–1296.
  96. Li, Y.; Wang, X.; Liu, D. 3D Autonomous Navigation Line Extraction for Field Roads Based on Binocular Vision. J. Sens. 2019, 2019, 6832109.
  97. Gashaw, T.; Bantider, A.; Zeleke, G.; Alamirew, T.; Jemberu, W.; Worqlul, A.W.; Dile, Y.T.; Bewket, W.; Meshesha, D.T.; Adem, A.A.; et al. Evaluating InVEST model for estimating soil loss and sediment export in data scarce regions of the Abbay (Upper Blue Nile) Basin: Implications for land managers. Environ. Chall. 2021, 5, 100381.
  98. Ayana, E.K.; Fisher, J.R.B.; Hamel, P.; Boucher, T.M. Identification of ditches and furrows using remote sensing: Application to sediment modelling in the Tana watershed, Kenya. Int. J. Remote Sens. 2017, 38, 4611–4630.
  99. Hadizadeh, H.; Samiei, L.; Shakeri, A. Chrysanthemum, an ornamental genus with considerable medicinal value: A comprehensive review. South Afr. J. Bot. 2022, 144, 23–43.
  100. Dai, S. Chinese Chrysanthemums and World Horticulture (Overview). J. Hebei Norm. Univ. Sci. Technol. 2004, 2, 1–5+19.
  101. Li, H.; Shao, J. Investigation, collection and classification of chrysanthemum variety resources in China. J. Nanjing Agric. Univ. 1990, 1, 30–36.
  102. Li, J.; Li, H.; Hu, S. Current status of production areas and commodity types of chrysanthemums. Chin. Pharm. 2015, 18, 1098–1100.
  103. Crommelinck, S.; Bennett, R.; Gerke, M.; Nex, F.; Yang, M.Y.; Vosselman, G. Review of Automatic Feature Extraction from High-Resolution Optical Sensor Data for UAV-Based Cadastral Mapping. Remote Sens. 2016, 8, 689.
  104. Zhang, H.; Liu, M.; Wang, Z.; Yan, Y.; Song, W.; Liu, X.; Mu, X.; Wang, H.; Li, Y.; Li, J.M. A High-Precision Extraction Method and System for Farmland Boundaries. CN114648704A, 21 June 2022.
  105. De Wit, A.J.W.; Clevers, J.G.P.W. Efficiency and accuracy of per-field classification for operational crop mapping. Int. J. Remote Sens. 2004, 25, 4091–4112.
  106. Enclona, E.A.; Thenkabail, P.S.; Celis, D.; Diekmann, J. Within-field wheat yield prediction from IKONOS data: A new matrix approach. Int. J. Remote Sens. 2004, 25, 377–388.
  107. Abdul Hakeem, K.; Raju, P.V. Use of high-resolution satellite data for the structural and agricultural inventory of tank irrigation systems. Int. J. Remote Sens. 2009, 30, 3613–3623.
  108. Janssen, L.L.F.; Middelkoop, H. Knowledge-based crop classification of a Landsat Thematic Mapper image. Int. J. Remote Sens. 1992, 13, 2827–2837.
  109. Böhner, J.; Schäfer, W.; Conrad, O.; Gross, J.; Ringeler, A. The WEELS model: Methods, results and limitations. CATENA 2003, 52, 289–308.
  110. Li, K.; Shu, L.; Huang, K.; Sun, Y.; Yang, F.; Zhang, Y.; Huo, Z.; Wang, Y.; Wang, X.; Lu, Q.; et al. Research status and prospect of solar insecticidal lamp Internet of Things. Smart Agric. 2019, 1, 13–28.
  111. Saleem, S.; Bais, A.; Khawaja, Y.M. Registering aerial photographs of farmland with satellite imagery. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 945–948.
  112. Chen, B.; Cao, Q. Laser and IMU-based crop height and boundary detection of fields. Mechatronics 2020, 26, 46–51.
  113. Liu, B.; Deng, J.; Song, Y.; Shu, Y. High-resolution remote sensing image cloud detection based on convolutional neural network. Geospat. Inf. 2017, 15, 12–15+19.
Figure 1. Different forms of farmland boundaries in remote sensing imagery: (a) threads in farmland imagery [3]; (b) demarcation lines between farmlands [5]; (c) boundary objects between farmlands [6]; (d) perimeter boundaries of farmland blocks [7] (the numbers in (d) label the individual farmland blocks).
Figure 2. The flow of image boundary extraction.
Figure 3. Classification of low-level feature extraction algorithms.
Figure 4. Classification of high-level feature extraction algorithms.
Figure 5. Flowchart of algorithm selection.
Figure 6. Types of remote sensing imagery used in the literature over time.
Figure 7. Resolutions and counts of the space imagery and aerial imagery used in the literature.
Figure 8. The development process of farmland boundary extraction technology.
Table 1. Types, names, and resolutions of remote sensing satellites used in the literature.

| Literature | Satellite Type (Affiliation) | Satellite Name | Resolution |
|---|---|---|---|
| [34,35,36,37,38,39,40,41,42] | Commercial satellites | QuickBird | 0.61 m, 2.44 m |
| [4,41,43,44,45,46] | Centre National d’Etudes Spatiales (CNES) | SPOT series | SPOT-4: 20 m; SPOT-5: 2.5 m |
| [4,46,47,48,49,50] | National Aeronautics and Space Administration (NASA) | Landsat | 30 m |
| [10,38,51,52] | China Center for Resource Satellite Data and Applications (CRESDA) | GF series | GF-1: 2 m; GF-2: 1 m |
| [53,54] | Government of India | IRS series | IRS-C: 5.8 m; IRS-P5: 2.5 m |
| [37,53] | Commercial satellites | IKONOS | 4 m, 1 m |
| [3,5] | Commercial satellites | RapidEye | 5 m |
| [3,5] | European Space Agency (ESA) | Sentinel-2 | 10 m |
| [55] | Commercial satellites | Pleiades | 0.5 m |
| [10] | Commercial satellites | BJ-2 | 1 m |
| [56] | Commercial satellites | WorldView-2 | 2.4 m |
| [57] | Commercial satellites | ORBVIEW-3 | 4 m |
| [46] | National Aeronautics and Space Administration (NASA) | MODIS | 250 m |
| [58] | Canadian Space Agency (CSA) | RADARSAT series | N |

N: not specified in the literature.
Table 2. Names, sensors, resolutions, and altitudes of remote sensing UAVs used in the literature.

| Literature | Aircraft Name | Sensor | Resolution | Altitude |
|---|---|---|---|---|
| [30,59] | DJI Phantom 4 PRO | own sensor | 0.03 m | 110 m |
| [51] | DJI Phantom 4 RTK | own sensor | 0.05 m | 195~225 m |
| [33] | DJI Phantom 3 | Sony EXMOR FC300S | 0.03 m | N |
| [33] | GerMAP G180 | Ricoh GR | 0.0486 m | N |
| [33] | DT18 PPK | DT-3Bands RGB | 0.0361 m | N |
| [60] | eBee Plus | Canon IXUS 220 HS | 0.48 m | 340 m |

N: not specified in the literature.
Table 3. Introduction to universal preprocessing methods.

| Category | Preprocessing Method | Brief Introduction | Literature |
|---|---|---|---|
| Correction | Geometric correction | Corrects position distortion in an image. | [5,37,44,45,55,61,64,65] |
| Correction | Radiometric correction | Eliminates or corrects noise introduced by the sensor’s radiated-energy response during imaging. | [5,6,45,51,64] |
| Correction | Atmospheric correction | Corrects the influence of the atmosphere on satellite imagery. | [5] |
| Organizing | Image enhancement | Improves image quality and highlights objects of interest by enhancing the edges of the boundary area or the overall image contrast. | [6,35,38,39,55,66] |
| Organizing | Image smoothing | Also known as image filtering; filters out background noise, adjusts the ratio of textures, and improves image quality. | [6,40,66,67] |
| Organizing | Band extraction | Extracts bands of interest from imagery containing multiple bands. | [38,40] |
| Organizing | Weighted average | Computes a weighted average of multiple band images to obtain a new image; this reduces image error, but the per-band characteristics are affected. | [43,64] |
| Organizing | Vegetation indexation | Computes a visible-light vegetation index from the three RGB channels that is conducive to classification. | [32,47] |
| Organizing | Image filling | Fills gaps in time-series imagery by linear interpolation of the nearest non-missing images, ensuring the integrity of the experimental data. | [47] |
| Organizing | Image fusion | Fuses images from different bands while retaining the characteristics of each band image. | [37,39,45,55] |
| Organizing | Image transformation | Converts a traditional RGB image to another color space; for example, HSV transformation yields Hue, Saturation, and Value channels. | [55] |
| Organizing | Image cropping | Crops an image so that it fits within the size limits of the algorithm. | [7,30,38,51] |
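To make the vegetation-indexation row concrete, the sketch below computes the Excess Green index (ExG = 2g − r − b on chromatic coordinates), one common RGB-only vegetation index. This is an illustrative choice and not necessarily the exact index used in [32,47].

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index ExG = 2g - r - b, computed on chromatic
    coordinates (each channel divided by the per-pixel channel sum)."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2 * g - r - b

# A 1x2 image: a pure-green (vegetated) pixel next to a grey (bare-soil) pixel
img = [[[0, 255, 0], [100, 100, 100]]]
exg = excess_green(img)  # green pixel -> 2.0, grey pixel -> 0.0
```

Thresholding such an index map is then a simple way to separate vegetated farmland from bare ground before boundary extraction.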
Table 4. Common requirements and corresponding postprocessing methods.

| Literature | Demand and Purpose | Corresponding Postprocessing Method |
|---|---|---|
| [3,6,40,42,67,71] | Noise in the image leaves gaps and breaks in the edges produced by the detection algorithm, so edge breakpoints need to be connected. | Expansion refinement method, minimum point pairing method, erosion operation, and related mathematical morphology operations |
| [7,67] | For detection algorithms based on object-oriented ideas, the many labeled result objects must be merged and their contours extracted. | Binarization method, merging process, and edge detection method |
| [42,68] | If the fields in the test area are regular, the extracted boundary points can be converted into straight lines. | Hough transform |
| [37,44,66] | The boundary needs to be refined, simplified, and skeletonized. | Snakes model, line simplification algorithm, Hilditch algorithm |
| [66] | Vectorize the extraction results. | Flood fill |
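As a minimal sketch of the morphological breakpoint-connection idea in the first row, the plain-Python code below implements a binary closing (dilation followed by erosion) with a square structuring element; the surveyed papers' exact operators may differ, and production code would normally use an image-processing library.

```python
def dilate(img, k=1):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element,
    with the neighbourhood clipped at the image border."""
    h, w = len(img), len(img[0])
    return [[int(any(img[y][x]
                     for y in range(max(0, i - k), min(h, i + k + 1))
                     for x in range(max(0, j - k), min(w, j + k + 1))))
             for j in range(w)] for i in range(h)]

def erode(img, k=1):
    """Binary erosion: a pixel survives only if its whole (clipped)
    neighbourhood is set."""
    h, w = len(img), len(img[0])
    return [[int(all(img[y][x]
                     for y in range(max(0, i - k), min(h, i + k + 1))
                     for x in range(max(0, j - k), min(w, j + k + 1))))
             for j in range(w)] for i in range(h)]

def close_gaps(img, k=1):
    """Morphological closing = dilation followed by erosion;
    fills small gaps in a binary edge map."""
    return erode(dilate(img, k), k)

# An edge line with a one-pixel breakpoint at column 3 of row 2
edge = [[0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0],
        [1, 1, 1, 0, 1, 1, 1],
        [0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0]]
closed = close_gaps(edge)  # the breakpoint at (2, 3) is now connected
```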
Table 5. Evaluations of core competencies.

| Type of Evaluation | Name of Evaluation | Brief Introduction | Literature |
|---|---|---|---|
| Correct and incorrect pixels inside and outside the boundary line | True and false data statistics | Counts the numbers of correct (coincident) and incorrect (non-coincident) pixels; a relatively simple way to evaluate. | [36,66] |
| Correct and incorrect pixels inside and outside the boundary line | Area correlation coefficient | Evaluates the overall area error; suitable for boundary extraction algorithms used in area-related tasks. | [10] |
| Correct and incorrect pixels inside and outside the boundary line | Confusion matrix | A standard format for accuracy evaluation, from which a variety of criteria such as overall classification accuracy (OA), accuracy, precision, sensitivity (recall), specificity, and F1 score can be derived. | [5,30,46,47,48,54,57,59] |
| Pixel deviation on the boundary line | Mean absolute error (MAE) | Measures the closest distance from each predicted boundary element to the actual boundary; it indicates how close the extracted boundary is to the actual boundary in geographic space, the smaller the better. | [64] |
| Pixel deviation on the boundary line | Boundary overlap method | Matches the extracted boundary to the reference boundary within a buffer width; the resulting figure represents the extraction accuracy within a given acceptable error. The result can be further evaluated with other methods, such as simple correct-error statistics or a confusion matrix. | [4,33,72,73] |
| Overall classification correctness | Overall matching ratio | The results are divided into three categories: over-segmentation (a complete object incorrectly subdivided into several smaller objects), under-segmentation (several complete objects incorrectly merged into a larger object, also called insufficient segmentation), and correct segmentation; or into two categories: accurately identified and not accurately identified. The evaluation is detailed and convenient, but requires manual judgment. | [37,43] |
| Overall classification correctness | Object consistency error | Builds on the overall matching ratio with weights, making it extremely sensitive to under- and over-segmentation of farmland. | [35,39] |
| Overall classification correctness | Degree of over-segmentation | Quantifies cases where a complete object is incorrectly subdivided into several smaller objects. | [47,48] |
| Overall classification correctness | Degree of under-segmentation | Quantifies the proportion of complete objects incorrectly merged into larger objects. | [47,48] |
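All of the confusion-matrix criteria listed above derive from four pixel counts. The minimal sketch below (with hypothetical counts, treating boundary pixels as the positive class) shows the arithmetic.

```python
def boundary_metrics(tp, fp, fn, tn):
    """Common accuracy measures derived from confusion-matrix counts,
    treating boundary pixels as the positive class."""
    oa = (tp + tn) / (tp + fp + fn + tn)   # overall accuracy
    precision = tp / (tp + fp)             # correctness of predicted boundary
    recall = tp / (tp + fn)                # sensitivity / completeness
    f1 = 2 * precision * recall / (precision + recall)
    return {"OA": oa, "precision": precision, "recall": recall, "F1": f1}

# Hypothetical counts for a 1000-pixel tile
m = boundary_metrics(tp=80, fp=10, fn=20, tn=890)
# OA = 0.97, precision = 8/9, recall = 0.80, F1 = 16/19
```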
Table 6. Evaluations of related attributes.

| Type of Evaluation | Name of Evaluation | Brief Introduction | Literature |
|---|---|---|---|
| Stability | Standard deviation, variance | Reflect the stability of the model; the smaller the standard deviation or variance, the more stable the model. | [48,56,68] |
| Time | Runtime | Reflects the running time of the algorithm; it can be used to calculate the algorithm’s efficiency and to demonstrate its feasibility. | [30,43] |
| Time | Time distribution | A time evaluation of each step of the boundary extraction process, which aids the analysis and improvement of the algorithm and its stages. | [47] |
| Sensitivity | Sensitivity analysis | Assesses how changes in the coefficients affect the quality of the results, facilitating the interpretation and improvement of the model. | [37] |
Table 7. Classification of algorithms, corresponding features, and sources of information.

| Source of Information | Corresponding Feature | Algorithm Classification |
|---|---|---|
| Graphic boundary information | Gradient features | Low-level feature extraction algorithm |
| Graphic boundary information | Frequency domain features | Low-level feature extraction algorithm |
| Graphic internal information | Texture features | High-level feature extraction algorithm |
| Graphic internal information | Optical features | High-level feature extraction algorithm |
| Graphic internal information | Shape features | High-level feature extraction algorithm |
| Information from multiple graphs | Time features | High-level feature extraction algorithm |
| Biological visual information | Biological visual features | Visual hierarchical feature extraction algorithm |
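The gradient features in the first row are what classical low-level operators respond to. As an illustrative sketch (not tied to any single surveyed method), the following computes a Sobel gradient magnitude, which peaks where two fields of different brightness meet.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only,
    so the output is 2 pixels smaller in each dimension)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# Two fields with different grey levels meeting at a vertical boundary
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)  # response peaks along the columns flanking the step
```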
Table 8. Names, introductions, and performances of segmentation algorithms.

| Name | Introduction | Performance |
|---|---|---|
| Threshold segmentation based on internal features | Internal features are first quantified to obtain information such as texture, pixel mean, and vegetation index; a threshold is then set to segment them. | Performs well on vine plot boundaries [70] and on subfield plot boundaries between wheat and bare ground [32]; reported to achieve higher extraction accuracy than the traditional support vector machine method [32]. |
| Multiscale combinatorial grouping (MCG) | A fast and effective contour detection and image segmentation algorithm that generates boundaries and regions from a structured forest edge detector, spectral partitioning, the watershed transform, and global weighting. | Higher boundary extraction accuracy than the globalized probability of boundary (gPb) algorithm [33], the statistical region merging (SRM) segmentation algorithm [87], and the classical Sobel edge detector [30]. |
| Mean shift (MS) | A nonparametric density estimation algorithm that eliminates the need to assume a sample distribution model or fix the number of categories; it can effectively reduce intra-field variation while preserving edge information. | Outperforms the Canny edge detector and the SLD line segment detection method; improved variants also extract irregular or interference-affected field boundaries well [7,57,73]. |
| Globalized probability of boundary (gPb) [88] | A state-of-the-art edge detection algorithm in computer vision that combines image segmentation, line extraction, and contour generation, integrating texture, color, and brightness information at both local and global scales. | Experiments in three regions show that the gPb algorithm has limited applicability [33]. |
| Fuzzy c-means (FCM) | A clustering technique that applies fuzzy set theory, iteratively minimizing a weighted objective function to produce the best n partitions; an edge-detection-type algorithm is then used to obtain more accurate boundaries. | Performed well in field sub-boundary extraction for crops with high intrinsic heterogeneity such as rice and maize [37], with 93.61%, 88.78%, and 84.96% accuracy on three image sets. |
| Simple linear iterative clustering (SLIC) | A clustering algorithm extended from K-means that builds superpixels simply and efficiently; it can serve as a segmentation step before clustering. | Using SLIC followed by supervised-classification-based region merging, recognition accuracies on two high-resolution remote sensing farmland images were 83.53% and 85.58% [56]. |
| Local binary fitting (LBF) | An energy-based segmentation algorithm that effectively segments images with uneven grayscale [89]. | Performs well on satellite imagery of flatter farmland; overall accuracy, user accuracy, producer accuracy, and kappa coefficient were 89.53%, 65.93%, 86.13%, and 86.52%, respectively [54]. |
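For the threshold-segmentation row, the threshold over a quantified internal feature can be tuned manually or chosen automatically. The sketch below uses Otsu's between-class-variance criterion as one standard automatic choice; this is an illustration, and the surveyed papers do not necessarily use Otsu's method.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Pick a global threshold by maximizing the between-class variance
    over a histogram of feature values (Otsu's criterion)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()                    # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                        # class-0 (below threshold) weight
    mu = np.cumsum(p * centers)              # cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    var_between = np.nan_to_num(var_between)  # degenerate splits score 0
    return centers[np.argmax(var_between)]

# Two well-separated clusters of feature values (e.g. crop vs. bare soil)
vals = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
t = otsu_threshold(vals)
mask = vals > t  # binary segmentation of the feature map
```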
Table 9. Categories, numbers, performances, and representative literature of boundary object extraction algorithms.

| Category of Boundary Object | Number of Studies | Extraction Performance | Representative Literature |
|---|---|---|---|
| Footpath in fields | Relatively small | Performs well | [6] |
| Tian Kan (field ridge) | Relatively large | Not bad | [45] |
| Field road | Relatively small | Performs well | [52] |
| Inter-ditch | Relatively huge | Needs improvement | [98] |
Table 10. Research, categories, introductions, evaluation indicators, and performances of boundary extraction algorithms.

| Literature | Category | Introduction to Algorithm | Evaluation Indicators | Performance |
|---|---|---|---|---|
| [10] | Artificial vectorization | Manually label boundary information | Correlation coefficient | Correlation coefficient of 0.997 |
| [43] | Low-level feature extraction algorithm | Perceptual grouping | Overall matching ratio | 82.6% for SPOT-5 and 76.2% for SPOT-4 |
| [72] | Low-level feature extraction algorithm | Canny edge detection, Hough transform, and Suzuki85 contour algorithm | Maximum boundary overlap area method, confusion matrix | Area accuracy 89.7%, boundary accuracy 80.7% |
| [35] | High-level feature extraction algorithm | Anisotropic diffusion operator, information entropy difference analysis, marker-based watershed algorithm | Object consistency error | 73.06% for hilly areas |
| [47] | High-level feature extraction algorithm | Variational region-based geometric active contour and watershed algorithm | Confusion matrix class, degrees of under- and over-segmentation | User accuracy, producer accuracy, under-segmentation, and over-segmentation of 83.0%, 93.8%, 5.7%, and 14.2%, respectively |
| [57] | High-level feature extraction algorithm | Advanced MS filtering with mixed kernels, automatic bandwidth estimation, and two-stage region merging | Confusion matrix | F-measures of 0.9038 and 0.9189 at the two sites |
| [56] | High-level feature extraction algorithm | Linear iterative clustering segmentation algorithm and supervised learning | Confusion matrix, standard deviation | Mean (user and producer) accuracy greater than 89%; standard deviation less than 1.09 on average |
| [36] | High-level feature extraction algorithm | Watershed algorithm based on edge-embedded markers | True and false data statistics | 40,631 of 52,358 pixels correct; accuracy 77.6% |
| [30] | High-level feature extraction algorithm | Multi-scale combinatorial grouping | Confusion matrix | Average F-value of 94% |
| [59] | Visual hierarchy extraction algorithm | Deep Boundary Combination (DBC) algorithm | Confusion matrix | Completeness 88.73%, accuracy 91.42%, quality 81.91% |
Table 11. Introduction to available algorithms (software).
Table 11. Introduction to available algorithms (software).
| Name | Category | Brief Introduction | Reference Website or Literature |
|---|---|---|---|
| eCognition | High-level feature extraction algorithm | Commercial software that integrates multiple algorithms | https://geospatial.trimble.com/ecognition-download (accessed on 10 January 2023) |
| Canny operator and perceptual grouping | Low-level feature extraction algorithm | The literature gives the pseudocode of the algorithm | [43] |
Table 12. Features of satellite and UAV platforms.
| Feature | Satellite Platform | UAV Platform |
|---|---|---|
| Resolution | Medium | Very high |
| Individual requirements | Not supported | Supported |
| Multi-temporal information | Supported | Not supported |
| Shooting range | Huge | Small |
Table 13. Suggested scenarios and desired resolutions of features.
| Feature | Suggested Scenarios | Desired Resolution |
|---|---|---|
| Gradient features | The color difference is noticeable at the border | Low |
| Frequency domain features | Field imagery with pronounced boundary linearity | Low |
| Shape features | Regular fields [32] or regular boundary objects [6,45] | High |
| Texture features | Imagery with texture features | High |
| Optical features | Multispectral imagery [3] | High |
| Time features | Multi-temporal imagery [46] | Medium |
| Features unique to boundary objects | The boundary objects are easy to extract | Medium |
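Gradient features are the simplest entry in the table above: where the color difference at the border is noticeable, the image gradient peaks on the boundary. A minimal sketch of this idea using plain NumPy and the standard 3 × 3 Sobel kernels (an illustration, not any surveyed paper's pipeline) follows:

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels; strong responses mark
    candidate field boundaries where the brightness change is sharp."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Accumulate the 3x3 correlation one kernel tap at a time
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# Toy scene: a dark field beside a bright one; the response
# concentrates on the shared vertical boundary.
tile = np.zeros((8, 8))
tile[:, 4:] = 1.0
mag = sobel_gradient_magnitude(tile)
```

Thresholding `mag` and thinning the result is the usual next step (e.g., the Canny detector in [72] adds Gaussian smoothing, non-maximum suppression, and hysteresis on top of exactly this gradient).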
Table 14. Calculation speeds of algorithms.
| Literature | Year | Algorithm Category | Average Size (pixels) per Image | Running Time (s) per Image | Average Speed (pixels/s) |
|---|---|---|---|---|---|
| [43] | 2013 | Low-level feature extraction algorithm | 460 × 720 | 280 | 1183 |
| [33] | 2017 | High-level feature extraction algorithm | 1000 × 1000 | 660 | 1515 |
| [30] | 2019 | High-level feature extraction algorithm | 5000 × 5000 | 354 | 70,621 |
| [70] | 2007 | High-level feature extraction algorithm | 1829 × 1605 | 8 | 366,943 |
| [68] | 2019 | Boundary object extraction algorithm | 1280 × 720 | 0.6 | 1,536,000 |
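The "average speed" column above is simply the pixel count of one image divided by its runtime, rounded to whole pixels per second. A one-line sketch reproducing three of the tabulated values under that assumption:

```python
def average_speed(width, height, runtime_s):
    """Processing throughput in pixels per second for one image."""
    return round(width * height / runtime_s)

# Values from the speed table
assert average_speed(460, 720, 280) == 1183       # [43]
assert average_speed(5000, 5000, 354) == 70621    # [30]
assert average_speed(1280, 720, 0.6) == 1536000   # [68]
```

Note that this metric ignores hardware differences across the cited studies (2007–2019), so it only loosely ranks the algorithms themselves.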
Table 15. Crops and the literature.
| Crop | Literature |
|---|---|
| Wheat | [32,34,37,43,46,47,55] |
| Paddy | [30,37,43,44,55,58] |
| Corn | [37,43,46,47,55,64] |
| Beet | [37,43,46] |
| Pepper | [37,43] |
| Tomato | [37,43] |
| Soybean | [46,47] |
| Rape | [46,55] |
| Alfalfa | [47,64] |
| Barley | [46,64] |
Table 16. Studies focusing on the flatness of the terrain.
| Literature | Algorithm Category | Plain Areas (flat fields) | Hilly Areas (sloping fields) | Mountainous Areas (complex fields) |
|---|---|---|---|---|
| [46] | High-level feature extraction algorithm | 92.12%, 89.05% | 80.03% | N |
| [55] | Low-level feature extraction algorithm | 95.12% | 94.65% | 78.64% |
| [55] | High-level feature extraction algorithm | 95.04% | 94.63% | 85.19% |
Table 17. Image resolutions, coverage areas, and the smallest identified plot sizes in selected studies.
| Literature | Image Resolution | Image Coverage Area | Smallest Identified Plot Size |
|---|---|---|---|
| [43] | 20 m, 10 m | 4.6 km × 7.2 km | More than 1.5 hectares |
| [10] | 1 m | Gansu Province, China | Not less than 4000 square meters |
| [47] | 30 m | Three images of 150 km × 150 km | Not less than 14,400 square meters |
| [6] | 0.15 m | N | Shortest ridge length of 25 m |
| [37] | 4 m, 2.44 m, 0.61 m | 33 km² | Not less than 1.8 hectares |
| [48] | 30 m | The entire United States | Minimum straight side length of 120 m; minimum area of 18,000 square meters |
Wang, X.; Shu, L.; Han, R.; Yang, F.; Gordon, T.; Wang, X.; Xu, H. A Survey of Farmland Boundary Extraction Technology Based on Remote Sensing Images. Electronics 2023, 12, 1156. https://doi.org/10.3390/electronics12051156