Review

Research Progress on Remote Sensing Classification Methods for Farmland Vegetation

1 China Meteorological Administration Training Center, Beijing 100081, China
2 Patent Examination Cooperation (Beijing) Center of the Patent Office, CNIPA, Beijing 100160, China
* Author to whom correspondence should be addressed.
AgriEngineering 2021, 3(4), 971-989; https://doi.org/10.3390/agriengineering3040061
Submission received: 27 August 2021 / Revised: 17 November 2021 / Accepted: 25 November 2021 / Published: 8 December 2021

Abstract
Crop planting area and spatial distribution information have important practical significance for food security, global change, and sustainable agricultural development. How to efficiently and accurately identify crops in a timely manner by remote sensing in order to determine the crop planting area and its temporal–spatial dynamic change information is a core issue of monitoring crop growth and estimating regional crop yields. Based on hundreds of relevant documents from the past 25 years, in this paper, we summarize research progress in relation to farmland vegetation identification and classification by remote sensing. The classification and identification of farmland vegetation includes classification based on vegetation index, spectral bands, multi-source data fusion, artificial intelligence learning, and drone remote sensing. Representative studies of remote sensing methods are collated, the main content of each technology is summarized, and the advantages and disadvantages of each method are analyzed. Current problems related to crop remote sensing identification are then identified and future development directions are proposed.

1. Introduction

Agricultural production represents the foundation of a nation. Furthermore, it is a major issue related to a country’s national economy, individual livelihood, and all levels of government. Moreover, food security is the most important goal of all countries’ agricultural policies. Cultivated land represents the basic resource for human survival and social development [1,2] and provides most of the products that humans depend on for survival, such as food, fodder, fiber, and biofuels [3]. Changes in arable land use patterns will affect grain yield, subsequently affecting food security [1], an important area of land system scientific research [4,5]. The issue of food security concerns the survival and development of humankind and can be ignored by neither developed nor developing countries. Therefore, in recent years, food security has attracted increasing attention from the international community. Crop planting area and spatial distribution information have important practical significance for food security, global change, and sustainable agricultural development. Timely and accurate acquisition of crop spatial distribution and its temporal and spatial dynamic change information is not only the core data source for monitoring crop growth, estimating regional crop yields, and studying regional food balance, but is also the main basis for crop structure adjustment and layout optimization [6,7]. Remote sensing technology has been widely used in Earth observation activities due to advantages such as extensive coverage, short detection periods, intuitive results, and low cost, providing new technical means for the rapid and accurate acquisition of large-scale crop spatial distribution information [8,9].
Spectral reflectance characteristics of crops are the basic physical basis for remote sensing extraction of crop planting structures. Like other green vegetation, crops have two absorption bands in the blue and red wavelengths of visible light, where their reflectivity is low. There is an obvious reflection peak in the visible green band (near 0.55 µm) between the two absorption bands. In the near-infrared band (0.7–1.3 µm), the reflectance rises sharply to a plateau, forming a unique feature of vegetation. In the mid-infrared band (1.3–2.5 µm), due to the influence of the water content of green plants, the absorption rate increases greatly, the reflectance decreases greatly, and low valleys form in the water absorption zones. These crop spectral features often differ according to crop type, growing season, growth conditions, and field management [10,11]. Therefore, the scientific and rational use of differences in crop spectral features can achieve remote sensing extraction of various crops.
Temporal characteristics of crops are the specific theoretical basis for crop remote sensing identification. Affected by the phenomena of “the same object with different spectra” and “different objects with the same spectrum”, as well as mixed-pixel effects, remote sensing recognition of crops is much more complicated than remote sensing extraction of natural vegetation (woodland and grassland), and it is difficult to achieve ideal results based solely on spectral features. Making full use of the typical seasonal rhythm characteristics of crops is the key theoretical basis for distinguishing different crops from other green vegetation [12]. The spatial characteristics of crops are an important theoretical basis for crop remote sensing identification. Moreover, with the rapid development of image processing technology, spatial features have become an important source of auxiliary information alongside spectral and temporal features for crop remote sensing extraction, especially for suppressing the phenomenon of “the same object with different spectra” [13,14].
Agricultural production has obvious seasonality and periodicity. Phenology refers to the periodic responses of plant growth, development, activity, and other biological changes to climatic conditions. By observing and recording the growth and decline of plants over a year, comparing their temporal and spatial distribution differences, and exploring the periodicity of plant development and activity processes and their dependence on surrounding environmental conditions, we can understand the effects of climate change on agricultural production. Information on the temporal and spatial distribution of crops reflects the utilization of human agricultural resources in space. It is an important foundation for studying the patterns of agricultural and terrestrial ecosystems and the impact of global changes on agriculture. It is also important for national food security and agricultural resource and environmental research [15,16]. There are currently three main methods for obtaining regional-scale crop spatiotemporal distribution information: those based on statistical data, on remote sensing information extraction, and on multi-source information fusion.
In addition, with the rapid development of remote sensing and computer technology, remote sensing images have shown a trend toward high spatial resolution, high spectral resolution, and high temporal resolution, and the available information is becoming increasingly abundant. Spatial resolution refers to the minimum distance between two adjacent features that can be identified in a remote sensing image; the higher the spatial resolution, the richer the feature information contained in the image and the smaller the target that can be identified. Spectral resolution refers to the recording width of the detector in the wavelength direction; a higher spectral resolution means that an imaging spectrometer can obtain ground-object radiation information in more spectral channels. Temporal resolution refers to the minimum time interval between two adjacent remote sensing observations of the same area; the smaller the interval, the higher the temporal resolution.
Remote sensing image processing and classification methods have also become more accurate over time, from traditional unsupervised classification and parameter supervised classification to neural network classification, genetic algorithms, machine learning classification algorithms, and other non-parametric supervised classifications, as well as a variety of classifier combination integration algorithms.
The emergence and development of UAV remote sensing technology provide new ideas for planting information collection [17,18,19]. On small and medium scales, UAV remote sensing can play an important role and obtain more accurate crop planting information, which is of great significance to the development and application of crop monitoring technology [20,21]. UAV remote sensing has the characteristics of high resolution, simple operation, fast data acquisition, and low cost. It can quickly collect images of a given area and combine these with ground measurement data to fulfill crop planting information monitoring tasks in that area. For large-scale remote sensing, it provides accuracy verification and is a useful supplement to satellite and aerial remote sensing. UAV survey information has the characteristics of timeliness, objectivity, and large coverage, and has become an indispensable remote sensing monitoring resource [22,23].
This article is based on locally and internationally published crop remote sensing extraction literature over the past 25 years (1985–2019). See Table 1. Through in-depth analysis and refinement of the literature, it is concluded that the classification of farmland vegetation can be divided into farmland vegetation classification based on (1) vegetation index and spectral band, (2) multi-source data fusion, (3) artificial intelligence learning, and (4) drone remote sensing.

2. Farmland Vegetation Classification Based on Vegetation Index

The classification of farmland vegetation based on vegetation index mainly uses the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). Chang et al. (2007) introduced land surface temperature (LST), combined with time-series data from seven MODIS single bands and time-series NDVI, as the final input feature quantities, and used a regression tree classifier to extract the spatial distribution area of corn and soybeans in the main producing areas of the United States [24]. Zhang et al. (2008) used the fast Fourier transform to process MODIS NDVI time-series curves, selected the average value of the curve and the initial phases and amplitude ratios of the first three harmonics as the parameters for crop identification, and realized the identification of corn, cotton, and their rotations in North China [25]. Zhang et al. (2008) determined four key phenological variables based on the phenological patterns shown by the MODIS enhanced vegetation index (EVI) time-series curves of maize and wheat; namely, the initial growth time of the crop (Tonset), peak growth time (Tpeak), EVI maximum (EVIpeak), and growth termination time (Tend). This information was combined with expert knowledge to determine the thresholds of critical-period variables, and the spatial distribution and rotation of winter wheat and corn in the North China Plain were successfully identified [26]. Xiong et al. (2009) selected summer and autumn crop rotation periods and MODIS NDVI average values as standards, used a layered method to distinguish autumn-harvest crop areas from other areas, and used the BP (back propagation) neural network method to classify and effectively extract three crop types, middle rice, late rice, and cotton, in Jiangling District, Hubei Province [27]. Cai et al. (2009) also fused ETM+ images with time-series MODIS NDVI images and used the fused 24-scene time-series NDVI data to better extract rice, rape, wheat, and their rotations in the Zhanghe Irrigation Area [28].
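As a concrete reference, both indices can be computed per pixel from red, near-infrared, and blue reflectance. The sketch below uses the standard NDVI formula and the common MODIS EVI coefficients; the reflectance values are hypothetical.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Hypothetical dense green crop canopy: high NIR, low red reflectance.
print(round(ndvi(0.45, 0.05), 3))  # -> 0.8
print(round(evi(0.45, 0.05, 0.03), 3))
```

For a dense green canopy (high NIR, low red), NDVI approaches 1, while bare soil and water give values near or below zero; EVI additionally uses the blue band to reduce atmospheric and soil-background effects.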
He (2010) used a wavelet transform to fuse MODIS NDVI and TM NDVI. The fused NDVI not only preserves the spectral characteristics of the original time series, but also increases the spatial resolution from 250 m to 30 m, which improves the accuracy of planting-structure extraction compared with a single NDVI feature [29,30]. Huang (2010) analyzed the phenological characteristics of crops and the change characteristics of NDVI time series and found the key period for identifying the main crop types in three provinces in Northeast China; using the phenological calendar and agricultural field monitoring data, the crop recognition thresholds were iteratively revised and a remote sensing extraction model of crop planting structure was built. Hao (2011) obtained the spatial distribution of crop planting structures in the three northeastern provinces by analyzing time-series MODIS NDVI images, using the ISODATA unsupervised classification algorithm and spectral coupling technology [31]. Peña-Barragán et al. (2011) performed object-oriented segmentation on Aster images, constructed time-series vegetation indices of each object (VIgreen, NDVI, etc.) and another 336 feature quantities, and finally used a decision tree to realize the automatic extraction of the planting structure composed of 13 crops in Yolo County, California [32]. Zhang et al. (2012) compared the maximum, minimum, and average values of each time-series point in the MODIS EVI curve of each crop to find the critical period for identifying each crop and the corresponding threshold; combined with the results of TM supervised classification, the crop planting structure in the Heilong Port area was extracted [33]. Foerster et al. (2012) used 35 Landsat TM/ETM+ images from different seasons between 1986 and 2002 to construct crop NDVI time-series curves and set reasonable value ranges by analyzing the differences in spectral standard deviation values of different crops at various time-series points; a spatial distribution map of 12 crops in northeastern Germany was drawn [34]. Zhong et al. (2014) tested the phenological parameters, EVI, phenological index, normalized difference senescent vegetation index (NDSVI), normalized difference tillage index (NDTI), and other characteristic quantities, as well as their combinations. It was found that including phenological parameters in classification can reduce the ground-data requirements of crop mapping, and that classification with all four types of feature quantities achieves the highest overall classification accuracy [35].
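The threshold-based phenological variables described above (Tonset, Tpeak, EVIpeak, Tend) can be sketched, in much-simplified form, as amplitude-threshold crossings on an EVI time series. The curve and the 20% onset fraction below are illustrative assumptions, not values from the cited studies.

```python
def phenological_metrics(evi, onset_frac=0.2):
    """Extract simple phenological variables from an EVI time series.

    Tonset/Tend: first/last time step where EVI exceeds a fraction of the
    seasonal amplitude above the minimum; Tpeak: time of the EVI maximum.
    """
    lo, hi = min(evi), max(evi)
    thresh = lo + onset_frac * (hi - lo)
    above = [t for t, v in enumerate(evi) if v > thresh]
    return {"Tonset": above[0], "Tpeak": evi.index(hi),
            "EVIpeak": hi, "Tend": above[-1]}

# Synthetic single-season EVI curve (one value per composite period).
curve = [0.10, 0.12, 0.20, 0.35, 0.55, 0.62, 0.50, 0.30, 0.15, 0.11]
print(phenological_metrics(curve))
```

Operational studies usually fit smoothed models (harmonics or logistic curves) before extracting such variables; this raw-threshold version only illustrates the idea.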

3. Farmland Vegetation Classification Based on Spectral Band

Remote sensing recognition methods for crops based on spectral features comprise visual interpretation, supervised and unsupervised classification based on image statistics, and various integrated classification methods based on syntactic structure classification [36].
The visual interpretation method relies on highly experienced specialists who directly observe the color, shape, texture, spatial position, and other characteristics of features in the image and interpret the remote sensing image after analysis, reasoning, and inspection. The advantage of this method is that it can obtain high classification accuracy, and it was mostly used in early crop yield estimation research based on remote sensing technology [37], but its disadvantages are also obvious. For example, it requires interpreters to have rich experience and strong professional knowledge. Moreover, it needs to be based on a large number of on-site sampling surveys, which require significant manpower and material resources, limiting the method. Lastly, it is not suitable for crop identification research over a large area [38].
The main difference between supervised and unsupervised classification is whether there is prior knowledge of the specific classification of images. It is currently the most basic and generalized, mature, and commonly used feature information extraction technology. Among these technologies, supervised classification has high accuracy, and the classification result is in agreement with the actual category, but because it requires certain prior knowledge, the workload is relatively large. On the other hand, unsupervised classification is easier to implement; however, the accuracy of the classification results is relatively poor [39]. Both methods have their advantages, but also have certain shortcomings. With the continuous introduction and enhancement of new methods, theories, and technologies, as well as the continuous development of computer technology, the classification accuracy of supervised and unsupervised classification has also continued to improve. In order to improve some of the limitations of traditional algorithms, an increasing number of scholars are constantly improving classic algorithms and constructing new algorithms to improve the accuracy of crop recognition. Therefore, various integrated classification methods based on syntactic structure are gradually being applied [40].
However, due to the limitations of satellite image resolution, it is difficult to avoid the phenomena of “the same object with different spectra” and “different objects with the same spectrum” in the classification process. Therefore, it is difficult to obtain ideal results for crop classification with complex planting structures based solely on the spectral characteristics of ground objects [41].
The classification of farmland vegetation based on spectral bands can also be divided into remote sensing recognition of farmland vegetation based on (1) a single image and (2) multi-temporal remote sensing images. The remote sensing recognition of farmland vegetation based on multi-temporal remote sensing images can be divided into (1) single-feature parameter recognition, (2) multi-feature parameter recognition, and (3) multi-feature parameter statistical models. The two approaches are compared in terms of applicability, data sources, classification methods, advantages, and disadvantages, as shown in Table 1. In general, crop remote sensing recognition based on a single image is suitable for areas with relatively simple crop planting structures. Data sources include SPOT-5, IRS-1D, CBERS-02B, Landsat TM, HJ-1B, HJ-1A, MODIS (note: these are the names of sensors or satellites), and other data; classification methods include decision trees, support vector machines, neural networks, maximum likelihood, spectral angle mapping, etc. The characteristics of single-image crop remote sensing recognition include high efficiency and strong operability, but the disadvantages are that the revisit period is long and the accuracy is poor when the key phenological period is not obvious. The remote sensing recognition of crops based on multi-temporal remote sensing images is suitable not only for areas with relatively simple crop planting structures, but also for areas with complex crop planting structures; data sources include MODIS, AVHRR, SPOT VGT, ASTER, AWiFS, Landsat TM/ETM+, HJ-1A/B, VEGETATION (note: these are the names of sensors), etc. The classification methods differ according to the parameters being recognized.
The main classification methods include the fast Fourier transform, unsupervised classification with spectral coupling technology, BP neural networks, threshold methods, wavelet transforms, the minimum distance threshold method, classification and regression trees, See5.0, unsupervised classification, spectral matching technology, image segmentation, random forests, temporal decomposition models, neural network models, independent component analysis models, the CPPI index model, etc. The characteristics of remote sensing recognition based on multi-temporal remote sensing images include simple operation, high efficiency, and high precision, but the disadvantages are limited stability and universality, and the selection of feature quantities may be subjective. Table 2 lists farmland vegetation remote sensing recognition methods based on spectral bands.

4. Farmland Vegetation Classification Based on Multi-Source Data Fusion

The remote sensing identification methods for crops based on multi-source data comprise multi-source remote sensing data fusion methods and methods that integrate remote sensing and non-remote sensing information. Data from different sensors vary in space, time, spectrum, direction, and polarization. Therefore, for the same area, multiple sources of remote sensing data can be obtained. In the process of crop identification, a single data source does not provide enough information to meet the needs of crop identification. The use of multi-source data can therefore fuse remote sensing information from different types of images to compensate for the lack of instantaneous remote sensing information and reduce the ambiguity of understanding, thereby improving the accuracy of crop recognition [42,43].
Crop recognition based on multi-source data fusion is applicable to a wide range of areas. The main data sources are radar (SAR) remote sensing, Gaofen (GF) high-resolution satellite data, drone remote sensing, ground remote sensing, cultivated land data, statistical data, agricultural climate suitability data, population density, etc. The main classification methods are the Principal Component Analysis (PCA) transformation method, HIS (hue, intensity, and saturation) transformation method, Brovey transformation method, Gram–Schmidt transformation fusion, NNDiffuse fusion method, SPAM (set pair analysis method) model, MIRCA2000 (global monthly irrigated and rainfed crop areas), GAEZ (global agro-ecological zones), etc. The advantages of multi-source data are high precision, high timeliness, and a wide application range, which can compensate for the lack of instantaneous remote sensing information. The disadvantages include a large workload, difficult data acquisition, poor regional suitability of the data, and a lack of long-term sequence data sets.
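Of the fusion transforms listed above, the Brovey transformation is the simplest to sketch: each multispectral band is rescaled so that the band sum matches the co-registered high-resolution panchromatic value. The pixel values below are hypothetical.

```python
def brovey(red, green, blue, pan):
    """Brovey pan-sharpening for one pixel: rescale each multispectral band
    so that the band sum equals the high-resolution panchromatic value."""
    total = red + green + blue
    return tuple(band * pan / total for band in (red, green, blue))

# Hypothetical low-resolution multispectral pixel and co-registered pan value.
print(brovey(0.2, 0.3, 0.1, 0.9))
```

Because it only rescales intensities, the Brovey transform sharpens spatial detail at the cost of distorting the original spectral ratios, which is why PCA and Gram–Schmidt variants are often preferred for quantitative work.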
Since crop area statistical data are irreplaceable and crucial in the fields of climate change, national food security, etc., scholars have integrated statistical data with natural factors such as temperature, precipitation, soil, and topography, as well as farmers’ planting habits. Moreover, socio-economic factors such as population density and agricultural product prices are integrated as non-remote sensing information with remote sensing information to establish crop spatial distribution models [16,44,45]. In this way, the spatial distribution information of crops over a large range can be extracted as gridded distribution maps, which provide reliable basic crop spatial distribution data for global change and food security research [46,47]. For example, Leff et al. (2004) used remote sensing information to obtain agricultural land cover data, fused crop statistics at the national scale and some provincial scales, and extracted the spatial distribution of 18 major crops around the world at a spatial resolution of 10 km [48]. Ramankutty et al. (2008) and Monfreda et al. (2008) used linear regression models to distribute crop statistics at different spatial scales to farmland pixels at a resolution of 10 km around the world and obtained the spatial distribution information of 11 major crops. Some scholars, based on the cross-entropy principle of the crop spatial distribution model (SPAM), fused remotely sensed information with agricultural statistical data and obtained high-precision crop spatial distribution results at the global and regional scales [49,50]. Fischer et al. (2012), using the latest GAEZ model, comprehensively used the global cultivated land distribution map, crop suitability, population density, market distance, and other information based on the same cross-entropy theory and method to assign crop statistical information to 5 arc-minute grid pixels and obtain the spatial distribution of 23 types of crops in the world.
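The cross-entropy allocation used by SPAM and GAEZ is far more elaborate, but its core idea, distributing a statistical crop area across pixels according to a prior such as suitability, can be sketched as simple proportional allocation. The suitability scores below are hypothetical.

```python
def allocate(total_area, suitability):
    """Distribute a reported crop area across pixels in proportion to a
    suitability score -- a much-simplified stand-in for the cross-entropy
    allocation used by SPAM/GAEZ."""
    s = sum(suitability)
    return [total_area * w / s for w in suitability]

# 1000 ha of a crop reported for a county, spread over 4 hypothetical pixels.
print(allocate(1000.0, [0.9, 0.6, 0.4, 0.1]))
```

The real models additionally enforce constraints (cropland extent, irrigation shares, price and market-access terms) by minimizing cross-entropy against such a prior rather than allocating purely proportionally.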
However, most current crop spatial distribution mapping technologies that integrate remote sensing and non-remote sensing information use remotely sensed land use, agricultural irrigation, and arable land suitability only as auxiliary information when considering the growth mechanisms and change laws of the target crops. The crop’s own remote sensing information (especially time-series remote sensing information) is not fully and directly applied, which hinders improvement of the accuracy of crop spatial distribution mapping. In recent years, some scholars have shown that introducing remote sensing feature parameters such as NDVI into crop spatial distribution mapping based on remote sensing and non-remote sensing information (such as statistical data) reduces the dependence of classification rules on training samples. This method is easy to understand and operate and can effectively improve the accuracy and efficiency of crop spatial distribution mapping [51,52].
Multi-source remote sensing data fusion is based on combining remote sensing data sets from different sources through a certain mathematical algorithm to complement and synthesize the temporal and spatial resolution and accuracy of the multi-source data to obtain a new data set. Multi-source remote sensing data usually comprise a variety of global- and regional-scale remote sensing data sets from different countries and organizations, such as the MODIS Collection 5 product developed by Boston University [53], China’s land use remote sensing monitoring data developed by the Institute of Geography of the Chinese Academy of Sciences [54], and the GlobeLand30 data set developed by the National Basic Geographic Information Center [55]. These data sets come from different sensors, different spatial resolutions, and different classification algorithms, and there are large inconsistencies in space [56]. Multi-source data fusion can effectively solve the above problems and obtain data products with higher accuracy [57].
Multi-source remote sensing data fusion methods are divided into data consistency scoring methods and regression analysis methods [56]. The former build a scoring table based on the consistency of the input data sets and select high-confidence pixels for fusion. For example, Jung et al. (2006) developed a fuzzy consistency scoring method to generate a new global land cover product with 1 km spatial resolution [58]. Following Jung et al. (2006), Fritz et al. (2015) used an optimized fuzzy consistency scoring method to generate a global cultivated land distribution map [59]. Lu et al. (2017) used a new hierarchical optimization method to generate an integrated cultivated land distribution map of China [60]. The second group of methods first establishes a regression relationship between training samples and the input data sets and then uses it to predict the probability of cultivated land in areas without samples. Regression models are usually based on a large number of training samples, and regression analysis has been widely used in global- and regional-scale fusion mapping. Kinoshita et al. (2014) integrated six remote sensing data products and established a global land cover and percentage map through logistic regression [61]. See et al. (2015) used the logistic geographically weighted regression (GWR) method to establish a global model and produced a global land cover product with a spatial resolution of 1 km [57]. In addition, Schepaschenko et al. (2015) used the GWR model to generate a global forest cover map [62]. Table 3 lists representative papers on multi-source remote sensing data fusion methods.
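A toy version of the consistency-scoring idea: per pixel, keep the label on which enough input products agree and flag the rest as uncertain. The labels and the agreement threshold below are illustrative, not those of the cited scoring tables.

```python
def consensus(products, min_agree=2):
    """Per-pixel consistency scoring across land-cover products: keep the
    majority label when at least `min_agree` products agree, otherwise
    flag the pixel as uncertain."""
    fused = []
    for labels in zip(*products):
        counts = {}
        for lab in labels:
            counts[lab] = counts.get(lab, 0) + 1
        best = max(counts, key=counts.get)
        fused.append(best if counts[best] >= min_agree else "uncertain")
    return fused

# Three hypothetical products, four pixels each.
a = ["crop", "crop", "forest", "crop"]
b = ["crop", "water", "forest", "grass"]
c = ["crop", "grass", "crop", "urban"]
print(consensus([a, b, c]))
```

The published scoring methods weight products by known accuracy and class-specific reliability rather than counting equal votes, but the pixel-wise agreement logic is the same; "uncertain" pixels are where regression-based fusion or extra training data are most valuable.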

5. Farmland Vegetation Classification Based on Machine Learning

The extraction of crops by remote sensing based on spectral characteristics uses the spectral characteristics, spatial pattern characteristics, or other information of each pixel or region of the image in different spectral bands, applying certain rules to extract different crops [67]. This approach was initially based on visual interpretation and subsequently developed into syntactic analysis methods represented by support vector machine and decision tree algorithms. The syntactic analysis method is based on non-parametric classifiers and is widely used, and automatic classification is an important direction for remote sensing image classification. Commonly used machine learning algorithms in remote sensing image classification include neural network classifiers, genetic algorithms, decision trees, and support vector machines. Due to the spatial resolution of remote sensing data and the size of features, remote sensing image classification suffers from problems including mixed pixels and the phenomena of “different objects with the same spectrum” and “the same object with different spectra”. In this regard, classification methods such as sub-pixel decomposition classification, object-oriented classification, and plot-based classification have gradually been developed and have solved some of these problems. Most classifications are pixel-based, such as maximum likelihood classification, decision tree classification, neural network classification, etc. Sub-pixel classification includes methods such as fuzzy c-means classification and spectral decomposition. Classification based on land parcels mainly combines remote sensing (RS) and geographic information system (GIS) technologies [68]. However, due to differences in remote sensing image data types, classification features, and study areas, there are still some differences between remote sensing image classification algorithms.
In order to learn from the strengths of each method, the combined and integrated research of multiple classification algorithms has been given more attention. The following is a brief introduction to farmland vegetation classification methods commonly used in remote sensing image machine learning.

5.1. Support Vector Machine Algorithm

Support vector machine (SVM) is a machine learning method that was developed by Vapnik et al. in the mid to late 20th century on the basis of statistical theory [69]. The support vector machine algorithm is essentially a linear classifier but can deal with non-linear problems; that is, it is based on the principle of a linear classifier and evolved after the principle of structural risk minimization and the kernel theory were introduced [70]. Nowadays, support vector machine theory has been widely used in the study of land use type classification and crop area extraction and its spatial pattern dynamic changes [71,72]. Breunig et al. used SVM, maximum likelihood (ML), spectral angle mapper (SAM), and spectral information divergence (SID) methods to extract different soybean varieties in Brazil. Among them, SVM and ML showed the best classification accuracy, 81.76% and 89.90%, respectively; the accuracy of the other two classification methods was less than 75%. However, because precipitation and irrigation increase the soil moisture content, which interferes with the extraction of soybeans, corn, rice, and wheat to a certain extent, the extraction of crops close to water sources and irrigation areas poses new challenges to classification by support vector machines [73]. Based on multi-temporal HJ-1A/B data, Jin et al. used the support vector machine algorithm to study the classification and mapping of irrigation and dry-farming wheat. The results showed that the support vector machine algorithm can be used in the spatial pattern mapping of irrigation and dry farming wheat, effectively avoiding the subjective influence of actual experience threshold setting in supervised classification [74]. 
The advantages of the support vector machine are its simple structure, global optimization, sufficient adaptability and generalization ability, strong robustness, ability to solve high-dimensional nonlinear features, and the avoidance of problems such as over-learning [75,76,77,78]. Compared with neural networks and decision classifiers, support vector machines have the advantage of effectively processing limited sample data. Moreover, support vector machines use different kernel functions to solve nonlinear problems. Therefore, the support vector classifier has favorable promotion ability. Furthermore, it is widely used in land use and land cover classification for specific feature information extraction (such as crops, forests, buildings, impervious layers), and to change monitoring research [76].
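A linear SVM can be sketched in a few lines with the Pegasos sub-gradient method (Shalev-Shwartz et al.). Studies such as those above use kernel SVMs on many bands, so the two-band crop/soil pixels and the hyper-parameters here are purely illustrative.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM with the Pegasos sub-gradient method.
    y must be in {-1, +1}; returns weights and bias."""
    rng = random.Random(seed)
    d = len(X[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # decreasing step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:  # hinge-loss sub-gradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical 2-band (red, NIR) pixels: crop (+1, high NIR) vs soil (-1).
X = [(0.05, 0.45), (0.07, 0.50), (0.25, 0.30), (0.30, 0.28)]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
print([predict(w, b, x) for x in X])
```

In practice one would use a library implementation with kernel support and cross-validated hyper-parameters; this fixed-seed version is only meant to show the hinge-loss update behind the structural-risk-minimization principle mentioned above.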
The further development and application of non-parametric classifiers have opened a new direction for improving classification accuracy and stability. To overcome the shortcomings of any single classifier, combined (ensemble) classifiers have become another popular direction in recent years; however, improvements to both the original and new algorithms still require further study. In addition, for remote sensing extraction of crop spatial pattern information, how to select an appropriate classification algorithm and how to set the classifier parameters remain urgent problems to be solved [79].

5.2. Neural Network Algorithm

The neural network algorithm is a non-parametric supervised classification method that simulates the learning process of the human brain [80,81,82] and was among the earliest machine learning algorithms. Compared with parametric classifiers, a neural network classifier does not require assumptions about statistical distribution characteristics, so it is suitable for classifying remote sensing images with arbitrary distribution characteristics, as well as multi-source remote sensing image data. Neural network classification is widely used in land use and land cover classification and specific feature information extraction [82,83,84,85]. Variants include wavelet neural networks, radial basis function neural networks, three-dimensional Hopfield neural networks, and back-propagation (BP) neural networks; the BP network is currently the most widely used. However, the neural network algorithm is based on the principle of empirical risk minimization: the model approaches the optimal result only as the number of samples tends to infinity, and in reality, infinite samples cannot be collected. Moreover, the algorithm requires a large number of samples to train the model repeatedly, its computation is slow, and its efficiency is low, so its generalization is also limited [86].
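As an illustration of the back-propagation (BP) training loop described above, here is a tiny one-hidden-layer network in plain Python. This is a pedagogical sketch with invented two-band "pixel" inputs, not a network from the cited studies.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyBPNet:
    """One-hidden-layer back-propagation network (squared-error loss)."""
    def __init__(self, n_in, n_hid, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        o = sigmoid(sum(w * hi for w, hi in zip(self.w2, h)) + self.b2)
        return h, o

    def train(self, X, y, lr=0.5, epochs=3000):
        for _ in range(epochs):
            for x, t in zip(X, y):
                h, o = self.forward(x)
                # output delta: dE/dnet for squared error with sigmoid
                do = (o - t) * o * (1 - o)
                # hidden deltas propagated backwards through old w2
                dh = [do * w * hi * (1 - hi) for w, hi in zip(self.w2, h)]
                self.w2 = [w - lr * do * hi for w, hi in zip(self.w2, h)]
                self.b2 -= lr * do
                for j, row in enumerate(self.w1):
                    for i in range(len(row)):
                        row[i] -= lr * dh[j] * x[i]
                    self.b1[j] -= lr * dh[j]

    def predict(self, x):
        return 1 if self.forward(x)[1] >= 0.5 else 0
```

Repeated presentation of labeled samples with backward error propagation is the "large number of samples, many training passes" cost that the text notes.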

5.3. Decision Tree Algorithm

Decision tree classification is a statistics-based, non-parametric supervised classification method that uses a tree structure and hierarchical classification as its guiding idea, assigning each pixel of the remote sensing image to a class according to a set of segmentation thresholds [86,87]. It first builds a discriminant function from the training samples, creates branches according to the values of the discriminant function, and then repeatedly builds the next branch on each branch of the tree until all classes are separated [79]. The decision tree algorithm has the advantages of a clear structure, fast computation, good robustness and flexibility, suitability for multiple features, high classification accuracy, and easily understood classification principles [88]. Both pixel-based and object-oriented classification can be realized with it. Different types of decision tree algorithms have been developed to solve different classification problems. However, the decision tree algorithm requires a large number of training samples, there is considerable uncertainty in determining the thresholds of split nodes, and its universality needs to be improved.
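The hierarchy of threshold splits can be pictured as a hand-built rule tree. The thresholds below are purely illustrative (in practice they come from discriminant functions learned from training samples), and the indices used are the common NDVI and NDWI:

```python
def classify_pixel(ndvi, ndwi):
    """Toy decision tree for one pixel: each if-statement is a split
    node with an illustrative (made-up) threshold."""
    if ndwi > 0.3:        # root split: water vs. land
        return "water"
    if ndvi < 0.2:        # land branch: bare soil vs. vegetation
        return "bare soil"
    if ndvi < 0.5:        # vegetation branch: density split
        return "sparse crop"
    return "dense crop"
```

Each pixel falls down exactly one path of the tree, which is why decision tree classification is fast and its decisions are easy to interpret.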
Random forest is a typical decision tree-based method. The random forest (RF) algorithm is a machine learning method proposed by Leo Breiman and Adele Cutler in 2001 [89]. It has a high tolerance for outliers and noise and is not prone to overfitting. It uses bootstrap re-sampling to draw multiple samples from the given data, models each bootstrap sample separately, and finally combines the predictions of the individual decision trees by voting to determine the result [90]. In recent years, the random forest algorithm has been widely used in remote sensing monitoring of crop spatial pattern information. Schultz et al. combined image segmentation with random forest classification to establish an object-oriented automatic classification workflow and applied it to the classification of staple crops in São Paulo, Brazil; the mapping (producer's) accuracy and user's accuracy for soybeans reached 92.5% and 85.3%, respectively [91]. In the absence of yearly training samples, Zhong et al. used phenological indicators to classify soybeans and corn with the random forest algorithm. The results showed that when the training samples come from the mapping year, the overall classification accuracy can exceed 88%; when they do not, using phenological indices as input still yields an acceptable overall accuracy of about 80% [35]. Song Qian used the random forest algorithm to train on a high-dimensional spectral-texture feature space of GF-1 (a high-resolution Chinese satellite) imagery; introducing texture features reduced the misclassification rate between soybeans and corn and increased the extraction accuracy by 3.57% and 2.86%, respectively [92].
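The bootstrap-and-vote recipe can be sketched as follows, using one-split "stumps" in place of full decision trees. This is a simplification: real random forests also grow deep trees and randomize which features are tried at each split.

```python
import random
from collections import Counter

def best_stump(X, y):
    """Fit a one-split decision stump: pick the (feature, threshold,
    orientation) that minimizes misclassifications on this sample."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for left, right in ((0, 1), (1, 0)):
                errs = sum((left if x[f] <= t else right) != yi
                           for x, yi in zip(X, y))
                if best is None or errs < best[0]:
                    best = (errs, f, t, left, right)
    _, f, t, left, right = best
    return lambda x: left if x[f] <= t else right

def random_forest(X, y, n_trees=25, seed=0):
    """Bootstrap-resample the training set, fit one model per sample,
    and predict by majority vote (the core random-forest recipe)."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap
        stumps.append(best_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda x: Counter(s(x) for s in stumps).most_common(1)[0][0]
```

Because each tree sees a different bootstrap sample, occasional noisy or outlying samples affect only some trees, and the vote averages them out, which is the tolerance property noted above.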

5.4. Object-Oriented Machine Learning Algorithms

With continuous improvement in the spatial resolution of remote sensing images, image scenes have become increasingly complex, and the phenomena of "different objects with the same spectrum" and "the same object with different spectra" have become increasingly prominent. Pixel-based classification suffers from a serious salt-and-pepper effect, which limits the accuracy of image information extraction. For this reason, object-oriented classification methods have emerged that take image segments ("objects") as the basic processing unit and comprehensively utilize spectral features, texture features, shape features, and topological information [93,94,95]. Compared with pixel-based classification, object-oriented classification can extract information at multiple scales through the combined use of geographic information systems and remote sensing technologies, which reduces the salt-and-pepper effect and improves the accuracy of information extraction [96]. Object-oriented classification is widely used in land use and land cover classification [97], specific feature information extraction, and change monitoring [98]. However, the optimization of segmentation scale in object-oriented classification remains a difficult problem [94].
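The salt-and-pepper suppression that objects provide can be illustrated with a per-segment majority vote. This sketch assumes a segmentation is already available; real object-oriented workflows also classify directly on object-level spectral, texture, and shape features.

```python
from collections import Counter

def object_majority_filter(pixel_labels, segment_ids):
    """Relabel every pixel with the majority class of its segment
    ('object'), suppressing isolated salt-and-pepper pixels.
    Both inputs are 2D lists of equal shape."""
    votes = {}
    for row_l, row_s in zip(pixel_labels, segment_ids):
        for lab, seg in zip(row_l, row_s):
            votes.setdefault(seg, Counter())[lab] += 1
    majority = {seg: c.most_common(1)[0][0] for seg, c in votes.items()}
    return [[majority[seg] for seg in row] for row in segment_ids]
```

A single mislabeled pixel inside a field polygon is outvoted by its neighbors in the same object, which is exactly how object-based maps end up smoother than per-pixel ones.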

5.5. Deep Learning Algorithm

Deep learning came to prominence with an article published by Hinton et al. in Science in 2006 [99]. It is a machine learning approach based on multilayer neural networks. A deep learning workflow comprises four steps: data processing, feature extraction and selection, forward propagation through the network, and tuning of the network [100]. At present, deep learning algorithms are used mostly for extracting information from hyperspectral remote sensing images [101,102,103].
In general, remote sensing image classification technologies and methods are developing rapidly and are of many types, but which classification is most effective in a given situation is not yet fully understood. Pixel-based classification methods are currently the most widely used, but their accuracy can be affected by mixed pixels. Sub-pixel classification methods were developed to address pixel mixing: they rely only on spectral information but have a positive effect on classification accuracy for medium- and low-spatial-resolution data. Classification based on plots, or object-oriented classification, can exploit various features such as spectral information, spatial structure, contextual features, and shapes, and is particularly suitable for high-spatial-resolution data [102]. Comparative analyses show that non-parametric machine learning methods based on multiple features outperform other methods [68].
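The sub-pixel idea is commonly formalized as linear spectral unmixing: a mixed pixel's spectrum is modeled as a weighted sum of pure "endmember" spectra, and the weights (abundance fractions) are solved per pixel. Below is a minimal two-endmember least-squares version, an illustrative sketch rather than any specific cited method:

```python
def unmix_two_endmembers(pixel, em_a, em_b):
    """Least-squares abundance of endmember A under the linear mixing
    model pixel ~= f*A + (1 - f)*B, applied band by band."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, em_a, em_b))
    den = sum((a - b) ** 2 for a, b in zip(em_a, em_b))
    f = num / den
    return max(0.0, min(1.0, f))   # clamp to the physical range [0, 1]
```

For example, a pixel that is 25% crop and 75% soil by area yields f = 0.25, i.e., the crop fraction inside the pixel rather than a single hard label.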

6. Crop Classification Based on Drone Remote Sensing

The essence of crop type extraction is remote sensing interpretation. Through the spectral and texture characteristics of different crops in UAV images, planting information such as crop category and spatial distribution can be obtained [104,105]. In terms of using drones to monitor crops, Córcoles et al. used multi-rotor drones equipped with Pentax A40 cameras to obtain onion images, determine surface cover, establish a simple linear relationship between canopy cover and leaf area index, and estimate onion leaf area index; the main advantages of this method are that it is non-destructive, simple, and time-saving [20]. Li Bing et al. used unmanned aerial vehicles equipped with ADC multi-spectral cameras to monitor field wheat at different phenological stages, providing a reference for the study of high-resolution image coverage [106]. Tian Zhenkun et al. used a Kyosho Caliber ZG 260 remote-controlled gasoline helicopter equipped with an ADC Air canopy measurement camera (Tetracam, Los Angeles, CA, USA), with winter wheat as the research object; they exported the data with PixelWrench2 software (Los Angeles, CA, USA, version 1.5), calculated spectral reflectance and NDVI, and proposed a fast classification and extraction method that combines high accuracy and universality with speed and low cost [107]. Xu et al. applied polynomial geometric correction to images; after correction, the accuracy of the obtained field information met the requirements of precision agriculture [108,109]. Popescu et al. used LBP texture features in RGB and HSV space to achieve UAV image segmentation [110]. Jin et al. used drone images to estimate wheat plant density, and Milas et al. used drone images for shadow classification [111,112]. These studies show that UAV remote sensing has abundant applications in agriculture.
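The NDVI computed in workflows like these is simply the normalized difference of near-infrared and red reflectance:

```python
def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red). Dense green vegetation gives
    values near 1, bare soil near 0, water negative; guard the
    zero-denominator case for no-data pixels."""
    s = nir + red
    return (nir - red) / s if s else 0.0
```

Applied per pixel across a UAV mosaic, this yields the vegetation index layer on which threshold- or classifier-based crop extraction then operates.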
The texture features of crops are an entry point for crop classification and extraction. Extracting texture features through image filtering allows the recognition or classification of specific crops in remote sensing images [110,112]. For example, Li et al. established corresponding texture rule sets to extract tree crowns from UAV images [113]. Zhang Chao et al. obtained high-spatial-resolution UAV remote sensing image data and studied the appropriate scale for computing the texture characteristics of seed-production corn; through comparative analysis, they determined that the most suitable resolution for distinguishing seed-production from field corn is 0.6–0.9 m. A 0.7 m resolution image was then used to verify the method, and the decision tree method was used to obtain seed-production information, which supports high-spatial-resolution remote sensing management of seed-production cornfields [114]. Zou Kunlin et al. used drones equipped with visible light cameras to collect images of cotton fields and surrounding areas. Through vegetation index and texture features, the effects of different features on defoliated cotton fields, non-defoliated cotton fields, winter wheat fields, and bare areas were compared. The results showed that vegetation index features can effectively distinguish non-defoliated cotton fields, texture features can effectively distinguish bare land from defoliated cotton fields, and the combined use of spectral and texture features can effectively extract the area of cotton fields [115]. Li Ming used UAV remote sensing to obtain images of rice planting areas, stitched them with Agisoft PhotoScan software (St. Petersburg, Russia, version 1.7.5), segmented the test area to extract spectral, geometric, and texture features, and established a two-class logistic regression model for identifying rice plots [116]. Dai Jianguo et al. obtained visible light images by drone remote sensing, extracted and optimized spectral and texture features, and classified the main crops in northern Xinjiang. The results showed that the support vector machine produced the best classification, with accuracy for α, summer squash, corn, and cotton reaching more than 80%, which can serve as a reference for crop information surveys [117]. Xue et al. used drones to obtain orthophoto and terrain data; at a resolution of 0.5 m, terraces were segmented and extracted with an object-oriented method combining the two data sets. The results showed that SVM classification using spectral, texture, and terrain information produced usable results [118].
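A minimal example of a texture feature is the grey-level variance in a small moving window: smooth canopies score low, while heterogeneous or bare areas score high. The cited studies use richer descriptors such as GLCM statistics and LBP, which this sketch does not implement.

```python
def local_variance_texture(band, win=3):
    """Per-pixel texture as grey-level variance in a win x win window,
    with edge windows truncated at the image border."""
    h, w, r = len(band), len(band[0]), win // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [band[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            m = sum(vals) / len(vals)
            out[i][j] = sum((v - m) ** 2 for v in vals) / len(vals)
    return out
```

Stacking such a texture layer alongside the spectral bands is what gives classifiers the extra separability reported for crops whose spectra overlap.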
The physical properties of crops can also be used for classification. The canopy structures of different crops vary significantly in height, shape, leaf inclination, and so on, but traditional remote sensing cannot obtain high-resolution canopy structure data. With the rapid development of UAV remote sensing and sensor technology, more data types can be obtained, providing new development space for crop remote sensing classification. Extracting the physical properties of crops from UAV images has become a new topic of interest [119]. Bendig et al. confirmed that plant height extracted from crop surface models (CSMs) has sufficient accuracy and established an estimation model of barley plant height and biomass [120]. Zhang et al. used UAV images and digital surface models to extract spatial information such as the distribution of trees, roads, and water bodies [121]. To classify land cover types, Kim et al. generated a TIN (triangulated irregular network) model from manually collected ground control points to produce DEM (digital elevation model) data, calculated the difference between the DSM and DEM to obtain the nDSM (normalized DSM), and selected the RGB bands, nDSM, and an improved NDVI as features [122]. Nevalainen et al. extracted three-dimensional features from RGB point clouds and combined them with multi-spectral features to classify trees [123]. Yang et al. used DSM information to extract rice lodging and verified that DSM information helps distinguish vegetation lodging [124]. Zisi et al. used multi-spectral, texture, and height information to monitor weed distribution [125]. Mao Zhihui et al. used UAV DSM images to extract corn lodging information and verified that the accuracy of a DSM obtained by oblique photography is better than one obtained by orthophotography [126]. Currently, the use of DSM data as classification features follows two types of processing.
The first is to use the DSM directly as a classification feature; the second is to collect ground control points in the study area during crop growth, generate a DEM by interpolation, and subtract the DEM from the DSM to obtain relative height features.
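Once the DSM and the interpolated DEM are co-registered on the same grid, the second route reduces to a per-cell subtraction. A sketch, with `dsm` and `dem` as plain 2D lists of elevations in meters:

```python
def ndsm(dsm, dem):
    """Normalized DSM: object/canopy height above bare ground,
    nDSM = DSM - DEM, computed cell by cell on co-registered grids."""
    return [[s - g for s, g in zip(srow, grow)]
            for srow, grow in zip(dsm, dem)]
```

The resulting relative-height layer is what separates a tall standing crop from lodged or bare patches at the same absolute elevation.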
Whether crops can be resolved in UAV remote sensing images does not depend entirely on spatial resolution; it is also related to the types and sizes of the crops and the brightness and structure of the farmland. The drone image resolutions used by agricultural researchers therefore vary between studies. For example, Li used drone remote sensing images with a resolution of 1.7 cm to estimate rice yield [127], Lottes et al. used drone images with a resolution of 1.5 cm for crop and weed classification [128], and Yang et al. used images with a resolution of 5.5 cm to evaluate rice lodging [128]. These studies show that different research objects require different UAV image resolutions, and ultra-high-resolution UAV images have broad application prospects.

7. Summary and Outlook

Firstly, studies using medium- and high-resolution image data are mostly concentrated on crop spatial pattern distribution in small areas, while medium- and low-resolution remote sensing images are mostly used over large areas; information extraction and spatial distribution data for farmland vegetation over large areas based on medium- and high-resolution imagery are lacking. At present, most research focuses on remote sensing monitoring of spatial distribution changes among land use types, and in-depth analysis of spatial distribution changes for individual crops is lacking. Current studies on the dynamic changes in crop spatial patterns are mostly based on factors such as changes in crop area, crop spatial distribution, and land use types, or on model methods; the comprehensive use of crop area information and landscape pattern indices to study the dynamic changes in crop spatial patterns remains insufficient.
Secondly, the data sources used to extract the spatial distribution of farmland vegetation are relatively limited. Due to differences in satellite sensors, classification schemes, and classification methods, there are large differences in accuracy and spatial consistency between data sources, resulting in large differences in the classification results for farmland vegetation. The ability of remote sensing to extract large-scale, multi-level, high-precision cultivated land use pattern indicators and their changes is weak. Regional- and global-scale research either uses data sources of low spatial resolution or relies directly on sampling statistics, which are highly subjective, so the derived cultivated land use pattern indices are inaccurate, can only explain problems qualitatively, and cannot support quantitative research. Therefore, the capacity for remote sensing extraction of large-scale, multi-level, high-precision farmland-use pattern indicators and their changes must be further strengthened.
Third, the selection of crop classification features in UAV remote sensing images must be expanded, and extraction accuracy must be further improved. The development of UAV remote sensing provides new space for crop remote sensing classification. Constructing new classification feature variables with good discriminative power, and utilizing such variables comprehensively, will be core tasks of future remote sensing classification methods. Promising directions include effectively extracting crops' spatial physical properties from UAV images, such as height, shape, and surface undulation, together with texture features, to classify crops and thereby improve classification accuracy.
Finally, we have outlined several major future development directions for crop remote sensing recognition. It is necessary to (1) further strengthen the remote sensing recognition of large crop areas and improve the temporal resolution of remote sensing data [129]; (2) extensively carry out remote sensing spatial data mining research, drawing on results from various disciplines (including machine learning, statistics, and artificial intelligence) to mine the hidden information in remote sensing images and data; (3) improve the utilization of multi-source remote sensing information and strengthen the comparative use of multi-source data; (4) expand the application fields and scope of agricultural remote sensing and promote its interdisciplinary application; and (5) strengthen the promotion and operational use of domestic satellite remote sensing data. Due to our limited knowledge, this review inevitably omits other important studies, as well as operational land cover products such as the Agriculture and Agri-Food Canada Annual Crop Inventory Map (Annual Crop Inventory—Open Government Portal (canada.ca)).

Author Contributions

Conceptualization, D.F. and X.S.; methodology, D.F.; validation, D.F., B.W. and T.W.; investigation, D.F. and X.S.; resources, D.F. and X.S.; data curation, X.S.; writing—original draft preparation, D.F.; writing—review and editing, D.F. and B.W.; visualization, B.W. and F.Y.; supervision, F.Y. and X.S.; project administration, D.F.; funding acquisition, T.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Open Research Project of the Key Laboratory of Special Agricultural Meteorological Disaster Monitoring and Early Warning and Risk Management in Arid Regions of China Meteorological Administration “Research on Northwest Wheat Drought Assessment Technology Based on APSIM Model-Taking Ningxia as an Example” (grant no. CAMF-201905).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Foley, J.A.; Ramankutty, N.; Brauman, K.A.; Cassidy, E.S.; Gerber, J.S.; Johnston, M.; Mueller, N.D.; O’Connell, C.; Ray, D.K.; West, P.C.; et al. Solutions for a cultivated planet. Nature 2011, 478, 337–342. [Google Scholar] [CrossRef] [Green Version]
  2. Kearney, J. Food consumption trends and drivers. Philos. Trans. R. Soc. B Biol. Sci. 2010, 365, 2793–2807. [Google Scholar] [CrossRef] [PubMed]
  3. Godfray, H.; Beddington, J.; Crute, I.; Haddad, L.; Lawrence, D.; Muir, J.; Pretty, J.; Robinson, S.; Thomas, S.; Toulmin, C. Food security: The challenge of feeding 9 billion people. Science 2010, 327, 812–818. [Google Scholar] [CrossRef] [Green Version]
  4. Sterling, S.M.; Ducharne, A.; Polcher, J. The impact of global land-cover change on the terrestrial water cycle. Nat. Clim. Chang. 2012, 3, 385–390. [Google Scholar] [CrossRef]
  5. Lambin, E.F.; Gibbs, H.K.; Ferreira, L.; Grau, R.; Mayaux, P.; Meyfroidt, P.; Morton, D.C.; Rudel, T.K.; Gasparri, I.; Munger, J. Estimating the world’s potentially available cropland using a bottom-up approach. Glob. Environ. Chang. 2013, 23, 892–901. [Google Scholar] [CrossRef]
  6. Ozdogan, M. The spatial distribution of crop types from MODIS data: Temporal unmixing using Independent Component Analysis. Remote Sens. Environ. 2010, 114, 1190–1204. [Google Scholar] [CrossRef]
  7. Wardlow, B.D.; Egbert, S.L.; Kastens, J.H. Analysis of time-series MODIS 250 m vegetation index data for crop classification in the US Central Great Plains. Remote Sens. Environ. 2007, 108, 290–310. [Google Scholar] [CrossRef] [Green Version]
  8. Vintrou, E.; Desbrosse, A.; Bégué, A.; Traoré, S.; Baron, C.; Seen, D.L. Crop area mapping in West Africa using landscape stratification of MODIS time series and comparison with existing global land products. Int. J. Appl. Earth Obs. Geoinf. 2012, 14, 83–93. [Google Scholar] [CrossRef]
  9. Xiao, X.; Boles, S.; Frolking, S.; Li, C.; Moore, B. Mapping paddy rice agriculture in South and Southeast Asia using multi-temporal MODIS images. Remote Sens. Environ. 2006, 100, 95–113. [Google Scholar] [CrossRef]
  10. Van Niel, T.G.; Mcvicar, T.R. Determining temporal windows for crop discrimination with remote sensing: A case study in south-eastern Australia. Comput. Electron. Agric. 2004, 45, 91–108. [Google Scholar] [CrossRef]
  11. White, E.V.; Roy, D.P. A contemporary decennial examination of changing agricultural field sizes using Landsat time series data. Geo Geogr. Environ. 2015, 2, 33–54. [Google Scholar] [CrossRef] [PubMed]
  12. Bargiel, D.; Herrmann, S. Multi-temporal land-cover classification of agricultural areas in two European regions with high resolution spotlight TerraSAR-X data. Remote Sens. 2011, 3, 859–877. [Google Scholar] [CrossRef] [Green Version]
  13. Jia, K.; Yao, Y.; Wei, X.; Shuai, G.; Bo, J.; Xiang, Z. Research progress in remote sensing estimation of vegetation coverage. Adv. Earth Sci. 2013, 28, 774–782. [Google Scholar]
  14. Patil, A.; Ishwarappa, R.K. Classification of crops using FCM segmentation and texture, color feature. World J. Sci. Technol. 2013, 2121–2123. [Google Scholar]
  15. Lobell, D.B.; Burke, M.B.; Tebaldi, C.; Mastrandrea, M.D.; Falcon, W.P.; Naylor, R.L. Prioritizing climate change adaptation needs for food security in 2030. Science 2008, 319, 607. [Google Scholar] [CrossRef]
  16. Portmann, F.T.; Siebert, S.; Döll, P. MIRCA2000—Global monthly irrigated and rainfed crop areas around the year 2000: A new high-resolution data set for agricultural and hydrological modeling. Glob. Biogeochem. Cycles 2010, 24, 1–24. [Google Scholar] [CrossRef]
  17. Susana, D.P.; Pablo, R.G.; David, H.L.; Beatriz, F.G. Vicarious Radiometric Calibration of a Multispectral Camera on Board an Unmanned Aerial System. Remote Sens. 2014, 6, 1918–1937. [Google Scholar]
  18. Mesas-Carrascosa, F.J.; Notario-García, M.D.; Meroño De Larriva, J.E.; Emilio, M.; Manuel, S.; García-Ferrer Porras, A. Validation of measurements of land plot area using UAV imagery. Int. J. Appl. Earth Obs. Geoinf. 2014, 33, 270–279. [Google Scholar] [CrossRef]
  19. Rokhmana, C.A. The Potential of UAV-based Remote Sensing for Supporting Precision Agriculture in Indonesia. Procedia Environ. Sci. 2015, 24, 245–253. [Google Scholar] [CrossRef] [Green Version]
  20. Córcoles, J.I.; Ortega, J.F.; Hernández, D.; Moreno, M.A. Estimation of leaf area index in onion (Allium cepa L.) using an unmanned aerial vehicle. Biosyst. Eng. 2013, 115, 31–42. [Google Scholar] [CrossRef]
  21. Sona, G.; Passoni, D.; Pinto, L.; Pagliari, D.; Facchi, A. Uav Multispectral Survey to Map Soil and Crop for Precision Farming Applications. In Proceedings of the Remote Sensing and Spatial Information Sciences Congress: International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences Congress, Prague, Czech Republic, 12–19 July 2016; International Society for Photogrammetry and Remote Sensing (ISPRS): Hannover, Germany, 2016; Volume XLI-B1, pp. 1023–1029. [Google Scholar]
  22. Ballesteros, R.; Ortega, J.F.; Hernandez, D.; Moreno, M.A. Applications of georeferenced high-resolution images obtained with unmanned aerial vehicles. Part I: Description of image acquisition and processing. Precis. Agric. 2014, 15, 579–592. [Google Scholar] [CrossRef]
  23. Lelong, C.C.D. Assessment of Unmanned Aerial Vehicles Imagery for Quantitative Monitoring of Wheat Crop in Small Plots. Sensors 2008, 8, 3557–3585. [Google Scholar] [CrossRef]
  24. Chang, J.; Hansen, M.C.; Pittman, K.; Carroll, M.; DiMiceli, C. Corn and soybean mapping in the United States using MODIS time-series data sets. Agron. J. 2007, 99, 1654–1664. [Google Scholar] [CrossRef]
  25. Zhang, M.; Zhou, Q.; Chen, Z.; Jia, L.; Zhou, Y.; Cai, C. Crop discrimination in Northern China with double cropping systems using Fourier analysis of time-series MODIS data. Int. J. Appl. Earth Obs. Geoinf. 2008, 10, 476–485. [Google Scholar]
  26. Zhang, X.; Jiao, Q.; Zhang, B.; Chen, Z.C. Preliminary study on extracting crop planting patterns using MODIS_EVI image time series. Trans. Chin. Soc. Agric. Eng. 2008, 24, 161–165. [Google Scholar]
  27. Xiong, Q.; Huang, J. Monitoring the planting area of autumn crops using the time series characteristics of NDVI index. Trans. Chin. Soc. Agric. Eng. 2009, 25, 144–148. [Google Scholar]
  28. Cai, X.; Cui, Y. Extraction of crop planting structure in irrigation area based on heterogeneous multi-temporal remote sensing data. Trans. Chin. Soc. Agric. Eng. 2009, 25, 124–130. [Google Scholar]
  29. He, X. Research on Remote Sensing Extraction of Corn Planting Area Based on Multi-Source Data Fusion. Ph.D. Thesis, Nanjing University of Information Science and Technology, Nanjing, China, 2010. [Google Scholar]
  30. Huang, Q.; Tang, H.; Zhou, Q.; Wu, W.; Wang, L.; Zhang, L. Remote sensing extraction and growth monitoring of planting structure of main crops in Northeast China. Chin. J. Agric. Eng. 2010, 26, 218–223. [Google Scholar]
  31. Hao, W.; Mei, X.; Cai, X.; Cai, X.; Jian, T.; Qin, L. Crop distribution information extraction in the three Northeast provinces based on multi-temporal remote sensing images. Chin. J. Agric. Eng. 2011, 27, 201–207. [Google Scholar]
  32. Peña-Barragán, J.M.; Ngugi, M.K.; Plant, R.E.; Six, J. Object-based crop identification using multiple vegetation indices, textural features and crop phenology. Remote Sens. Environ. 2011, 115, 1301–1316. [Google Scholar] [CrossRef]
  33. Zhang, J.; Cheng, Y.; Zhang, F.; Yue, D.; Tang, H. Crop planting information extraction based on multi-temporal remote sensing images. Trans. Chin. Soc. Agric. Eng. 2012, 28, 134–141. [Google Scholar]
  34. Foerster, S.; Kaden, K.; Foerster, M.; Itzerott, S. Crop type mapping using spectral–temporal profiles and phenological information. Comput. Electron. Agric. 2012, 89, 30–40. [Google Scholar] [CrossRef] [Green Version]
  35. Zhong, L.; Gong, P.; Biging, G.S. Efficient corn and soybean mapping with temporal extendability: A multi-year experiment using Landsat imagery. Remote Sens. Environ. 2014, 140, 1–13. [Google Scholar] [CrossRef]
  36. Tang, H.; Wu, W.; Yang, P. Research progress in remote sensing monitoring of crop spatial pattern. Chin. Agric. Sci. 2010, 43, 2879–2888. [Google Scholar]
  37. Wang, N. China Wheat Remote Sensing Dynamic Monitoring and Yield Estimation; China Science and Technology Press: Beijing, China, 1996. [Google Scholar]
  38. Xu, W.; Tian, Y. Research progress on remote sensing extraction methods of crop planting area. J. Yunnan Agric. Univ. Nat. Sci. 2005, 20, 94–98. [Google Scholar]
  39. Zhao, C.; Qian, L. Comparison of supervised and unsupervised classification of remote sensing images. J. Henan Univ. Nat. Ed. 2004, 34, 90–93. [Google Scholar]
  40. Yan, R.; Jing, Y.; He, X. Research progress on remote sensing extraction of crop planting area. Anhui Agric. Sci. 2010, 38, 14767–14768. [Google Scholar]
  41. Wang, N.; Li, Q.; Du, X.; Zhang, Y.; Wang, H. Remote sensing identification of main crops in northern Jiangsu based on univariate feature selection. J. Remote Sens. 2017, 21, 519–530. [Google Scholar]
42. Bouman, B.A.M.; Uenk, D. Crop classification possibilities with radar in ERS-1 and JERS-1 configuration. Remote Sens. Environ. 1992, 40, 1–13.
43. Feng, A.; He, H.; Liu, L.; Ren, X.; Zhang, L.; Ge, R.; Zhao, F. Comparative study on identification methods of winter wheat growth period in Yucheng farmland ecosystem based on multi-source data. Remote Sens. Technol. Appl. 2016, 31, 958–965.
44. De Espindola, G.M.; De Aguiar, A.P.D.; Pebesma, E.; Camara, G.; Fonseca, L. Agricultural land use dynamics in the Brazilian Amazon based on remote sensing and census data. Appl. Geogr. 2012, 32, 240–252.
45. Khan, M.R.; De Bie, C.A.; Van Keulen, H.; Smaling, E.; Real, R. Disaggregating and mapping crop statistics using hypertemporal remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2010, 12, 36–46.
46. Barnwal, P.; Kotani, K. Climatic impacts across agricultural crop yield distributions: An application of quantile regression on rice crops in Andhra Pradesh, India. Ecol. Econ. 2013, 87, 95–109.
47. Song, Q.; Zhou, Q.; Wu, W.; Hu, Q.; Yu, Q.; Tang, H. Research progress of multi-source data fusion in crop remote sensing identification. Chin. Agric. Sci. 2015, 48, 1122–1135.
48. Leff, B.; Ramankutty, N.; Foley, J.A. Geographic distribution of major crops across the world. Glob. Biogeochem. Cycles 2004, 18.
49. Liu, J.; Fritz, S.; Van Wesenbeeck, C.F.A.; Fuchs, M.; You, L.; Obersteiner, M.; Hong, Y. A spatially explicit assessment of current and future hotspots of hunger in Sub-Saharan Africa in the context of global change. Glob. Planet. Chang. 2008, 64, 222–235.
50. Liu, Z.; Li, Z.; Tang, P.; Li, Z.; Wu, W.; Yang, P.; You, L.; Tang, H. Analysis on the spatiotemporal changes of rice planting areas and yields in China in the past 30 years. J. Geogr. Sci. 2013, 68, 680–693.
51. Pervez, M.S.; Brown, J.F. Mapping irrigated lands at 250-m scale by merging MODIS data and national agricultural statistics. Remote Sens. 2010, 2, 2388–2412.
52. Brown, J.F.; Pervez, M.S. Merging remote sensing data and national agricultural statistics to model change in irrigated agriculture. Agric. Syst. 2014, 127, 28–40.
53. Friedl, M.; Sulla-Menashe, D.; Tan, B.; Schneider, A.; Ramankutty, N.; Sibley, A. MODIS collection 5 global land cover: Algorithm refinements and characterization of new datasets. Remote Sens. Environ. 2010, 114, 168–182.
54. Zhang, Z.; Wang, X.; Zhao, X.; Liu, B.; Yi, L.; Zuo, L.; Wen, Q.; Liu, F.; Xu, J.; Hu, S. A 2010 update of National land use/cover database of China at 1:100,000 scale using medium spatial resolution satellite images. Remote Sens. Environ. 2014, 149, 142–154.
55. Chen, D.; Chang, N.; Xiao, J.; Zhou, Q.; Wu, W. Mapping dynamics of soil organic matter in croplands with MODIS data and machine learning algorithms. Sci. Total Environ. 2019, 669, 844–855.
56. Lu, M.; Wu, W.; Zhang, L.; Liao, A.P.; Peng, S.; Tang, H.J. A comparative analysis of five global cropland datasets in China. Sci. China Earth Sci. 2016, 59, 2307–2317.
57. See, L.; Schepaschenko, D.; Lesiv, M.; McCallum, I.; Fritz, S.; Comber, A.; Perger, C.; Schill, C.; Zhao, Y.; Maus, V. Building a hybrid land cover map with crowdsourcing and geographically weighted regression. ISPRS J. Photogramm. Remote Sens. 2015, 103, 48–56.
58. Jung, M.; Henkel, K.; Herold, M.; Churkina, G. Exploiting synergies of global land cover products for carbon cycle modeling. Remote Sens. Environ. 2006, 101, 534–553.
59. Fritz, S.; See, L.; Mccallum, I.; You, L.; Bun, A.; Moltchanova, E.; Duerauer, M.; Albrecht, F.; Schill, C.; Perger, C.; et al. Mapping global cropland and field size. Glob. Chang. Biol. 2015, 21, 1980–1992.
60. Lu, M.; Wu, W.; You, L.; Chen, D.; Zhang, L.; Yang, P.; Tang, H. A synergy cropland of China by fusing multiple existing maps and statistics. Sensors 2017, 17, 1613.
61. Kinoshita, T.; Iwao, K.; Yamagata, Y. Creation of a global land cover and a probability map through a new map integration method. Int. J. Appl. Earth Obs. Geoinf. 2014, 28, 70–77.
62. Schepaschenko, D.G.; Shvidenko, A.Z.; Lesiv, M.Y.; Ontikov, P.V.; Shchepashchenko, M.V.; Kraxner, F. Estimation of forest area and its dynamics in Russia based on synthesis of remote sensing products. Contemp. Probl. Ecol. 2015, 8, 811–817.
63. Fritz, S.; You, L.; Bun, A.; See, L.; Mccallum, I.; Schill, C.; Perger, C.; Liu, J.; Hansen, M.; Obersteiner, M. Cropland for sub-Saharan Africa: A synergistic approach using five land cover data sets. Geophys. Res. Lett. 2011, 38, 155–170.
64. Waldner, F.; Fritz, S.; Di Gregorio, A.; Defourny, P. Mapping priorities to focus cropland mapping activities: Fitness assessment of existing global, regional and national cropland maps. Remote Sens. 2015, 7, 7959–7986.
65. Dendoncker, N.; Rounsevell, M.; Bogaert, P. Spatial analysis and modelling of land use distributions in Belgium. Comput. Environ. Urban Syst. 2007, 31, 188–205.
66. Song, X.; Huang, C.; Feng, M.; Sexton, J.O.; Channan, S.; Townshend, J.R. Integrating global land cover products for improved forest cover characterization: An application in North America. Int. J. Digit. Earth 2014, 7, 709–724.
67. Zhao, Y. Principles and Methods of Remote Sensing Application Analysis; Science Press: Beijing, China, 2013.
68. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870.
69. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259.
70. Deng, N.; Tian, Y. A New Method in Data Mining—Support Vector Machines; Science Press: Beijing, China, 2004.
71. Baumann, M.; Ozdogan, M.; Kuemmerle, T.; Wendland, K.J.; Esipova, E.; Radeloff, V.C. Using the Landsat record to detect forest-cover changes during and after the collapse of the Soviet Union in the temperate zone of European Russia. Remote Sens. Environ. 2012, 124, 174–184.
72. Kaya, G.T. A hybrid model for classification of remote sensing images with linear SVM and support vector selection and adaptation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 1988–1997.
73. Breunig, F.M.; Galvao, L.S.; Formaggio, A.R.; Epiphanio, J.C.N. Classification of soybean varieties using different techniques: Case study with Hyperion and sensor spectral resolution simulations. J. Appl. Remote Sens. 2011, 5, 053533.
74. Jin, N.; Tao, B.; Ren, W.; Feng, M.; Sun, R.; He, L.; Zhuang, W.; Yu, Q. Mapping Irrigated and Rainfed Wheat Areas Using Multi-Temporal Satellite Data. Remote Sens. 2016, 8, 207.
75. Guo, L.; Tang, J.; Mi, S.; Zhang, C.; Zhao, L. Research progress on remote sensing image fusion classification methods based on support vector machines. Anhui Agric. Sci. 2010, 17, 9235–9238.
76. Liu, Y. Research on Land Cover Classification Method Based on Semi-Supervised Integrated Support Vector Machine. Ph.D. Thesis, Graduate University of Chinese Academy of Sciences (Northeast Institute of Geography and Agroecology), Beijing, China, 2013.
77. Bigdeli, B.; Samadzadegan, F.; Reinartz, P. A Multiple SVM System for Classification of Hyperspectral Remote Sensing Data. J. Indian Soc. Remote Sens. 2013, 41, 763–776.
78. Tuia, D.; Muñoz-Marí, J.; Kanevski, M.; Camps-Valls, G. Structured Output SVM for Remote Sensing Image Classification. J. Signal Process. Syst. 2011, 65, 301–310.
79. Jia, K.; Li, Q.; Tian, Y.; Wu, B. Research progress in remote sensing image classification methods. Spectrosc. Spectr. Anal. 2011, 10, 2618–2623.
80. Shoemaker, D.A.; Cropper, W.P. Application of remote sensing, an artificial neural network leaf area model, and a process-based simulation model to estimate carbon storage in Florida slash pine plantations. J. For. Res. 2010, 21, 171–176.
81. Zhang, H. Research on Remote Sensing Image Classification Based on BP Neural Network. Ph.D. Thesis, Shandong Normal University, Jinan, China, 2013; p. 81.
82. Zhao, J. Research on Remote Sensing Image Land Cover Classification Based on BP Artificial Neural Network. Ph.D. Thesis, China University of Geosciences, Wuhan, China, 2010.
83. Jiang, J. Research on High-Resolution Remote Sensing Image Classification Based on BP Neural Network. Ph.D. Thesis, Capital Normal University, Beijing, China, 2011.
84. Liu, X. Research on Remote Sensing Image Classification Based on Improved BP Neural Network. Ph.D. Thesis, Chang’an University, Xi’an, China, 2009.
85. Wang, C.; Wu, W.; Zhang, J. Remote sensing image classification method based on BP neural network. J. Liaoning Tech. Univ. Nat. Sci. Ed. 2009, 1, 32–35.
86. Zhang, R.; Wan, L.; Zhang, F.; Shi, Y. Research progress on remote sensing classification methods of land use. South North Water Divers. Water Sci. Technol. 2006, 2, 39–42.
87. Chasmer, L.; Hopkinson, C.; Veness, T.; Quinton, W.; Baltzer, J. A decision-tree classification for low-lying complex land cover types within the zone of discontinuous permafrost. Remote Sens. Environ. 2014, 143, 73–84.
88. Zhang, X. Implementation of Decision Tree Classifier and Its Application in Remote Sensing Image Classification. Ph.D. Thesis, Lanzhou Jiaotong University, Lanzhou, China, 2013.
89. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random Forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300.
90. Fang, K.; Wu, J.; Zhu, J.; Xie, B. Research review of random forest methods. Stat. Inf. Forum 2011, 26, 32–38.
91. Schultz, B.; Immitzer, M.; Formaggio, A.; Sanches, I.; Luiz, A.; Atzberger, C. Self-guided segmentation and classification of multi-temporal Landsat 8 images for crop type mapping in Southeastern Brazil. Remote Sens. 2015, 7, 14482–14508.
92. Song, Q. Research on the Extraction Method of Crop Planting Structure Based on GF-1/WFV and Object-Oriented. Ph.D. Thesis, Chinese Academy of Agricultural Sciences, Beijing, China, 2016.
93. Huang, Z. Research on Multi-Scale Methods in Object-Oriented Image Analysis. Ph.D. Thesis, National University of Defense Technology, Zunyi, China, 2014.
94. Yi, L. Uncertainty Analysis of Object-Oriented Remote Sensing Image Classification. Ph.D. Thesis, Wuhan University, Wuhan, China, 2011.
95. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16.
96. Zhang, X.; Xiao, P.; Feng, X. Impervious surface extraction from high-resolution satellite image using pixel- and object-based hybrid analysis. Int. J. Remote Sens. 2013, 34, 4449–4465.
97. Voltersen, M.; Berger, C.; Hese, S.; Schmullius, C. Object-based land cover mapping and comprehensive feature calculation for an automated derivation of urban structure types at block level. Remote Sens. Environ. 2014, 154, 192–201.
98. Qin, Y.; Niu, Z.; Chen, F.; Li, B.; Ban, Y. Object-based land cover change detection for cross-sensor images. Int. J. Remote Sens. 2013, 34, 6723–6737.
99. Xing, C. Hyperspectral Remote Sensing Image Classification Based on Deep Learning. Ph.D. Thesis, China University of Geosciences, Wuhan, China, 2016.
100. Zuo, Y. Remote Sensing Image Classification Based on Active Deep Learning. Ph.D. Thesis, Yanshan University, Qinhuangdao, China, 2016.
101. Li, W.; Wu, G.; Zhang, F.; Du, Q. Hyperspectral Image Classification Using Deep Pixel-Pair Features. IEEE Trans. Geosci. Remote Sens. 2017, 55, 844–853.
102. Wang, L.; Zhang, J.; Liu, P.; Choo, K.R.; Huang, F. Spectral–spatial multi-feature-based deep learning for hyperspectral remote sensing image classification. Soft Comput. 2017, 21, 213–221.
103. Zhou, X.; Li, S.; Tang, F.; Qin, K.; Hu, S.; Liu, S. Deep Learning with Grouped Features for Spatial Spectral Classification of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 97–101.
104. Michez, A.; Piégay, H.; Lisein, J.; Claessens, H.; Lejeune, P. Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system. Environ. Monit. Assess. 2016, 188, 146.
105. Yu, Q.; Shi, Y.; Tang, H.; Peng, Y.; Xie, A.; Liu, B.; Wu, W. eFarm: A Tool for Better Observing Agricultural Land Systems. Sensors 2017, 17, 453.
106. Li, B.; Liu, R.; Liu, S.; Liu, Q.; Liu, F.; Zhou, G. Monitoring of winter wheat coverage changes based on low-altitude drone remote sensing. Trans. Chin. Soc. Agric. Eng. 2012, 28, 160–165.
107. Tian, Z.; Fu, Y.; Liu, S.; Liu, F. A rapid classification method of crops based on low-altitude remote sensing by drones. Trans. Chin. Soc. Agric. Eng. 2013, 7, 109–116.
108. Zhang, Y.; Tao, P.; Liang, S.; Liang, W. Application of UAV remote sensing in forest resource survey. J. Southwest For. Univ. 2011, 3, 49–53.
109. Xu, H.; Ding, X.; Han, N.; Deng, J.; Wang, K. Preliminary study on geometric correction of remote sensing image of rotary-wing UAV. Zhejiang J. Agric. 2009, 21, 63–65.
110. Popescu, D.; Ichim, L. Aerial image segmentation by use of textural features. In Proceedings of the 20th International Conference on System Theory, Control and Computing (ICSTCC), Sinaia, Romania, 13–15 October 2016; pp. 721–726.
111. Jin, X.; Liu, S.; Baret, F.; Hemerlé, M.; Comar, A. Estimates of plant density of wheat crops at emergence from very low altitude UAV imagery. Remote Sens. Environ. 2017, 198, 105–114.
112. Milas, A.S.; Arend, K.; Mayer, C.; Simonson, M.A.; Mackey, S. Different colours of shadows: Classification of UAV images. Int. J. Remote Sens. 2017, 38, 3084–3100.
113. Li, Y. Research on UAV/RS3D Image Pairing Forest Information Extraction Method. Master’s Thesis, Beijing Forestry University, Beijing, China, 2016.
114. Zhang, C.; Qiao, M.; Liu, Z.; Jin, H.; Ning, M.; Sun, H. Optimization of texture feature scales for seed production corn field recognition based on UAV and satellite remote sensing images. Trans. Chin. Soc. Agric. Eng. 2017, 33, 98–104.
115. Zou, K.; Zhang, R.; Jiang, Y. Cotton field identification and area estimation based on drone imaging. J. Shihezi Univ. Nat. Sci. Ed. 2018, 6, 1–7.
116. Li, M.; Huang, Y.; Li, X.; Peng, D.; Xie, J. Rice planting information extraction based on UAV remote sensing images. Trans. Chin. Soc. Agric. Eng. 2018, 34, 108–114.
117. Dai, J.; Zhang, G.; Guo, P.; Zeng, T.; Cui, M.; Xue, J. Classification method of main crops in northern Xinjiang based on UAV remote sensing visible light images. Trans. Chin. Soc. Agric. Eng. 2018, 34, 122–129.
118. Xue, M.; Zhang, H.; Yang, J.; Li, X. Information extraction of terraces based on drone imagery and topographic index. Comput. Appl. Res. 2019, 9, 1–12.
119. Zarco-Tejada, P.J.; Diaz-Varela, R.; Angileri, V.; Loudjani, P. Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (UAV) and automatic 3D photo-reconstruction methods. Eur. J. Agron. 2014, 55, 89–99.
120. Bendig, J.; Bolten, A.; Bennertz, S.; Broscheit, J.; Eichfuss, S.; Bareth, G. Estimating Biomass of Barley Using Crop Surface Models (CSMs) Derived from UAV-Based RGB Imaging. Remote Sens. 2014, 6, 10395–10412.
121. Zhang, Q.; Qin, R.; Huang, X.; Fang, Y.; Liu, L. Classification of Ultra-High Resolution Orthophotos Combined with DSM Using a Dual Morphological Top Hat Profile. Remote Sens. 2015, 7, 16422–16440.
122. Kim, G.H. Land Cover Classification with High Spatial Resolution Using Orthoimage and DSM Based on Fixed-Wing UAV. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2017, 35, 1–10.
123. Nevalainen, O.; Honkavaara, E.; Tuominen, S.; Viljanen, N.; Hakala, T.; Yu, X.; Hyyppä, J.; Saari, H.; Pölönen, I.; Imai, N.; et al. Individual Tree Detection and Classification with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging. Remote Sens. 2017, 9, 185.
124. Yang, M.; Huang, K.; Kuo, Y.; Tsai, H.; Lin, L. Spatial and Spectral Hybrid Image Classification for Rice Lodging Assessment through UAV Imagery. Remote Sens. 2017, 9, 583.
125. Zisi, T.; Alexandridis, T.; Kaplanis, S.; Navrozidis, I.; Polychronos, V. Incorporating Surface Elevation Information in UAV Multispectral Images for Mapping Weed Patches. J. Imaging 2018, 4, 132.
126. Mao, Z.; Deng, L.; Zhao, X.; Hu, Y. Using drone remote sensing to extract corn lodging information in breeding plots. Chin. Agric. Sci. Bull. 2019, 35, 62–68.
127. Li, A. Research on Rice Yield Estimation Based on UAV Digital Image. Master’s Thesis, Shenyang Agricultural University, Shenyang, China, 2018.
128. Lottes, P.; Khanna, R.; Pfeifer, J.; Siegwart, R.; Stachniss, C. UAV-based crop and weed classification for smart farming. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 3024–3031.
129. Boori, M.S.; Choudhary, K.; Paringer, R.; Kupriyanov, A. Spatiotemporal ecological vulnerability analysis with statistical correlation based on satellite remote sensing in Samara, Russia. J. Environ. Manag. 2021, 285, 112138.
Table 1. Summary of remote sensing classification methods for farmland vegetation.
Remote Sensing Classification of Farmland Vegetation | Classification
Farmland vegetation classification based on vegetation index | Normalized difference vegetation index, enhanced vegetation index, surface temperature, etc.
Farmland vegetation classification based on spectral band | Crop recognition based on a single remote sensing image; crop recognition based on multi-temporal remote sensing images (single feature parameter recognition; multiple feature parameter recognition; multi-feature parameter statistical model)
Farmland vegetation classification based on multi-source data fusion | Data consistency scoring; regression analysis
Farmland vegetation classification based on machine learning | Support vector machine, neural network, decision tree, object-oriented, and deep learning algorithms
Crop classification based on drone remote sensing |
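To make the vegetation index row of Table 1 concrete, the sketch below computes NDVI from red and near-infrared reflectance and applies a simple threshold rule. The toy reflectance values and the 0.4 cutoff are illustrative assumptions, not values taken from the studies reviewed here.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # eps guards against division by zero on dark pixels
    return (nir - red) / (nir + red + eps)

# Toy reflectances (assumed): a vegetated pixel, a mixed pixel, a water pixel
nir = np.array([0.45, 0.30, 0.05])
red = np.array([0.08, 0.20, 0.04])
v = ndvi(nir, red)

# Hypothetical decision rule: NDVI above 0.4 is labeled as vegetation
labels = np.where(v > 0.4, "vegetation", "non-vegetation")
```

In practice the threshold is tuned per region and phenological stage, or replaced by a supervised classifier over a full NDVI time series.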
Table 2. Comparison of remote sensing identification methods for farmland vegetation based on spectral bands.
Method: Crop recognition based on a single remote sensing image
Applicability: Areas with a relatively simple crop planting structure
Data sources: SPOT-5, IRS-1D, CBERS-02B, LANDSAT-TM, HJ-1A, HJ-1B, MODIS
Classification methods: Decision tree, support vector machine, neural network, maximum likelihood, spectral angle mapping
Advantages: High efficiency and strong operability
Disadvantages: Long revisit period; poor accuracy when the "critical phenological period" is not obvious

Method: Crop recognition based on multi-temporal remote sensing images (single feature parameter recognition)
Applicability: Areas with a relatively simple crop planting structure
Data sources: MODIS, TM/ETM+
Classification methods: Fast Fourier transform, unsupervised classification with spectral coupling technology, BP neural network, threshold method, wavelet transform, shortest distance
Advantages: Simple operation and high efficiency
Disadvantages: Feature selection is subjective and has limitations in areas with complex and diverse crop types

Method: Crop recognition based on multi-temporal remote sensing images (multiple feature parameter recognition)
Applicability: Areas with complex crop planting structures
Data sources: MODIS, AVHRR, SPOT VGT, ASTER, AWIFS, Landsat TM/ETM+, HJ-1A/B
Classification methods: Threshold method, classification and regression tree, See5.0, unsupervised classification, spectral matching technology, image segmentation, random forest
Advantages: Multiple spectral time-series features better capture the characteristics that distinguish each crop from others
Disadvantages: Lower data processing and computation efficiency; greater accumulation of errors

Method: Crop recognition based on multi-temporal remote sensing images (multi-feature parameter statistical model)
Applicability: Areas with land consolidation, diverse terrain, and complex planting structures
Data sources: MODIS, AVHRR, SPOT-VEGETATION
Classification methods: Temporal decomposition model, neural network model, independent component analysis model, CPPI index model
Advantages: Higher extraction accuracy of crop planting area
Disadvantages: Stability and universality still need to be strengthened and improved
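As an illustration of the multi-temporal, multiple-feature-parameter approach and the random forest classifier listed in Table 2, the sketch below trains a random forest on synthetic multi-temporal NDVI profiles for two hypothetical crops with different phenological peaks. The data generation and scikit-learn usage are assumptions for demonstration, not a reproduction of any cited study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic 10-date NDVI profiles: class 0 peaks early in the season,
# class 1 peaks late; Gaussian noise simulates observation error.
t = np.linspace(0, 1, 10)
n = 200
early = 0.2 + 0.6 * np.exp(-((t - 0.3) ** 2) / 0.02) + rng.normal(0, 0.05, (n, 10))
late = 0.2 + 0.6 * np.exp(-((t - 0.7) ** 2) / 0.02) + rng.normal(0, 0.05, (n, 10))
X = np.vstack([early, late])
y = np.array([0] * n + [1] * n)

# Each date is one feature; the forest learns which parts of the
# time series separate the two phenological patterns.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With real imagery, the feature matrix would hold per-pixel or per-parcel time-series statistics (NDVI, EVI, band reflectances), and the disadvantage noted in Table 2 appears as the cost of assembling and cleaning many feature layers.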
Table 3. Multi-source remote sensing data set synergy methods.
Fusion Method | Data Source | Research Area | Spatial Resolution | Fusion Process | Literature Source
Data consistency scoring | GLC2000, MODIS, IGBP DISCover | Global | 1 km | Calculate an affinity index for multi-source data set fusion mapping | [58]
Data consistency scoring | GLC-2000, MODIS VCF, GIS data, statistical data | Russia | 1 km | Establish a fusion information system for multi-source data set fusion mapping | [62]
Data consistency scoring | GLC-2000, MODIS, GlobCover2005, GEOCOVER, cropland probability layer | Global | 1 km | Analyze the consistency of remote sensing data products, set weights, and establish fusion rules | [59,63]
Data consistency scoring | FROM-GLC, GlobCover2009, regional data sets (Corine Land Cover, etc.), national data sets | Global | 250 m | Multi-index analysis, scoring of different data sets, setting weights, and fusion | [64]
Regression analysis | USGS-Hydro1k DEM, PELCOM, slope, soil data, meteorological data, land use ratio data | Belgium | 1.1 km | Construct a logistic regression model with spatial autocorrelation to predict the spatial distribution of different land cover types | [65]
Regression analysis | GLC2000, MOD12C5, MOD12C4, GLCNMO, UMD, GlobCover | Global | 5′ | Use a logistic regression model to predict land cover types | [62]
Regression analysis | GLCC, GlobCover, GLC2000, UMD LC, MODIS LC, MODIS VCF | North America | 5 km | Use a regression tree model to integrate global and regional land cover products | [66]
Regression analysis | GlobCover, GLC2000, MODIS | Global | 1 km | Use a GWR logistic regression model to predict land cover types in areas without samples | [57]
Regression analysis | Land cover (MODIS LC, regional mosaics of GLC2000, GlobCover, GLCNMO), tree cover (Hansen’s TC, Landsat VCF, MODIS VCF) | Global | 1 km | Use a GWR logistic regression model to predict the proportion of forest cover in areas without samples | [62]
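The regression-analysis fusion strategy in Table 3 can be sketched in miniature: fit a logistic regression that weighs several land cover products against reference labels and predicts a fused cropland map. The three simulated products, their agreement rates, and the plain (non-geographically-weighted) model are illustrative assumptions, not the data or models of the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
truth = rng.integers(0, 2, n)  # reference cropland labels at sample pixels

def product(agreement):
    """Simulate a land cover product that matches the reference
    at the given per-pixel agreement rate."""
    flip = rng.random(n) > agreement
    return np.where(flip, 1 - truth, truth)

# Three hypothetical products of decreasing quality
X = np.column_stack([product(0.9), product(0.8), product(0.7)])

# The regression learns larger weights for the more reliable products,
# giving an accuracy-weighted fusion of the input maps.
model = LogisticRegression().fit(X, truth)
fused = model.predict(X)
agree = (fused == truth).mean()
```

Geographically weighted regression (as in [57,62]) extends this idea by letting the product weights vary in space, so a product that is reliable in one region but not another is downweighted only where it fails.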
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Fan, D.; Su, X.; Weng, B.; Wang, T.; Yang, F. Research Progress on Remote Sensing Classification Methods for Farmland Vegetation. AgriEngineering 2021, 3, 971-989. https://doi.org/10.3390/agriengineering3040061

