Fusion of GF and MODIS Data for Regional-Scale Grassland Community Classification with EVI2 Time-Series and Phenological Features

Abstract: Satellite-borne multispectral data are suitable for regional-scale grassland community classification owing to their comprehensive coverage. However, the spectral similarity of different communities makes it challenging to distinguish them based on a single multispectral dataset. To address this issue, we propose a support vector machine (SVM)-based method integrating multispectral data, two-band enhanced vegetation index (EVI2) time-series, and phenological features extracted from Chinese GaoFen (GF)-1/6 satellite data with 16 m spatial and 2 d temporal resolution. To obtain cloud-free images, the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) algorithm was employed in this study. By applying the algorithm to coarse cloudless images acquired at the same or a similar time as the fine images with cloud cover, cloudless fine images were obtained, from which the cloudless EVI2 time-series and phenological features were generated. The developed method was applied to identify grassland communities in Ordos, China. The results show that the Caragana pumila Pojark, Caragana davazamcii Sanchir and Salix schwerinii E. L. Wolf grassland, the Potaninia mongolica Maxim, Ammopiptanthus mongolicus S. H. Cheng and Tetraena mongolica Maxim grassland, the Caryopteris mongholica Bunge and Artemisia ordosica Krasch grassland, the Calligonum mongolicum Turcz grassland, and the Stipa breviflora Griseb and Stipa bungeana Trin grassland are distinguished with an overall accuracy of 87.25%.
The results highlight that, compared to multispectral data alone, the addition of EVI2 time-series and phenological features improves the classification accuracy by 9.63% and 14.7%, respectively, and by 27.36% when the two are combined. The results also indicate the advantage of the fine images used in this study: compared to 500 m moderate-resolution imaging spectroradiometer (MODIS) data, which are commonly used for grassland classification at the regional scale, the 16 m GF data yield a 23.96% increase in classification accuracy with the same extracted features. This study indicates that the proposed method is suitable for regional-scale grassland community classification.


Introduction
Grassland covers about 40% of the Earth's surface [1]. In China, grassland covers about 3.93 × 10⁶ km², accounting for 41.7% of the total land area [2]. It is a renewable resource for livestock production, contributes to ecological stability, and produces wealth for humans [3]. Generally, grassland classification plays an essential role in sustainable grassland management. To predict a fine-resolution image, ESTARFM requires two pairs of high-temporal but low-spatial resolution and high-spatial but low-temporal resolution images.
The Inner Mongolian steppe is vast and is an important part of the Eurasian Steppe [2]. Ordos, with 5.9 × 10⁴ km² of grassland, is located in the south of Inner Mongolia and hosts numerous types of grassland communities [45]. Therefore, studying the classification of grassland communities at the regional scale is important for the conservation and utilization of the region's grassland resources.
In this study, we propose an SVM-based method for regional-scale grassland community classification integrating two-band enhanced vegetation index (EVI2) [46] time-series, phenological features, and GF multispectral data. The objectives of this study are to: (1) test the applicability of the ESTARFM algorithm to GF-1/6 satellite data; (2) verify whether the addition of phenological features and EVI2 time-series is capable of improving the accuracy of grassland community classification; (3) verify the advantage of GF-1/6 satellite data in grassland community classification; and (4) map the spatial distribution of the five main grassland communities in Ordos at 16 m spatial resolution.

Study Area
The study was conducted in the Ordos region, which consists of two parts: grassland and desert. Ordos is located in the Inner Mongolia province of China, with a total area of approximately 8.7 × 10⁴ km² (37.59° to 40.86°N, 106.71° to 111.46°E), of which about 70% is grassland (Figure 1). The area's climate is temperate semiarid, with a mean annual temperature of 6.2°C and mean annual precipitation of 348.3 mm. Winter is cold, with daily minimum temperatures reaching −31.4°C, and summer is hot, with daily maximum temperatures reaching 38°C. Precipitation mostly occurs in July, August, and September, which together account for about 70% of the annual total. According to fieldwork and previous studies [48,49], there are five main grassland communities within the study area, namely (1) Caragana pumila Pojark, Caragana davazamcii Sanchir, Salix schwerinii E. L. Wolf grassland (hereafter CCSg); (2) Potaninia mongolica Maxim, Ammopiptanthus mongolicus S. H. Cheng, Tetraena mongolica Maxim grassland (hereafter PATg); (3) Caryopteris mongholica Bunge and Artemisia ordosica Krasch grassland (hereafter CAg); (4) Calligonum mongolicum Turcz grassland (hereafter Cmg); and (5) Stipa breviflora Griseb and Stipa bungeana Trin grassland (hereafter SSg). Each community contains several companion species (Table 1).

GF-1/6 Dataset and Pre-Processing

GF-1 and GF-6 are Chinese high-resolution optical satellites [50]. They carry WFV cameras with the same 16 m spatial resolution, covering the visible to near-infrared spectral regions (Table 2). Together, the GF-1/6 satellites achieve a 2 d revisit cycle over China, which has dramatically improved the scale and timeliness of RS data acquisition [51]. In this study, 23 phases of images from 13 December 2018 to 3 December 2019, a total of 64 GF-1 and 30 GF-6 images (as shown in Table 3), were employed for the grassland community classification.
The following pre-processing procedures were performed on the GF dataset: (a) radiometric calibration, (b) atmospheric correction using the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) model, (c) geometric correction, (d) reprojection, and (e) resampling. Through these steps, the digital numbers were converted into real surface reflectance values, atmospheric effects were eliminated, and the geometric correction ensured accuracy within 0.5 pixels. All images were transformed into the WGS84 reference coordinate system and resampled to 16 m spatial resolution using bilinear interpolation.

MODIS MOD09GA Dataset and Pre-Processing
The MOD09GA dataset provides seven spectral channels ranging from visible to infrared bands with a spatial resolution of 500 m and temporal resolution of 1 d, which is corrected for atmospheric effects [52]. The MOD09GA dataset covering the study area (Tiles h26v04 and h26v05) was downloaded from the National Aeronautics and Space Administration (NASA) website (https://ladsweb.nascom.nasa.gov/search, accessed on 10 October 2020). As the data were corrected for atmospheric effects, we only reprojected the data to the WGS84 reference coordinate system and resampled them to 16 m using the MODIS Reprojection Tool (MRT). Then we geo-rectified the MOD09GA to the GF images.
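In this study the MRT handled the reprojection and resampling; as an illustrative sketch, bilinear upsampling of a coarse band to a finer grid can be done with SciPy. The function name and the small demo scale factor below are our assumptions, not part of the MRT workflow (the actual ratio here would be 500 m / 16 m = 31.25):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_fine(coarse, scale):
    """Bilinearly resample a coarse-resolution band to a finer grid.

    `scale` is the ratio of coarse to fine pixel size
    (500 m / 16 m = 31.25 in this study).
    """
    # order=1 selects bilinear interpolation
    return zoom(coarse.astype(np.float64), scale, order=1)

coarse = np.arange(16, dtype=np.float64).reshape(4, 4)
fine = resample_to_fine(coarse, 2.0)   # small factor for the demo
print(fine.shape)  # (8, 8)
```

With the default coordinate mapping, the corner values of the input grid are preserved in the output, so no reflectance values are shifted at the image edges.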

Reference Data
The ground-truth field survey was conducted in July 2019, when the Ordos grassland was at peak growing season. Combining previous studies, the vegetation type map of Ordos [48], and Google Earth images, a set of samples for each of the five communities was collected (Table 4). Given that Ordos did not undergo dramatic changes in land-cover types between 2017 and 2019, we masked out the non-grassland area using the 10 m land-cover map of Ordos in 2017 [47] to eliminate the interference of other land-use types in this study.

Methods
The flowchart of the grassland community classification in the study area is presented in Figure 2. The proposed method has four main steps. Firstly, the EVI2 was calculated from the GF and the MOD09GA datasets, respectively. Secondly, fused cloudless images were generated by the ESTARFM algorithm to replace the original images with cloud cover (described in Section 3.1). Thirdly, after Savitzky-Golay smoothing, six phenological features were derived from the fused EVI2 time-series, and the key information of the fused EVI2 time-series (hereafter PCA EVI2 time-series) was extracted using the principal component analysis (PCA) algorithm. After that, the non-grassland land types were masked out (as described in Section 2.4). Finally, grassland community classification was performed based on the SVM classifier, integrating the PCA EVI2 time-series, phenological features, and GF multispectral data.

Generation of Cloudless EVI2 Time-Series by the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) Algorithm
Due to frequent cloud cover in the study area, a large proportion of the GF data were unusable. Thus, we applied the ESTARFM algorithm to the GF and MODIS images to obtain cloudless EVI2 time-series. Although the ESTARFM algorithm is rigorous, it still yields some errors during fusion, which would reduce classification accuracy. Thus, we applied it only to the cloud-covered areas of the images, rather than to the entire image.
The ESTARFM algorithm was modified from the spatial and temporal adaptive reflectance fusion model (STARFM) [43], and its flowchart is shown in Figure 3. It accounts for the heterogeneity of the pixels, adjusts the assignment method, and introduces a conversion coefficient, which improves the fusion accuracy, especially in regions with considerable heterogeneity [44]. After aligning the two images, it is assumed that the difference in reflectance between the GF and MODIS pixels is caused only by systematic biases. The linear relationship between the reflectances of the MODIS and GF data is therefore as shown in Equation (1):

F(x, y, t_k, B) = a × C(x, y, t_k, B) + b, (1)

where F and C represent the reflectances of the GF and MODIS data, respectively; (x, y) is the pixel's location; t_k is the acquisition time of the image; B is the band of the image; and a and b are the coefficients of the linear equation.
If there are two pairs of GF and MODIS images for the same area at times t_m and t_n, respectively, and neither the land-cover type nor the sensor calibration changes significantly during this period, then Equation (1) can be written at t_m and t_n as

F(x, y, t_m, B) = a × C(x, y, t_m, B) + b, (2)

F(x, y, t_n, B) = a × C(x, y, t_n, B) + b. (3)

From Equations (2) and (3), Equation (4) can be obtained:

F(x, y, t_n, B) = F(x, y, t_m, B) + a × [C(x, y, t_n, B) − C(x, y, t_m, B)]. (4)

However, many pixels of the MODIS images are mixed pixels due to the low spatial resolution; thus, the reflectance relationship between the GF and MODIS data may not hold as described in Equation (4). Therefore, a linear mixture model is required to denote the reflectance relationship between them:

C = Σ_{i=1}^{M} f_i × F_i + ε, (5)

where f_i is the fraction of each type of land cover in the mixed pixel, ε is the residual error, and M is the total number of endmembers (all the GF pixels contained within the mixed MODIS pixel). It is assumed that the change in the reflectance of each endmember is linear from t_m to t_n:

F_i(t_n, B) = F_i(t_m, B) + g_i × Δt, (6)

where Δt = t_n − t_m and g_i is the rate of change of the ith endmember, which is assumed to be stable during this period. Then, from Equations (5) and (6), the change in MODIS reflectance within this time is

C(t_n, B) − C(t_m, B) = Δt × Σ_{i=1}^{M} f_i × g_i. (7)

If the reflectance of the kth endmember at dates t_m and t_n can be obtained, Equation (6) can be written as

g_k = [F_k(t_n, B) − F_k(t_m, B)] / Δt. (8)

Substituting Equation (8) into Equation (7), we obtain the ratio of the change in reflectance of the kth endmember to the change in reflectance of the coarse pixel, v_k, which is called the conversion coefficient:

v_k = [F_k(t_n, B) − F_k(t_m, B)] / [C(t_n, B) − C(t_m, B)] = g_k / Σ_{i=1}^{M} f_i × g_i. (9)

Then, Equation (4) can be rewritten as follows:

F(x, y, t_n, B) = F(x, y, t_m, B) + v(x, y) × [C(x, y, t_n, B) − C(x, y, t_m, B)]. (10)

However, using only a single pixel's information to fuse the GF reflectance is not accurate enough, and making full use of the information of adjacent pixels yields higher fusion accuracy [43]. The ESTARFM algorithm uses a search window of size w to find similar pixels (adjacent pixels with the same type of land cover as the central pixel) within the window and calculates the weighted average of the values of the similar pixels.
Then, the GF reflectance of the central pixel (x_{w/2}, y_{w/2}) at date t_n can be calculated as

F(x_{w/2}, y_{w/2}, t_n, B) = F(x_{w/2}, y_{w/2}, t_m, B) + Σ_{i=1}^{N} W_i × v_i × [C(x_i, y_i, t_n, B) − C(x_i, y_i, t_m, B)], (11)

where N is the number of similar pixels, including the central pixel; (x_i, y_i) is the location of the ith similar pixel; and W_i and v_i are the weight and the conversion coefficient of the ith similar pixel, respectively. W_i is determined by the distance between the ith similar pixel and the central pixel, and by the spectral similarity between the GF and MODIS pixels. The spectral similarity R_i is the correlation coefficient between the ith similar pixel and its corresponding MODIS pixel:

R_i = E[(F_i − E(F_i)) × (C_i − E(C_i))] / sqrt(D(F_i) × D(C_i)), (12)

where F_i and C_i are the datasets of pixels corresponding to the GF and MODIS data, respectively, and D(F_i), D(C_i), E(F_i), and E(C_i) are the corresponding variance and expectation values. The distance d_i between the ith similar pixel and the central pixel can be calculated as

d_i = 1 + sqrt[(x_{w/2} − x_i)² + (y_{w/2} − y_i)²] / (w/2), (13)

where w is the size of the search window. Combining Equations (12) and (13), a synthetic index S_i that combines the spectral similarity and d_i is calculated as

S_i = (1 − R_i) × d_i, (14)

and the weight W_i is the normalized reciprocal of S_i:

W_i = (1/S_i) / Σ_{i=1}^{N} (1/S_i). (15)

Since this study is mainly based on the EVI2 time-series, to reduce the computational cost we applied the ESTARFM algorithm to EVI2 directly, instead of applying it to the raw data and computing EVI2 afterwards. The two-band enhanced vegetation index (EVI2), which bears a strong resemblance to EVI [53], was derived from the raw data using the following formula:

EVI2 = 2.5 × (ρ_Nir − ρ_Red) / (ρ_Nir + 2.4 × ρ_Red + 1), (16)

where ρ_Nir and ρ_Red represent the reflectance values in the near-infrared and red bands, respectively. After calculating EVI2, the ESTARFM algorithm was performed to fuse the regions with heavy cloud cover. The pre-processing steps for the fusion of the GF and MODIS data with the ESTARFM algorithm are as follows: Step 1, the MODIS EVI2 images were resampled to 16 m spatial resolution, matching the GF EVI2 images, using bilinear interpolation.
Step 2, the GF EVI2 images were chosen as the reference, and the MODIS EVI2 images were geo-rectified to them.
Step 3, the MODIS EVI2 images were cropped to the extent of the GF images, ensuring they were exactly the same size. Finally, the ESTARFM algorithm was applied to the areas of the GF EVI2 images with heavy cloud cover.
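The weighting scheme in Equations (11)-(16) can be sketched for a single central pixel as follows. This is a minimal illustration that assumes the similar pixels, their correlations R_i, distances, and conversion coefficients have already been derived; the function names are ours, not from the ESTARFM reference implementation:

```python
import numpy as np

def evi2(nir, red):
    """Two-band enhanced vegetation index, Equation (16)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def estarfm_predict(f_m_center, deltas, R, dists, v, w):
    """ESTARFM prediction for one central pixel, Equations (11) and (13)-(15).

    f_m_center : fine (GF) EVI2 of the central pixel at base date t_m
    deltas     : C(x_i, y_i, t_n) - C(x_i, y_i, t_m) per similar pixel
    R          : spectral similarity (correlation coefficient) per similar pixel
    dists      : Euclidean distance of each similar pixel to the centre
    v          : conversion coefficient per similar pixel
    w          : search-window size
    """
    deltas, R, dists, v = map(np.asarray, (deltas, R, dists, v))
    d = 1.0 + dists / (w / 2.0)            # Equation (13)
    S = (1.0 - R) * d                      # Equation (14)
    W = (1.0 / S) / np.sum(1.0 / S)        # Equation (15)
    # Equation (11): base value plus weighted, converted coarse change
    return f_m_center + np.sum(W * v * deltas)

# With a single similar pixel at the window centre the prediction
# reduces to f_m + v * (c_n - c_m).
pred = estarfm_predict(0.2, [0.1], [0.5], [0.0], [1.0], 5)
print(pred)  # 0.3
```

Note that pixels with R_i close to 1 and small distance get small S_i and therefore large weight, exactly as Equations (14) and (15) intend.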

Reconstruction of Smoothed EVI2 Time-Series
With the fused EVI2 time-series, six phenological features were extracted. However, although the influence of cloud cover on the GF images was eliminated by the ESTARFM algorithm, the EVI2 time-series still contained measurement errors and signal noise. Thus, it is essential to reconstruct the EVI2 time-series before deriving the phenological parameters precisely. In this study, the Savitzky-Golay (S-G) filter, proposed by Savitzky and Golay (1964), was applied to smooth the EVI2 time-series using the TIMESAT 3.3 program. It is a filtering algorithm based on the least-squares convolutional fitting of local polynomials: within a moving window, a polynomial is fitted to the data by least squares, and the fitted value replaces the central observation. It is essentially a weighted average of the original series, with the weights determined by the least-squares polynomial fit within the filter window. The formula of the S-G filter is as follows [54]:

Y*_j = ( Σ_{i=−m}^{m} C_i × Y_{j+i} ) / N,

where Y*_j and Y_{j+i} denote the filtered and original data, respectively; C_i is the filter coefficient; m is the half-width of the filter window; and N = 2m + 1 is the window width. The filter is controlled by two parameters: the sliding window size r and the polynomial order q. The larger the sliding window, the more data are involved in the fit and the smoother the result; a smaller window retains more details of the original curve. A lower polynomial order yields stronger smoothing but can increase the fitting error, whereas a higher order risks overfitting. Based on previous studies, these two parameters can be determined from the EVI2 observations, and q usually ranges from 2 to 4 [55]. Therefore, we smoothed the EVI2 time-series with different S-G parameters to find the optimal setting for this experiment.
As shown in Figure 4, the optimal smoothing effect was achieved when r was 5 and q was 2.
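TIMESAT 3.3 performed the smoothing in this study; an equivalent sketch with SciPy's Savitzky-Golay filter is shown below, using the window size 5 and polynomial order 2 found optimal here. The synthetic EVI2 curve is our stand-in data, and TIMESAT's window parameter may be defined differently from SciPy's `window_length`:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic noisy EVI2 time-series (23 phases, as in the GF dataset)
t = np.linspace(0.0, 1.0, 23)
rng = np.random.default_rng(0)
evi2_raw = (0.3 * np.exp(-((t - 0.55) / 0.2) ** 2) + 0.05
            + rng.normal(0.0, 0.02, t.size))

# Window size 5, polynomial order 2 (the optimum found in this study)
evi2_smooth = savgol_filter(evi2_raw, window_length=5, polyorder=2)
```

The smoothed series keeps the seasonal peak while damping the point-to-point noise, which is what the subsequent phenology extraction needs.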

Extraction of Phenological Features and PCA EVI2 Time-Series
Due to the spectral similarity among grassland communities, it is difficult to differentiate them using GF multispectral data only. Using phenological information as input data can improve the classification accuracy of different grasslands [56]. In this study, six phenological parameters were derived from the fused EVI2 time-series: (1) start of season (hereafter SOS): the time at which the left edge of the seasonal curve increases to a user-defined level measured from the left minimum; (2) end of season (hereafter EOS): the time at which the right edge decreases to a user-defined level measured from the right minimum; (3) maximum of EVI2 (hereafter Max): the maximum value of each pixel in the EVI2 temporal data; (4) minimum of EVI2 (hereafter Min): the minimum value of each pixel in the temporal data; (5) mean EVI2: the mean value of each pixel in the temporal data; and (6) phenology index (hereafter PI): a measure of EVI2 seasonal variation [1], calculated as

PI = ( |mean − t_1| + |mean − t_2| + ··· + |mean − t_n| ) / n,

where mean, t_i, and n represent the mean of EVI2, the EVI2 value in the ith temporal phase, and the number of temporal phases of the image, respectively.
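Features (3)-(6) can be computed directly from a pixel's EVI2 series; the sketch below uses hypothetical values, and SOS/EOS are omitted because they require user-defined thresholds on the fitted seasonal curve:

```python
import numpy as np

def phenology_features(series):
    """Max, Min, mean EVI2 and the phenology index (PI) of one pixel.

    PI = sum(|mean - t_i|) / n, a measure of seasonal EVI2 variation.
    """
    s = np.asarray(series, dtype=float)
    mean = s.mean()
    pi = np.abs(mean - s).sum() / s.size
    return {"Max": s.max(), "Min": s.min(), "Mean": mean, "PI": pi}

feats = phenology_features([0.1, 0.2, 0.6, 0.3])
# mean = 0.3, so PI = (0.2 + 0.1 + 0.3 + 0.0) / 4 = 0.15
print(feats["PI"])  # 0.15
```

A flat series yields PI = 0, while strongly seasonal pixels such as SSg yield large PI values, matching the interpretation in Table 5.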
To reduce data redundancy, we performed the PCA algorithm on the EVI2 time-series. PCA is a commonly used linear dimensionality-reduction method that maps higher-dimensional data into a lower-dimensional space through linear projection, such that the projected dimensions are maximally informative (i.e., have the greatest variance). In this way, fewer data dimensions are used while most of the original data's characteristics are retained [57]. The PCA results show that the first four components explained 95.07% of the variance. Thus, the first four components after the PCA transformation were included in the subsequent trials.
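A sketch of this step with scikit-learn's PCA, using random stand-in data in place of the real 23-phase EVI2 stack (the 95.07% figure above applies to the study's data, not to this synthetic example):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stack: n_pixels rows, one column per EVI2 phase
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 23))

pca = PCA(n_components=4)        # keep the first four components
X_pca = pca.fit_transform(X)
print(X_pca.shape)               # (500, 4)
print(pca.explained_variance_ratio_.sum())
```

Alternatively, `PCA(n_components=0.95)` keeps however many components are needed to explain 95% of the variance, which would reproduce the study's selection rule automatically.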

SVM Classification and Accuracy Assessment
In recent years, the deep learning (DL) technique has shown great success in the RS classification task due to its strong learning and generalization ability [58,59]. It contains many parameters and hence requires a large number of samples for training to avoid overfitting [60]. However, sampling a large number of grassland images is challenging because it is time consuming and costly, thus limiting its usage [5,61]. Compared to DL, the SVM classifier, which is suitable for tasks with small-sample and high-dimensionality datasets [62], also has a strong generalization ability [63] and does not require assumptions about the statistical distribution of the data [64]. For these reasons, it performs well in grassland classification [65,66], and thus it was employed in this study.
To investigate the impacts of the different RS features on the classification accuracy of grassland communities, we implemented five scenarios: (1) the GF multispectral (Red, Green, Blue, and NIR bands) data only; (2) the GF multispectral data and PCA EVI2 time-series; (3) the GF multispectral data and phenological features; (4) the GF multispectral data, PCA EVI2 time-series, and phenological features; and (5) the MODIS multispectral data, PCA EVI2 time-series, and phenological features. Considering that August is the period in which the Ordos grasslands flourish the most, the GF multispectral data utilized in these scenarios are from 2 August 2019.
In this study, a compound comparison of these five scenarios was performed using the same samples (as described in Section 2.4) and the SVM classifier parameters. Four statistics, including overall accuracy, kappa coefficient, producer's accuracy, as well as user's accuracy, were computed to validate the classification accuracy.
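The classification and accuracy assessment can be sketched with scikit-learn; the feature table, class counts, and SVM parameters below are synthetic stand-ins, not the study's actual configuration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Hypothetical feature table: 4 bands + 4 PCA EVI2 components + 6 phenology
rng = np.random.default_rng(2)
n, n_feats, n_classes = 600, 14, 5
X = rng.normal(size=(n, n_feats))
y = rng.integers(0, n_classes, size=n)
X[np.arange(n), y] += 3.0        # make the demo classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
pred = clf.predict(X_te)

oa = accuracy_score(y_te, pred)          # overall accuracy
kappa = cohen_kappa_score(y_te, pred)    # kappa coefficient
cm = confusion_matrix(y_te, pred)
# Producer's accuracy = per-class recall; user's accuracy = per-class precision
producers = np.diag(cm) / cm.sum(axis=1)
users = np.diag(cm) / cm.sum(axis=0)
```

Holding the samples and SVM parameters fixed across all five scenarios, as done in the study, isolates the contribution of the feature sets themselves.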

Accuracy Assessment of the ESTARFM Fusion
To verify that the quality of the synthetic images was reliable, we selected three representative subdomains (covering grassland, bareland, cropland, water, and impervious surfaces) of 400 × 400 pixels from GF images acquired on 28 July 2019 in the study area and compared them with the corresponding fused EVI2 images. As shown in Figure 5, the pairs of fused and actual images are visually similar, except on some cropland (marked in Figure 5). This land-cover type is masked out by the land-cover map of Ordos (as described in Section 2.4) and thus does not interfere with this study. To further quantify the fusion performance, Figure 6 shows four statistics, scatter plots, and residual histograms of the pixel values for the three pairs of actual and fused EVI2 images in Figure 5. The results show that the Pearson correlation coefficient is greater than 0.86, the mean error is less than 0.03, and the RMSE and MAE range from 0.046 to 0.067 and from 0.027 to 0.046, respectively. Moreover, the actual and fused EVI2 values adhere closely to the y = x line, and the residuals between them approximately follow normal distributions. Therefore, the fused images are sufficiently reliable to be utilized in the subsequent analyses.
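The four agreement statistics reported in Figure 6 can be computed as follows; the helper name and the sample values are illustrative:

```python
import numpy as np

def fusion_stats(actual, fused):
    """Agreement statistics between actual and fused EVI2 images."""
    a, f = np.ravel(actual), np.ravel(fused)
    err = f - a
    return {
        "r": np.corrcoef(a, f)[0, 1],        # Pearson correlation
        "ME": err.mean(),                    # mean error (bias)
        "MAE": np.abs(err).mean(),           # mean absolute error
        "RMSE": np.sqrt((err ** 2).mean()),  # root-mean-square error
    }

actual = np.array([0.10, 0.20, 0.30, 0.40])
fused = np.array([0.12, 0.19, 0.33, 0.41])
stats = fusion_stats(actual, fused)
```

ME close to zero indicates no systematic bias, while MAE and RMSE capture the magnitude of the per-pixel fusion error.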

Phenological Information Analysis
In this study, six phenological features were extracted from the EVI2 time-series of the training samples. Based on Table 5, the maximum EVI2 values differ the most across communities, ranging from 0.333 to 0.638, while the minimum EVI2 values differ the least, ranging from 0.022 to 0.089. SSg has the highest mean EVI2 (0.302) and also the highest phenology index, indicating the largest EVI2 fluctuation during the year. As for SOS and EOS, PATg is significantly earlier than the other communities.

Accuracy Evaluation of Different Scenarios
The overall accuracy and kappa coefficient are used to evaluate the performances of the different scenarios (see Table 6). According to the results, Scenario 1 has the lowest overall accuracy (59.89%) and kappa coefficient (0.46), revealing that RGB and NIR spectral data alone can hardly differentiate grassland communities. Compared with Scenario 1, the overall accuracies of Scenarios 2 and 3 increase by about 9.63% and 14.7%, respectively, demonstrating that the inclusion of either the PCA EVI2 time-series or the phenological features improves the classification accuracy. When the two feature sets are combined, Scenario 4 achieves the highest accuracy of 87.25%, proving that integrating the PCA EVI2 time-series and phenological features boosts the classification accuracy far more than using either alone. However, it should be noted that when the 500 m resolution MODIS data are utilized for classification, the overall accuracy is below 65% even when both feature sets are combined, only about 3.4% higher than using GF multispectral data alone. The contrast between Scenarios 4 and 5 thus reveals that improving the spatial resolution from 500 m to 16 m can also significantly improve the classification accuracy of grassland communities.

Grassland Communities Mapping in Ordos at 16 m Resolution
In this study, the result of Scenario 4 was applied in mapping the grassland communities of Ordos (Figure 8). The corresponding confusion matrix (Table 7) reports that the overall accuracy reaches 87.25% and the kappa coefficient reaches 0.83, illustrating that the 16 m GF data, integrating the EVI2 time-series and phenological features, are reliable for differentiating grassland communities at the regional scale. To verify the superiority of the 16 m spatial resolution images over the 500 m resolution images in grassland community mapping, we selected three subdomains (marked in Figure 8) from the study area and compared the results of Scenarios 4 and 5. As shown in Figure 9, Scenario 4's results are more detailed, and the boundaries between different communities are more clearly defined than those of Scenario 5. This result indicates that the classification results based on the 16 m spatial resolution images are superior.

Applicability of GF-1/6 Satellite Data in Regional-Scale Grassland Communities Classification
To improve the classification accuracy of grassland communities with satellite-borne multispectral data, PCA EVI2 time-series data and phenological features were employed in this study. However, acquiring these data requires a short satellite revisit cycle. With the advantage of a short return period (2 d), the GF-1/6 satellites are very suitable for this task. Meanwhile, compared to most sensors with short revisit cycles, such as MODIS, the GF-1/6 satellites, with their finer spatial resolution (16 m), can capture more surface detail [32,34], thus improving the classification accuracy of grassland communities. Based on the results of Scenarios 4 and 5, under the same conditions, the classification accuracy is 23.96% higher with GF data than with MODIS data.
Nevertheless, it should be noted that while the 16 m resolution images do contribute to the improvement in the classification accuracy of grassland communities, they also increase the data volume, making the classification time-consuming. With the construction of multiple cloud computing platforms, in future work we will attempt to conduct the experiments on advanced remote sensing platforms such as Google Earth Engine [67]. Relying on cloud computing's powerful performance, the acquisition and processing of large data volumes can be completed in a short time.

Limitation of ESTARFM Algorithm
Although the GF-1/6 satellites can provide sufficient data for this study, as with other optical satellites, the usage of GF-1/6 data is still affected by weather conditions [68]. Therefore, the ESTARFM algorithm was employed in this study, and the fusion results suggest that it is reliable to use this algorithm with GF-1/6 WFV data. However, it is worth noting that errors remain compared to using actual images. Moreover, the accuracy of the fusion results is affected by the quality of pre-processing, the parameter settings of the algorithm [44], and changes in land-cover types [37], which increase the uncertainty of the fusion result. As more satellite data become available, we will attempt to integrate multi-sensor and multi-source datasets, such as Sentinel-2 and Landsat 8, to obtain multispectral time-series data.

Grassland Classification in Ordos
Based on previous research, the vital phenological features and the time-series VI data provide additional useful information for the classification of grassland [1], because they reveal the growth characteristics of different grassland communities [28], which cannot be expressed by multispectral data. Therefore, in this study, we attempted to employ the PCA EVI2 time-series and phenological features derived from fused temporal EVI2 data in grassland community classification. We supposed that the inclusion of these features could provide a more valid basis for differentiating grassland communities and improving the accuracy of the classification.
According to the result, the classification in Scenario 1, which used only the four bands of multispectral data, yields the lowest overall accuracy and kappa coefficient. With the addition of PCA EVI2 time-series and phenological features, the classifications in Scenario 2 and Scenario 3 improve the classification accuracy by 9.63% and 14.7%, respectively. Using both of these data for classification, the accuracy of the classification in Scenario 4 is even higher (27.36%). The results indicate that compared to using multispectral data only, the method that integrates multispectral data, PCA EVI2 time-series, and phenological features contributes to improving the classification accuracy of grassland communities.
In this study, we classify the grasslands of Ordos into five communities, namely the Caragana pumila Pojark, Caragana davazamcii Sanchir, Salix schwerinii E. L. Wolf grassland, the Potaninia mongolica Maxim, Ammopiptanthus mongolicus S. H. Cheng, Tetraena mongolica Maxim grassland, the Caryopteris mongholica Bunge and Artemisia ordosica Krasch grassland, the Calligonum mongolicum Turcz grassland, and the Stipa breviflora Griseb and Stipa bungeana Trin grassland, with an overall accuracy of 87.25%. Although we achieved a promising result, it should be noted that we only classified the main grasslands in the study area at the community level, not at the species level. Combining emerging satellite-borne hyperspectral data, such as the Orbita Hyperspectral Satellite (OHS) data, with EVI2 time-series and phenological features to achieve finer grass species classification is the focus of our future work.

Conclusions
This study explored regional-scale grassland classification using 23 phases of GF satellite data. The ESTARFM algorithm was validated for its applicability to GF data to generate 16 m spatial resolution cloudless time-series data. Combining phenological features, PCA EVI2 time-series, and multispectral data, the five main grassland communities of Ordos were classified with an overall accuracy of 87.25%. The results reveal that the addition of either EVI2 time-series or phenological features can improve the classification accuracy of grassland communities, and their combined utilization is even more effective. The results also show that classification of grassland using the 16 m resolution GF data with multiple features performed much better than that using the 500 m MODIS data with the same features (overall accuracy: 87% compared to 63%), which highlights the advantage of fine spatial resolution for grassland community classification. In summary, this study proves that the proposed method can serve as a basis for grassland community classification over large areas with moderate- and high-resolution images.