Article

Rice Mapping Using a BiLSTM-Attention Model from Multitemporal Sentinel-1 Data

1 Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Agriculture 2021, 11(10), 977; https://doi.org/10.3390/agriculture11100977
Submission received: 3 September 2021 / Revised: 1 October 2021 / Accepted: 7 October 2021 / Published: 9 October 2021

Abstract

Timely and accurate rice distribution information is needed to ensure the sustainable development of food production and food security. With its unique advantages, synthetic aperture radar (SAR) can monitor the rice distribution in tropical and subtropical areas under any weather conditions. This study proposes an accurate rice extraction and mapping framework that can solve the issues of low sample production efficiency and fragmented rice plots when prior information on rice distribution is insufficient. The experiment was carried out using multitemporal Sentinel-1A data in Zhanjiang, China. First, the temporal characteristic map was used for the visualization of rice distribution to improve the efficiency of rice sample production. Second, rice classification was carried out based on the BiLSTM-Attention model, which focuses on learning the key information of rice and non-rice in the backscattering coefficient curve and gives different degrees of attention to rice and non-rice features. Finally, the rice classification results were optimized based on a high-precision global land cover classification map. The experimental results showed that the classification accuracy of the proposed framework on the test dataset was 0.9351, the kappa coefficient was 0.8703, and the extracted plots maintained good integrity. Compared with the statistical data, the consistency reached 94.6%. Therefore, the framework proposed in this study can be used to extract rice distribution information accurately and efficiently.

1. Introduction

Rice is one of the most important food crops in the world, and more than half of the world’s population relies on rice as a staple food [1]. With the continuous growth of population and consumption, the global demand for rice will increase for at least another 40 years [2]. Nearly 496 million metric tons of milled rice were produced worldwide in 2019 (http://www.worldagriculturalproduction.com/crops/rice.aspx, accessed on 20 September 2021). China’s rice output exceeded 209 million tons in 2019, making it the world’s leading rice producer, followed by India and Indonesia. Almost all rice areas in China are irrigated, which raises China’s production even higher [3]. A reliable and accurate rice classification map is an important prerequisite for spatiotemporal rice monitoring and yield estimation [4,5], and it is also an important data source for food policy formulation and food security assessment [6,7,8].
Compared with traditional land resource survey methods, remote sensing technology has a large spatial coverage and a low cost, is not limited by season, and can provide timely and effective rice information [9]. Rice planting areas are mainly distributed in tropical and subtropical monsoon climates that share similar periods of rain and heat, increasing the difficulty of obtaining reliable high-resolution optical time series data [10]. Synthetic aperture radar (SAR) can work under any weather conditions and is very sensitive to the geometric structure and dielectric properties of crops [7]. Therefore, SAR has been more and more widely used in the field of rice monitoring and yield estimation [11].
The general approach to rice recognition from multitemporal SAR data is to use the time series change in the radar backscatter coefficient during rice growth as an important indicator for distinguishing rice areas [12,13,14]. By jointly analyzing the backscattering coefficient curve over the rice growth cycle and the rice phenological calendar, phenological indicators for rice identification and classification were defined [15,16,17]. Alternatively, by comparing the polarization decomposition components of rice and other crops in fully polarimetric SAR data [18,19], feature schemes were designed to extract variables with significant differences between rice and other crops. Then, an empirical model [20,21] was established, or machine learning classifiers such as k-means [22,23], decision tree (DT) [24,25,26], support vector machine (SVM) [27,28,29], and random forest (RF) [30,31,32,33] were used to realize rice recognition. Compared with the other machine learning algorithms mentioned above, random forest can efficiently handle large amounts of data and has strong generalization ability and resistance to overfitting [30,34]. However, rice extraction methods based on empirical models and traditional machine learning have some defects. Although empirical models are relatively simple, they require accurate prior knowledge of the study area to establish the equations and verify the results, so most of them demand considerable manual intervention. Moreover, these methods cannot make full use of the contextual information of images and cannot handle complex crop planting structures. In addition, they are inefficient in processing high-dimensional features.
With the development of deep learning, many researchers have introduced Fully Convolutional Networks (FCNs) [35] into the field of crop extraction and mapping. Cué La Rosa et al. combined FCNs with the Most Likely Class Sequence method and used 14 Sentinel-1 VV/VH polarization images to extract crops in tropical Brazil. The results revealed that FCNs tended to produce smoother results than their counterparts [36]. Wei et al. used U-Net, an improved FCN model, with 18 Sentinel-1 VV/VH images from 2017 to classify crops in Fuyu City, Jilin Province, China [37]. Compared with the SVM and RF methods, the U-Net model showed better classification performance. However, due to the limitation of the convolution structure in FCNs, they cannot find and extract changing and interdependent features from SAR time series data [38]. The data processing units of the Recurrent Neural Network (RNN) model have internal feedback and feedforward connections, which reflect dynamic process characteristics during computation and can better learn the temporal characteristics of time series data [39,40,41,42,43]. Therefore, researchers have introduced RNNs into multitemporal rice extraction studies to achieve rice extraction and rice distribution mapping [43,44]. Among RNN models, the most representative are Long Short-Term Memory (LSTM) [45] and Bidirectional Long Short-Term Memory (BiLSTM) networks [46]. Ndikumana et al. simultaneously fed VH and VV polarization data into the LSTM and Gated Recurrent Unit (GRU) variants of the RNN, and the classification results were better than those of traditional methods [41]. Crisóstomo et al. filtered only VH polarization data and used BiLSTM to classify rice. The result was better than those of LSTM and classical machine learning methods [39].
The above results show that the application of deep learning technology to rice extraction from multitemporal SAR data has great potential.
However, many current studies on rice extraction from multitemporal SAR data use public datasets [32,47,48], whose coverage is limited. In addition, tropical and subtropical rice is grown in year-round, multi-cropping systems with complex planting cycles. Traditional methods based on handcrafted low-dimensional features struggle to extract rice effectively. Although LSTM or BiLSTM has been used to extract rice from multitemporal SAR data, the ability to learn rice time series information and the accuracy of the extraction results still need improvement. In China’s large-scale rice mapping, because rice plots are small and vulnerable to background influence, false alarms and misclassification occur easily. Therefore, to improve the classification accuracy, further post-processing is needed.
To address the abovementioned issues, a multitemporal rice extraction and mapping framework was designed. First, the statistical parameter characteristic maps of time series data were used to assist rice sample production and improve the efficiency of sample generation. Second, the attention mechanism [49] was introduced into the BiLSTM network model to strengthen the learning of rice temporal features and improve the accuracy of rice extraction. Finally, the classification results were optimized by using FROM-GLC10 (Finer Resolution Observation and Monitoring of Global Land Cover) [50]. The body of this paper is organized as follows. Section 2 introduces materials and the proposed method, and Section 3 introduces the experimental results and analysis. Section 4 provides a discussion of results. Finally, a conclusion is drawn.

2. Materials and Methods

2.1. Study Area and Material

2.1.1. Study Area

The study area (109°31′ E to 110°55′ E, 20°12′ N to 21°35′ N) is in southern China, in the area of Zhanjiang in southwestern Guangdong Province, as shown in Figure 1. Zhanjiang City, with a total area of 13,225.44 km2, is the largest rice planting area in Guangdong Province and is known as the “granary of western Guangdong”. Zhanjiang City has tropical and subtropical monsoon climates, with an annual accumulated active temperature (≥10 °C) of 8000~8500 °C. The terrain is dominated by plains and platforms, and paddy fields are mainly distributed in coastal plains and intermountain basins. The rice planting cycle in Zhanjiang City mainly runs from April to December. The planting system is a one-year multi-cropping system dominated by double-cropping indica rice, with paddy-dryland rotation with sugarcane, peanut, potato, beans, and other crops in the same or the following year.

2.1.2. SAR Data

To fully cover the rice planting cycle in the SAR time series data, a total of 66 C-band (frequency = 5.406 GHz, wavelength ≈ 6 cm) SAR images from the Sentinel-1A (S1A) satellite spanning March 2019 to December 2019 were used. The Sentinel-1 images were dual-polarization (VV and VH) GRD products acquired in interferometric wide swath (IW) imaging mode [51]. The coverages of the adjacent-track S1A data used in this paper are presented in Figure 1b, and the list of SAR data is shown in Table 1.

2.2. Methodology

As mentioned above, the following issues arise in rice extraction from multitemporal SAR data: (1) it is very difficult to construct rice samples using only SAR time series data without prior information on rice distribution; (2) the rice planting cycle in tropical and subtropical areas is complex, existing rice extraction methods do not make full use of the temporal characteristics of rice, and the classification accuracy needs to be improved; (3) small rice plots are often affected by small roads and shadows, producing false alarms in the extraction results, so the classification results need to be optimized.
Therefore, this paper proposes a rice extraction and mapping method using multitemporal SAR data, as shown in Figure 2. The research comprises the following parts: (1) pixel-level rice sample production based on temporal statistical characteristics; (2) rice classification with a BiLSTM-Attention network model that combines the BiLSTM model and an attention mechanism; and (3) optimization of the classification results based on FROM-GLC10 data.

2.2.1. Preprocessing

Because VH polarization is superior to VV polarization in monitoring rice phenology, especially during the rice flooding period [52,53], VH polarization was chosen. Several preprocessing steps were carried out. First, the S1A level-1 GRD data were imported to generate VH intensity images. Second, the multitemporal intensity images covering the same area were co-registered using ENVI software. Then, the De Grandi spatio-temporal filter was applied to the intensity images in the combined spatio-temporal domain. Finally, the Shuttle Radar Topography Mission (SRTM) 90 m DEM was used to calibrate and geocode the intensity images, and the intensity values were converted into backscattering coefficients on the logarithmic dB scale. The pixel size of the orthophoto is 10 m, reprojected to UTM zone 49N in the WGS-84 geographic coordinate system.
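The full preprocessing chain above relies on ENVI and the De Grandi filter, but the final step, converting linear backscatter intensity to the logarithmic dB scale, can be sketched in a few lines. The function name and the epsilon guard against log of zero are illustrative assumptions, not part of the paper's toolchain:

```python
import numpy as np

def intensity_to_db(intensity, eps=1e-10):
    """Convert calibrated backscatter intensity (linear power) to dB:
    sigma0_dB = 10 * log10(sigma0_linear). eps guards against log(0)."""
    return 10.0 * np.log10(np.maximum(intensity, eps))

# Illustrative linear sigma-nought values spanning water to bright targets
sigma0 = np.array([0.001, 0.01, 0.1, 1.0])
db = intensity_to_db(sigma0)  # -30, -20, -10, 0 dB
```

On the dB scale, the several-decade dynamic range of SAR backscatter becomes an additive scale, which is what makes the time series curves in the following sections comparable across landcover types.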

2.2.2. Time Series Curves of Different Landcovers

To understand the time series characteristics of rice and non-rice in the study area, typical rice, building, water, and vegetation samples in the study area were selected for time series curve analysis. The sample areas of the four landcover types were selected from frames 157-63, 157-66 and 84-65; a total of 12 sample boxes were selected, each containing 20 × 20 pixels. Figure 3 shows the distribution of the four selected landcover types.
The average of 400 sample points in each box was calculated to obtain the average of the time series curve of the four types of landcovers, as shown in Figure 4.
Among the four types of ground objects, the average backscattering coefficient of buildings was the highest, and that of water was the lowest. The average backscattering coefficient of non-rice vegetation was higher than that of rice. Moreover, because there was no flooding period for non-rice vegetation, the minimum value of its time series curve was greater than that of rice.
Different from dryland crops and other vegetation, rice has an agronomic flooding period during its growth, during which its backscattering coefficient is close to that of water. The transplanting time of early rice was approximately April, and the harvesting time was approximately from the end of July to the beginning of August. The transplanting time of late rice was approximately from the end of July to the beginning of August, and the harvesting time was approximately December. The rice samples in the three frames were labeled rice-1, rice-2 and rice-3. They started transplanting around the first acquisition date, when the rice was in the flooding period. As the rice grew, the backscattering coefficient reached its maximum at about the eighth acquisition. When the rice entered the mature stage, the backscattering coefficient began to decrease; the harvest was completed at the beginning of August, and the next growth cycle of late rice began. The results showed that the growth cycles of rice in the three frames were roughly synchronized. Although the acquisition dates of the three frames were not completely consistent, the maximum time difference was only 6 days, which was not enough to affect the phenological analysis of rice. The backscatter curves of the three rice samples showed some fluctuations, possibly explained by differing soil conditions.

2.2.3. Rice Sample Production Based on Optimal Time Series Statistical Parameters

For computational efficiency, four simple time series statistical parameters were selected for comparative analysis of the four landcover types: maximum, minimum, average, and variance. The average represents the central tendency of the time series data, the maximum and minimum values reflect the range of data variation, and the variance reflects the dispersion of the time series data. The results are shown in Figure 5.
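The four statistical parameters are per-pixel reductions along the time axis of the backscatter stack. A minimal sketch, assuming the stack is an array of shape (T, H, W) in dB (the function and key names are illustrative):

```python
import numpy as np

def temporal_statistics(stack):
    """Per-pixel statistics along the time axis of a SAR stack.

    stack: array of shape (T, H, W) of backscatter values in dB.
    Returns a dict of (H, W) arrays: maximum, minimum, average, variance.
    """
    return {
        "maximum": stack.max(axis=0),
        "minimum": stack.min(axis=0),
        "average": stack.mean(axis=0),
        "variance": stack.var(axis=0),
    }

# Toy stack: 22 acquisition dates over a 2 x 2 pixel area
rng = np.random.default_rng(0)
stack = rng.normal(-15.0, 3.0, size=(22, 2, 2))
stats = temporal_statistics(stack)  # each value has shape (2, 2)
```

Because every reduction is vectorized over the full image, computing all four parameter maps for a whole Sentinel-1 stack is a single pass over the data.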
According to Figure 5, the maximum value of rice was close to the vegetation, the minimum value of rice was close to the water body, the variance of rice was large, and the average was lower than that of vegetation. The maximum, minimum, and average values of buildings were the highest. The maximum, minimum, and the average of the water body were the lowest.
Then, three of the parameters were arbitrarily selected to synthesize false color composite images. As shown in Figure 6, rice was displayed in red in combination 1 (R: maximum G: minimum B: average), blue-purple in combination 2 (R: maximum G: minimum B: variance) and combination 3 (R: maximum G: average B: variance), and dark blue-purple in combination 4 (R: average G: minimum B: variance).
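Composing three parameter maps into a displayable false color image only requires rescaling each band and stacking them as R, G, B channels. A sketch under assumptions: the percentile stretch used for display is a common choice but is not specified by the paper, and all names are illustrative:

```python
import numpy as np

def stretch(band, low=2, high=98):
    """Percentile-stretch a band to [0, 1] for display."""
    lo, hi = np.percentile(band, [low, high])
    return np.clip((band - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def false_color(r_band, g_band, b_band):
    """Stack three statistical-parameter bands into an (H, W, 3) RGB image."""
    return np.dstack([stretch(r_band), stretch(g_band), stretch(b_band)])

# Toy bands standing in for the maximum, minimum, and average maps
rng = np.random.default_rng(0)
vmax, vmin, vavg = rng.normal(size=(3, 50, 50))
# Combination 1 from the text: R = maximum, G = minimum, B = average
rgb = false_color(vmax, vmin, vavg)
```

With real data, `rgb` can be passed directly to `matplotlib.pyplot.imshow` for the kind of visual comparison shown in Figures 6 to 8.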
Two subareas containing rice and water were selected for comparative analysis, as shown in Figure 7 and Figure 8. First, the false color images were compared to determine which offered the highest color discrimination between rice areas and other ground objects. As shown in Figure 7a, the central belt area is a rice area. In Figure 7b, this area is shown in red, which is clearly different from the green of the surrounding features. In Figure 7c–e, the rice area is shown in blue, but some surrounding non-rice areas, such as small roads and building shadows, are also shown in blue, which could easily be confused with rice and is not conducive to rice sample extraction. Therefore, the combination (R: maximum G: minimum B: average) shown in Figure 7b was more suitable for extracting rice areas. The next step was to confirm the separability of rice and water in the false color images. Figure 8a shows an area containing both rice and water. Figure 8c–e could not accurately distinguish rice from water; only in Figure 8b were the colors of the rice area and water clearly different. Therefore, among the four combinations of time series statistical parameters, combination 1, i.e., the false color image synthesized from the maximum, minimum, and average, had the best visualization effect for rice in this study area.
After determining the optimal time series statistical parameter image, the SAR rice sample sets were produced. The specific steps are as follows: (1) based on the rice locations displayed in the false color image, the corresponding rice locations in the time series SAR data are preliminarily determined to quickly locate the rice regions; (2) cross-validation is performed using Google Earth optical data; (3) the boundaries of rice and non-rice plots are drawn manually to complete the production of the sample sets.
The samples were divided into two classes: rice and non-rice. The distribution of sample points is shown in Figure 9. There were 300,000 sample points (150,000 per class), of which 210,000 samples were used for model training (70%), 60,000 for validation (20%), and 30,000 for the model performance test (10%).
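The 70/20/10 split above can be reproduced by shuffling the sample indices once and slicing; a minimal sketch, assuming a fixed seed for repeatability (the seed value and function name are assumptions):

```python
import numpy as np

def split_indices(n, train=0.7, val=0.2, seed=42):
    """Shuffle n sample indices and split them into train/val/test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(300_000)
# 210000 training, 60000 validation, 30000 test samples
```

Shuffling before slicing matters here: rice and non-rice points were digitized plot by plot, so contiguous index ranges would otherwise be spatially correlated.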

2.2.4. BiLSTM-Attention Model

The BiLSTM structure consists of a forward LSTM layer and a backward LSTM layer, which together capture past and future information in time series data [46]. Because the output of the BiLSTM model at a given time depends on both the previous and the following time periods, the BiLSTM model has a stronger ability to process contextual information than the one-way LSTM model.
The rice planting patterns in tropical and subtropical regions are complex and diverse. Existing research methods still need to improve their ability to learn rice time series information, which makes it difficult to achieve high-precision extraction of the rice distribution. To improve the rice extraction results, it is necessary to strengthen the learning of the important temporal characteristics of rice and non-rice land types and to enhance the separability of rice and non-rice. However, the various time-dimension features extracted from the time series data by the BiLSTM model have the same weight in the decision-making process, which weakens the role of important time-dimension features in classification and affects the results. Therefore, it is necessary to assign different weights to the time-dimension features obtained by the BiLSTM model so that different features can contribute fully to the classification results.
To solve the abovementioned issues, a BiLSTM-Attention network model was designed combining a BiLSTM model and an attention mechanism to realize high-precision rice extraction.
The core of the model consisted of two BiLSTM layers (each with five LSTM units, and a hidden dimension of 256 per LSTM unit), one attention layer, two fully connected layers, and a softmax function, as shown in Figure 10. The input of the model was the vector of sequential VH-polarization backscattering coefficients at each sample point. Since the time dimension of the time series data was 22, its size was 22 × 1. Each BiLSTM layer consisted of a forward LSTM layer and a backward LSTM layer.
When the data passed through the forward LSTM layer, it learned the temporal characteristics of the forward change in the backscattering coefficient of the rice time series; the backward LSTM layer likewise learned the temporal characteristics of the reverse change. Together, the forward and backward LSTM layers made the output of the model at a given time depend on the backscattering coefficient values of both earlier and later times. Then, the rice temporal features learned by the two BiLSTM layers were input into the attention layer. The core idea of the attention layer was to learn task-related features by suppressing irrelevant parts in pattern recognition, as shown in Figure 10.
The attention layer forced the network to focus on the rice extraction task: it was more sensitive to the class-specific information in the time series data, concentrated on extracting the information in the SAR time series that is effective for classification, assigned different degrees of “attention” accordingly, and remained consistent with the classification information across the whole time series. When faced with the more complicated rice extraction tasks of tropical and subtropical regions, the attention layer enabled the network model to reduce the misclassification of rice and non-rice.
First, the hidden vector $h_{it}$ obtained from the two BiLSTM layers was input into a single-layer neural network to obtain $u_{it}$; then the transpose of $u_{it}$ was multiplied by $u_w$ and normalized by softmax to obtain the weight $\alpha_{it}$. Subsequently, $\alpha_{it}$ and $h_{it}$ were multiplied and summed to obtain the weighted vector $c_i$. Finally, the attention output $c_i$ was passed successively through two fully connected layers and one softmax layer to obtain the final classification result.

$$u_{it} = \tanh(W_w h_{it} + b_w)$$

$$\alpha_{it} = \frac{\exp(u_{it}^\top u_w)}{\sum_t \exp(u_{it}^\top u_w)}$$

$$c_i = \sum_t \alpha_{it} h_{it}$$

where $h_{it}$ is the hidden vector at time $t$ of the $i$th sample, $\alpha_{it}$ is the attention weight, $W_w$ and $u_w$ are learnable weights, $b_w$ is the bias, and $c_i$ is the output of the attention mechanism. The hidden vector $h_{it}$ obtained from BiLSTM yields $u_{it}$ after the activation function. $W_w$ and $u_w$ were randomly initialized.
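The architecture and equations above can be sketched in PyTorch. This is a minimal sketch, not the authors' implementation: the fully connected layer width (64) is an assumption, and a single stacked `nn.LSTM` with `num_layers=2` stands in for the two BiLSTM layers described in the text:

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    """Sketch of a BiLSTM-Attention classifier for 22-step VH sequences."""

    def __init__(self, input_dim=1, hidden_dim=256, num_classes=2):
        super().__init__()
        # Two stacked bidirectional LSTM layers, hidden dimension 256
        self.bilstm = nn.LSTM(input_dim, hidden_dim, num_layers=2,
                              batch_first=True, bidirectional=True)
        feat = 2 * hidden_dim                        # forward + backward states
        self.att_proj = nn.Linear(feat, feat)        # W_w and b_w
        self.u_w = nn.Parameter(torch.randn(feat))   # context vector u_w
        self.fc = nn.Sequential(nn.Linear(feat, 64), nn.ReLU(),
                                nn.Linear(64, num_classes))

    def forward(self, x):                    # x: (batch, 22, 1)
        h, _ = self.bilstm(x)                # (batch, 22, 512)
        u = torch.tanh(self.att_proj(h))     # u_it = tanh(W_w h_it + b_w)
        scores = u @ self.u_w                # (batch, 22)
        alpha = torch.softmax(scores, dim=1).unsqueeze(-1)  # attention weights
        c = (alpha * h).sum(dim=1)           # c_i = sum_t alpha_it h_it
        return self.fc(c)                    # logits for softmax/cross-entropy

model = BiLSTMAttention()
out = model(torch.randn(4, 22, 1))  # logits of shape (4, 2)
```

Returning raw logits rather than softmax probabilities matches the usual PyTorch pattern, since `nn.CrossEntropyLoss` applies log-softmax internally.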
The BiLSTM-Attention model could effectively mine the change information between the previous time and the next time in the SAR time series data and could discern the high-dimensional time features of rice and non-rice from the time series data. Additionally, by learning the variation characteristics of the temporal backscatter coefficient of the rice growth cycle and the variation characteristics of the temporal backscatter coefficient of non-rice, the model could extract the key temporal data for rice and non-rice, strengthen the ability to distinguish rice and non-rice, and help to improve the classification effect of the model.

2.2.5. Optimization of Classification Results Based on FROM-GLC10

Due to the fragmentation of rice plots in the study area and the impact of buildings and water bodies, there may be a misclassification of rice in the classification results. Further post-processing was needed to improve the classification results.
In 2019, the research team of Professor Gong Peng at the Department of Earth System Science, Tsinghua University, released a 10 m resolution global land cover mapping method and product (FROM-GLC10), which can be downloaded free of charge from http://data.ess.tsinghua.edu.cn (accessed on 22 January 2021). The reported overall accuracy of the FROM-GLC10 product is 72.76% [50].
As shown in Figure 3, the water layer mask and impermeable layer mask were extracted from FROM-GLC10, and then the rice classification results were optimized using the intersection of the initial extraction results and the mask layer.
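At pixel level, this post-processing step is a Boolean mask operation: keep a rice pixel only where FROM-GLC10 reports neither water nor impervious surface. A minimal sketch with toy arrays (the function name is illustrative):

```python
import numpy as np

def optimize_with_glc(rice_mask, water_mask, impervious_mask):
    """Remove rice pixels that fall on FROM-GLC10 water or impervious surface."""
    return rice_mask & ~(water_mask | impervious_mask)

# Toy 2 x 2 example: two of three rice pixels overlap the GLC masks
rice = np.array([[1, 1], [1, 0]], dtype=bool)
water = np.array([[0, 1], [0, 0]], dtype=bool)
impervious = np.array([[0, 0], [1, 0]], dtype=bool)
optimized = optimize_with_glc(rice, water, impervious)  # only [0, 0] survives
```

In practice the FROM-GLC10 layers would first be reprojected and resampled onto the 10 m classification grid so the masks align pixel for pixel.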

2.2.6. Accuracy Evaluation

In this research, confusion matrix-based accuracy indicators widely used in crop classification research were adopted, including accuracy, precision, recall, F1, and kappa [54,55,56].
$$\mathrm{accuracy} = \frac{TP + TN}{TP + TN + FN + FP}$$

$$\mathrm{precision} = \frac{TP}{TP + FP}$$

$$\mathrm{recall} = \frac{TP}{TP + FN}$$

$$F1 = \frac{2TP}{2TP + FP + FN}$$

$$\mathrm{kappa} = \frac{\mathrm{accuracy} - P_e}{1 - P_e}$$

$$P_e = \frac{(TP + FP)(TP + FN) + (FN + TN)(FP + TN)}{(TP + TN + FN + FP)^2}$$
where TP is the number of the rice pixels truly classified as rice pixels, TN is the number of non-rice pixels truly classified as non-rice pixels, FP is the number of non-rice pixels falsely classified as rice, FN is the number of rice pixels falsely classified as non-rice pixels, and Pe is the expected accuracy.
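The five indicators follow directly from the four confusion matrix counts; a short sketch (the function name is illustrative):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the five evaluation indicators from confusion matrix counts."""
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    # Expected agreement by chance, then Cohen's kappa
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (accuracy - pe) / (1 - pe)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "kappa": kappa}

# Symmetric toy counts: accuracy 0.9, kappa 0.8
m = classification_metrics(tp=90, tn=90, fp=10, fn=10)
```

The toy counts illustrate why kappa is the stricter indicator: with balanced classes the chance agreement Pe is 0.5, so a 0.9 accuracy corresponds to a kappa of only 0.8, the same relationship seen between the paper's 0.9351 accuracy and 0.8703 kappa.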

2.2.7. Parameter Settings

The BiLSTM-Attention model was built with the PyTorch framework (Python 3.7, PyTorch 1.2.0). All processing was performed on a Windows 7 workstation with an NVIDIA GeForce GTX 1080 Ti graphics card. The batch size was set to 64, the initial learning rate was 0.001, and the learning rate was adjusted according to the number of training epochs: the decay step of the learning rate was 10 epochs, and the multiplicative factor for updating the learning rate was 0.1. The Adam optimizer was used, and the optimized loss function was cross entropy, the standard loss function for multiclass classification tasks, which also gives acceptable results in binary classification tasks [57].
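The reported hyperparameters map directly onto standard PyTorch components. A minimal sketch, with a placeholder linear model standing in for the BiLSTM-Attention network and toy data for one training step (both are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Hyperparameters as reported: batch size 64, Adam, lr 1e-3,
# decay every 10 epochs by a factor of 0.1, cross-entropy loss.
model = nn.Sequential(nn.Linear(22, 2))  # placeholder for BiLSTM-Attention
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
criterion = nn.CrossEntropyLoss()

# One toy optimization step on a batch of 64 samples
x, y = torch.randn(64, 22), torch.randint(0, 2, (64,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
scheduler.step()  # called once per epoch; lr stays 0.001 until epoch 10
```

`StepLR` with `step_size=10` and `gamma=0.1` implements exactly the described schedule: the learning rate is multiplied by 0.1 every 10 epochs.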

3. Results

To verify the effectiveness of the proposed method, we carried out three experiments: (1) comparison of the proposed method with the BiLSTM model and the RF classification method; (2) comparative analysis before and after optimization using FROM-GLC10; and (3) comparison of our experimental results with agricultural statistics.

3.1. Comparison of Rice Classification Methods

In this experiment, the BiLSTM method and the classical machine learning method RF were selected for comparative analysis, and the five evaluation indexes introduced in Section 2.2.6 were used for quantitative evaluation.
To ensure a fair comparison, the BiLSTM model had the same BiLSTM layers and parameter settings as the BiLSTM-Attention model. The BiLSTM model was also built with the PyTorch framework.
Random forest, as its name implies, consists of a large number of individual decision trees that operate as an ensemble. Each tree outputs a class prediction, and the class with the most votes becomes the model’s prediction. The implementation of the RF method is described in [58]. By setting the maximum depth and the minimum number of samples on a node, tree construction can be stopped early, which reduces the computational complexity of the algorithm and the correlation between sub-samples. In our experiment, RF and its parameter tuning were implemented with Python and the scikit-learn library (version 0.24.2). The number of trees was 100, and the maximum tree depth was 22.
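The reported RF configuration corresponds to two scikit-learn parameters. A minimal sketch on synthetic data (the toy feature matrix and labels are illustrative, standing in for the 22-date backscatter vectors and rice/non-rice labels):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the real samples: 500 vectors of 22 dB values,
# labeled by a simple threshold so the toy problem is separable.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 22))
y = (X.mean(axis=1) > 0).astype(int)

# Settings as reported in the text: 100 trees, maximum depth 22
clf = RandomForestClassifier(n_estimators=100, max_depth=22, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)  # near-perfect on this easy toy problem
```

On the real test set this configuration reached 0.8809 accuracy (Table 2), the baseline that the BiLSTM and BiLSTM-Attention models improve upon.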
The quantitative results of the different methods on the test dataset described in Section 2.2.3 are shown in Table 2. The accuracy of BiLSTM-Attention was 0.9351, significantly better than that of BiLSTM (0.9012) and RF (0.8809). This result showed that, compared with BiLSTM and RF, the BiLSTM-Attention model achieved higher classification accuracy.
A test area was selected for detailed comparative analysis, as shown in Figure 11. Figure 11b shows the RF classification results, which contained some fragmented omission areas. It is possible that the structure of RF itself limits its ability to learn the temporal characteristics of rice. The omission areas in the BiLSTM classification results shown in Figure 11c were reduced, and the plots were relatively complete. It was found that the time series curves of the rice missed by the BiLSTM and RF models showed an obvious flooding period signal; when the harvest period signal was not obvious, the models classified these pixels as non-rice, resulting in missed detection of rice. Compared with the classification results of BiLSTM and RF, the rice plots in the BiLSTM-Attention results in Figure 11d were relatively complete with fewer omissions, indicating that the proposed method discriminates better between rice and non-rice.

3.2. Optimization of Classification Results Based on FROM-GLC10 Data

Due to the complex background of the test area and the fragmentation of rice plots, some false alarms remained after classification by the BiLSTM-Attention model. These false alarms were mainly small water bodies and impermeable surfaces. FROM-GLC10 data were used to remove false alarms from the original classification results.
Figure 12 shows an example of using FROM-GLC10 data to remove false alarms. As shown in Figure 12a, this area contains water bodies and impermeable surfaces. As shown in Figure 12b, small roads, riverside vegetation and some water bodies were wrongly classified as rice in the original classification results. The water layer and the impermeable layer extracted from FROM-GLC10 are shown in Figure 12c,d, respectively, and Figure 12e is the optimized result. The removed false alarms (Figure 12f) accounted for about 2.4% of the extracted rice area.

3.3. Rice Distribution Map

Figure 13 shows the rice distribution in Zhanjiang City obtained with the proposed framework. The rice area in the rice distribution map was 180,012.39 ha, while the statistical paddy field area of Zhanjiang City in 2018 released by the Department of Natural Resources of Guangdong Province was 190,280.02 ha (http://nr.gd.gov.cn/zwgknew/tzgg/gg/content/post_2719232.html, accessed on 6 March 2021). Taking the paddy field area as the rice area, the overall detection consistency of our method is 94.6%, in good agreement with the statistical data. The difference between the rice area in the classification results and the statistical data was 10,267.63 ha, approximately 5.4% of the statistical value.
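The consistency figures follow from simple arithmetic on the two areas, which can be verified directly:

```python
# Areas in hectares, as reported in the text
mapped = 180_012.39       # rice area extracted by the proposed framework
statistical = 190_280.02  # paddy field area from the official statistics

consistency = mapped / statistical * 100  # ~94.6 %
difference = statistical - mapped         # ~10,267.63 ha, ~5.4 %
```

The 94.6% figure is thus the ratio of mapped to statistical area, and the 5.4% shortfall is its complement.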
Table 3 shows the classified rice areas in the districts and counties of Zhanjiang City. It can be seen from Table 3 that there were some omissions in several districts and counties. There were omissions in the rice extraction results for Lianjiang City; a possible reason is that the terrain of Lianjiang City is mainly mountainous, and the complex terrain affects the extraction of rice. Potou District, Chikan District and Xiashan District had similar problems. These administrative divisions are very small, buildings and streets are densely distributed, and the paddy fields are small and scattered, resulting in reduced detection accuracy. In the future, with the help of higher-precision DEM data and landcover data, such omissions can be reduced.

4. Discussion

In this study, our goal was to extract rice in tropical and subtropical areas from SAR data using deep learning methods. With the proposed method, the rice area of Zhanjiang City was successfully extracted from Sentinel-1 data.
Both deep learning classifiers and traditional machine learning classifiers require a certain amount of rice sample data. Most existing studies used open land cover classification maps produced by government agencies as the ground truth for rice extraction [32,47,48], but the coverage of these maps is limited, and they cannot be updated in time to meet research needs. Alternatively, the ground truth of rice distribution can be obtained through field surveys [43], but this approach is time-consuming and laborious. When field investigation is impossible, rice samples are often selected from remote sensing images. Due to the imaging mechanism of SAR, interpreting SAR images is much more difficult than interpreting optical images. A common solution is to locate rice planting areas using the time series curves of SAR backscattering coefficients together with optical data [24,27,30,39,59]. Visually interpreting rice regions on single-band SAR grayscale images remains a great challenge, so an effective strategy is to combine characteristic parameters into a false color image that maximizes the color difference between rice and other land cover types and achieves the best interpretation effect. Based on an analysis of the statistical characteristics of the time series backscattering coefficients of rice and non-rice in Zhanjiang City, this paper compared several color combinations of statistical parameters, selected the feature combination best suited to highlighting rice regions, realized rapid localization of rice, and improved the efficiency of sample production.
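The false color strategy described above can be sketched as follows. The random backscatter stack is a stand-in for the real calibrated Sentinel-1 VH time series, and the R/G/B assignment follows the maximum/minimum/average combination discussed in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical backscatter stack (time, height, width) in dB, standing in
# for the calibrated multitemporal Sentinel-1 VH time series.
stack = rng.uniform(-25.0, -8.0, size=(22, 64, 64))

# Per-pixel temporal statistics considered for the composites.
t_max = stack.max(axis=0)
t_min = stack.min(axis=0)
t_mean = stack.mean(axis=0)

def stretch(band):
    """Linearly rescale a band to [0, 1] for display."""
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo)

# R: temporal maximum, G: temporal minimum, B: temporal average -- the
# combination the study found most suitable for visually locating rice.
rgb = np.dstack([stretch(t_max), stretch(t_min), stretch(t_mean)])
print(rgb.shape)  # (64, 64, 3)
```

The resulting array can be displayed directly as an RGB image for visual sample selection.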
There are many successful cases of rice classification based on traditional machine learning or deep learning [32,39,41,52,60]. In 2016, Nguyen et al. applied a decision tree method to Sentinel-1 time series data for rice recognition, achieving an accuracy of 87.2% [52]. Bazzi et al. used RF and decision tree classifiers with Sentinel-1 SAR time series acquired between May 2017 and September 2017 to map the rice area over the Camargue region of France [32]; the overall accuracies of both methods exceeded 95%. However, the derived indicators used in these machine learning methods depend heavily on prior knowledge of specific regions, making them difficult to apply directly to other regions. In addition, these studies targeted single-cropping rice and are not suitable for rice areas with complex planting patterns. Ndikumana et al. conducted a comparative study of deep learning and traditional machine learning methods for crop classification from SAR data, classifying crops in Camargue, France, based on 25 Sentinel-1 images; the experimental results showed that LSTM and GRU classifiers were significantly better than the classical methods [41]. Wang et al. combined 11 Sentinel-2 images and 23 Sentinel-1 GRD images covering Tongxiang County in China's Zhejiang Province and fed them into a designed LSTM classifier to obtain a paddy rice map with an overall accuracy of up to 0.937 [60]. de Castro Filho et al. used 60 scenes of Sentinel-1 VH data from 2017 to 2018 and a BiLSTM network to classify rice in Rio Grande do Sul state, Brazil [39]; the BiLSTM model outperformed the LSTM model. RNNs have thus achieved some success in rice extraction, but these models assign the same weight to time dimension features of differing importance in the classification decision-making process, which limits the final classification accuracy.
Therefore, we added an attention mechanism to the BiLSTM model. It fully mines the favorable time series information, assigns different weights to the time dimension features during classification decision-making, and strengthens the separability of rice and non-rice, thereby improving the classification performance of the model.
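The attention step can be illustrated with a minimal NumPy sketch of attention pooling over BiLSTM hidden states, following the common additive attention formulation of Yang et al. [49]. The shapes, parameter names, and random initialization here are illustrative assumptions; in the actual model the parameters are learned jointly with the BiLSTM:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)

T, H = 22, 8  # time steps and per-direction hidden size (illustrative)
# Stand-in for the concatenated forward/backward BiLSTM hidden states.
hidden = rng.normal(size=(T, 2 * H))

# Attention parameters (random here; learned in practice).
W = rng.normal(size=(2 * H, 2 * H))
b = rng.normal(size=2 * H)
u = rng.normal(size=2 * H)

# Score each time step, normalize to attention weights, pool the sequence.
scores = np.tanh(hidden @ W + b) @ u   # (T,)
alpha = softmax(scores)                # weights over time, sum to 1
context = alpha @ hidden               # attention-weighted summary, (2H,)

print(alpha.shape, context.shape)
```

The context vector then feeds the final classification layer, so time steps with large weights dominate the rice/non-rice decision.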
In the absence of a large amount of prior knowledge of rice, some misclassification is inevitable in the original classification results, so post-processing is needed. Many researchers have used post-processing methods to optimize classification results [36,61,62,63]. Therefore, we used FROM-GLC10 to post-process the rice extraction results, which reduced false alarms to a certain extent.
Whether compared with other methods or with statistical data, the proposed method achieved good results, showing that it has practical value for extracting rice in tropical and subtropical areas. However, some deficiencies remain. First, in mountainous areas, mountains and shadows in SAR images cause omission of rice. Second, riverside vegetation has temporal characteristics similar to rice, leading to false alarms in the extraction results. In the future, we will add negative sample training to further improve the performance of the method.

5. Conclusions

According to the application requirements of tropical and subtropical rice monitoring, this study proposed a rice extraction and mapping framework comprising a rice sample production method based on temporal characteristics, a rice classification method based on the BiLSTM-Attention model, and a post-processing method based on a high-precision global land cover map. Using 66 scenes of Sentinel-1 data from 2019 and the proposed framework, rice mapping was carried out in Zhanjiang City, China. The experimental results show that the feature combination of the time series maximum, minimum, and average intuitively reflects the distribution of rice and improves the efficiency of sample production. The accuracy of rice extraction by the proposed method is 0.9351, which is better than the BiLSTM and RF methods, and the extracted plots maintain good integrity.
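For reference, the reported accuracy, F1, and kappa are standard quantities derivable from a binary confusion matrix. A minimal sketch with illustrative counts (not the study's actual confusion matrix):

```python
# Binary classification metrics of the kind reported in Table 2,
# computed from a hypothetical confusion matrix (counts are illustrative).
tp, fp, fn, tn = 940, 84, 50, 926

n = tp + fp + fn + tn
accuracy = (tp + tn) / n
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

# Cohen's kappa: observed agreement corrected for chance agreement.
p_o = accuracy
p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_o - p_e) / (1 - p_e)

print(round(accuracy, 4), round(f1, 4), round(kappa, 4))
```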
In the coming years, we will carry out large-scale rice mapping research based on multitemporal SAR data, further improve the classification accuracy, and advance rice yield estimation based on yield estimation models, so as to provide valuable information for the formulation of food policy.

Author Contributions

Conceptualization, methodology, software, C.S.; validation, formal analysis, H.Z.; investigation, C.S. and L.X.; resources, data curation, L.X.; writing—original draft preparation, C.S.; writing—review and editing, H.Z.; visualization, L.X. and L.L.; supervision, project administration, H.Z. and C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grants 41971395, 41930110 and 42001278.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Sentinel-1 data presented in this study are openly and freely available at https://urs.earthdata.nasa.gov/, accessed on 15 April 2020.

Acknowledgments

The authors would like to thank ESA and EU Copernicus Program for providing the Sentinel-1A SAR data.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kuenzer, C.; Knauer, K. Remote sensing of rice crop areas. Int. J. Remote Sens. 2012, 34, 2101–2139.
2. Godfray, H.C.; Beddington, J.R.; Crute, I.R.; Haddad, L.; Lawrence, D.; Muir, J.F.; Pretty, J.; Robinson, S.; Thomas, S.M.; Toulmin, C. Food security: The challenge of feeding 9 billion people. Science 2010, 327, 812–818.
3. Maclean, J.; Hardy, B.; Hettel, G. Rice Almanac: Source Book for One of the Most Important Economic Activities on Earth; IRRI: Los Baños, Philippines, 2013.
4. Jin, X.; Kumar, L.; Li, Z.; Feng, H.; Xu, X.; Yang, G.; Wang, J. A review of data assimilation of remote sensing and crop models. Eur. J. Agron. 2018, 92, 141–152.
5. Laborte, A.G.; Gutierrez, M.A.; Balanza, J.G.; Saito, K.; Zwart, S.J.; Boschetti, M.; Murty, M.V.R.; Villano, L.; Aunario, J.K.; Reinke, R.; et al. RiceAtlas, a spatial database of global rice calendars and production. Sci. Data 2017, 4, 170074.
6. de Leeuw, J.; Vrieling, A.; Shee, A.; Atzberger, C.; Hadgu, K.; Biradar, C.; Keah, H.; Turvey, C. The Potential and Uptake of Remote Sensing in Insurance: A Review. Remote Sens. 2014, 6, 10888–10912.
7. Orynbaikyzy, A.; Gessner, U.; Conrad, C. Crop type classification using a combination of optical and radar remote sensing data: A review. Int. J. Remote Sens. 2019, 40, 6553–6595.
8. Wei, Y.; Tong, X.; Chen, G.; Liu, D.; Han, Z. Remote Detection of Large-Area Crop Types: The Role of Plant Phenology and Topography. Agriculture 2019, 9, 150.
9. Mosleh, M.K.; Hassan, Q.K.; Chowdhury, E.H. Application of remote sensors in mapping rice area and forecasting its production: A review. Sensors 2015, 15, 769–791.
10. Liu, C.-A.; Chen, Z.-X.; Shao, Y.; Chen, J.-S.; Hasi, T.; Pan, H.-Z. Research advances of SAR remote sensing for agriculture applications: A review. J. Integr. Agric. 2019, 18, 506–525.
11. Zhao, R.; Li, Y.; Ma, M. Mapping Paddy Rice with Satellite Remote Sensing: A Review. Sustainability 2021, 13, 503.
12. Koppe, W.; Gnyp, M.L.; Hütt, C.; Yao, Y.; Miao, Y.; Chen, X.; Bareth, G. Rice monitoring with multi-temporal and dual-polarimetric TerraSAR-X data. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 568–576.
13. Lopez-Sanchez, J.M.; Ballester-Berman, J.D.; Hajnsek, I. First Results of Rice Monitoring Practices in Spain by Means of Time Series of TerraSAR-X Dual-Pol Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 412–422.
14. Mansaray, L.R.; Zhang, D.; Zhou, Z.; Huang, J. Evaluating the potential of temporal Sentinel-1A data for paddy rice discrimination at local scales. Remote Sens. Lett. 2017, 8, 967–976.
15. Chen, C.F.; Son, N.T.; Chen, C.R.; Chang, L.Y.; Chiang, S.H. Rice Crop Mapping Using Sentinel-1a Phenological Metrics. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 863–865.
16. Hoang-Phi, P.; Lam-Dao, N.; Pham-Van, C.; Chau-Nguyen-Xuan, Q.; Nguyen-Van-Anh, V.; Gummadi, S.; Le-Van, T. Sentinel-1 SAR Time Series-Based Assessment of the Impact of Severe Salinity Intrusion Events on Spatiotemporal Changes in Distribution of Rice Planting Areas in Coastal Provinces of the Mekong Delta, Vietnam. Remote Sens. 2020, 12, 3196.
17. Nguyen, D.B.; Wagner, W. European Rice Cropland Mapping with Sentinel-1 Data: The Mediterranean Region Case Study. Water 2017, 9, 392.
18. Wu, F.; Wang, C.; Zhang, H.; Zhang, B.; Tang, Y. Rice Crop Monitoring in South China With RADARSAT-2 Quad-Polarization SAR Data. IEEE Geosci. Remote Sens. Lett. 2011, 8, 196–200.
19. Yonezawa, C.; Negishi, M.; Azuma, K.; Watanabe, M.; Ishitsuka, N.; Ogawa, S.; Saito, G. Growth monitoring and classification of rice fields using multitemporal RADARSAT-2 full-polarimetric data. Int. J. Remote Sens. 2012, 33, 5696–5711.
20. Nelson, A.; Setiyono, T.; Rala, A.; Quicho, E.; Raviz, J.; Abonete, P.; Maunahan, A.; Garcia, C.; Bhatti, H.; Villano, L.; et al. Towards an Operational SAR-Based Rice Monitoring System in Asia: Examples from 13 Demonstration Sites across Asia in the RIICE Project. Remote Sens. 2014, 6, 10773–10812.
21. Phan, H.; Le Toan, T.; Bouvet, A.; Nguyen, L.D.; Pham Duy, T.; Zribi, M. Mapping of Rice Varieties and Sowing Date Using X-Band SAR Data. Sensors 2018, 18, 316.
22. Tian, H.; Wu, M.; Wang, L.; Niu, Z. Mapping Early, Middle and Late Rice Extent Using Sentinel-1A and Landsat-8 Data in the Poyang Lake Plain, China. Sensors 2018, 18, 185.
23. Mandal, D.; Kumar, V.; Bhattacharya, A.; Rao, Y.S.; Siqueira, P.; Bera, S. Sen4Rice: A Processing Chain for Differentiating Early and Late Transplanted Rice Using Time-Series Sentinel-1 SAR Data with Google Earth Engine. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1947–1951.
24. Chang, L.; Chen, Y.-T.; Wang, J.-H.; Chang, Y.-L. Rice-Field Mapping with Sentinel-1A SAR Time-Series Data. Remote Sens. 2020, 13, 103.
25. Nguyen, D.; Clauss, K.; Cao, S.; Naeimi, V.; Kuenzer, C.; Wagner, W. Mapping Rice Seasonality in the Mekong Delta with Multi-Year Envisat ASAR WSM Data. Remote Sens. 2015, 7, 15868–15893.
26. Inoue, S.; Ito, A.; Yonezawa, C. Mapping Paddy fields in Japan by using a Sentinel-1 SAR time series supplemented by Sentinel-2 images on Google Earth Engine. Remote Sens. 2020, 12, 1622.
27. Hoang, H.K.; Bernier, M.; Duchesne, S.; Tran, Y.M. Rice Mapping Using RADARSAT-2 Dual- and Quad-Pol Data in a Complex Land-Use Watershed: Cau River Basin (Vietnam). IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3082–3096.
28. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259.
29. Zhang, Y.; Wang, C.; Wu, J.; Qi, J.; Salas, W.A. Mapping paddy rice with multitemporal ALOS/PALSAR imagery in southeast China. Int. J. Remote Sens. 2009, 30, 6301–6315.
30. Son, N.-T.; Chen, C.-F.; Chen, C.-R.; Minh, V.-Q. Assessment of Sentinel-1A data for rice crop classification using random forests and support vector machines. Geocarto Int. 2017, 33, 587–601.
31. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
32. Bazzi, H.; Baghdadi, N.; El Hajj, M.; Zribi, M.; Minh, D.H.T.; Ndikumana, E.; Courault, D.; Belhouchette, H. Mapping Paddy Rice Using Sentinel-1 SAR Time Series in Camargue, France. Remote Sens. 2019, 11, 887.
33. Onojeghuo, A.O.; Blackburn, G.A.; Wang, Q.; Atkinson, P.M.; Kindred, D.; Miao, Y. Mapping paddy rice fields by applying machine learning algorithms to multi-temporal Sentinel-1A and Landsat data. Int. J. Remote Sens. 2017, 39, 1042–1067.
34. Singha, M.; Dong, J.; Zhang, G.; Xiao, X. High resolution paddy rice maps in cloud-prone Bangladesh and Northeast India using Sentinel-1 data. Sci. Data 2019, 6, 26.
35. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
36. Cué La Rosa, L.E.; Queiroz Feitosa, R.; Nigri Happ, P.; Del’Arco Sanches, I.; Ostwald Pedro da Costa, G.A. Combining Deep Learning and Prior Knowledge for Crop Mapping in Tropical Regions from Multitemporal SAR Image Sequences. Remote Sens. 2019, 11, 2029.
37. Wei, S.; Zhang, H.; Wang, C.; Wang, Y.; Xu, L. Multi-Temporal SAR Data Large-Scale Crop Mapping Based on U-Net Model. Remote Sens. 2019, 11, 68.
38. Zhang, L.; Zhang, L.; Du, B. Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40.
39. Crisóstomo de Castro Filho, H.; Abílio de Carvalho Júnior, O.; Ferreira de Carvalho, O.L.; Pozzobon de Bem, P.; dos Santos de Moura, R.; Olino de Albuquerque, A.; Rosa Silva, C.; Guimarães Ferreira, P.H.; Fontes Guimarães, R.; Trancoso Gomes, R.A. Rice Crop Detection Using LSTM, Bi-LSTM, and Machine Learning Models from Sentinel-1 Time Series. Remote Sens. 2020, 12, 2655.
40. Rußwurm, M.; Korner, M. Temporal Vegetation Modelling Using Long Short-Term Memory Networks for Crop Identification from Medium-Resolution Multi-spectral Satellite Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 11–19.
41. Ndikumana, E.; Ho Tong Minh, D.; Baghdadi, N.; Courault, D.; Hossard, L. Deep Recurrent Neural Network for Agricultural Classification using multitemporal SAR Sentinel-1 for Camargue, France. Remote Sens. 2018, 10, 1217.
42. Teimouri, N.; Dyrmann, M.; Jørgensen, R.N. A Novel Spatio-Temporal FCN-LSTM Network for Recognizing Various Crop Types Using Multi-Temporal Radar Images. Remote Sens. 2019, 11, 990.
43. Zhao, H.; Chen, Z.; Jiang, H.; Jing, W.; Sun, L.; Feng, M. Evaluation of Three Deep Learning Models for Early Crop Classification Using Sentinel-1A Imagery Time Series—A Case Study in Zhanjiang, China. Remote Sens. 2019, 11, 2673.
44. Ma, X.; Hovy, E. End-to-end sequence labeling via bi-directional lstm-cnns-crf. arXiv 2016, arXiv:1603.01354.
45. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
46. Hameed, Z.; Garcia-Zapirain, B.; Ruiz, I.O. A computationally efficient BiLSTM based approach for the binary sentiment classification. In Proceedings of the 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Ajman, United Arab Emirates, 10–12 December 2019; pp. 1–4.
47. Wei, P.; Chai, D.; Lin, T.; Tang, C.; Du, M.; Huang, J. Large-scale rice mapping under different years based on time-series Sentinel-1 images using deep semantic segmentation model. ISPRS J. Photogramm. Remote Sens. 2021, 174, 198–214.
48. Jo, H.-W.; Lee, S.; Park, E.; Lim, C.-H.; Song, C.; Lee, H.; Ko, Y.; Cha, S.; Yoon, H.; Lee, W.-K. Deep Learning Applications on Multitemporal SAR (Sentinel-1) Image Classification Using Confined Labeled Data: The Case of Detecting Rice Paddy in South Korea. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7589–7601.
49. Yang, Z.; Yang, D.; Dyer, C.; He, X.; Smola, A.; Hovy, E. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, CA, USA, 12–17 June 2016; pp. 1480–1489.
50. Gong, P.; Liu, H.; Zhang, M.; Li, C.; Wang, J.; Huang, H.; Clinton, N.; Ji, L.; Li, W.; Bai, Y.; et al. Stable classification with limited sample: Transferring a 30-m resolution sample set collected in 2015 to mapping 10-m resolution global land cover in 2017. Chin. Sci. Bull. 2019, 64, 370–373.
51. Potin, P.; Rosich, B.; Miranda, N.; Grimont, P. Sentinel-1 Mission Status. Procedia Comput. Sci. 2016, 100, 1297–1304.
52. Nguyen, D.B.; Gruber, A.; Wagner, W. Mapping rice extent and cropping scheme in the Mekong Delta using Sentinel-1A data. Remote Sens. Lett. 2016, 7, 1209–1218.
53. Stendardi, L.; Karlsen, S.; Niedrist, G.; Gerdol, R.; Zebisch, M.; Rossi, M.; Notarnicola, C. Exploiting Time Series of Sentinel-1 and Sentinel-2 Imagery to Detect Meadow Phenology in Mountain Regions. Remote Sens. 2019, 11, 542.
54. Vapnik, V.N. An overview of statistical learning theory. IEEE Trans. Neural Netw. 1999, 10, 988–999.
55. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46.
56. McHugh, M.L. Interrater reliability: The kappa statistic. Biochem. Med. 2012, 22, 276–282.
57. Zaremba, W.; Sutskever, I.; Vinyals, O. Recurrent Neural Network Regularization. arXiv 2014, arXiv:1409.2329.
58. Biau, G.; Scornet, E. A random forest guided tour. Test 2016, 25, 197–227.
59. Zhan, P.; Zhu, W.; Li, N. An automated rice mapping method based on flooding signals in synthetic aperture radar time series. Remote Sens. Environ. 2021, 252.
60. Wang, M.; Wang, J.; Chen, L. Mapping Paddy Rice Using Weakly Supervised Long Short-Term Memory Network with Time Series Sentinel Optical and SAR Images. Agriculture 2020, 10, 483.
61. Chamorro Martinez, J.A.; Cué La Rosa, L.E.; Feitosa, R.Q.; Sanches, I.D.A.; Happ, P.N. Fully convolutional recurrent networks for multidate crop recognition from multitemporal image sequences. ISPRS J. Photogramm. Remote Sens. 2021, 171, 188–201.
62. Ajadi, O.A.; Barr, J.; Liang, S.-Z.; Ferreira, R.; Kumpatla, S.P.; Patel, R.; Swatantran, A. Large-scale crop type and crop area mapping across Brazil using synthetic aperture radar and optical imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 97, 102294.
63. Lavreniuk, M.; Kussul, N.; Shelestov, A.; Dubovyk, O.; Löw, F. Object-based postprocessing method for crop classification maps. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 7058–7061.
Figure 1. (a) Geographical location of the study area, (b) the Sentinel-1A data in the test area.
Figure 2. Flow chart of the proposed framework.
Figure 3. Distribution diagram of sample areas for statistical characteristic analysis.
Figure 4. The average backscattering coefficient curves of four types of sample points in VH polarization.
Figure 5. Time series statistical parameter diagram. (a) Maximum; (b) minimum; (c) average; (d) variance.
Figure 6. False color chart of time series statistical parameters. (a) R: maximum G: minimum B: average; (b) R: maximum G: minimum B: variance; (c) R: maximum G: average B: variance; (d) R: average G: minimum B: variance.
Figure 7. Examples of rice in the optical image and the false color composite images with different combinations of statistical parameters. (a) The Google image of rice area; (b) R: maximum G: minimum B: average; (c) R: maximum G: minimum B: variance; (d) R: maximum G: average B: variance; (e) R: average G: minimum B: variance.
Figure 8. Examples of rice in the optical image and the false color composite images with different combinations of statistical parameters. (a) The Google image of rice and water area; (b) R: maximum G: minimum B: average; (c) R: maximum G: minimum B: variance; (d) R: maximum G: average B: variance; (e) R: average G: minimum B: variance.
Figure 9. Sample data distribution.
Figure 10. Structure diagram of BiLSTM-Attention model.
Figure 11. Comparison of classification results of different methods. (a) Label; (b) RF; (c) BiLSTM; (d) BiLSTM-Attention.
Figure 12. An example of optimization of classification results using FROM-GLC10 data. (a) The optical image; (b) original classification results; (c) the water layer; (d) the impermeable layer; (e) optimized classification results; (f) removed false alarms.
Figure 13. Distribution map of rice in Zhanjiang city.
Table 1. SAR data list table.

Orbit Number—Frame Number: 157-63
No. | Acquisition Time | No. | Acquisition Time | No. | Acquisition Time | No. | Acquisition Time
1 | 2019/4/5  | 7  | 2019/6/28 | 13 | 2019/9/8   | 19 | 2019/11/19
2 | 2019/4/17 | 8  | 2019/7/10 | 14 | 2019/9/20  | 20 | 2019/12/1
3 | 2019/5/11 | 9  | 2019/7/22 | 15 | 2019/10/2  | 21 | 2019/12/13
4 | 2019/5/12 | 10 | 2019/8/3  | 16 | 2019/10/14 | 22 | 2019/12/25
5 | 2019/6/4  | 11 | 2019/8/4  | 17 | 2019/10/26 |    |
6 | 2019/6/16 | 12 | 2019/8/27 | 18 | 2019/11/7  |    |

Orbit Number—Frame Number: 157-66
No. | Acquisition Time | No. | Acquisition Time | No. | Acquisition Time | No. | Acquisition Time
1 | 2019/3/30 | 7  | 2019/6/22 | 13 | 2019/9/2   | 19 | 2019/11/13
2 | 2019/4/11 | 8  | 2019/7/4  | 14 | 2019/9/14  | 20 | 2019/11/25
3 | 2019/5/5  | 9  | 2019/7/16 | 15 | 2019/9/26  | 21 | 2019/12/19
4 | 2019/5/17 | 10 | 2019/7/28 | 16 | 2019/10/8  | 22 | 2019/12/31
5 | 2019/5/29 | 11 | 2019/8/9  | 17 | 2019/10/20 |    |
6 | 2019/6/10 | 12 | 2019/8/21 | 18 | 2019/11/1  |    |

Orbit Number—Frame Number: 84-65
No. | Acquisition Time | No. | Acquisition Time | No. | Acquisition Time | No. | Acquisition Time
1 | 2019/3/31 | 7  | 2019/6/23 | 13 | 2019/9/3   | 19 | 2019/11/14
2 | 2019/4/12 | 8  | 2019/7/5  | 14 | 2019/9/15  | 20 | 2019/11/26
3 | 2019/5/6  | 9  | 2019/7/17 | 15 | 2019/9/27  | 21 | 2019/12/8
4 | 2019/5/18 | 10 | 2019/7/29 | 16 | 2019/10/9  | 22 | 2019/12/20
5 | 2019/5/30 | 11 | 2019/8/10 | 17 | 2019/10/21 |    |
6 | 2019/6/11 | 12 | 2019/8/22 | 18 | 2019/11/2  |    |
Table 2. Accuracy analysis of different methods.

Method | Accuracy | Precision | Recall | F1 | Kappa
BiLSTM-Attention | 0.9351 | 0.9191 | 0.9495 | 0.9341 | 0.8703
BiLSTM | 0.9012 | 0.8970 | 0.9065 | 0.9017 | 0.8024
RF | 0.8809 | 0.8910 | 0.8680 | 0.8794 | 0.7619
Table 3. Classified areas of rice in various districts and counties of Zhanjiang City.

No. | Administrative Region | Statistical Area (ha) | Classified Area (ha)
1 | Chikan District | 260.00 | 155.41
2 | Leizhou City | 55,666.67 | 63,589.69
3 | Lianjiang City | 52,766.67 | 32,327.90
4 | Mazhang District | 11,500.00 | 10,210.96
5 | Potou District | 7986.67 | 5608.17
6 | Suixi County | 24,826.67 | 31,360.29
7 | Wuchuan City | 22,160.00 | 19,717.17
8 | Xiashan District | 946.67 | 601.21
9 | Xuwen County | 14,166.67 | 16,441.59
10 | Total | 190,280.02 | 180,012.39
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
