Remote Sens. 2018, 10(11), 1840; https://doi.org/10.3390/rs10111840

Article
Mapping Paddy Rice Using a Convolutional Neural Network (CNN) with Landsat 8 Datasets in the Dongting Lake Area, China
Affiliations:
1 Research Center of Forest Remote Sensing and Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China
2 Key Laboratory of Forestry Remote Sensing Based Big Data and Ecological Security for Hunan Province, Changsha 410004, China
3 Key Laboratory of State Forestry Administration on Forest Resources Management and Monitoring in Southern Area, Changsha 410004, China
4 Department of Geography and Environmental Resources, Southern Illinois University, Carbondale, IL 62901, USA
5 Cooperative Innovation Center for Digitalization of Cultural Heritage in Traditional Villages and Towns, Hengyang 421002, China
* Authors to whom correspondence should be addressed.
Received: 20 September 2018 / Accepted: 15 November 2018 / Published: 20 November 2018

Abstract: Rice is one of the world’s major staple foods, especially in China. Highly accurate monitoring of rice-producing land is, therefore, crucial for assessing food supplies and productivity. Recently, deep-learning convolutional neural networks (CNNs) have achieved considerable success in remote-sensing data analysis. In this study, a CNN-based paddy-rice mapping method was developed using multitemporal Landsat 8 imagery, phenology data, and land-surface temperature (LST). First, the spatial–temporal adaptive reflectance fusion model (STARFM) was used to blend moderate-resolution imaging spectroradiometer (MODIS) and Landsat data to obtain multitemporal Landsat-like data. Subsequently, a threshold method was applied to derive phenological variables from the Landsat-like normalized difference vegetation index (NDVI) time series. Then, a generalized single-channel algorithm was employed to derive LST from Landsat 8. Finally, multitemporal Landsat 8 spectral images, combined with the phenology and LST data, were used to extract paddy-rice information with a patch-based deep-learning CNN algorithm. The results show that the proposed method achieved an overall accuracy of 97.06% and a Kappa coefficient of 0.91, which are 6.43% and 0.07 higher than those of the support vector machine method, and 7.68% and 0.09 higher than those of the random forest method, respectively. Moreover, the Landsat-derived rice area is strongly correlated (R2 = 0.9945) with government statistical data, demonstrating that the proposed method has potential for large-scale paddy-rice mapping using moderate-spatial-resolution images.
Keywords:
rice; phenology; land-surface temperature; fusion model; convolutional neural network

1. Introduction

Food security has always been a problem for China and the rest of the world [1,2]. Rice, as one of the major staple foods, is widely planted in China [3,4]. However, rice production in some areas has recently been facing challenges. On the one hand, the growing population has a higher demand for rice; on the other hand, the area of rice paddies has been decreasing with urbanization. Natural degradation and hazards like floods and droughts also impact rice production [5,6,7]. The timely and accurate assessment of rice production is necessary for government decision-making, which can be achieved by high spatial–temporal-resolution monitoring of rice-producing lands.
Vegetation phenology, derived from time-series satellite images, plays an important role in vegetation monitoring and land-cover classification because it can capture vegetation information at different growth stages. Its classification accuracy is higher than that of monotemporal images over a region with diverse landscapes [8,9,10]. Over the last few decades, time-series remote-sensing data, especially optical remote-sensing images, have found wide applications in paddy-rice monitoring and mapping [11,12,13,14,15,16,17,18]. Moderate resolution imaging spectroradiometer (MODIS) data have been used to map rice worldwide due to their high temporal and moderate spatial resolutions [19,20]. Landsat (30 m) data have a higher spatial resolution than MODIS, so they can be used to develop more accurate rice maps for less extensive areas [21,22]. Recently, Sentinel-2A MSI (Multispectral Instrument) data, with higher spatial and spectral resolutions than Landsat data, have been used to map paddy rice and other land-cover types [23,24]. High-resolution time-series multispectral images, including QuickBird, IKONOS, and RapidEye, have also been used to map rice or other crops with highly accurate results [25,26]. Hyperspectral images can improve crop-mapping accuracy by identifying more crop classes [27,28,29]. All these optical data, however, are susceptible to weather conditions and limited image coverage, which reduces their feasibility in rice-monitoring applications. Multiseasonal synthetic aperture radar (SAR) data, being immune to weather conditions, perform better for rice monitoring [30,31,32,33], but the cost of SAR data rises rapidly with increasing resolution [17,34]. Fortunately, some free-of-charge SAR images (Sentinel-1 C-band) can be used for mapping land use/cover and biomass [35,36]. Considering accessibility and availability, medium-resolution images like MODIS and Landsat are more suitable for multiyear paddy-rice mapping.
However, the spatial resolution (250–500 m) of MODIS data is too low to perform detailed vegetation classification, so it cannot be used for regional rice identification [8,9,37,38]. Landsat data, hence, are usually a better choice for multiyear paddy-rice mapping on a large scale.
Landsat data have a long revisit interval (16 days) and are vulnerable to rainy and cloudy weather, so it is not easy to obtain enough clear imagery for paddy-rice monitoring. Spatial and temporal fusion algorithms can generate time-series Landsat-like data with both high spatial and high temporal resolution by merging high-spatial/low-temporal-resolution data (e.g., Landsat) with low-spatial/high-temporal-resolution data (e.g., MODIS) [39,40]. The spatial and temporal adaptive reflectance fusion model (STARFM) has proven effective in blending Landsat–MODIS surface reflectance with simulated or real images [41]. Although there are many improved models, such as the spatial–temporal adaptive algorithm for mapping reflectance change (STAARCH) [42], an enhanced STARFM model (ESTARFM) [43], the spatiotemporal integrated temperature fusion model (STITFM) [44], the robust adaptive spatial and temporal fusion model (RASTARFM) [45], and other models [46,47], none of them can generate images with satisfactory spatial and temporal resolutions using only a single pair of images, as the STARFM model does [48,49,50,51,52]. The STARFM algorithm was therefore used in this study to obtain time-series Landsat-like data.
Many machine-learning algorithms have been used for mapping rice or other land-cover types, such as support vector machines (SVM), random forest (RF), and decision trees (DT) [53,54,55]. Some advanced algorithms (for example, rotation forest (RoF) and the adaptive network-based fuzzy inference system (ANFIS)) have also been developed to achieve higher classification accuracy [56,57]. Nowadays, deep learning has attracted great attention in fields like image identification and signal processing [58,59]. The convolutional neural network (CNN) performs particularly well in image analysis [60,61], especially in remote-sensing tasks such as vehicle detection [62], road-network extraction [63], scene classification [64], and semantic segmentation [65]. For scene classification, a CNN can achieve higher accuracy than conventional algorithms [66]. Using convolutional layers and max-pooling layers, the CNN strengthens image-classification ability, thereby overcoming the major limitations of conventional shallow-structured machine-learning tools such as SVM and RF [67,68]. CNNs have been applied to high-resolution land-use/-cover (LULC) classification [69,70], and demonstrate great potential for extracting spatial information using a convolutional window and local connections [71,72]. For LULC mapping, such as cropland classification, using moderate-resolution images like Landsat, the CNN and texture features might also have advantages. There are two ways to use a CNN for LULC mapping: a pretrained CNN or a fully trained CNN. The first uses knowledge trained on natural images, which has proven useful for LULC classification [73]. However, it requires three-channel (red, green, and blue) input, which is restrictive because multispectral images often have more channels (e.g., near-infrared and other bands in addition to RGB). The fully trained CNN is more flexible and expandable in architecture and parameters [74]. A fully trained CNN was therefore used in this study on Landsat 8 spectral images, which have more than seven spectral bands.
The specific objective of this study was to develop a precise, high-spatial-resolution (30 m) paddy-rice extent map for a large-scale area using the Landsat 8 OLI dataset and a CNN classifier. A patch-based CNN algorithm, which has been demonstrated to be useful for land-cover classification with large datasets over large areas, was employed. This study used a large reference dataset for training and validation; 10 blocks from the Landsat 8 dataset, five for training and five for validation, were selected to avoid spatial correlation. Paddy-rice areas derived by the proposed method were compared with those from the SVM and RF algorithms, as well as with county-level statistical results.

2. Study Area and Materials

2.1. Study Area

The Dongting Lake area (between latitudes 28°30′ and 29°31′N and longitudes 111°40′ and 113°10′E), covering parts of Hunan province, is located in the middle reaches of the Yangtze River on the south bank of the Jingjiang River (Figure 1). The area has hills and alluvial plains, with elevations lower than 50 m, and a subtropical monsoon climate with an annual average temperature of 17.4 °C and annual precipitation of 1600 mm. It includes three cities, Yueyang, Changde, and Yiyang, and 21 counties, covering an area of approximately 25,840 km2 and accounting for 12.2% of the total area of Hunan province. The area is an important commodity-grain base of China, where double-cropping (dominant) and single-season rice grow. Paddy rice is mainly distributed near rivers and lakes for the sake of irrigation. Double-cropping rice mainly grows in the north of the study area, and single-season rice patches are scattered around Datong Lake. The two growing seasons of double-cropping rice are April–July for early rice and July–October for late rice; the growing season of single-season rice is June–September [75,76]. The paddy-rice calendar of the study area is shown in Figure 2.

2.2. Datasets

2.2.1. Landsat 8 OLI and MODIS13Q1

Clear (<3% cloud cover) Landsat 8 OLI spectral images (path/row: 123/39, 123/40, 124/39, and 124/40) were obtained during the growth period of the two rice types in 2016 from the United States Geological Survey (USGS) website (http://glovis.usgs.gov/) (Table 1). The data were preprocessed through radiometric calibration, atmospheric correction, geometric correction, computation of NDVI, and mosaicking [77]. The normalized difference vegetation index (NDVI) was computed from band 4 (red) and band 5 (NIR) of the surface-reflectance images. The Landsat 8 OLI images were acquired in early June, late July, and early September, corresponding to the flowering of the single-season rice, maturation of the double-cropping rice, and harvest of the single-season rice, respectively.
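The NDVI computation from bands 4 and 5 can be sketched as follows (a minimal numpy sketch; the reflectance values and the epsilon guard against zero division are illustrative assumptions):

```python
import numpy as np

def ndvi(red, nir, eps=1e-10):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Example: a 2x2 patch of surface reflectance (band 4 = red, band 5 = NIR)
red = np.array([[0.05, 0.10], [0.30, 0.08]])
nir = np.array([[0.45, 0.40], [0.32, 0.50]])
print(ndvi(red, nir))
```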
MODIS NDVI data (tiles h27v05 and h27v06) from the MOD13Q1 vegetation-index product were used in this study, spanning one year from January 2016 to December 2016, for a total of 23 scenes. First, invalid values in the MOD13Q1 products were removed using the pixel-reliability images [78]. Then, the MODIS NDVI data were reprojected to a UTM (WGS-84) co-ordinate system, consistent with the Landsat 8 OLI data. Finally, the MODIS NDVI data were resampled to 30 m in accordance with the Landsat NDVI data.

2.2.2. Reference Data

Reference data were generated from the following auxiliary information: (1) LULC map of Hunan Province (2016) with a scale of 1:10,000; (2) field-survey data in July 2016 acquired by Qcooli3-GPS, which can be used to estimate rice area and location with high accuracy. A total of 580 ground truth points distributed in the rice paddies and other major croplands were collected. These ground points were within five blocks, which were selected before the experiment began; and (3) Google Earth images were used to assist the identification of crop types. The dataset was employed for selecting training samples and assessing accuracies.

2.2.3. Ancillary Data

County-level statistical results from the Statistical Yearbook of Hunan Province of 2016 (http://www.hntj.gov.cn/) were used to validate the derived rice map. The rice-cropping calendar and rice-growth phenological observation data, instructive for paddy-rice identification, were obtained from the Institute of Subtropical Agriculture of China.

3. Method

A method for paddy-rice mapping using a CNN and Landsat 8 datasets, consisting of four main steps (Figure 3), was proposed in this study. First, the MODIS and Landsat 8 data were preprocessed (the original Landsat 8 multispectral data are hereafter referred to as Landsat 8 spectral images). Second, the STARFM model was used to generate synthetic NDVI with a spatial resolution of 30 m and a temporal frequency of 16 days from the Landsat and MODIS images. Third, land-surface temperature (LST) data were derived from the Landsat 8 OLI images. Finally, the Landsat 8 spectral images were used together with the fused time-series NDVI, the phenological variables, and the LST data to identify paddy rice with the CNN method.

3.1. Fitting of MODIS–NDVI Time Series

The MODIS–NDVI data were derived from the synthetic data using the maximum-value composite method, which reduces noise caused by cloud and aerosol effects. Remaining noise and inaccurate phenology were eliminated with a Savitzky–Golay (S–G) filter [78], which can clearly describe minor changes in the study area despite complex crop types and fragmented plots [68]. A locally adapted moving window was adopted in the S–G filtering; the moving window uses polynomial least-squares regression to fit the time-series data. A double-cropping rice pixel of the MODIS–NDVI time series filtered by S–G is shown in Figure 4. Additionally, the root mean square error (RMSE) between the original NDVI and the filtered NDVI of the study area was calculated. The small RMSE (<0.15) indicates that the NDVI time series fitted by the S–G filter performed well.
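The S–G smoothing and RMSE check described above can be sketched with scipy (the synthetic double-peak NDVI series, the noise level, and the window settings below are illustrative assumptions, not the paper's parameters):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic one-year, 23-scene NDVI series for a double-cropping rice pixel
# (two green-up peaks); values are illustrative, not from the paper.
t = np.arange(23)
ndvi = 0.25 + 0.35 * np.exp(-0.5 * ((t - 7) / 2.0) ** 2) \
            + 0.35 * np.exp(-0.5 * ((t - 16) / 2.0) ** 2)
rng = np.random.default_rng(0)
noisy = ndvi + rng.normal(0.0, 0.03, size=t.size)   # cloud/aerosol noise

# Savitzky-Golay smoothing: local least-squares polynomial fit in a moving window
smooth = savgol_filter(noisy, window_length=7, polyorder=2)

rmse = np.sqrt(np.mean((noisy - smooth) ** 2))
print(f"RMSE between original and filtered NDVI: {rmse:.3f}")
```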

3.2. Temporal and Spatial Fusion of Landsat NDVI with MODIS NDVI Data

Landsat NDVI and MODIS NDVI data were fused by the STARFM model during this study. STARFM employs one or two pairs of fine- and coarse-resolution images obtained on the same date and a coarse-resolution image obtained on the prediction date. Landsat surface reflectance L (xi, yi, tk), for example, at date tk can be predicted with Landsat surface reflectance L (xi, yi, t0) and MODIS surface reflectance M (xi, yi, t0) at t0, and M (xi, yi, tk) at tk, respectively [41]. The surface reflectance value of the central pixel is predicted by a weighted average of similar spectral pixels in a moving window:
L(x_w/2, y_w/2, t_k) = Σ_{i=1}^{w} Σ_{j=1}^{w} W_ij (M(x_i, y_j, t_k) + L(x_i, y_j, t_0) − M(x_i, y_j, t_0))
where L(x_w/2, y_w/2, t_k) is the predicted Landsat reflectance value at location (x_w/2, y_w/2) at date t_k, and w is the moving-window size. Only the spectrally similar pixels in the window are used to calculate the central pixel. L(x_i, y_j, t_0) and M(x_i, y_j, t_0) are the surface reflectance values of Landsat and MODIS at point (x_i, y_j) at date t_0, respectively. W_ij is the weight determined by the temporal, spectral, and spatial distances between the central pixel and the other pixels in the window [41].
One input base pair and a MODIS NDVI image on the prediction date were used: Landsat 8 NDVI on 30 July, MODIS NDVI on 27 July (the time nearest to the Landsat 8 acquisition date), and MODIS NDVI on 13 September (the prediction date) to predict Landsat 8 NDVI. Using this procedure, a fused Landsat NDVI time series, with a revisit interval of 16 days and a spatial resolution of 30 m, was produced. Figure 5 shows the MODIS NDVI data, the Landsat 8 NDVI, and the corresponding fused Landsat-like NDVI over the study area on 13 September. The results demonstrate that the fused Landsat-like image is similar to the Landsat 8 NDVI image.
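A simplified, single-pair STARFM prediction step can be sketched in Python as follows. The uniform weighting and the fixed similarity threshold below are simplifying assumptions; real STARFM weights each similar pixel by its spectral, temporal, and spatial distance:

```python
import numpy as np

def starfm_predict(landsat_t0, modis_t0, modis_tk, w=3, sim_thresh=0.05):
    """Simplified single-pair STARFM: predict Landsat-like values at t_k.

    For each pixel, average M(tk) + L(t0) - M(t0) over spectrally similar
    neighbours (|L(t0) - central L(t0)| < sim_thresh) in a w x w window,
    with uniform weights. Real STARFM weights by spectral, temporal, and
    spatial distance; this is an illustrative sketch only.
    """
    rows, cols = landsat_t0.shape
    half = w // 2
    out = np.empty_like(landsat_t0, dtype=np.float64)
    change = modis_tk - modis_t0          # coarse-resolution temporal change
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - half), min(rows, r + half + 1)
            c0, c1 = max(0, c - half), min(cols, c + half + 1)
            win_l = landsat_t0[r0:r1, c0:c1]
            win_ch = change[r0:r1, c0:c1]
            similar = np.abs(win_l - landsat_t0[r, c]) < sim_thresh
            out[r, c] = np.mean(win_l[similar] + win_ch[similar])
    return out
```

For a homogeneous scene where the coarse data change uniformly by +0.1, the prediction is simply the fine-resolution base image shifted by +0.1, which matches the equation above.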

3.3. Phenological Variables Derived from Time Series Landsat-Like NDVI

The threshold method was employed to extract phenological variables from the fused Landsat-like time series, which was first fitted with an asymmetric Gaussian model [78]. The threshold method assumes that a phenological phenomenon occurs when NDVI values exceed a given threshold [79]. The Dongting Lake area has a subtropical monsoon climate, and rice ripens once or twice per year; the seasonality parameter was therefore set to 0.1 to fit the two rice-growing seasons. Additionally, the median-filter parameter was set to 2 to eliminate spikes and outliers, and the upper-envelope parameter was set to 2 to remove negatively biased noise from the fused Landsat-like NDVI time series.
Considering the growth characteristics of rice and other vegetation, five phenological variables were mapped based on the time-series Landsat-like NDVI: the start of the season, the end of the season, the length of the season, the largest NDVI value, and the NDVI amplitude during each considered season. Their definitions are shown in Table 2.
Phenological variables of the different vegetation types derived from the 16-day fused Landsat-like NDVI time series are shown in Figure 6. Croplands present distinct phenological patterns, whereas nonvegetated areas show no phenological patterns and low NDVI variability over time. Phenological variables of the vegetation classes, therefore, have potential for paddy-rice identification.
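The threshold-based extraction of the five phenological variables can be sketched as follows (a numpy sketch on a synthetic single-season pixel; the 10% amplitude fraction and the simple base/peak definitions are assumptions, not the exact settings used in the paper):

```python
import numpy as np

def season_metrics(ndvi, dates, frac=0.1):
    """Threshold-based phenology for one season (illustrative sketch).

    Season start/end are the first/last dates where NDVI exceeds the
    base level plus `frac` of the seasonal amplitude; the fraction and
    the simple base/peak definitions are illustrative assumptions.
    """
    ndvi = np.asarray(ndvi, dtype=float)
    base, peak = ndvi.min(), ndvi.max()
    amplitude = peak - base
    thresh = base + frac * amplitude
    above = np.where(ndvi > thresh)[0]
    start, end = dates[above[0]], dates[above[-1]]
    return {"start": start, "end": end, "length": end - start,
            "peak": peak, "amplitude": amplitude}

# 16-day composites (day of year) for a single-season rice pixel (synthetic)
doy = np.arange(1, 369, 16)
ndvi = 0.2 + 0.6 * np.exp(-0.5 * ((doy - 200) / 30.0) ** 2)
print(season_metrics(ndvi, doy))
```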

3.4. Land-Surface Temperature Derived from Landsat 8 OLI

LST is an important and useful factor for rice mapping on a large scale [15,16,80]. A generalized single-channel algorithm, which has been demonstrated as more effective than other algorithms, was used to extract the LST from band 10 of Landsat 8 OLI for the period from June to September 2016 [81] as follows:
T_s = γ ((φ_1 L_λ + φ_2) / ε) + δ
where Ts, Lλ, and ε are land-surface temperature, radiance brightness of band 10, and land-surface emissivity, respectively. The land-surface temperature in Kelvin (K) was converted to degrees Celsius (°C). The derived LST on 30 July is shown in Figure 7.
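As an illustration of converting band-10 radiance into LST, the sketch below uses the published Landsat 8 TIRS band-10 calibration constants together with a standard emissivity-corrected brightness-temperature approximation; this is a simpler stand-in for the paper's generalized single-channel algorithm, which additionally applies atmospheric correction functions:

```python
import math

# Landsat 8 TIRS band 10 calibration constants (from the USGS metadata)
K1 = 774.8853   # W/(m^2 sr um)
K2 = 1321.0789  # K

def lst_celsius(radiance, emissivity, wavelength_um=10.895):
    """Approximate LST from band-10 radiance (illustrative sketch).

    Uses at-sensor brightness temperature plus a standard emissivity
    correction -- a simpler stand-in for the generalized single-channel
    algorithm, which also corrects for the atmosphere.
    """
    bt = K2 / math.log(K1 / radiance + 1.0)          # brightness temp (K)
    rho = 1.438e-2 * 1e6                             # h*c/k_B in um*K
    ts = bt / (1.0 + (wavelength_um * bt / rho) * math.log(emissivity))
    return ts - 273.15                               # Kelvin -> Celsius

print(f"{lst_celsius(10.5, 0.98):.1f} C")
```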

3.5. Convolutional Neural Network Classification

3.5.1. Classification Features

The classification features selected in this study included spectral bands of the Landsat 8 OLI images, NDVI, and phenological variables. The selected spectral bands were blue, green, red, near-infrared (NIR), shortwave infrared 1 (SWIR 1), and shortwave infrared 2 (SWIR 2), which have demonstrated their potential in land-cover classification [82,83,84]. The NDVI was derived from the Landsat 8 spectral images of 12 June 2016, 30 July 2016, and 16 September 2016. Phenological variables and LST, which can improve crop identification, were also used as classification features. Nine experimental sequences with different feature configurations were designed to evaluate the classification effects of the different features (Table 3).

3.5.2. Land-Cover Types and Training and Validation Areas

Based on the natural environment, the main vegetation types in the study area, and the land-use/-cover data acquired from the National Earth System Science Data Sharing Infrastructure (http://www.geodata.cn) [85], land cover was classified into eight types: water, double-cropping rice, single-season rice, grassland, dryland, forest, building, and other land.
To avoid spatial correlation between training and validation datasets, 10 blocks from the Landsat dataset were randomly selected for the experiment, five for the training and five for the validation (Figure 8), and the distances between the blocks were above 10 km. All field-survey data were distributed in the yellow blocks. A rule of thumb was applied to confirm the sample size for accuracy assessment [86]. Moreover, the number of pixels per class was limited to 20,000 for training and 15,000 for validation to simplify the computation. These numbers were selected randomly from the available reference data of each class that exceeded these limits. Additionally, training samples for each land-cover type based on the LULC map of Hunan Province (2016) and Google Earth images (2016) were selected randomly, and the field data were used as validation samples.

3.5.3. CNN

A CNN has a main building block composed of multiple layers that are interconnected using a set of learnable weights and biases [87]. Each convolutional layer might have several feature maps, and the convolutional nodes in the same map share the same weights [88,89]. Equation (3) describes the major operations in the CNN:
O_l = pool_p(σ(O_{l−1} * W_l + b_l))
where Ol–1 is the input feature map of the l-th layer with weights Wl and bias bl, that convolve the input feature map through linear convolution *, and σ(.) is the nonlinearity function outside the convolutional layer [88]. Following layer convolution, a max-pooling operation with a window poolp sized p × p was performed to obtain general statistical information of the features within specific regions, and then to generate feature map Ol at the l-th layer.
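The convolution, nonlinearity, and max-pooling operations of Equation (3) can be illustrated on a single feature map (a numpy sketch; the Laplacian-like 3 × 3 kernel and the ReLU nonlinearity are illustrative choices, not the network's actual parameters):

```python
import numpy as np

def conv2d_valid(x, w, b):
    """Valid-mode 2-D correlation of one feature map with one kernel."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * w) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, p=2):
    """Non-overlapping p x p max pooling."""
    oh, ow = x.shape[0] // p, x.shape[1] // p
    return x[:oh*p, :ow*p].reshape(oh, p, ow, p).max(axis=(1, 3))

# O_l = pool_p( sigma( O_{l-1} * W_l + b_l ) ) on a toy 6x6 input
x = np.arange(36, dtype=float).reshape(6, 6)
w = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])  # Laplacian-like
o = max_pool(relu(conv2d_valid(x, w, b=0.0)), p=2)
print(o.shape)
```

Because the toy input varies linearly, the Laplacian-like kernel responds with zero everywhere, so the pooled map is all zeros; a real network learns kernels that respond to spatial patterns in the feature maps.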
A relatively simple ConvNet was applied to the plant classification. The network consisted of ten layers: two convolutional, two max-pooling, two batch-normalization, two activation, and two fully connected layers (Figure 9) [90]. Patches (28 × 28 pixels) centered on the training pixels were randomly selected and could overlap. One patch was extracted centered on each pixel, and the predicted label was then assigned to that pixel. The network was trained with the stochastic gradient descent optimizer for 30 epochs, and early stopping was used to prevent overfitting. The batch size, learning rate, momentum, and weight-decay parameter were set to 100, 0.1, 0.9, and 0.00005, respectively (Table 4). Only patches whose central pixel belonged to a specific class were selected as training patches.
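The 28 × 28 patch-extraction step described above can be sketched as follows (numpy; the edge-padding of border pixels is an assumption, since the paper does not state its border handling):

```python
import numpy as np

def extract_patches(image, points, size=28):
    """Extract size x size patches centered on labeled pixels.

    `image` is (H, W, C); `points` is a list of (row, col) training-pixel
    locations. The image is edge-padded so border pixels still yield a
    full patch (an illustrative assumption about border handling).
    """
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="edge")
    patches = np.stack([padded[r:r+size, c:c+size, :] for r, c in points])
    return patches  # shape: (n_points, size, size, C)

# Toy example: a 100 x 100 image with 10 feature channels
img = np.random.default_rng(1).random((100, 100, 10))
pts = [(0, 0), (50, 50), (99, 99)]
print(extract_patches(img, pts).shape)
```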

3.6. Compared Method and Accuracy Assessment

The CNN method was compared with the SVM and RF methods. Classification accuracies were assessed by overall accuracy (OA), Kappa coefficient, user accuracy (UA), and producer accuracy (PA), calculated from a confusion matrix. Furthermore, the McNemar test was used to assess the significance of the differences in classification accuracy among the three methods.
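The McNemar test compares two classifiers on the same validation pixels using only the discordant counts; a minimal continuity-corrected chi-square version (the counts below are hypothetical, not the paper's) might look like:

```python
import math

def mcnemar(n01, n10):
    """McNemar chi-square test with continuity correction.

    n01: pixels classifier A got right and B got wrong;
    n10: the reverse. Returns (statistic, p_value); the p-value is the
    chi-square(1 df) survival function, computed via the complementary
    error function.
    """
    stat = (abs(n01 - n10) - 1.0) ** 2 / (n01 + n10)
    # For 1 degree of freedom: p = erfc(sqrt(stat / 2))
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

stat, p = mcnemar(n01=520, n10=430)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```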

4. Results and Analysis

4.1. Paddy-Rice Mapping Using CNN with Different Features

Classifications with different feature combinations (Table 5) were performed, and the classification accuracies are shown in Table 4. Generally, the feature sequences can be divided into three groups: Landsat 8 spectral; Landsat 8 spectral + NDVI + phenological variables (PV); and Landsat 8 spectral + NDVI + LST. When a single Landsat 8 spectral image was considered, the image acquired in July (the heading stage of single-season rice and the ripening of double-cropping rice) gave the highest classification accuracy, followed by that acquired in September (the ripening stage of the single-season rice and the heading stage of the double-cropping rice), and then that acquired in June (the flowering stage of the double-cropping rice and the tillering stage of the single-season rice). This is because the heading of the single-season rice and the ripening of the double-cropping rice have quite distinctive spectral characteristics, which help distinguish paddy rice from other vegetation. When all Landsat 8 spectral images (June, July, and September) were employed, the overall accuracy was 91.23%. Variations of the paddy-rice canopy structure were captured using more images and spectral bands, which increases the separability of the vegetation types.
When phenological features (NDVI + phenological variables) were added, the OA, PA, and UA of paddy rice all increased, demonstrating the potential of typical phenological features in rice identification; this is especially effective in areas where vegetation types have similar spectral characteristics. Thus, time-series images and phenological features can improve the accuracy of paddy-rice identification. LST is a very useful parameter to distinguish rice from other vegetation, and the dataset of Landsat 8 spectral images (June, July, September) + NDVI + PV + LST had the highest classification accuracy (OA 97.06%, Kappa 0.91).
Some parameters, such as the Vegetation Health Index (VHI), Ratio Vegetation Index (RVI), Vegetation Condition Index (VCI), Brightness Temperature (BT), and Temperature Condition Index (TCI), are also used for paddy-rice identification [16,91]. However, there is a high positive correlation among NDVI, RVI, VCI, and VHI, because VCI and VHI are generated from NDVI; likewise, BT and TCI are highly correlated with LST. NDVI and LST, therefore, can represent the other vegetation indices (VHI, RVI, and VCI) and temperature parameters (BT and TCI) while being simpler. NDVI, PV, and LST were thus used as the basic parameters to extract paddy-rice information for the study area.

4.2. Paddy-Rice Mapping using CNN, SVM, and RF Classifiers

The classification results of CNN were compared with those of other machine-learning classifiers, SVM and RF, using the same feature sets (Landsat 8 spectral (June, July, September) + NDVI + PV). The results (Table 6) demonstrate that the CNN algorithm has higher accuracy (including OA, Kappa coefficient, PA, and UA of paddy rice) than the other two machine-learning classifiers. The OA, UA, and PA were all above 95%, and the Kappa coefficient was also larger than 0.90.
McNemar’s test was used to evaluate the significance of the differences in classification accuracy, with the conventional 5% threshold for declaring statistical significance. The p-values of the McNemar test for the three method pairs, along with the levels of significance, are shown in Table 7. Apart from the SVM–RF pair, all differences are significant at the 5% level. Furthermore, the differences between CNN and the other two machine-learning classifiers are statistically significant at the 0.1% level.
Figure 10 shows the paddy-rice maps generated by the (a) RF, (b) SVM, and (c) CNN. The rice maps of SVM and CNN are similar, but they are different from that of RF. The differences between the rice maps produced by SVM and CNN, however, emerge at a closer look (red block). The CNN tends to generalize the prediction as an object due to its patch-based nature. Consequently, it generates a more unitary paddy-rice patch and reduces the ‘salt-and-pepper’ phenomenon. The paddy-rice spatial distribution of the differences between rice maps generated by SVM and CNN are scattered in the whole study area, mainly across class boundaries, apparently caused by the generalized CNN output effect. These differences are obvious in areas with a heterogeneous texture, like the rice paddy mixed with roads. Rice fields with continuous coverage show fewer discrepancies between the two paddy-rice maps.

4.3. Paddy-Rice Mapping Using Three CNNs

Two additional CNNs, a patch-based VGG-16 network [92] and a pixel-based fully convolutional network (FCN) [93], were employed in this study to identify paddy rice (Table 8). Generally, the patch-based CNNs (ConvNet and VGG-16) achieved higher classification accuracy than the pixel-based one. Additionally, ConvNet and VGG-16 produced similar OA, Kappa, PA, and UA values.

4.4. Rice-Mapping Results and Accuracy Assessments

The paddy-rice map (including double-cropping rice and single-season rice stated in Section 3.5.2) in the study area was generated by the proposed method using Landsat 8 spectral (June, July, September) + NDVI + PV and ConvNet network. Paddy-rice distribution from 2016 is shown in Figure 11, in which double-cropping rice had much wider distribution than single-season rice, especially in the north area with dense river networks and lakes. The single-season rice was distributed sporadically across the study area, centered around the Dongting Lake or along rivers.
Correlation analysis shows that the rice areas derived by the CNN method from the Landsat 8 dataset are strongly correlated with the government statistics at the county level for 2016 (R2 = 0.9945) (Figure 12). The CNN result is slightly overestimated, however. The relative error in rice area (REA) between the Landsat 8-derived rice areas (LOD) and the government's rice-area statistics (GRA) ranges from 1.4% to 14.3% (Table 9). The overestimation of the LOD is the main reason for the discrepancy between the two datasets. The larger the rice area, the greater the REA value, because the LOD is calculated per pixel, while the GRA is the total sown area.
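The relative error in rice area between the map-derived and statistical areas reduces to a one-line computation (the county values below are hypothetical, for illustration only):

```python
def relative_error(lod_ha, gra_ha):
    """Relative error in area (REA, %) between the Landsat-derived rice
    area (LOD) and the government statistics (GRA)."""
    return abs(lod_ha - gra_ha) / gra_ha * 100.0

# Hypothetical county: statistics report 40,000 ha, the map gives 43,200 ha
print(f"REA = {relative_error(43200, 40000):.1f}%")  # slight overestimation
```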

5. Discussion

Remote-sensing techniques are a promising method for timely paddy-rice mapping on global, continental, and regional scales. Lately, attempts have been made to achieve higher rice-mapping accuracy using data from various image sensors. Optical images, such as AVHRR, MODIS, Landsat, and Sentinel-2A, as well as high-spatial-resolution images (IKONOS, QuickBird) and hyperspectral images [19,20,21,22,23,24,25,26,94], have been used for rice mapping. However, most rice-growing regions suffer serious cloud contamination, which makes it challenging to obtain enough clear optical images. SAR systems are effective for producing good-quality images and mapping paddy rice, even in regions with cloud coverage. Additionally, combining SAR data with optical data can achieve higher-accuracy rice mapping [23]. However, the accuracy of SAR data is usually lower than that of optical multispectral data of the same resolution [24]. Data fusion is also a promising way to acquire high spatial–temporal-resolution data regardless of weather conditions. The derived multitemporal Landsat-like NDVI data used in this study have a high spatial resolution and a medium temporal resolution, which can be used to identify paddy rice and capture vegetation phenology [39,95,96,97]. Moreover, using the fused time series can produce accurate vegetation (especially crop) distributions. Rice and other crops with similar spectra are often misclassified. Vegetation phenology has advantages in paddy-rice classification, so paddy-rice classification was performed using spectral data both with and without the phenological features [98,99,100,101]. When only spectral features were considered, paddy rice was misclassified as grass or other vegetation types due to similar spectral characteristics. When phenological features were added, classification accuracy improved significantly as a result of the seasonal behavior of the different vegetation types.
Additionally, it is a challenge to distinguish paddy rice from the other vegetation types using the spectral image of a single day. However, using the Landsat 8 images acquired at the key growth stages of rice can effectively improve classification accuracy. LST is another useful parameter for mapping paddy rice [16,91] that can improve the classification accuracy.
Traditional machine-learning algorithms used for paddy-rice mapping include SVM, RF, and DT [15,102,103]. The CNN, whose convolutional and max-pooling layers use the neighborhood of a pixel as context information, extracts deeper image features and achieves higher accuracy. A patch-based CNN was used in this study to map paddy rice and achieved higher accuracy than the pixel-based SVM and RF methods. Compared with research that used a DT algorithm with multitemporal HJ-1A/B spectral images and phenological variables for rice mapping [104], the classification accuracy of the current study using CNN is nearly 3% higher. A CNN was also employed to map detailed land covers with multitemporal Landsat 8 data in northern Greece, where its accuracy was lower than that of SVM [90]. Several factors may explain this discrepancy with the current results. First, the Landsat 8 dataset in this study was classified into only eight land-cover types, whereas the Greek study classified its Landsat 8 data into more than 25 land-cover types. Second, the current study area lies in the interior of China, where the landscape is relatively homogeneous and rice fields are continuous, while the Greek study area was near the sea and presented highly heterogeneous landscapes. The CNN distorts the boundaries and outlines of land covers in highly heterogeneous areas because its input is a feature-map set but its output is a single category label. Phenology data might be another factor. Although the patch-based algorithm cannot classify independent pixels inside a patch, it works well in homogeneous regions and, for paddy-rice mapping in this study, generally performed better than a pixel-based CNN.
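The patch-generation step that distinguishes a patch-based CNN from a pixel-based classifier can be sketched as follows: each labelled pixel becomes one fixed-size neighborhood, and the network predicts a single label for the whole patch. The 7 × 7 window and edge padding are illustrative choices, not necessarily those used in the study.

```python
import numpy as np

def extract_patches(image, rows, cols, size=7):
    """Build one (size x size x bands) patch per labelled pixel for a
    patch-based CNN. The image is edge-padded so that border pixels still
    receive a full neighborhood; the CNN then assigns one category label
    per patch, which is why boundaries can blur in heterogeneous areas."""
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="edge")
    patches = np.stack([
        padded[r:r + size, c:c + size, :] for r, c in zip(rows, cols)
    ])
    return patches  # shape: (n_pixels, size, size, bands)
```

Each patch keeps its target pixel at the center, so the surrounding context enters the classification without changing which pixel is being labelled.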
There are some limitations to the proposed method. First, regarding the fusion model, selecting base image pairs is difficult. Adding extra image pairs acquired over a time period can improve data-fusion results, especially at key dates (such as the nursery, vegetative, reproductive, and ripening stages for crops) [95]. Additionally, residual cloud contamination in the 16-day MODIS NDVI time series, which is quite common in tropical and subtropical areas, should be removed [96]. In complex heterogeneous areas, high temporal and spatial variation might lead to larger prediction errors than in homogeneous areas; different temporal growing patterns of vegetation and spatial heterogeneity in the study area might account for this [97]. Second, concerning the CNN, the proposed method does not consider the spatial pattern of the study area, which is also a key factor for discriminating vegetation types. Stronger spatial heterogeneity produces mixed pixels, thereby reducing classification accuracy. A deep-learning CNN combined with other classification methods can address this problem; the CNN + threshold method and the CNN + object-based method have been successfully applied to land-cover mapping [89]. Third, the CNN needs much longer computation time than SVM and RF because it generates an input feature-map set and classifies it for every pixel [90]. Parallel processing and dimensionality-reduction methods, such as principal-component analysis [105], singular-value decomposition [106], and sparse autoencoders [107], can effectively handle the computation for large datasets. Finally, mixed pixels hinder the identification of paddy-rice fields: some rice fields are too small to be classified with the 30 m spatial resolution Landsat 8 data. When possible, higher-spatial-resolution images, such as GF-1 satellite images, should be used.
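Of the dimensionality-reduction options mentioned, PCA is straightforward to sketch via a singular-value decomposition. The example below compresses a stacked (pixels × features) matrix before classification; the shapes and component count are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project a (n_pixels, n_features) stack of spectral/temporal features
    onto its top principal components, reducing the input dimensionality
    (and hence CNN computation time) while retaining most variance."""
    mean = features.mean(axis=0)
    centered = features - mean
    # Economy SVD: rows of vt are principal axes, ordered by variance.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```

Because the axes are ordered by singular value, the retained components carry monotonically non-increasing variance, which is what makes truncation a principled compression.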
GF-1 satellite sensors offer higher spatial resolution (8 and 16 m) and observation frequency (4 and 2 days) than Landsat 8, which could improve the classification efficiency of the proposed method, especially in regions with rainy and cloudy weather.

6. Conclusions

Patch-based deep-learning CNN and multitemporal Landsat 8 data were employed in this study to identify paddy rice in the Dongting Lake area. The study demonstrates the potential of combining moderate-spatial-resolution images with a CNN for large-area paddy-rice mapping. Despite the impact of mixed pixels and other problems, the proposed method achieved an overall accuracy above 95% and a Kappa coefficient above 0.90. The results were confirmed by the strong correlation between the derived rice area and government rice-area statistics at the county level (R2 > 0.9). The rice area derived from the Landsat 8 data was slightly overestimated, with an REA ranging between 1.4% and 14.3%; nevertheless, the proposed paddy-rice mapping algorithm has the potential to provide an acceptable spatial distribution of paddy rice over other large areas.
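The overall accuracy and Kappa coefficient reported above are computed from a validation confusion matrix in the standard way; the following minimal sketch shows the computation on a synthetic two-class matrix (not the study's validation data).

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    cm = np.asarray(confusion, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                          # overall accuracy
    expected = (cm.sum(axis=0) @ cm.sum(axis=1)) / total**2  # chance agreement
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa
```

Kappa discounts agreement expected by chance, which is why it is reported alongside overall accuracy when class proportions are unbalanced.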

Author Contributions

M.Z. designed and conducted the experiments and wrote the manuscript. H.L., G.W., and H.S. provided suggestions for the analysis, discussion, and manuscript writing. J.F. provided and processed some of the remote-sensing data.

Acknowledgments

The authors would like to thank the editors and anonymous reviewers for their valuable comments, which were significant for improving this manuscript. This work was supported by the Twelfth Five-Year Plan Pioneering project of the High Technology Plan of the National Department of Science and Technology (2012AA102001), the National Science and Technology Major Projects of China (No. 21-Y30B05-9001-12/15-2), and the Scientific Research Fund of Hunan Provincial Education Department (17A225): Synergy Simulation of multiresolution remote-sensing data for city vegetation carbon mapping and uncertainty analysis.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dong, J.W.; Xiang, K.L.; Wang, S.; Han, W.; Yuan, W.P. A sub-pixel method for estimating planting fraction of paddy rice in Northeast China. Remote Sens. Environ. 2018, 205, 305–314. [Google Scholar]
  2. Boschetti, M.; Busetto, L.; Manfron, G.; Laborte, A.; Asilo, S.; Pazhanivelan, S.; Nelson, A. PhenoRice: A method for automatic extraction of spatio-temporal information on rice crops using satellite data time series. Remote Sens. Environ. 2017, 194, 347–365. [Google Scholar] [CrossRef]
  3. Yang, Z.; Shao, Y.; Li, K.; Liu, Q.B.; Liu, L.; Brisco, B. An improved scheme for rice phenology estimation based on time-series multispectral HJ-1A/B and polarimetric RADARSAT-2 data. Remote Sens. Environ. 2017, 195, 184–201. [Google Scholar] [CrossRef]
  4. Bouvet, A.; Toan, T.L. Use of ENVISAT/ASAR wide-swath data for timely rice fields mapping in the Mekong River Delta. Remote Sens. Environ. 2011, 115, 1090–1101. [Google Scholar] [CrossRef][Green Version]
  5. Mosleh, M.K.; Hassan, Q.K. Development of a Remote Sensing-Based “Boro” Rice Mapping System. Remote Sens. 2014, 6, 1938–1953. [Google Scholar] [CrossRef][Green Version]
  6. Elert, E. Rice by the numbers: A good grain. Nature 2014, 514, 50–51. [Google Scholar] [CrossRef]
  7. Moharana, S.; Dutta, S. Spatial variability of chlorophyll and nitrogen content of rice from hyperspectral imagery. ISPRS J. Photogramm. Remote Sens. 2016, 122, 17–29. [Google Scholar] [CrossRef]
  8. Xiao, X.; Boles, S.; Liu, J.; Zhuang, D.; Frolking, S.; Li, C.; Salas, W.; Moore, B., III. Mapping paddy rice agriculture in southern China using multi-temporal MODIS images. Remote Sens. Environ. 2005, 95, 480–492. [Google Scholar] [CrossRef]
  9. Xiao, X.; Boles, S.; Frolking, S.; Li, C.; Babu, J.Y.; Salas, W.; Moore, B., III. Mapping paddy rice agriculture in South and Southeast Asia using multi-temporal MODIS images. Remote Sens. Environ. 2006, 100, 95–113. [Google Scholar] [CrossRef]
  10. Sakamoto, T.; Yokozawa, M.; Toritani, H.; Shibayama, M.; Ishitsuka, N.; Ohno, H. A crop phenology detection method using time-series MODIS data. Remote Sens. Environ. 2005, 96, 366–374. [Google Scholar] [CrossRef]
  11. Wardlow, B.D.; Egbert, S.L. Large-area crop mapping using time-series MODIS 250 m NDVI data: An assessment for the U.S. Central Great Plains. Remote Sens. Environ. 2008, 112, 1096–1116. [Google Scholar] [CrossRef]
  12. Son, N.T.; Chen, C.F.; Chen, C.R.; Duc, H.N.; Chang, L.Y. A Phenology-Based Classification of Time-Series MODIS Data for Rice Crop Monitoring in Mekong Delta, Vietnam. Remote Sens. 2013, 6, 135–156. [Google Scholar] [CrossRef][Green Version]
  13. Pittman, K.; Hansen, M.C.; Becker-Reshef, I.; Potapov, P.V.; Justice, C.O. Estimating global cropland extent with multi-year MODIS data. Remote Sens. 2010, 2, 1844–1863. [Google Scholar] [CrossRef]
  14. Peng, D.; Huete, A.R.; Huang, J.; Wang, F.; Sun, H. Detection and estimation of mixed paddy rice cropping patterns with MODIS data. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 13–23. [Google Scholar] [CrossRef]
  15. Qin, Y.; Xiao, X.; Dong, J.; Zhou, Y.; Zhe, Z.; Zhang, G.; Du, G.; Jin, C.; Kou, W.; Wang, J.; et al. Mapping paddy rice planting area in cold temperate climate region through analysis of time series Landsat 8 (OLI), Landsat 7 (ETM+) and MODIS imagery. ISPRS J. Photogramm. Remote Sens. 2015, 105, 220–233. [Google Scholar] [CrossRef] [PubMed][Green Version]
  16. Zhou, Y.; Xiao, X.; Qin, Y.W.; Dong, J.W.; Zhang, G.L.; Kou, W.L.; Jin, C.J.; Wang, J.; Li, X.P. Mapping paddy rice planting area in rice-wetland coexistent areas through analysis of Landsat 8 OLI and MODIS images. Int. J. Appl. Earth Obs. Geoinf. 2016, 46, 1–12. [Google Scholar] [CrossRef] [PubMed][Green Version]
  17. Dong, J.; Xiao, X.; Kou, W.; Qin, Y.W.; Zhang, G.L.; Li, L.; Jin, C.; Zhou, Y.T.; Wang, J.; Biradar, C.; et al. Tracking the dynamics of paddy rice planting area in 1986–2010 through time series Landsat images and phenology-based algorithms. Remote Sens. Environ. 2015, 160, 99–113. [Google Scholar] [CrossRef][Green Version]
  18. Kontgis, C.; Schneider, A.; Ozdogan, M. Mapping rice paddy extent and intensification in the Vietnamese Mekong River Delta with dense time stacks of Landsat data. Remote Sens. Environ. 2015, 169, 255–269. [Google Scholar] [CrossRef]
  19. Boschetti, M.; Stroppiana, D.; Brivio, P.A.; Bocchi, S. Multi-year monitoring of rice crop phenology through time series analysis of MODIS images. Int. J. Remote Sens. 2009, 30, 4643–4662. [Google Scholar] [CrossRef]
  20. Thenkabail, P.S. Mapping rice areas of South Asia using MODIS multitemporal data. J. Appl. Remote Sens. 2011, 5, 863–871. [Google Scholar]
  21. Dao, P.D.; Liou, Y.A. Object-Based Flood Mapping and Affected Rice Field Estimation with Landsat 8 OLI and MODIS Data. Remote Sens. 2015, 7, 5077–5097. [Google Scholar] [CrossRef][Green Version]
  22. Xu, X.; Ji, X.; Jiang, J.; Yao, X.; Tian, Y.C.; Zhu, Y.; Cao, W.X.; Cao, Q.; Yang, H.J.; Shi, Z.; et al. Evaluation of One-Class Support Vector Classification for Mapping the Paddy Rice Planting Area in Jiangsu Province of China from Landsat 8 OLI Imagery. Remote Sens. 2018, 10, 546. [Google Scholar] [CrossRef]
  23. Erinjery, J.J.; Singh, M.; Kent, R. Mapping and assessment of vegetation types in the tropical rainforests of the Western Ghats using multispectral Sentinel-2 and SAR Sentinel-1 satellite imagery. Remote Sens. Environ. 2018, 216, 345–354. [Google Scholar] [CrossRef]
  24. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  25. Ozdarici-Ok, A.; OK, A.O.; Schindler, K. Mapping of Agricultural Crops from Single High-Resolution Multispectral Images—Data-Driven Smoothing vs. Parcel-Based Smoothing. Remote Sens. 2015, 7, 5611–5638. [Google Scholar] [CrossRef][Green Version]
  26. Turker, M.; Ozdarici, A. Field-based crop classification using SPOT4, SPOT5, IKONOS and QuickBird imagery for agricultural areas: A comparison study. Int. J. Remote Sens. 2011, 32, 9735–9768. [Google Scholar] [CrossRef]
  27. Löw, F.; Conrad, C.; Michel, U. Decision fusion and non-parametric classifiers for land use mapping using multi-temporal RapidEye data. ISPRS J. Photogramm. Remote Sens. 2015, 108, 191–204. [Google Scholar] [CrossRef]
  28. Marshall, M.; Thenkabail, P. Advantage of hyperspectral EO-1 Hyperion over multispectral IKONOS, GeoEye-1, WorldView-2, Landsat ETM+, and MODIS vegetation indices in crop biomass estimation. ISPRS J. Photogramm. Remote Sens. 2015, 108, 205–218. [Google Scholar] [CrossRef]
  29. Mariotto, I.; Thenkabail, P.S.; Huete, A.; Slonecker, E.T.; Platonov, A. Hyperspectral versus multispectral crop-productivity modeling and type discrimination for the HyspIRI mission. Remote Sens. Environ. 2013, 139, 291–305. [Google Scholar] [CrossRef]
  30. Du, L.; Gong, W.; Shi, S.; Yang, J.; Sun, J.; Zhu, B.; Song, S. Estimation of rice leaf nitrogen contents based on hyperspectral LIDAR. Int. J. Appl. Earth Obs. Geoinf. 2016, 44, 136–143. [Google Scholar] [CrossRef]
  31. Sonia, A.; Kees, D.B.; Skidmore, A.; Andrew, N.; Massimo, B.; Aileen, M. Complementarity of Two Rice Mapping Approaches: Characterizing Strata Mapped by Hypertemporal MODIS and Rice Paddy Identification Using Multitemporal SAR. Remote Sens. 2014, 6, 12789–12814. [Google Scholar][Green Version]
  32. Zhang, X.; Wu, B.F.; Ponce-Campos, G.E.; Zhang, M.; Chang, S.; Tian, F.Y. Mapping up-to-date paddy rice extent at 10 m resolution in China through the integration of optical and synthetic aperture radar images. Remote Sens. 2018, 10, 1200. [Google Scholar] [CrossRef]
  33. Park, S.; Im, J.; Park, S.; Yoo, C.; Han, H.; Rhee, J.Y. Classification and Mapping of Paddy Rice by Combining Landsat and SAR Time Series Data. Remote Sens. 2018, 10, 447. [Google Scholar] [CrossRef]
  34. Koppe, W.; Gnyp, M.L.; Hütt, C.; Yao, Y.K.; Miao, Y.X.; Chen, X.P.; Bareth, G. Rice monitoring with multi-temporal and dual-polarimetric TerraSAR-X data. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 568–576. [Google Scholar] [CrossRef]
  35. Ndikumana, E.; Minh, D.; Nguyen, H.; Baghdadi, N.; Courault, D.; Hossard, L.; Moussawi, I. Estimation of rice height and biomass using multitemporal SAR Sentinel-1 for Camargue, Southern France. Remote Sens. 2018, 10, 1394. [Google Scholar] [CrossRef]
  36. Vafaei, S.; Soosani, J.; Adeli, K.; Fadaei, H.; Naghavi, H.; Pham, T.; Bui, D. Improving Accuracy Estimation of Forest Aboveground Biomass Based on Incorporation of ALOS-2 PALSAR-2 and Sentinel-2A Imagery and Machine Learning: A Case Study of the Hyrcanian Forest Area (Iran). Remote Sens. 2018, 10, 172. [Google Scholar] [CrossRef]
  37. Dong, J.; Xiao, X. Evolution of regional to global paddy rice mapping methods: A review. ISPRS J. Photogramm. Remote Sens. 2016, 119, 214–227. [Google Scholar] [CrossRef]
  38. Teluguntla, P.; Ryu, D.; George, B.; Walker, J.; Malano, H. Mapping flooded rice paddies using time series of MODIS imagery in the Krishna River Basin, India. Remote Sens. 2015, 7, 8858–8882. [Google Scholar] [CrossRef]
  39. Xie, D.; Gao, F.; Sun, L.; Anderson, M. Improving Spatial-Temporal Data Fusion by Choosing Optimal Input Image Pairs. Remote Sens. 2018, 10, 1142. [Google Scholar] [CrossRef]
  40. Cui, J.; Zhang, X.; Luo, M. Combining Linear pixel unmixing and STARFM for spatiotemporal fusion of Gaofen-1 wide field of view imagery and MODIS imagery. Remote Sens. 2018, 10, 1047. [Google Scholar] [CrossRef]
  41. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218. [Google Scholar]
  42. Hilker, T.; Wulder, M.A.; Coops, N.C.; Seitz, N.; White, J.C.; Gao, F.; Masek, J.G.; Stenhouse, G. Generation of dense time series synthetic Landsat data through data blending with MODIS using a spatial and temporal adaptive reflectance fusion model. Remote Sens. Environ. 2009, 113, 1988–1999. [Google Scholar] [CrossRef]
  43. Zhu, X.L.; Chen, J.; Gao, F.; Chen, X.H.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ. 2010, 114, 2610–2623. [Google Scholar] [CrossRef]
  44. Wu, P.; Shen, H.; Zhang, L.; Göttsche, F.M. Integrated fusion of multi-scale polar-orbiting and geostationary satellite observations for the mapping of high spatial and temporal resolution land surface temperature. Remote Sens. Environ. 2015, 156, 169–181. [Google Scholar] [CrossRef]
  45. Zhao, Y.; Huang, B.; Song, H. A robust adaptive spatial and temporal image fusion model for complex land surface changes. Remote Sens. Environ. 2018, 208, 42–62. [Google Scholar] [CrossRef]
  46. Wu, M.Q.; Niu, Z.; Wang, C.; Wu, C.; Wang, L. Use of MODIS and Landsat time series data to generate high-resolution temporal synthetic Landsat data using a spatial and temporal reflectance fusion model. J. Appl. Remote Sens. 2012, 6, 063507. [Google Scholar]
  47. Wang, Q.; Atkinson, P.M. Spatio-temporal fusion for daily Sentinel-2 images. Remote Sens. Environ. 2017, 204, 65–80. [Google Scholar] [CrossRef]
  48. Walker, J.J.; Beurs, K.M.; Wynne, R.H.; Gao, F. Evaluation of Landsat and MODIS data fusion products for analysis of dryland forest phenology. Remote Sens. Environ. 2012, 117, 381–393. [Google Scholar] [CrossRef]
  49. Emelyanova, I.V.; McVicar, T.R.; Van Niel, T.G.; Li, L.T.; Van Dijk, A. Assessing the accuracy of blending Landsat–MODIS surface reflectances in two landscapes with contrasting spatial and temporal dynamics: A framework for algorithm selection. Remote Sens. Environ. 2013, 133, 193–209. [Google Scholar] [CrossRef]
  50. Jia, K.; Liang, S.; Zhang, N.; Wei, X.; Gu, X.; Zhao, X.; Yao, Y.; Xie, X. Land cover classification of finer resolution remote sensing data integrating temporal features from time series coarser resolution data. ISPRS J. Photogramm. Remote Sens. 2014, 93, 49–55. [Google Scholar] [CrossRef]
  51. Kwan, C.; Budavari, B.; Gao, F.; Zhu, X. A Hybrid Color Mapping Approach to Fusing MODIS and Landsat Images for Forward Prediction. Remote Sens. 2018, 10, 520. [Google Scholar] [CrossRef]
  52. Pan, Y.; Shen, F.; Wei, X. Fusion of Landsat-8/OLI and GOCI data for hourly mapping of suspended particulate matter at high spatial resolution: A case study in the Yangtze (Changjiang) Estuary. Remote Sens. 2018, 10, 158. [Google Scholar] [CrossRef]
  53. Hasituya; Chen, Z.; Li, F.; Hongmei. Mapping Plastic-Mulched Farmland with C-Band Full Polarization SAR Remote Sensing Data. Remote Sens. 2017, 9, 1264. [Google Scholar] [CrossRef]
  54. Dabboor, M.; Montpetit, B.; Howell, S. Assessment of the High Resolution SAR Mode of the RADARSAT Constellation Mission for First Year Ice and Multiyear Ice Characterization. Remote Sens. 2018, 10, 594. [Google Scholar] [CrossRef]
  55. Lin, W.; Chen, G.; Guo, P.; Zhu, W.; Zhang, D. Remote-sensed monitoring of dominant plant species distribution and dynamics at Jiuduansha Wetland in Shanghai, China. Remote Sens. 2015, 7, 10227–10241. [Google Scholar] [CrossRef]
  56. Zhang, H.; Wang, T.; Liu, M.; Jia, M.; Lin, H.; Chu, L.; Devlin, A. Potential of combining optical and dual polarimetric SAR data for improving mangrove species discrimination using rotation forest. Remote Sens. 2018, 10, 467. [Google Scholar] [CrossRef]
  57. Wang, F.; Gao, J.; Zha, Y. Hyperspectral sensing of heavy metals in soil and vegetation: Feasibility and challenges. ISPRS J. Photogramm. Remote Sens. 2018, 136, 73–84. [Google Scholar] [CrossRef]
  58. Marmanis, D.; Datcu, M.; Esch, T.; Stilla, U. Deep learning earth observation classification using imageNet pretrained networks. IEEE Geosci. Remote Sens. 2016, 13, 105–109. [Google Scholar] [CrossRef]
  59. Cheng, G.; Zhou, P.C.; Han, J.W. Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7405–7415. [Google Scholar] [CrossRef]
  60. Pan, X.; Zhao, J. A central-point-enhanced convolutional neural network for high-resolution remote-sensing image classification. Int. J. Remote Sens. 2017, 38, 6554–6581. [Google Scholar] [CrossRef]
  61. Fu, G.; Liu, C.J.; Zhou, R.; Sun, T.; Zhang, Q.J. Classification for high resolution remote sensing imagery using a fully convolutional network. Remote Sens. 2017, 9, 498. [Google Scholar] [CrossRef]
  62. Chen, X.; Xiang, S.; Liu, C.L.; Pan, C.H. Vehicle detection in satellite images by hybrid deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1797–1801. [Google Scholar] [CrossRef]
  63. Cheng, G.; Wang, Y.; Xu, S.; Wang, H.; Xiang, S.; Pan, C. Automatic road detection and centerline extraction via cascaded end-to-end Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3322–3337. [Google Scholar] [CrossRef]
  64. Othman, E.; Bazi, Y.; Alajlan, N.; Alhichri, H.; Melgani, F. Using convolutional features and a sparse autoencoder for land-use scene classification. Int. J. Remote Sens. 2016, 37, 2149–2167. [Google Scholar] [CrossRef]
  65. Zhao, W.; Du, S.; Wang, Q.; Emery, W.J. Contextually guided very-high-resolution imagery classification with semantic segments. ISPRS J. Photogramm. Remote Sens. 2017, 132, 48–60. [Google Scholar] [CrossRef]
  66. Yang, X.; Qian, X.; Mei, T. Learning salient visual word for scalable mobile image retrieval. Pattern Recogn. 2015, 48, 3093–3101. [Google Scholar] [CrossRef]
  67. Ball, J.E.; Anderson, D.T.; Chan, C.S. Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools, and Challenges for the Community. J. Appl. Remote Sens. 2017, 11, 042609. [Google Scholar] [CrossRef]
  68. Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. Remote Sens. 2018, 10, 1119. [Google Scholar] [CrossRef]
  69. Hu, F.; Xia, G.-S.; Hu, J.; Zhang, L. Transferring Deep Convolutional Neural Networks for the Scene classification of High-Resolution Remote Sensing Imagery. Remote Sens. 2015, 7, 14680–14707. [Google Scholar] [CrossRef]
  70. Zhang, C.; Pan, X.; Li, H.; Gardiner, A.; Sargent, I.; Hare, J.; Atkinson, P.M. A Hybrid MLP-CNN Classifier for Very Fine Resolution Remotely Sensed Image Classification. ISPRS J. Photogramm. Remote Sens. 2017, 140, 133–144. [Google Scholar] [CrossRef]
  71. Han, W.; Feng, R.; Wang, L.; Cheng, Y. A Semi-Supervised Generative Framework with Deep Learning Features for High-Resolution Remote Sensing Image Scene Classification. ISPRS J. Photogramm. Remote Sens. 2017, 145, 23–43. [Google Scholar] [CrossRef]
  72. Qayyum, A.; Malik, A.S.; Saad, N.M.; Iqbal, M.; Abdullah, M.F.; Rasheed, W.; Abdullah, T.A.B.R.; Bin Jafaar, M.Y. Scene classification for aerial images based on CNN using sparse coding technique. Int. J. Remote Sens. 2017, 38, 2662–2685. [Google Scholar] [CrossRef]
  73. Nogueira, K.; Penatti, O.A.B.; dos Santos, J.A. Towards better exploiting convolutional neural networks for remote sensing scene classification. Pattern Recogn. 2017, 61, 539–556. [Google Scholar] [CrossRef][Green Version]
  74. Luus, F.P.S.; Salmon, B.P.; Van Den Bergh, F.; Maharaj, B.T.J. Multiview deep learning for land-use classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2448–2452. [Google Scholar] [CrossRef]
  75. Zhang, M.; Zeng, Y. Mapping paddy fields of Dongting Lake area by fusing Landsat and MODIS data. Trans. Chin. Soc. Agric. Eng. 2015, 31, 178–185. [Google Scholar]
  76. Liu, W.; Zeng, Y.; Zhang, M. Mapping rice paddy distribution by using time series HJ blending data and phenological parameters. J. Remote Sens. 2018, 22, 381–391. [Google Scholar]
  77. Hasituya; Chen, Z.; Wang, L.; Wu, W.; Jiang, Z.; Li, H. Monitoring plastic-mulched farmland by Landsat-8 OLI imagery using spectral and textural features. Remote Sens. 2016, 8, 353. [Google Scholar] [CrossRef]
  78. Jönsson, P.; Eklundh, L. TIMESAT—A program for analyzing time-series of satellite sensor data. Comput. Geosci. 2004, 30, 833–845. [Google Scholar] [CrossRef]
  79. Kang, J.; Hou, X.H.; Niu, Z.; Gao, S.; Jia, K. Decision tree classification based on fitted phenology parameters from remotely sensed vegetation data. Trans. Chin. Soc. Agric. Eng. 2014, 30, 148–156. [Google Scholar]
  80. Son, N.T.; Chen, C.F.; Chen, C.R.; Chang, L.Y.; Minh, V.Q. Monitoring agricultural drought in the Lower Mekong Basin using MODIS NDVI and land surface temperature data. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 417–427. [Google Scholar] [CrossRef]
  81. Xu, H. Retrieval of the reflectance and land surface temperature of the newly-launched Landsat 8 satellite. Chin. J. Geophys. 2015, 58, 741–747. [Google Scholar]
  82. Shih, H.; Stow, D.A.; Weeks, J.R.; Coulter, L.L. Determining the Type and Starting Time of Land Cover and Land Use Change in Southern Ghana Based on Discrete Analysis of Dense Landsat Image Time Series. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2064–2073. [Google Scholar] [CrossRef]
  83. Müller, H.; Rufin, P.; Griffiths, P.; Siqueira, A.J.B.; Hostert, P. Mining dense Landsat time series for separating cropland and pasture in a heterogeneous Brazilian savanna landscape. Remote Sens. Environ. 2015, 156, 490–499. [Google Scholar] [CrossRef][Green Version]
  84. Pelletier, C.; Valero, S.; Inglada, J.; Champion, N.; Dedieu, G. Assessing the robustness of Random Forests to map land cover with high resolution satellite image time series over large areas. Remote Sens. Environ. 2016, 187, 156–168. [Google Scholar] [CrossRef]
  85. Zhang, Z.; Wang, X.; Wen, Q.; Zhao, X.; Liu, F.; Hu, S.; Xu, J.; Yi, L.; Liu, B. Research progress of remote sensing application in land resources. J. Remote Sens. 2016, 20, 1243–1258. [Google Scholar]
  86. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; CRC Press: Boca Raton, FL, USA, 2008. [Google Scholar]
  87. Romero, A.; Gatta, C.; Camps-Valls, G.; Member, S. Unsupervised deep feature extraction for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1349–1362. [Google Scholar] [CrossRef]
  88. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  89. Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sens. Environ. 2018, 216, 57–70. [Google Scholar] [CrossRef]
  90. Karakizi, C.; Karantzalos, K.; Vakalopoulou, M.; Antoniou, G. Detailed land cover mapping from multitemporal Landsat-8 data of different cloud cover. Remote Sens. 2018, 10, 1214. [Google Scholar] [CrossRef]
  91. Kefi, M.; Pham, T.D.; Kashiwagi, K.; Yoshino, K. Identification of irrigated olive growing farms using remote sensing techniques. Euro-Mediterr. J. Environ. Integr. 2016, 1, 3. [Google Scholar] [CrossRef]
  92. Shrestha, S.; Vanneschi, L. Improved fully convolutional network with conditional random fields for building extraction. Remote Sens. 2018, 10, 1135. [Google Scholar] [CrossRef]
  93. Maggiori, E.; Member, S.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Convolutional neural networks for large-scale remote-sensing image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 645–657. [Google Scholar] [CrossRef]
  94. Conrad, C.; Fritsch, S.; Zeidler, J.; Rücker, C.; Dech, S. Per-field irrigated crop classification in arid central asia using SPOT and ASTER data. Remote Sens. 2010, 2, 1035–1056. [Google Scholar] [CrossRef][Green Version]
  95. Jia, K.; Liang, S.; Wei, X.; Yao, Y.; Su, Y.; Jiang, B.; Wang, X. Land cover classification of Landsat data with phenological features extracted from time series MODIS NDVI data. Remote Sens. 2014, 6, 11518–11532. [Google Scholar] [CrossRef]
  96. Bhandari, S.; Phinn, S.; Gill, T. Preparing Landsat image time series (LITS) for monitoring changes in vegetation phenology in Queensland, Australia. Remote Sens. 2012, 4, 1856–1886. [Google Scholar] [CrossRef]
  97. Gervais, N.; Buyantuev, A.; Gao, F. Modeling the effects of the urban built-up environment on plant phenology using fused satellite data. Remote Sens. 2017, 9, 99. [Google Scholar] [CrossRef]
  98. Zhang, B.; Zhang, L.; Xie, D.; Yin, X.; Liu, C.; Liu, G. Application of Synthetic NDVI Time Series Blended from Landsat and MODIS Data for Grassland Biomass Estimation. Remote Sens. 2016, 8, 10. [Google Scholar] [CrossRef]
  99. Olsoy, P.J.; Mitchell, J.; Glenn, N.F.; Flores, A.N. Assessing a Multi-Platform Data Fusion Technique in Capturing Spatiotemporal Dynamics of Heterogeneous Dryland Ecosystems in Topographically Complex Terrain. Remote Sens. 2017, 9, 981. [Google Scholar] [CrossRef]
  100. Wang, J.; Xiao, X.; Qin, Y.; Dong, J.; Zhang, G.; Kou, W.; Jin, C.; Zhou, Y.; Zhang, Y. Mapping paddy rice planting area in wheat-rice double-cropped areas through integration of Landsat-8 OLI, MODIS, and PALSAR images. Sci. Rep. 2015, 5, 10088. [Google Scholar] [CrossRef] [PubMed]
  101. Jia, K.; Wu, B.; Li, Q. Crop classification using HJ satellite multispectral data in the North China Plain. J. Appl. Remote Sens. 2013, 7, 073576. [Google Scholar] [CrossRef]
  102. Zhang, Y.; Wang, C.; Wu, J.; Qi, J.; Salas, W.A. Mapping paddy rice with multitemporal ALOS/PALSAR imagery in southeast China. Int. J. Remote Sens. 2009, 30, 6301–6315. [Google Scholar] [CrossRef]
  103. Thenkabail, P.S.; Dheeravath, V.; Biradar, C.M.; Gangalakunta, O.R.P.; Noojipady, P.; Gurappa, C.; Velpuri, M.; Gumma, M.; Li, Y. Irrigated area maps and statistics of India using remote sensing and national statistics. Remote Sens. 2009, 1, 50–67. [Google Scholar] [CrossRef]
  104. Singha, M.; Wu, B.; Zhang, M. An object-based paddy rice classification using multi-spectral data and crop phenology in Assam, Northeast India. Remote Sens. 2016, 8, 479. [Google Scholar] [CrossRef]
  105. Das, S.; Routray, A.; Deb, A.K. Fast semi-supervised unmixing of Hyperspectral image by mutual coherence reduction and recursive PCA. Remote Sens. 2018, 10, 1106. [Google Scholar] [CrossRef]
  106. Soofbaf, S.R.; Sahebi, M.R.; Mojaradi, B. A Sliding window-based joint sparse representation (SWJSR) method for Hyperspectral anomaly detection. Remote Sens. 2018, 10, 434. [Google Scholar] [CrossRef]
  107. Gong, M.; Yang, H.; Zhang, P. Feature learning and change feature classification based on deep learning for ternary change detection in SAR images. ISPRS J. Photogramm. Remote Sens. 2017, 129, 212–225. [Google Scholar] [CrossRef]
Figure 1. Landsat 8 image of study area (the coastal band was removed).
Figure 2. Calendar of the paddy rice in the study area.
Figure 3. Flowchart of paddy-rice mapping using a convolutional neural network (CNN) based on Landsat 8 datasets.
Figure 4. A double-cropping rice pixel of the time series moderate-resolution imaging spectroradiometer (MODIS)–Normalized difference vegetation index (NDVI) filtered by a Savitzky–Golay (S–G) filter.
Figure 5. (a) MODIS–NDVI, (b) Landsat 8 NDVI, and (c) the fused Landsat-like NDVI over Nanxian County of the study area.
Figure 6. Five phenological variables of the study area: (a) SOS, (b) EOS, (c) LOS, (d) MON, and (e) AON.
Figure 7. Land-surface temperature (LST) on 30 July 2016 derived from Landsat 8.
Figure 8. Training (in red) and validation (in yellow) areas.
Figure 9. Structure of ConvNet network.
Figure 10. Paddy-rice maps of the study area generated by (a) RF, (b) SVM, and (c) CNN.
Figure 11. Paddy-rice distribution in the study area in 2016 generated from a Landsat 8 dataset using the CNN method.
Figure 12. Regression analysis between the Landsat 8-derived rice area and the government rice-area statistics at a county level.
Table 1. Acquisition information of remote-sensing data.
Satellite sensor      Landsat 8 OLI                                      MODIS (MOD13Q1)
Path/row              124/39; 124/40                                     h27v05; h27v06
Date                  2016-06-12, 2016-07-30, 2016-09-16 (each path)     23 scenes per tile, January–December 2016
Level of processing   Level 1                                            Level 3
Used bands            2–7, 10                                            NDVI
Table 2. Phenological parameters.
Phenological Parameter        Definition
Start of the season (SOS)     Time at which the left edge of the NDVI curve has increased to 20% of the seasonal amplitude, measured from the left minimum level.
End of the season (EOS)       Time at which the right edge of the NDVI curve has decreased to 20% of the seasonal amplitude, measured from the right minimum level.
Length of the season (LOS)    EOS − SOS.
Max of NDVI (MON)             The largest NDVI value of the growing season.
Amplitude of NDVI (AON)       Difference between the maximum NDVI and the base level.
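The threshold definitions in Table 2 can be made concrete with a short sketch. The function name and the simple scan for threshold crossings are illustrative, not the implementation used in the paper, and the sketch assumes a single-season NDVI curve (a double-cropping pixel, as in Figure 4, would need per-season handling):

```python
def phenology_metrics(ndvi, threshold=0.2):
    """Derive SOS, EOS, LOS, MON, and AON (Table 2) from a 1-D,
    single-season NDVI time series using the 20%-of-amplitude thresholds.
    SOS/EOS are returned as indices into the time series."""
    peak = max(range(len(ndvi)), key=lambda i: ndvi[i])
    mon = ndvi[peak]                        # Max of NDVI (MON)
    left_min = min(ndvi[:peak + 1])         # left minimum level
    right_min = min(ndvi[peak:])            # right minimum level
    aon = mon - min(left_min, right_min)    # Amplitude of NDVI (AON)
    # SOS: first time the rising edge reaches 20% of the amplitude
    # above the left minimum level
    sos_level = left_min + threshold * (mon - left_min)
    sos = next(i for i, v in enumerate(ndvi[:peak + 1]) if v >= sos_level)
    # EOS: last time the falling edge is still at or above 20% of the
    # amplitude measured from the right minimum level
    eos_level = right_min + threshold * (mon - right_min)
    eos = max(i for i, v in enumerate(ndvi) if i >= peak and v >= eos_level)
    return {"SOS": sos, "EOS": eos, "LOS": eos - sos, "MON": mon, "AON": aon}
```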
Table 3. Experiment with different classification features.
Experimental Sequence   Features                                                                          Feature Dimensions
Sequence 1              Six spectral bands in June                                                        6
Sequence 2              Six spectral bands in July                                                        6
Sequence 3              Six spectral bands in September                                                   6
Sequence 4              18 spectral bands in June, July, and September                                    18
Sequence 5              Six spectral bands in June + NDVI + phenological variables                        14
Sequence 6              Six spectral bands in July + NDVI + phenological variables                        14
Sequence 7              Six spectral bands in September + NDVI + phenological variables                   14
Sequence 8              18 spectral bands in June, July, and September + NDVI + phenological variables    26
Sequence 9              19 spectral bands in June, July, and September + NDVI + phenological variables + LST    29
Table 4. Parameters input into the CNN.
Parameter                Value
Batch size               100
Learning rate            0.1
Momentum                 0.9
Weight decay parameter   0.00005
Training patch size      28 × 28
Table 5. Classification accuracies of CNN with different features.
Feature Sequence                                              Class       PA (%)  UA (%)  OA (%)  Kappa
Landsat 8 spectral (June)                                     Paddy rice  84.65   81.28   82.52   0.62
                                                              Nonrice     81.62   85.91
Landsat 8 spectral (July)                                     Paddy rice  88.13   92.45   90.28   0.80
                                                              Nonrice     91.73   88.96
Landsat 8 spectral (September)                                Paddy rice  86.32   85.68   89.35   0.80
                                                              Nonrice     85.36   88.12
Landsat 8 spectral (June, July, September)                    Paddy rice  89.88   91.54   91.23   0.81
                                                              Nonrice     90.95   89.73
Landsat 8 spectral (June) + NDVI + PV                         Paddy rice  86.24   83.65   85.03   0.71
                                                              Nonrice     84.23   87.59
Landsat 8 spectral (July) + NDVI + PV                         Paddy rice  92.36   94.64   92.63   0.80
                                                              Nonrice     92.59   89.27
Landsat 8 spectral (September) + NDVI + PV                    Paddy rice  88.65   87.92   91.36   0.80
                                                              Nonrice     86.68   89.86
Landsat 8 spectral (June, July, September) + NDVI + PV        Paddy rice  95.98   96.65   95.84   0.88
                                                              Nonrice     94.82   93.28
Landsat 8 spectral (June, July, September) + NDVI + PV + LST  Paddy rice  97.29   96.92   97.06   0.91
                                                              Nonrice     96.83   97.05
PA, UA, and OA denote producer's accuracy, user's accuracy, and overall accuracy, respectively; PV denotes phenological variables.
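As a hedged illustration (not the authors' evaluation code), the PA, UA, OA, and Kappa values reported in Tables 5 and 6 follow from a binary confusion matrix; the function name and the example counts below are hypothetical:

```python
def accuracy_metrics(tp, fn, fp, tn):
    """PA, UA, OA, and Cohen's kappa for the paddy-rice class from a
    binary confusion matrix.
    tp: rice classified as rice     fn: rice classified as nonrice
    fp: nonrice classified as rice  tn: nonrice classified as nonrice"""
    n = tp + fn + fp + tn
    pa = tp / (tp + fn)                 # producer's accuracy (rice)
    ua = tp / (tp + fp)                 # user's accuracy (rice)
    oa = (tp + tn) / n                  # overall accuracy
    # chance agreement: product of marginal totals, summed over classes
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (n * n)
    kappa = (oa - pe) / (1 - pe)
    return pa, ua, oa, kappa
```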
Table 6. Classification accuracies of the three methods.
Classification Algorithm   Class       PA (%)  UA (%)  OA (%)  Kappa
CNN                        Paddy rice  97.29   96.92   97.06   0.91
                           Nonrice     96.83   97.05
SVM                        Paddy rice  91.15   90.26   90.63   0.84
                           Nonrice     92.37   93.54
RF                         Paddy rice  90.62   90.89   89.38   0.82
                           Nonrice     92.35   92.46
Table 7. Statistical-significance comparison between the three machine-learning algorithms.
Methods                                    p-Value   Statistically Significant?
CNN versus Support Vector Machine (SVM)    0.0004    Yes, at the 0.1% level
CNN versus Random Forest (RF)              0.0002    Yes, at the 0.1% level
SVM versus RF                              0.7342    No, at the 5% level
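This excerpt does not name the test behind Table 7. Assuming an exact McNemar-style test on the discordant samples (pixels one classifier labels correctly and the other incorrectly), the two-sided p-value could be computed as below; the function name and interface are illustrative:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test on discordant counts.
    b: samples method A classifies correctly and method B incorrectly.
    c: the reverse. Under H0 (equal accuracy), the larger discordant
    count follows Binomial(b + c, 0.5); return the two-sided p-value."""
    n = b + c
    k = max(b, c)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```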
Table 8. Classification accuracies of the three network architectures.
Classification Algorithm   Class       PA (%)  UA (%)  OA (%)  Kappa
ConvNet network            Paddy rice  97.29   96.92   97.06   0.91
                           Nonrice     96.83   97.05
VGG-16 network             Paddy rice  96.83   96.18   96.52   0.90
                           Nonrice     96.52   96.78
Pixel-based FCN            Paddy rice  93.25   93.29   92.43   0.85
                           Nonrice     93.41   92.65
Table 9. Relative error in area (REA) between the Landsat 8-derived rice area and the government rice-area statistics.
County/District   Landsat 8 (ha)   Government's Rice Area (GRA) (ha)   REA (%)
Huarong           15,235           15,026                              1.4
Junshan           2198             2135                                3.0
Yueyanglou        2069             1928                                7.3
Anxiang           5268             5110                                3.1
Hanshou           16,562           16,050                              3.2
Linli             9006             8398                                7.2
Nanxiang          11,883           11,480                              3.5
Anhua             3874             3680                                5.3
Taojiang          9365             8935                                4.8
Jinshi            5192             4820                                7.7
Lixian            13,558           11,863                              14.3
Dingcheng         6023             5820                                3.5
Wuling            3065             2958                                3.6
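The REA column follows directly from the two area columns; a one-line sketch (the function name is hypothetical):

```python
def relative_error_in_area(landsat_ha, gra_ha):
    """REA (%) between the Landsat 8-derived rice area and the
    government rice-area statistics (GRA), as reported in Table 9."""
    return abs(landsat_ha - gra_ha) / gra_ha * 100
```

For example, Huarong's row gives |15,235 − 15,026| / 15,026 × 100 ≈ 1.4%.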

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).