In-Season Mapping of Irrigated Crops Using Landsat 8 and Sentinel-1 Time Series

Numerous studies have reported the use of multi-spectral and multi-temporal remote sensing images to map irrigated crops. Such maps are useful for water management. The recent availability of optical and radar image time series such as the Sentinel data offers new opportunities to map land cover with high spatial and temporal resolutions. Early identification of irrigated crops is of major importance for irrigation scheduling, but the cloud coverage might significantly reduce the number of available optical images, making crop identification difficult. SAR image time series such as those provided by Sentinel-1 offer the possibility of improving early crop mapping. This paper studies the impact of the Sentinel-1 images when used jointly with optical imagery (Landsat 8) and a digital elevation model of the Shuttle Radar Topography Mission (SRTM). The study site is located in a temperate zone (southwest France) with irrigated maize crops. The classifier used is the Random Forest. The combined use of the different data (radar, optical, and SRTM) improves the early classifications of the irrigated crops (k = 0.89) compared to classifications obtained using each type of data separately (k = 0.84). The use of the DEM is significant for the early stages but becomes useless once crops have reached their full development. In conclusion, compared to a “full optical” approach, the “combined” method is more robust over time as radar images permit cloudy conditions to be overcome.


Introduction
More than 324 million hectares are equipped for irrigation in the world (http://www.fao.org, 2012), among which about 85% are actually irrigated. Irrigated agriculture accounts for 20% of all cropland and 40% of food production. The Food and Agriculture Organization of the United Nations (FAO) estimates that 80% of the food needs of the 8 billion people expected to be living on Earth in 2025 will be produced by irrigated agriculture.
In Europe, the percentage of land area equipped for irrigation is quite small (6%) compared to the rest of the world. This is largely due to the moderate climate, where agriculture takes advantage of the available rainfall and constant irrigation can be avoided. However, these statistics hide strong spatial disparities between countries. In France, irrigation has developed rapidly since the 1970s, reaching around 1.6 million hectares irrigated in 2010 [1]. Conflicts among users concerning water supplies are exacerbated during periods of drought, e.g., during the heat wave of 2003. In order to reduce these conflicts, national authorities have proposed various laws, including the law on water and aquatic environments in 2006; in 2011, the first National Plan for Adaptation to Climate Change was adopted. These laws aim to help water resources managers. Today, operational water management relies on irrigation demand assessments estimated from static maps of irrigable areas (i.e., areas that could be irrigated given a supply network and availability of resources) or from national and sub-national statistics of irrigated areas, which are often of unreliable quality. Understanding the inter-annual and seasonal dynamics of irrigated areas (i.e., areas actually irrigated) is mandatory for the management of the watershed, particularly in areas experiencing climate variability, where water limitation is becoming a major constraint on irrigation. To successfully implement sustainable management, we need tools able to estimate actual water needs and supplies, frequently updated and covering large territories, to help policy makers and water managers.
In this context, remote sensing is essential. Numerous studies have been carried out to map land use and derive indicators for water management [2,3]. Several have reported the use of multi-spectral and multi-temporal remote sensing images to map land cover and land use, including irrigated areas [4–13]. These images cover a range of spectral domains (both radar and optical imagery have been used), spatial areas (sub-national to continental), time periods (single season to multi-year analyses), and thematic issues (i.e., from maps of summer/winter crops to more detailed assessments of crop species and agricultural practices), thus demonstrating the high potential of remote sensing data for crop mapping.
In France, the only operational crop map covering the whole country is provided by the Climate Change Initiative (CCI) supported by ESA. This map (http://maps.elie.ucl.ac.be/CCI/viewer) was produced annually (from 1992 up to 2015) with a spatial resolution of 300 m, using MERIS, AVHRR, and PROBA-V images [14]. The low revisit frequency and the low spatial resolution of such maps lead to a large percentage of mixed classes. In our study region (southwest of France), for example, field sizes are typically less than 5 ha, which explains why all irrigated areas are misclassified as rainfed although more than 30% are irrigated. Moreover, no distinction is made between species that have different water needs, which evolve during the growing season. Thus, there is a strong need for more specific crop classifications with seasonal updates if water management is to become fully operational.
The recent launches of the Sentinel-1 [15] and Sentinel-2 [16] satellites offer a new opportunity for this challenging purpose, as they provide near real time images with high resolutions in both time (1 to 5 days) and space (10 to 60 m). Recent works [17–30] have shown that dense optical time series allow crop type mapping over different climates and diverse cropping systems. However, since optical imagery is affected by cloud cover, the performance of crop mapping with optical data can be reduced in some cases, even with a 6-day revisit time. Other recent studies [31–34] have shown that the combined use of optical data (Sentinel-2 or Landsat 8) and SAR (Synthetic Aperture Radar) data, such as that available from Sentinel-1, may improve the discrimination of some crop types that are difficult to distinguish with optical imagery alone.
Most of these studies rely on classification techniques. Supervised classifications using decision tree algorithms have been used in irrigation mapping with varying degrees of success, depending on the complexity of irrigation practices and the size and patchiness of the irrigated landscapes [17,35–43]. Among the methods using classification trees, the Random Forest model (RF) [44] is an ensemble learning method which has frequently been used for land use and land cover classifications. RF has been found to be faster, easier to parametrize, and more robust than traditional classification trees [19,23,25,26,45–51]. We therefore selected this method for our study. Song et al. [51], for example, provide in-season crop maps from GF-1 WFV images (four-day revisit and 16 m spatial resolution) using both spectral and textural information in an RF model. This approach leads to an overall accuracy of 87%. Gao et al. [52] investigated the potential of Sentinel-1 to map irrigated crops and irrigated trees in Spain, and Ferran et al. [43] combined Sentinel-1 and Sentinel-2 to classify irrigated crops in India. For both studies, the RF model led to a global accuracy of 82% or higher. These studies revealed the usefulness of the Sentinel-1 and Sentinel-2 images for mapping irrigated crops and their particular suitability for areas with small fields.
No studies have yet evaluated the potential of the combined use of optical and radar imagery with high spatial and temporal resolutions for seasonal mapping of irrigated crops in the temperate zones.
In this paper, we present in-season classifications of irrigated crops performed with Sentinel-1 and Landsat 8 time series. The latter were used as a surrogate for Sentinel-2 time series because Sentinel-2A was launched in June 2015, so the Sentinel-2 images did not cover the whole growing season of the crops. The classifier used was Random Forest.
The study area was located in southwest France, and the crops to be classified were maize (irrigated or not) and sunflower (not irrigated). As only part of the maize crop was irrigated, one challenging point was to distinguish between irrigated and non-irrigated crops of the same species. We also investigated the use of a Digital Elevation Model (DEM) in the classification model. The classifications were evaluated using ground truth data collected in 2015.

Study Site
The study site location is shown in Figure 1. Maize crops represent 80% of the irrigated crops and 30% of total summer crops in this area, and the growing season begins between April and May and ends in late October. Maize crops reach their maximum water needs between June and August, involving at least four irrigation events [53]. The elevation of the site varies between 243 m and 965 m. The summer season is often hot (daily average T = 28 °C) and dry with frequent rainfall deficits. In 2015, the rainfall in the summer season was 218 mm, corresponding to a 20% rainfall deficit with respect to normal values (calculated over 30 years).


Reference Dataset
The reference dataset was established through three field campaigns carried out at different stages of the growing season: after sowing (May 2015), at flowering (July 2015), and during senescence (September 2015). Irrigated maize crops were identified by the presence of irrigation equipment or from information supplied directly by the farmers. The dataset was composed of 317 fields (114 irrigated maize and 203 non-irrigated fields: 171 growing maize and 32 sunflower) (Table 1). The irrigated plots covered more restricted elevations (Figure 2), and the mean size of the fields concerned was larger than for non-irrigated crops. The two irrigation systems most frequently used in our region are pivots and sprinkler systems. The plots equipped with pivot systems are often larger and located on flat areas.

Remote Sens. 2018, 10, x FOR PEER REVIEW 4 of 15

Landsat 8 Images
The Landsat 8 images were processed to level 2A (i.e., surface reflectance atmospherically corrected, with masks for clouds, cloud shadows, snow, and water) as described in Hagolle [54]. Their acquisition dates, spatial resolution, incidence, and viewing angles are listed in Figure 3. The number of cloudy pixels varied from 0.2% to 90% depending on the dates. For this study, we used the visible, near infrared, and short-wave infrared bands dedicated to vegetation surveys (482 nm, 561 nm, 654 nm, 864 nm, 1608 nm, 2200 nm).
The study site was covered by two Landsat 8 tiles (Figure 1). The images were resampled at 10 m in order to be merged with the SAR images, as suggested by Inglada et al. [31].
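As an illustration of this merging step, a 30 m band can be brought onto the 10 m SAR grid by simple pixel replication (a minimal nearest-neighbour sketch; the resampling method actually used in the processing chain is not detailed here):

```python
import numpy as np

def resample_nearest(band, factor=3):
    """Upsample a 30 m band to a 10 m grid by pixel replication
    (nearest neighbour), so it can be stacked with 10 m SAR bands."""
    return np.repeat(np.repeat(band, factor, axis=0), factor, axis=1)

band_30m = np.array([[0.1, 0.2],
                     [0.3, 0.4]])
band_10m = resample_nearest(band_30m)
print(band_10m.shape)  # (6, 6)
```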


SAR Images
The radar images came from the C-SAR instrument onboard the Sentinel-1A satellite (Interferometric Wide Swath mode). At the time of the study, the revisit time was 12 days. The revisit time has since improved (from three to six days) with the combination of the S1-A and S1-B platforms.
This instrument supplies images at a constant incidence angle in VV and VH polarizations and with a 10 m spatial resolution along a 250 km swath. Images acquired at orbits 30 (12 images) and 132 (7 images) were used. All images were Level-1 GRD (Ground Range Detected), radiometrically corrected, filtered for speckle reduction using a 3 × 3 Lee filter [55], and orthorectified using the Sentinel-1 Toolbox (https://sentinel.esa.int/web/sentinel/toolboxes/sentinel-1). The acquisition dates, incidence angles, and orbits of the SAR images are listed in Figure 3. For both sensors, the geometric accuracy is estimated to be better than 30 m [31].
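A simplified, NumPy-only sketch of the 3 × 3 Lee filtering step is shown below (illustrative only; the operational processing relied on the Sentinel-1 Toolbox, and the noise variance is an assumed input here):

```python
import numpy as np

def lee_filter_3x3(img, noise_var):
    """Simplified 3x3 Lee speckle filter: each pixel is pulled toward
    its local mean, weighted by the estimated ratio of signal variance
    to total local variance."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    # 3x3 local statistics from the nine shifted views of the padded image
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    mean = stack.mean(axis=0)
    local_var = stack.var(axis=0)
    signal_var = np.clip(local_var - noise_var, 0.0, None)
    weight = signal_var / np.maximum(local_var, 1e-12)
    return mean + weight * (img - mean)
```

On homogeneous areas the weight goes to zero and the filter returns the local mean, which is what suppresses speckle.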

Incremental Classification
We employed an incremental processing scheme to evaluate the usefulness of each acquisition date and of each sensor for the seasonal classifications. For this purpose, we generated a supervised classification every time a new image acquisition became available, using all previously available imagery. Every new acquisition thus led to a new classification.
A fully automated processing chain based on the OpenCV 2.4 and GDAL 1.11 libraries was developed for this study. The classifier used was the Random Forest algorithm. The number of trees was set to 100, and the maximum depth to 25.
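The incremental scheme can be sketched as follows. The original chain was built on OpenCV 2.4; this illustration uses scikit-learn with the same hyper-parameters (100 trees, maximum depth 25) on synthetic stand-in features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for per-date features: 5 acquisitions, 4 features each.
n_samples, n_dates, n_feat = 200, 5, 4
X_dates = rng.normal(size=(n_samples, n_dates, n_feat))
y = rng.integers(0, 3, size=n_samples)  # 3 crop classes

# One new supervised classification per acquisition, trained on all
# imagery available up to that date.
for d in range(1, n_dates + 1):
    X = X_dates[:, :d, :].reshape(n_samples, -1)
    clf = RandomForestClassifier(n_estimators=100, max_depth=25,
                                 random_state=0)
    clf.fit(X, y)
```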
Temporal gap filling of the cloudy pixels was performed with the gap-filling module of the Iota2 software [21]. The method relies on a linear interpolation of the cloudy pixels using the previous and following cloud-free dates. The Landsat 8 images used in the classification process were thus gap-filled, providing one image every 16 days.
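Per pixel, the gap filling amounts to a linear interpolation between the nearest cloud-free dates; a minimal NumPy sketch (illustrative; the Iota2 implementation may differ in details such as edge handling):

```python
import numpy as np

def gapfill(values, cloudy):
    """Linearly interpolate the cloudy observations of one pixel's time
    series between the nearest cloud-free dates (flat extrapolation at
    the series ends, as np.interp does)."""
    t = np.arange(len(values))
    clear = ~np.asarray(cloudy)
    return np.interp(t, t[clear], np.asarray(values, float)[clear])

series = [0.2, np.nan, np.nan, 0.8, 0.6]   # NaN = cloudy observation
cloudy = [False, True, True, False, False]
filled = gapfill(series, cloudy)            # ≈ [0.2, 0.4, 0.6, 0.8, 0.6]
```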
Every classification was evaluated using the reference dataset, which was randomly split into two parts: 50% of the ground truth polygons were used for training the classification and 50% were used for the validation. The global performance of each classification was evaluated using the Kappa coefficient (k) [56], and the performance of each class was evaluated using the F-score (Equation (1)), the precision (Equation (2)), and the recall (Equation (3)) metrics.
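For reference, these metrics follow the standard definitions in terms of the true positives (TP), false positives (FP), and false negatives (FN) of each class:

```latex
\text{Fscore} = \frac{2\,\text{precision}\cdot\text{recall}}{\text{precision} + \text{recall}} \tag{1}

\text{precision} = \frac{TP}{TP + FP} \tag{2}

\text{recall} = \frac{TP}{TP + FN} \tag{3}
```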


As in the study by Inglada et al. [31], the random split was done 10 times in order to perform 10 classifications. This allowed the average and confidence intervals for the Kappa, Fscore, and recall to be computed.
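Cohen's Kappa can be computed from the confusion matrix of each validation split; a compact sketch (averaging this value over the 10 random splits gives the mean and confidence interval reported later):

```python
import numpy as np

def cohen_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance, computed from the confusion matrix."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

print(cohen_kappa([0, 1, 2, 1], [0, 1, 2, 1], 3))  # perfect agreement -> 1.0
```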

Selected Features
Recent studies [21,25,31] have shown that the spectral reflectances, the Normalized Difference Vegetation Index (NDVI) [57], the Normalized Difference Water Index (NDWI) [58], and the Brightness Index [59] are sufficient to capture the main patterns of various crops observed with high spatial (optical) and temporal resolution imagery. We thus used these features to perform the optical classifications.
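These features can be derived directly from the surface reflectances; a minimal sketch using the usual formulations (the exact Brightness Index definition follows reference [59] and may differ slightly from the one shown here):

```python
import numpy as np

def spectral_indices(green, red, nir, swir):
    """NDVI, NDWI (Gao) and a brightness index from surface reflectances.
    Formulas are the common ones; the paper's brightness definition
    follows its reference [59]."""
    ndvi = (nir - red) / (nir + red)
    ndwi = (nir - swir) / (nir + swir)
    brightness = np.sqrt(green ** 2 + red ** 2 + nir ** 2 + swir ** 2)
    return ndvi, ndwi, brightness

ndvi, ndwi, bi = spectral_indices(np.array([0.1]), np.array([0.1]),
                                  np.array([0.5]), np.array([0.2]))
```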
Since irrigated and non-irrigated crops tend to be cultivated at different elevation ranges (Figure 2), we decided to evaluate the impact of the Digital Elevation Model (DEM) on the classification procedure. The DEM used was the Shuttle Radar Topography Mission (SRTM) DEM with a spatial resolution of 30 m [60]. The topographic variables derived from the DEM were the elevation (m), the slope (°), and the aspect (°).
Inglada et al. [31] showed that the most pertinent features for crop classification performed with SAR imagery were the VV polarization, the Haralick textures (Entropy and Inertia computed from the VV polarization), and the polarization ratio (VV/VH). In our study, we added the VH polarization and also the Entropy and the Inertia computed from the VH polarization. To evaluate the importance of these additional features, we used the variable importance (VI) of the classifier [31,47]: in every tree grown in the forest, the out-of-bag samples were classified using random permutations of each variable, and the increase in classification errors with respect to the case without permutation was computed. The median value of this number over all trees in the forest gave the importance score of each variable (VI). It was analysed for each classification.
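The principle of the permutation-based variable importance can be illustrated for a single feature and a generic predictor (a toy sketch; the RF implementation computes this per tree on the out-of-bag samples and takes the median over all trees):

```python
import numpy as np

def permutation_importance(predict, X, y, col, rng):
    """Importance of one feature: increase in error rate after randomly
    permuting that feature's values."""
    base_err = np.mean(predict(X) != y)
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return np.mean(predict(Xp) != y) - base_err

# Toy model: the class is decided entirely by feature 0, so permuting
# feature 1 should matter far less than permuting feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
predict = lambda M: (M[:, 0] > 0).astype(int)
vi0 = permutation_importance(predict, X, y, 0, rng)
vi1 = permutation_importance(predict, X, y, 1, rng)
print(vi0 > vi1)  # True
```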

Summer Crops Mask
A summer crops mask was produced from Landsat 8 images using the algorithm proposed by Marais-Sicre et al. [61]. It is based on a decision tree using multi-temporal NDVI thresholding. The classification algorithm is only applied over agricultural surfaces, previously extracted from the "Registre Parcellaire Graphique" provided by the French Services and Payment Agency.
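The principle of multi-temporal NDVI thresholding can be sketched with a toy rule: summer crops (maize, sunflower) are bare soil in spring and fully developed in summer. The thresholds below are illustrative, not those of Marais-Sicre et al. [61]:

```python
import numpy as np

def summer_crop_mask(ndvi_spring, ndvi_summer, t_low=0.3, t_high=0.6):
    """Toy multi-temporal NDVI decision rule: a pixel is flagged as a
    summer crop if it is bare in spring and green in summer.
    Thresholds are hypothetical."""
    return (ndvi_spring < t_low) & (ndvi_summer > t_high)

mask = summer_crop_mask(np.array([0.2, 0.7]), np.array([0.8, 0.8]))
print(mask)  # [ True False]
```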

Effect of Topographic Variables
Figure 4 presents the k coefficient of the classification performed with the topographic variables only. The results reveal that, without the use of satellite imagery, the elevation feature alone enables irrigated and non-irrigated plots to be distinguished, with k equal to 0.68. Slopes and aspects were not discriminant (0.06 < k < 0.08).
The elevation feature explains part of the variability existing between irrigated and non-irrigated crops, as 90% of the irrigated surfaces are well classified (Table 2). However, confusion subsists, as 46% of sunflower and 11% of non-irrigated maize pixels are classified as irrigated maize.

Incremental Classification Using Optical Images and Elevation Features
Figure 5 shows the evolution of k (Figure 5A) and Fscore (Figure 5B) of the incremental classifications for the following three scenarios: (1) with the elevation feature alone (Figure 5A, yellow curve); (2) with the spectral features of the Landsat 8 images listed in Section 3.2 (Figure 5A, blue curve); and (3) with both the elevation and optical imagery (Figure 5A, red curve).

In April, the k values vary between 0.65 (scenario 2) and 0.74 (scenario 3), with intermediate values (k = 0.68) for scenario 1. The addition of the elevation feature provides higher k values than are obtained with optical images alone. The gain due to elevation is significant at the beginning of the growing season, from April to mid-May, which is of major importance for early crop classification. During this period, the joint use of elevation and optical images (scenario 3) provides the highest k values. However, the gain due to the elevation feature decreases from June as the number of images increases. At the end of the season (by August), the k coefficients are similar for both scenarios (2 and 3), suggesting that the DEM is no longer useful.
The high k values from the start of the season hide strong disparities among the classes, with Fscores equal to 0.1 for sunflower and higher than 0.85 for maize (Figure 5B).
Analysis of the temporal evolution of the Fscore reveals that the increase of k observed until August is mainly due to a better classification of the sunflower crops. Figure 6 shows that the accuracy of the sunflower class was not improved by elevation, as precision and recall are decreased by the use of this feature. In contrast, maize classes (irrigated or not) are favourably influenced by elevation. The better identification of sunflower with additional optical images might be explained by the rapid growth of sunflower, making it easier to distinguish from maize. During this phase, the Fscores for maize (irrigated or non-irrigated) increase much less than those of sunflower.
From August to October, the period corresponding to the maturity phases, k values for both scenarios (2 and 3) remain stable.

Incremental Classification Using Radar Images and Elevation
Figure 7 shows the evolution of k (Figure 7A) and Fscore (Figure 7B) of the incremental classifications for the following three scenarios: (1) with the elevation feature only (Figure 7A, yellow curve); (2) with the radar features listed in Section 3.2 (Figure 7A, blue curve); and (3) with combined use of elevation and radar imagery (Figure 7A, red curve).


Unlike the results obtained with the optical images, those from radar data alone do not show good overall performance before late June (k = 0.75), when the plants are growing. These results show that, in our case study, the use of radar imagery alone is less accurate than optical imagery alone. Moreover, it is necessary to combine the SAR imagery with elevation (Figure 7A, scenario 3) to obtain performance comparable to that of optical imagery (Figure 5A). However, the performance observed in late June (Figure 7A, scenario 3) is reached about three weeks later than that using optical imagery with DEM (Figure 5A, scenario 3). The gain due to the DEM is significant in the early season but becomes less significant at the end of the season.
The temporal analysis of kappa exhibits several phases:
• In April (during sowing and emergence), the k coefficient of the radar classifications (scenario 2) increases significantly, in connection with the Fscore increase for the irrigated and non-irrigated maize (Figure 7B). During this period, sunflower is very poorly classified (Fscore = 0.05), as the plots correspond to bare soil mainly subjected to strong variations in moisture and roughness, to which the SAR signal is very sensitive.
• From May to the end of June, k increases moderately, probably due to a better detection of sunflower, as shown by the increase in the Fscore of this class.
• From July to October, gains in k and in Fscore are negligible. At this stage, all crops have reached their maximum development.
Whatever the phase, the irrigated maize class is characterized by higher Fscore values than the non-irrigated maize class. This result can be explained by the use of irrigation, which tends to spatially homogenize the crops, whereas non-irrigated maize exhibits larger spatial variations and is thus less easy to classify.
Figure 8 reveals that the most important features in the classification scheme are the VH/VV ratio and the Haralick textures. This result corroborates those of Inglada et al. [31]. There is a significant gain in the VI for the VH/VV ratio and energy-VH between April 17 and April 24, when they reach 6.46 and 4.63, respectively. This gain explains the increase in the Fscore of the maize classes observed during the same period (Figure 7B). Between mid-June and early July, we note successive gains in the VI for the VH/VV ratio and energy-VH features. These gains are correlated with the increasing Fscore of the sunflower class (Figure 7B). These features appear to be the most pertinent as they are more sensitive to crop architecture.

Multi-Temporal Classifications Using Optical and SAR Images
Figure 9 shows the evolution of k with time for the following five scenarios: (1) with the elevation (yellow curve); (2) with the optical images (blue curve); (3) with the radar images (green curve); (4) with optical and radar imagery (red curve); and (5) with optical and radar and elevation (grey curve).

Figure 9 shows that the combined use of radar and optical imagery (scenarios 4 and 5) leads to the best results (k = 0.89) throughout the whole season. The gain due to the elevation (scenario 5) is significant in the early season, from April to the end of May, and becomes negligible thereafter.
Between mid-June and the end of July, the combined use of optical and radar imagery leads to a k value significantly higher than those obtained with optical or radar imagery alone, which is of major importance for early crop detection. We also note that the performance of the optical classifications is higher than that of the radar classifications throughout the season.

Conclusions
In this paper, we investigate the usefulness of both radar and optical imagery combined with a digital elevation model to perform in-season classification of irrigated maize crops. The results reveal that the combined use of Sentinel-1 and Landsat 8 data improves the accuracy of the classifications with respect to those performed with optical or radar images alone. The gain is significant between April and the end of June, with k values increasing from 0.25 to 0.85 when the irrigation campaign starts, which is of major interest for water management. After this date, the gain in accuracy is less significant, with k values increasing slightly, from 0.85 to 0.9.
As in other studies [37,39,40], the elevation feature improves early classification until late May. After that, the gain becomes negligible.
Our results also reveal that Sentinel-1 images used alone do not allow reliable early detection and need to be combined with the elevation feature to reach a performance similar to that of the combined use of optical data and the DEM, but with a delay of three weeks.
However, the use of radar data is indispensable if we want to develop methods that are robust through time and over large zones as, unlike optical images, such data are not dependent on meteorological conditions. The use of radar data will also help to limit the use of temporally gap-filled optical data, which could cause artefacts in the cloud masking algorithm and thus in the classification process. As a follow-on to this study, we intend to investigate the usefulness of Sentinel-2 data instead of Landsat 8, combined with Sentinel-1. We also plan to evaluate the usefulness of phenological indicators such as those put forward by Bargiel [62] and of meteorological data as suggested by Peña-Arancibia et al. [41], which have been shown to significantly improve crop classifications for grasslands, maize, canola, sugar beets and potatoes.
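The incremental, in-season scheme underlying the results (retraining the classifier at each new acquisition date on all images available so far) can be sketched as follows. All data here are synthetic placeholders; the number of dates and features is arbitrary:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
n_pixels, n_dates, n_feat = 500, 10, 4
# Hypothetical time series: one feature block per acquisition date.
series = rng.normal(size=(n_pixels, n_dates, n_feat))
labels = rng.integers(0, 3, size=n_pixels)
train = rng.random(n_pixels) < 0.5  # random train/test mask

kappas = []
for t in range(1, n_dates + 1):
    # Incremental classification: use every image acquired up to date t.
    X = series[:, :t, :].reshape(n_pixels, -1)
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[train], labels[train])
    kappas.append(cohen_kappa_score(labels[~train], clf.predict(X[~train])))
```

Plotting `kappas` against acquisition dates would reproduce the shape of the curves in Figures 5, 7 and 9, given real features and labels.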

Figure 1. Location of the study zone with footprints of the Landsat-8 and Sentinel-1 images (top map) and field data (bottom map).


Figure 2. Statistical distribution of elevations per class. The number of pixels per class is mentioned beside each boxplot.


Figure 3. Time series of Landsat 8 images with cloud cover expressed in percent (red and black numbers) and Synthetic Aperture Radar (SAR) images with incidence angle expressed in degrees (green and blue numbers). ASC stands for ASCENDING orbit.

Pi = (number of pixels of class i in the reference dataset) / (total number of pixels in the reference dataset)   (2)

Ri = (number of pixels of class i in the reference dataset) / (total number of pixels of class i)
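Per-class scores of this kind, and the Fscore reported in the figures, can be derived directly from a confusion matrix. A minimal sketch with hypothetical counts, using standard per-class precision and recall (which may differ slightly from the exact ratios denoted Pi and Ri above):

```python
import numpy as np

# Hypothetical confusion matrix: rows = reference classes, columns = predicted.
cm = np.array([[50,  5,  2],
               [ 4, 60,  6],
               [ 1,  3, 40]])

# Diagonal = correctly classified pixels of each class.
precision = np.diag(cm) / cm.sum(axis=0)  # correct / total predicted as class i
recall = np.diag(cm) / cm.sum(axis=1)     # correct / total reference of class i
fscore = 2 * precision * recall / (precision + recall)
```

Each array has one entry per class, so per-class curves such as those in Figure 7B are just these values tracked over successive classification dates.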

Figure 5. Evolution of the Kappa coefficients (A) with time, with 95% confidence intervals, for the 3 scenarios. The Fscore is given for scenario 3 only (B).


Three scenarios were compared: (1) with the elevation feature only (Figure 7A, yellow curve); (2) with the radar features listed in Section 3.2 (Figure 7A, blue curve); and (3) with the combined use of elevation and radar imagery (Figure 7A, red curve).

Figure 7. Temporal evolution of the k (A) and Fscore (scenario 3) (B) of the incremental classification with 95% confidence intervals for the three scenarios using elevation and radar imagery.


Figure 8. Median of the Variable Importance cumulated for each SAR classification (legend bar). The number over each cell corresponds to the difference between the medians of two successive classifications.


Figure 9. Temporal evolution of Kappa and Fscore coefficients of the incremental classification with 95% confidence intervals for the five scenarios.


The Landsat 8 images were used as a surrogate for Sentinel 2 time series because Sentinel 2A was launched in June 2015, so the Sentinel 2 images did not cover the whole growing season of the crops. The classifier used was Random Forest.

Table 2. Confusion matrix using elevation alone. k = 0.68. Predicted classes are in columns and reference data are in rows.
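Kappa values such as the k = 0.68 reported here follow directly from the confusion matrix. A minimal sketch with a hypothetical two-class matrix (not the paper's actual counts):

```python
import numpy as np

# Hypothetical confusion matrix (rows: reference, columns: predicted).
cm = np.array([[80, 10],
               [ 5, 55]], dtype=float)

total = cm.sum()
po = np.trace(cm) / total  # observed agreement (overall accuracy)
# Chance agreement: sum over classes of (row marginal * column marginal).
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kappa = (po - pe) / (1 - pe)
```

This is the standard Cohen's kappa formulation; with real confusion matrices it reproduces the k values quoted in the tables and figures.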
