Article

A Copernicus Sentinel-1 and Sentinel-2 Classification Framework for the 2020+ European Common Agricultural Policy: A Case Study in València (Spain)

by Manuel Campos-Taberner *, Francisco Javier García-Haro, Beatriz Martínez, Sergio Sánchez-Ruíz and María Amparo Gilabert
Environmental Remote Sensing Group (UV-ERS), Universitat de València, Dr. Moliner, 50, 46100 Burjassot, València, Spain
* Author to whom correspondence should be addressed.
Agronomy 2019, 9(9), 556; https://doi.org/10.3390/agronomy9090556
Submission received: 16 July 2019 / Revised: 12 September 2019 / Accepted: 13 September 2019 / Published: 16 September 2019
(This article belongs to the Special Issue Remote Sensing of Agricultural Monitoring)

Abstract
This paper proposes a methodology for deriving an agreement map between the Spanish Land Parcel Information System (LPIS), also known as SIGPAC, and a classification map obtained from multitemporal Sentinel-1 and Sentinel-2 data. The study area comprises the province of València (Spain). The approach exploits predictions and class probabilities obtained from an ensemble method of decision trees (boosting trees). The overall accuracy reaches 91.18% when using only Sentinel-2 data and increases up to 93.96% when Sentinel-1 data are added in the training process. Blending both Sentinel-1 and Sentinel-2 data yields a remarkable classification improvement, ranging from 3.6 to 8.7 percentage points, over shrubs, forest, and pasture with trees, which are the most easily confused classes in the optical domain, as demonstrated by a spectral separability analysis. The derived agreement map is built by combining per-pixel classifications, their probabilities, and the Spanish LPIS. This map can be exploited in the decision-making chain for subsidy payments to cope with the 2020+ European Common Agricultural Policy (CAP).


1. Introduction

The Common Agricultural Policy (CAP) is one of the major European Commission (EC) policies. With a budget of €385 billion [1] (28.5% of the overall European budget in the period 2021–2027), the CAP provides support to European farmers with the aim of ensuring the provision of affordable food while also guaranteeing its quality on European markets. In this context, €65.2 billion (68.9% of the total CAP budget) is devoted to direct payments. CAP requirements for payments are bound to agricultural land cover and land use. Every member state authority must verify farmers' declarations in order to comply with legal cross-compliance control mechanisms [2]. If farmers' declarations do not meet the CAP requirements and standards, the associated payments can be canceled. The paying agencies of every member state are responsible for supervising at least 5% of the received declarations. This is achieved by means of in situ field inspections and photo interpretation of very high-resolution images acquired from airborne or satellite platforms. Field inspections are expensive, especially when large areas have to be assessed. In addition, photo interpretation is a time-consuming task, which may lead to different results depending on both the inspector's visual skills and the quality of the images. On 22 May 2018, the EC adopted a new regulation for 2020+ CAP payments that includes the possibility of using Earth Observation (EO) data from the Copernicus Sentinel-1 and Sentinel-2 programs for monitoring farmers' parcels and assessing cross-compliance [3]. This will be applied to all declarations, whereas only the 5% of declarations concerned by eligibility criteria, commitments, and other obligations that cannot be monitored with EO data will be supervised through in situ checks. This forthcoming regulation also attempts to alleviate administrative red tape in order to facilitate the interaction between farmers and paying agencies.
Some of the most widely used cost-free remote sensing data in land cover and land use studies are provided by the National Aeronautics and Space Administration (NASA) (e.g., Moderate Resolution Imaging Spectroradiometer (MODIS) data [4]) and the United States Geological Survey (USGS) (e.g., Landsat data [5,6]). They deliver spectral information on the land surface with global coverage at per-pixel spatial resolutions ranging from 250 m–1 km, in the case of MODIS, to 30 m in the case of Landsat. Temporal resolution varies from 16 days for Landsat to 1–16 days for MODIS (depending on the temporal compositing period of the product). These features make them suitable for classification and vegetation monitoring over large areas. However, their spatial resolution can be a limitation for classification, mainly over very heterogeneous areas characterized by small landholdings. In the frame of the European Space Agency (ESA) Copernicus program, data from the Sentinel-1 [7] and Sentinel-2 [8] missions are freely disseminated. These data provide global information at decametric spatial resolution every 5–6 days, offering promising possibilities for land use classification. As a result, several classification studies have recently sprung up in the literature with the aim of classifying crop types, tree species, and grassland habitats [9,10,11,12,13].
In general, crop classification can be addressed at the pixel level or at the parcel level (i.e., object-based), depending on both the spatial resolution of the available images and the characteristics of the target area. In object-based approaches, the unit to be classified is the so-called object, which is built up from image segmentation, whereas in pixel-based approaches, pixels are classified individually without taking spatial aggregation into account. Irrespective of the classification unit, EO techniques for deriving classification maps are usually based on supervised learning. Among the most widely used supervised classification algorithms are decision trees (DT) [14], k-nearest neighbors (k-NN) [15], support vector machines (SVM) [16], and random forests (RF) [17]. SVM and RF usually tend to outperform DT and k-NN [18,19,20]. In addition, several studies in the literature combine classifiers with multitemporal remote sensing data [21,22], leading to higher classification accuracies.
Although many studies have undertaken automatic vegetation classification from remote sensing data, at the time of writing only a few of them are set up to cope with the context of European CAP subsidies [23,24,25,26]. Moreover, none of them deals with the need for updating the Land Parcel Identification System (LPIS), which is key both for improving LPIS reliability and the distribution of CAP payments and for reducing administrative costs [27]. In this context, this paper describes a classification framework based on Sentinel-1 and Sentinel-2 developed for improving subsidy control for the CAP in the Valencian Community (Spain) and assessing the LPIS update. The classification exploits a time series of both optical and microwave data, vegetation indices (VIs), as well as features provided by ensembles of decision trees. The main novelties are two-fold. First, we benefit from the full capabilities of ensemble classifiers by using both the class predictions and the class probabilities obtained from the ensemble of decision trees, which are seldom exploited together in remote sensing classification studies. Second, we combine both the prediction and its probability with the Spanish LPIS to derive an agreement map between the derived classification and the Spanish LPIS. This demonstrates its applicability for decision making in the framework of CAP subsidies.

2. Data Collection

2.1. Study Area

The study area selected in this work corresponds to the land portion of an entire Sentinel-2 tile covering part of the Valencian Community (see red square in Figure 1). The Sentinel-2 tile is T30SYJ, covering 110 km × 110 km. The selected study area falls within a Sentinel-1 tile in descending mode, which allows for the combination of both sources of information. The study area is located over the province of València (east of Spain), which has a typical Mediterranean climate, with an average annual temperature of 17 °C and an average humidity of 65%. The dominant non-urban classes over the study area are shrubs (SH), fruits (FR), citrus (CI), forest (FO), rice crops (RI), olive grove (OL), pasture with trees (PAT), vineyards (VI), dried fruit (DFR), and pasture (PA), which altogether cover the majority of the study area.

2.2. Sentinel-1 and Sentinel-2 Data

ESA provides open access to Copernicus Sentinel-1 and Sentinel-2 data through the Sentinels Scientific Data Hub (https://scihub.copernicus.eu/). In this study, we used the Sentinel-1 Level-1 Ground Range Detected (GRD) product acquired in Interferometric Wide swath (IW) mode, which provides data in VV and VH polarizations. The data were calibrated to compute the backscatter coefficient (sigma nought, σ0), and subsequently multilooking with a 2 × 2 window was applied in order to mitigate the speckle noise. Finally, in order to correct geometric distortions caused by slant range, layover, shadows, and foreshortening, a geometric correction was performed using the digital elevation model from the Shuttle Radar Topography Mission at 1″ resolution (SRTM 1sec Grid). These processing steps were undertaken using the Sentinel Application Platform (SNAP).
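For illustration, the 2 × 2 multilook and the conversion of σ0 to decibels can be sketched in a few lines of NumPy. This is a simplified stand-in for the SNAP operators actually used (calibration and terrain correction are not reproduced here), and the function names are ours:

```python
import numpy as np

def multilook_2x2(sigma0):
    """Average non-overlapping 2 x 2 blocks of a calibrated sigma-0
    intensity image to mitigate speckle (2 x 2 multilook)."""
    rows, cols = sigma0.shape
    r, c = rows - rows % 2, cols - cols % 2          # crop to even size
    blocks = sigma0[:r, :c].reshape(r // 2, 2, c // 2, 2)
    return blocks.mean(axis=(1, 3))

def to_db(sigma0, eps=1e-10):
    """Convert linear backscatter to decibels, guarding against zeros."""
    return 10.0 * np.log10(np.maximum(sigma0, eps))
```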
Sentinel-2 top-of-atmosphere reflectances (Level-1C) were downloaded and atmospherically corrected using the Sen2Cor toolbox [28] to obtain top-of-canopy reflectances for all bands except band B10 (cirrus band), which does not contain surface information over the study area.
Sentinel-2 Level-1C data were downloaded jointly with the Sentinel-1 GRD data from April 2017 to March 2018, comprising 30 Sentinel-1 and 11 Sentinel-2 images for the study area (see Table 1). Only completely cloud-free Sentinel-2 images were considered throughout the temporal window from April 2017 to March 2018, which was selected to account for a complete vegetation cycle of all classes within the study area.

2.3. Sistema de Información Geográfica de Parcelas Agrícolas (SIGPAC)

Several European countries, including Spain, have their own Land Parcel Identification System (LPIS), which is updated annually [23,24,25]. In Spain, the LPIS is the Sistema de Información Geográfica de Parcelas Agrícolas (SIGPAC), which assigns a unique land use (SIGPAC use) to every agricultural parcel. SIGPAC allows the geographic identification of the parcels declared by farmers and livestock farmers under any subsidy regime related to the area cultivated or used for livestock. It was built upon an orthophotography mosaic covering the Spanish territory, on which the cadastral information collected by the authorities is superimposed (https://www.fega.es).
SIGPAC groups the uses into categories including natural vegetation, urban zones, and mixed classes. In this study, we selected the 10 main classes, which cover the vast majority of agricultural parcels over the study area (listed in Section 2.1). We excluded urban areas and buildings, mixed classes, water bodies, and poorly represented classes. Figure 2 shows the considered SIGPAC classes over the study area in 2017 and the percentage coverage of every class.

2.4. Ground Truth

The Valencian regional government, specifically its Department of Agriculture, Environment, Climate Change, and Rural Development (http://www.agroambient.gva.es/va/), provided us with an extensive ground truth regarding the SIGPAC classes (see Table 2). It was obtained through several field campaigns and parcel inspections, as well as from the expert knowledge of the regional authorities' technical agronomists.

3. Methodology

The methodology is outlined in Figure 3 and includes the following main steps: (1) collection of the ground truth and processing of the selected Sentinel-1 and Sentinel-2 images as described in Section 2.2; (2) feature selection and spectral separability analysis; (3) assessment of the accuracy of different classifiers over a test set never used during the training process; (4) selection of the best classification method and derivation of both a classification map and a class probability map over the study area (masking out the sea and non-interest areas); and (5) generation of an agreement map between the derived classification map and SIGPAC.

3.1. Feature Selection

Although the downloaded Sentinel-2 images are cloud free, the atmospheric correction may introduce artifacts or slight changes in the reflectance of the same classes on different dates (changes not related to vegetation phenology), which would introduce noise in the training process and lead to worse classification results. To discard non-optimal Sentinel-2 images, as well as to alleviate the Hughes phenomenon [29] (i.e., the decrease in classification performance as the number of features increases), a Sentinel-2 image selection was made using the Jeffries-Matusita (JM) [30] and Bhattacharyya (BH) [31] distances. The JM distance is a measure of the average distance between two classes, defined as
$$J_{ij} = 2\left(1 - e^{-B_{ij}}\right)$$
where Bij is the Bhattacharyya distance given by
$$B_{ij} = \frac{1}{8}\,(\mu_i - \mu_j)^t \left[\frac{\Sigma_i + \Sigma_j}{2}\right]^{-1} (\mu_i - \mu_j) + \frac{1}{2}\,\ln\!\left[\frac{\left|\left(\Sigma_i + \Sigma_j\right)/2\right|}{\sqrt{\left|\Sigma_i\right|\left|\Sigma_j\right|}}\right]$$
where µi and µj are the mean vectors of the two considered classes and Σi and Σj are the corresponding covariance matrices. The JM distance ranges from 0 to 2, with larger values indicating greater separability between classes, whereas a larger Bij corresponds to a greater average distance between classes. For normally distributed classes, Bij takes the form given above [30].
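As a minimal sketch of this selection criterion, the two distances can be computed from per-class samples as follows. This is a NumPy illustration of the equations above under the Gaussian assumption, not the authors' code:

```python
import numpy as np

def bhattacharyya(x_i, x_j):
    """Bhattacharyya distance between two classes, each given as an
    (n_samples, n_features) array, assuming Gaussian class statistics."""
    mu_i, mu_j = x_i.mean(axis=0), x_j.mean(axis=0)
    cov_i = np.cov(x_i, rowvar=False)
    cov_j = np.cov(x_j, rowvar=False)
    cov = 0.5 * (cov_i + cov_j)                       # pooled covariance
    diff = mu_i - mu_j
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)  # Mahalanobis-like term
    _, ld = np.linalg.slogdet(cov)                     # log-determinants for
    _, ld_i = np.linalg.slogdet(cov_i)                 # numerical stability
    _, ld_j = np.linalg.slogdet(cov_j)
    term2 = 0.5 * (ld - 0.5 * (ld_i + ld_j))
    return term1 + term2

def jeffries_matusita(x_i, x_j):
    """JM distance in [0, 2]; larger values mean more separable classes."""
    return 2.0 * (1.0 - np.exp(-bhattacharyya(x_i, x_j)))
```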
In addition to the selected Sentinel-2 images, a set of VIs (see Table 3) was stacked onto the Sentinel-2 feature space (i.e., used as predictors during the training). In particular, we considered the widely used Normalized Difference Vegetation Index (NDVI) [32] and its red-edge version (NDVI705) [33]; the Modified Chlorophyll Absorption in Reflectance Index (MCARI) [34], which accounts for chlorophyll changes; the Plant Senescence Reflectance Index (PSRI) [35], which is sensitive to plant senescence; and, in order to minimize soil background noise, both the Optimized Soil Adjusted Vegetation Index (OSAVI) [36] and its red-edge version (OSAVI705).
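The indices in Table 3 are simple band arithmetic. A sketch using Sentinel-2 reflectances named by their center wavelengths (the argument names are our notation) could be:

```python
def vegetation_indices(r560, r665, r705, r740, r842):
    """Vegetation indices of Table 3 computed from Sentinel-2
    reflectance arrays named by their center wavelengths (nm)."""
    ndvi = (r842 - r665) / (r842 + r665)
    osavi = (r842 - r665) / (r842 + r665 + 0.16)
    ndvi705 = (r842 - r705) / (r842 + r705)
    osavi705 = (r842 - r705) / (r842 + r705 + 0.16)
    mcari = ((r705 - r665) - 0.2 * (r705 - r560)) * (r705 / r665)
    psri = (r665 - r560) / r740
    return dict(NDVI=ndvi, OSAVI=osavi, NDVI705=ndvi705,
                OSAVI705=osavi705, MCARI=mcari, PSRI=psri)
```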
The selected Sentinel-2 images and VIs were stacked jointly with the available Sentinel-1 images to build the remote sensing data set used for the classification. Hence, the final training feature space was formed by both optical and SAR data. From the microwave domain, the VV and VH polarizations were used, as well as the VH/VV ratio, which is employed in crop classification because it is considered a good indicator of vegetation status [10,37]. In addition, to assess the impact of Sentinel-1 on the classification, an experiment was undertaken in which the Sentinel-1 data were removed from the six selected Sentinel-2 images and VIs; in that case, the training feature space was formed only by optical data.
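Conceptually, the final feature space is a per-pixel concatenation of all optical bands, VI values, and SAR features across dates. A hedged sketch follows; the array shapes are our assumptions from the feature counts given in Table 7 (6 Sentinel-2 dates × 12 bands, 6 VI dates, 30 Sentinel-1 dates × 3 bands = 168 features):

```python
import numpy as np

def build_feature_matrix(s2_dates, vi_dates, s1_dates):
    """Stack multitemporal features into an (n_pixels, n_features) matrix.
    s2_dates: list of (rows, cols, 12) reflectance arrays (6 dates);
    vi_dates: list of (rows, cols) VI arrays (6 dates);
    s1_dates: list of (rows, cols, 3) arrays with VV, VH, VH/VV (30 dates)."""
    layers = (list(s2_dates)
              + [v[..., np.newaxis] for v in vi_dates]
              + list(s1_dates))
    cube = np.concatenate(layers, axis=2)     # (rows, cols, 168)
    return cube.reshape(-1, cube.shape[2])    # one feature vector per pixel
```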

3.2. Classifiers

The ground truth was associated with the remote sensing data and then split into two sets, a training set and a test set, comprising 2/3 and 1/3 of the pixels, respectively. The considerable effort made by the Valencian authorities allowed us to build a large training set with balanced classes, thus avoiding the issues introduced by imbalanced training samples [12].
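A sketch of the split is given below. The 2/3-1/3 proportion is from the text; the placeholder data, stratification, and random seed are our assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder ground truth: ~200k labeled pixels with 168 features and
# 10 classes (see Table 2 for the real per-class pixel counts).
X = np.random.rand(200_000, 168)
y = np.random.randint(0, 10, size=200_000)

# Stratify to keep the per-class balance in both subsets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, stratify=y, random_state=0)
```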
Several supervised classification algorithms were trained using the training set, and their respective accuracies were assessed over the test set (see results in Section 4.2). We used linear discriminant analysis (LDA), a classification algorithm based on a linear transformation of variables that reduces the original feature space to a lower-dimensional one maximizing the separability among class distributions [38]. In addition, non-linear approaches such as quadratic discriminant analysis (QDA) and the k-NN classifier were considered. The k-NN algorithm is based on neighbor similarity, where the prediction is made taking into account the k nearest neighbors, with k a positive integer whose typical value depends on the spatial features of the image. We also considered SVM, a kernel method (i.e., a method that maps the input data into a Hilbert space) that divides the training data using hyperplanes that maximize the margin [16].
Finally, for a more comprehensive comparison, we also used ensembles of classifiers based on decision trees, built with different techniques such as bagging, boosting, and random forests [39]. In the bagging ensemble, every decision tree is trained on a random subset of the training samples; the same sample can be selected for training all, several, or even none of the decision trees composing the ensemble. RF is similar to bagging, but the difference lies in the split of the nodes of every tree: the bagging method uses all input features for splitting the nodes, whereas in the RF approach the split is made considering a randomly selected subset of features (typically the square root of their number). In the boosting method, trees are trained using all training samples in an iterative approach that increases the weights of samples classified incorrectly in previous training rounds; multiple iterations are used to generate a single composite strong learner. In this study, the AdaBoost.M2 algorithm [40] was used for boosting decision trees. The algorithm takes a bootstrap sample in every boosting iteration, which is used for building a weak classifier; the pseudo-loss is then computed and used to update the sampling weights for the next iteration. The final assignment for every pixel is given by the most frequent class among all decision trees in the ensemble.
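As an illustrative counterpart (not the authors' implementation, which used AdaBoost.M2), scikit-learn's AdaBoostClassifier over decision trees follows the same boost-and-vote logic; the tree depth here is our arbitrary choice:

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Boosted ensemble of decision trees; scikit-learn implements the SAMME
# family of AdaBoost variants rather than AdaBoost.M2.
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=10),
                         n_estimators=1000)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print(f"OA = {100 * accuracy_score(y_test, y_pred):.2f}%")
print(f"kappa = {cohen_kappa_score(y_test, y_pred):.2f}")
```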

3.3. Agreement Map

An agreement map between SIGPAC and the classification map derived from Sentinel-1 and Sentinel-2 was calculated. For this purpose, a series of logical rules was proposed in order to discriminate eight levels of agreement. The levels of agreement were obtained by blending the information provided by SIGPAC with both the derived classification and confidence maps, as described in Table 4.
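The rules of Table 4 translate directly into a small per-pixel decision function. A sketch (the function name is ours, and the handling of exact boundary values is our assumption):

```python
def agreement_level(same_class: bool, confidence: float) -> str:
    """Map the Table 4 rules to a level of (dis)agreement.
    same_class: classifier prediction equals the SIGPAC use;
    confidence: probability of the predicted class, in percent."""
    if confidence > 95:
        band = "Very high"
    elif confidence > 70:
        band = "High"
    elif confidence > 50:
        band = "Significant"
    else:
        band = "Low"
    return f"{band} {'agreement' if same_class else 'discrepancy'}"
```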

4. Results and Analysis

4.1. Sentinel-2 Spectral Separability Analysis

The Sentinel-2 image selection was made using the aforementioned JM and BH distances over the optical domain (i.e., the Sentinel-2 bands). Only the images that maximize the class separability were used, which led to the choice of 6 out of the 11 available cloud-free Sentinel-2 images. Table 5 shows the JM (upper diagonal) and BH (lower diagonal) distances obtained for the first selected Sentinel-2 image, acquired on 6 May 2017 over the study area (for the sake of brevity, the remaining JM/BH tables are shown in the Supplementary Material, Tables S1–S5). It can be observed that several classes reach JM values close to 2, which highlights the good separability between those pairs of classes. It is worth mentioning that rice is the most distinguishable class, exhibiting JM values very close to 2 and the highest BH values. On the other hand, forest, pasture with trees, and shrubs are the classes presenting the lowest JM and BH values (<0.5), which denotes their significant mutual spectral confusion, making them, a priori, more difficult to classify. Similarly, fruits and olive grove also present short distances between them and are difficult to discriminate. In addition, the JM and BH distances were computed taking into account all six selected Sentinel-2 images jointly (see Table 6). In this case, the results are better than the ones obtained individually for each of the six selected Sentinel-2 images, which evinces that the use of multitemporal features improves the spectral separability among classes. However, for forest, pasture with trees, and shrubs, although the spectral separability increases, it remains the lowest.

4.2. Accuracy Assessment

Different classifiers were trained using the training set, and their respective accuracies were assessed over the test set (constituted by the one third of the pixels never used during the training). Table 7 shows the overall accuracy (OA) and kappa index (κ) [41] of the results, taking into account the different features used in the classifications, i.e., which vegetation index, if any, was stacked onto the Sentinel-1 and Sentinel-2 data. Among all evaluated classifiers and vegetation indices, the ensembles of decision trees revealed the highest accuracies, with the boosting approach being the most accurate when using the temporal information of OSAVI705. It is interesting to highlight that the boosting approach outperforms both the other ensemble approaches and the rest of the classifiers irrespective of the VI. If no VI is used in addition to the selected images, the classification performance is reduced (i.e., OA = 88.19% and κ = 0.87 without VIs vs. OA = 93.96% and κ = 0.91 in the best case).
Table 8 shows the corresponding confusion matrix (i.e., for the best result in Table 7). In general, the results are good: the majority of classes reveal user's accuracy (UA) and producer's accuracy (PA) higher than 90% and 94%, respectively. However, shrubs, forest, and pasture with trees presented lower accuracies, ranging from 82% to 88%, with confusion among them. This behavior is in accordance with the a priori analysis carried out using the JM and BH distances. The best result is obtained over rice, where both UA and PA reach 99.9%, whereas pasture with trees is the class with the worst UA (82%), and shrubs is the class with the worst PA (81.8%).
As highlighted by the aforementioned results, the family of tree ensembles provided the highest accuracies. The sensitivity of the ensembles of decision trees to the number of trees was analyzed by running ensembles with an increasing number of trees (see Figure 4). OA was calculated from 1 to 100 trees in steps of 10, and from 100 to 1000 trees in steps of 100. The results show that 1000 decision trees are sufficient for obtaining stable and accurate predictions. Belgiu and Drăguţ [42] suggested that 500 trees are acceptable for remote sensing studies; however, other studies use a number of trees ranging from 10 up to 5000 depending on the number of features and the spectral-temporal characteristics of the remote sensing data used [43,44].
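The sensitivity analysis can be reproduced schematically by sweeping the ensemble size and recording OA on the test set (continuing the sketches above; the grid follows the text, and the tree depth remains our assumption):

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# 1 to 100 trees in steps of 10, then 200 to 1000 in steps of 100.
sizes = [1] + list(range(10, 101, 10)) + list(range(200, 1001, 100))
oa_per_size = []
for n in sizes:
    model = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=10),
                               n_estimators=n)
    model.fit(X_train, y_train)
    oa_per_size.append(accuracy_score(y_test, model.predict(X_test)))
```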
We also assessed the effect of removing the Sentinel-1 data from the classification; in this case, the training feature space was formed only by optical data. Removing the Sentinel-1 images (microwave data) worsened the classification accuracies in all cases (see Table 9) compared with the respective results in Table 7. The best results were again obtained when fusing the six selected Sentinel-2 multispectral images and the OSAVI705, and again the boosting approach outperformed the rest of the classifiers (OA = 91.18% and κ = 0.88). Table 10 shows the confusion matrix over the test set obtained in this case. The results are not as accurate as in the previous case using both Sentinel-1 and Sentinel-2 data (see Table 8), revealing a general decrease in accuracy for every class. In terms of percentage points (pp), the loss in UA is greatest over shrubs (5.5 pp), forest (8.7 pp), and pasture with trees (3.6 pp), while in PA the loss is 7.5 pp for shrubs, 5.6 pp for forest, and 4.5 pp for pasture with trees. In addition, the confusion among these classes also increased. This confirms that the combined use of Sentinel-1 and Sentinel-2 is useful for improving classification results.
For completeness, an experiment taking into account all 11 available Sentinel-2 images in the training was conducted and revealed worse results (OA = 86.21% and κ = 0.82 in the best case; for the sake of brevity, the results are shown in Table S6 of the Supplementary Material). This confirms that the previous Sentinel-2 spectral separability analysis was useful for improving the classification results.

4.3. Classification, Class Probability and Agreement Maps

After the accuracy assessment over the test set, the best classifier (i.e., the ensemble of boosting trees) was executed over the study area (using both optical and SAR data as predictors) to obtain a classification map (see Figure 5a). In addition, the ensemble approach provides a per-pixel probability of belonging to every class, taking into account the predictions of every tree forming the ensemble of 1000 boosting trees. The per-pixel probability of the class with the maximum probability, which is indeed the predicted class, was used for deriving a map that can be interpreted as the confidence of the classification (see Figure 5b). Eighty percent of the pixels were classified with a confidence higher than 50% (see the greenish colored areas in Figure 5b and the cumulative histogram in Figure 6), and half of the pixels were classified with a confidence greater than 77% (see Figure 6). The maximum confidence was achieved over rice pixels (reaching almost 100%), which, jointly with the spectral analysis undertaken in Section 4.1, corroborates the high reliability of rice classification from multitemporal Sentinel-1 and Sentinel-2 data.
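The confidence map is simply the maximum per-class probability returned by the ensemble at each pixel. A sketch, where X_scene, rows, and cols are our placeholders for the full-scene feature matrix and the image size:

```python
import numpy as np

proba = clf.predict_proba(X_scene)                    # (n_pixels, n_classes)
pred_map = clf.classes_[np.argmax(proba, axis=1)].reshape(rows, cols)
conf_map = 100.0 * proba.max(axis=1).reshape(rows, cols)  # confidence in %
```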
Following the rules of Table 4 described in Section 3.3, we derived the agreement between the predictions obtained from Sentinel-1 and Sentinel-2 data and SIGPAC (see Figure 7a). Most of the pixels show bluish colors, which indicates consistency between the classifications performed in this study and the official land parcel information system of Valencia. The highest agreement is found over rice crops, whereas significant, high, and very high discrepancies are found over less than 3.5% of the total classified area across different classes (see Figure 7b). In addition, only 0.1% of the pixels are labeled as very high discrepancy, which highlights the robustness of the classifier.

4.4. Utility of the Derived Agreement Map

We provide two examples of the utility of the derived agreement map. The first example is shown in Figure 8, which displays a zoomed area of the three maps mentioned above, containing a very high discrepancy area (highlighted in red in the western part of the displayed maps). It is classified as forest, while SIGPAC labels it as citrus. Since the probability of the classification map over this zone is >95%, following the rules in Table 4, the agreement map assigns a very high discrepancy to it. If this polygon is superimposed over the official 2017 orthophoto of the zone (provided by the Institut Cartogràfic Valencià, València, Spain), it can be observed that the class corresponds to forest instead of citrus. This means that, in this case, SIGPAC should be updated.
The second example, exhibited in Figure 9, shows a case of high discrepancy between the derived classification and SIGPAC. The area highlighted in red is classified as fruits, while SIGPAC labels it as citrus; the agreement map therefore assigns a high discrepancy to it. If this polygon is superimposed over the official 2017 orthophoto of the zone (Figure 9, middle), no conclusion can be drawn, so a field inspection is recommended to verify the correct class. If the class is identified as citrus, the corresponding pixels should be labeled as citrus and incorporated in future trainings to avoid similar errors. If the field check verifies that the class is fruits, SIGPAC should be updated.

5. Discussion

It is recognized that object-based approaches can be useful for delineating polygons over agricultural areas, but their effectiveness for delineating and classifying agricultural parcels using Sentinel-2 time series is limited [12,45]. In addition, other studies using Sentinel-2 for crop mapping did not find advantages of object-based over pixel-based approaches, also highlighting the impossibility of using textural features for small objects [9,46]. In this study, the classification was carried out at the pixel level, since both the classification of an object/polygon, which can be obtained by aggregating the results of a pixel-level classification, and the reporting of different classes within a polygon are relevant for CAP payments.
A key aspect of every classification process is the selection of training samples. The close collaboration with the regional authorities, particularly with their technicians, allowed us to obtain a very high number of training samples per class. Our results derived from the JM-BH measures revealed that the highest spectral separabilities among classes were obtained using only six of the available cloud-free Sentinel-2 images; the inclusion of the remaining images would decrease accuracy. This is mainly due to the non-optimal spectral separability among classes on those dates and to the addition of redundant information, which may introduce noise into the feature space of the training samples. This type of feature selection can be useful for avoiding the Hughes phenomenon and improving classification accuracy. However, this process must be undertaken carefully since, if not properly addressed, relevant information can be lost [20].
The classification results reveal that the joint use of Sentinel-1, Sentinel-2, and derived vegetation indices improves classification accuracies irrespective of the classification algorithm. Zhong et al. [47] recently reported that the use of a single multitemporal vegetation index outperformed the use of the available Landsat reflectances alone; however, unlike in the present study, their classes were not spectrally separable. It is interesting to note that the best results were obtained when stacking the OSAVI versions (i.e., using either the red band (ρ665) or the red-edge band (ρ705)). This could be partly due to the ability of OSAVI to minimize the noise introduced by soil in both sparse and dense vegetation canopies [36]. In addition, the experiments reveal that the use of the red edge does not always contribute to improving classification accuracies: OSAVI705 seems to capture the information provided by the red edge in the classification process and slightly outperforms OSAVI, whereas classification accuracy decreases when using NDVI705 compared with NDVI. Furthermore, the results confirmed that the combination of Sentinel-1 and Sentinel-2 leads to improved classification accuracy, which highlights the complementary information conveyed by the two missions. In particular, our results demonstrate that the inclusion of Sentinel-1 data allows better discrimination of classes such as shrubs, forest, and pasture with trees, which present rather similar features in the optical domain of Sentinel-2.
Classification accuracy depends on a number of factors, including the number of training and test samples, the number of features and their relevance, the number of classes and their similarity, and the classification algorithm itself. For instance, Schmedtmann and Campagnolo [24] proposed a reliable crop identification scheme (12 classes, OA = 68%) over Portuguese parcels covering 1057 km2, and Sitokonstantinou et al. [25] developed a parcel-based crop identification scheme (nine classes, OA = 91.3% and κ = 0.87) for CAP and greening obligations over a Spanish area of 215 km2. Kanjir et al. [48] developed a change detection method to support land use control within CAP activities over three areas of 7 km2 in Slovenia, and Blaes et al. [23] developed a framework for area-based subsidy control in Belgium. The present study reports an overall accuracy reaching ≈94%, which is similar to the aforementioned studies. A remarkably high accuracy was found over rice, highlighting that its multitemporal features are extremely informative for rice phenology and classification, as also reported in other studies [47,49,50,51].
The number of available Sentinel-2 images proved to be sufficient for obtaining accurate and satisfactory classification results for most of the classes. This means that the temporal behavior of the majority of classes is captured with as few as six Sentinel-2 multispectral images. However, there was a lack of cloud-free Sentinel-2 images during the summer of 2017, whose data could have been relevant for our study. The inclusion of Sentinel-1 data helped mitigate this effect and also incorporated information on vegetation structure that is not provided by optical sensors such as the MSI onboard Sentinel-2. The data provided by the Sentinel-1 VV and VH polarizations include information on vegetation status and structure and on their interaction with the soil background [52,53]. In addition, the VH/VV ratio can provide time series information on vegetation and soil properties, such as soil moisture, while also reducing SAR acquisition system errors [10,54,55].
With the implementation of the new 2020+ CAP regulation, field inspections are earmarked for only 5% of the declarations. Therefore, the identification of zones with well-founded suspicion of incompatibility between farmers' declarations and SIGPAC is key. The derived agreement map is intended to help in this process by highlighting agreements and discrepancies between the classification map retrieved from Sentinel data and SIGPAC. Certainly, the definition of the logical rules for deriving the agreement map influences the final result; the rules proposed in Table 4 are reasonable. When some level of discrepancy is found between the maps, photo interpretation using very high-resolution imagery, such as orthophotography, is recommended. Then, if the interpretation tips the scale in favor of the reported classification, SIGPAC should be updated with this class; otherwise, those pixels should be labeled as in SIGPAC and can be incorporated into the training process to avoid similar misclassifications in the future. If no conclusion can be drawn, a field inspection is needed. In this study, the agreement map reported less than 3.5% of the pixels as significant, high, or very high discrepancies; hence, even in the extreme case of it being impossible to establish a class by photo interpretation, the number of field visits would fall within the range allowed by the 2020+ CAP regulation. If the level of discrepancy is low, a field inspection is not recommended, just photo interpretation, since the confidence of the derived classification is not very high (<50% according to the rules established in Table 4) and the classifier may not be performing well over those pixels.

6. Conclusions

A classification framework based on the predictions provided by an ensemble of decision trees using a boosting approach has been developed. Ten classes were classified covering an area of 3568 km2, which is much larger than in previous studies dealing with the CAP framework. In addition, we also benefit from the per-class probability given by the ensemble in order to derive an agreement map between the obtained classifications and the official LPIS of Spain (SIGPAC).
The results revealed an overall accuracy of 93.96% and κ = 0.91 when using both Sentinel-1 and Sentinel-2 data with the boosting approach in an ensemble of decision trees. The addition of Sentinel-1 to Sentinel-2 data improved the classification accuracy with regard to using only Sentinel-2 data: the accuracies improved in all classes, and the inter-class confusion was reduced mainly for shrubs, forest, and pasture with trees, stressing the usefulness of blending Sentinel-1 and Sentinel-2 data. The use of VIs as predictors improved classification accuracies in all cases, with OSAVI705 being the VI that produced the best results.
The derived agreement map reported less than 3.5% of the pixels as significant, high, or very high discrepancies with respect to the classification obtained from Sentinel data. If a field inspection were needed to check the class, the number of field visits would fall within the range allowed by the 2020+ CAP regulation. The agreement map is a clear example of the utility of the proposed methodology and is intended to help in the CAP subsidy payment process. The derivation of this map is a major feature of this work, which makes it suitable for updating the Spanish SIGPAC, as suggested by the European Court of Auditors and in line with the 2020+ CAP regulation.

Supplementary Materials

The following are available online at https://www.mdpi.com/2073-4395/9/9/556/s1, Table S1: Jeffries–Matusita (JM) and Bhattacharyya (BH) distances for Sentinel-2 image acquired on 15 June, 2017, Table S2: Jeffries–Matusita and Bhattacharyya distances for Sentinel-2 image acquired on 13 October, 2017, Table S3: Jeffries–Matusita and Bhattacharyya distances for Sentinel-2 image acquired on 17 December, 2017, Table S4: Jeffries–Matusita and Bhattacharyya distances for Sentinel-2 image acquired on 21 January, 2018, Table S5: Jeffries–Matusita and Bhattacharyya distances for Sentinel-2 image acquired on 7 March, 2018, Table S6: Overall accuracy (OA) and kappa index (κ) obtained in the test set for every classifier using all (11) available Sentinel-2 images. Best result highlighted in bold.

Author Contributions

All authors contributed equally to this work.

Funding

This research was funded by the Conselleria d’Agricultura, Medi Ambient, Canvi Climàtic i Desenvolupament Rural (Generalitat Valenciana) through agreement CNME18/71450/1.

Acknowledgments

The authors acknowledge the Conselleria d'Agricultura, Medi Ambient, Canvi Climàtic i Desenvolupament Rural (Generalitat Valenciana) for providing funding, ground data, and SIGPAC.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. European Commission. Fact Sheet. EU Budget: The Common Agricultural Policy beyond 2020. Available online: http://europa.eu/rapid/press-release_MEMO-18-3974_en.htm (accessed on 1 April 2019).
  2. European Union. Regulation (EU) No. 1306/2013. Available online: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2013:347:0549:0607:EN:PDF (accessed on 1 December 2018).
  3. European Union. Commission Implementing Regulation (EU) 2018/746 of 18 May 2018 amending Implementing Regulation (EU) No 809/2014 as regards modification of single applications and payment claims and checks. Off. J. Eur. Union 2018, 61, L125CL1. [Google Scholar]
  4. Friedl, M.A.; Sulla-Menashe, D.; Tan, B.; Schneider, A.; Ramankutty, N.; Sibley, A.; Huang, X. MODIS Collection 5 global land cover: Algorithm refinements and characterization of new datasets. Remote Sens. Environ. 2010, 114, 168–182. [Google Scholar] [CrossRef]
  5. Congalton, R.G.; Oderwald, R.G.; Mead, R.A. Assessing Landsat Classification Accuracy Using Discrete Multivariate Analysis Statistical Techniques. Photogramm. Eng. Remote Sens. 1983, 49, 1671–1678. [Google Scholar]
  6. Song, C.; Woodcock, C.E.; Seto, K.C.; Lenney, M.P.; Macomber, S.A. Classification and change detection using landsat tm data: When and how to correct atmospheric effects? Remote Sens. Environ. 2001, 75, 230–244. [Google Scholar] [CrossRef]
  7. Torres, R.; Snoeij, P.; Geudtner, D.; Bibby, D.; Davidson, M.; Attema, E.; Potin, P.; Rommen, B.; Floury, N.; Brown, M.; et al. GMES Sentinel-1 mission. Remote Sens. Environ. 2012, 120, 9–24. [Google Scholar] [CrossRef]
  8. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  9. Immitzer, M.; Vuolo, F.; Atzberger, C. First Experience with Sentinel-2 Data for Crop and Tree Species Classifications in Central Europe. Remote Sens. 2016, 8, 166. [Google Scholar] [CrossRef]
  10. Veloso, A.; Mermoz, S.; Bouvet, A.; Le Toan, T.; Planells, M.; Dejoux, J.-F.; Ceschia, E. Understanding the temporal behavior of crops using Sentinel-1 and Sentinel-2-like data for agricultural applications. Remote Sens. Environ. 2017, 199, 415–426. [Google Scholar] [CrossRef]
  11. Rüetschi, M.; Schaepman, M.E.; Small, D. Using Multitemporal Sentinel-1 C-band Backscatter to Monitor Phenology and Classify Deciduous and Coniferous Forests in Northern Switzerland. Remote Sens. 2018, 10, 55. [Google Scholar] [CrossRef]
  12. Belgiu, M.; Csillik, O. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523. [Google Scholar] [CrossRef]
  13. Van Tricht, K.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic Use of Radar Sentinel-1 and Optical Sentinel-2 Imagery for Crop Mapping: A Case Study for Belgium. Remote Sens. 2018, 10, 1642. [Google Scholar] [CrossRef]
  14. Breiman, L.; Friedman, J.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Taylor & Francis: London, UK, 1984. [Google Scholar]
  15. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  16. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  17. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  18. Pal, M.; Mather, P.M. An assessment of the effectiveness of decision tree methods for land cover classification. Remote Sens. Environ. 2003, 86, 554–565. [Google Scholar] [CrossRef]
  19. Foody, G.M.; Mathur, A. Toward intelligent training of supervised image classifications: Directing training data acquisition for SVM classification. Remote Sens. Environ. 2004, 93, 107–117. [Google Scholar] [CrossRef]
  20. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  21. Schneider, A. Monitoring land cover change in urban and peri-urban areas using dense time stacks of Landsat satellite data and a data mining approach. Remote Sens. Environ. 2012, 124, 689–704. [Google Scholar] [CrossRef]
  22. Vuolo, F.; Neuwirth, M.; Immitzer, M.; Atzberger, C.; Ng, W.-T. How much does multi-temporal Sentinel-2 data improve crop type classification? Int. J. Appl. Earth Obs. Geoinf. 2018, 72, 122–130. [Google Scholar] [CrossRef]
  23. Blaes, X.; Vanhalle, L.; Dautrebande, G.; Defourny, P. Operational control with remote sensing of area-based subsidies in the framework of the common agricultural policy: What role for the SAR sensors? In Proceedings of the Retrieval of Bio-and Geo-Physical Parameters from SAR Data for Land Applications, Sheffield, UK, 11–14 January 2002; pp. 87–92. [Google Scholar]
  24. Schmedtmann, J.; Campagnolo, M.L. Reliable Crop Identification with Satellite Imagery in the Context of Common Agriculture Policy Subsidy Control. Remote Sens. 2015, 7, 9325–9346. [Google Scholar] [CrossRef] [Green Version]
  25. Sitokonstantinou, V.; Papoutsis, I.; Kontoes, C.; Lafarga Arnal, A.; Armesto Andrés, A.P.; Garraza Zurbano, J.A. Scalable Parcel-Based Crop Identification Scheme Using Sentinel-2 Data Time-Series for the Monitoring of the Common Agricultural Policy. Remote Sens. 2018, 10, 911. [Google Scholar] [CrossRef]
  26. Estrada, J.; Sánchez, H.; Hernanz, L.; Checa, M.J.; Roman, D. Enabling the Use of Sentinel-2 and LiDAR Data for Common Agriculture Policy Funds Assignment. ISPRS Int. J. Geo-Inf. 2017, 6, 255. [Google Scholar] [CrossRef]
  27. European Court of Auditors. The Land Parcel Identification System: A Useful Tool to Determine the Eligibility of Agricultural Land–but Its Management Could be Further Improved; Publications Office of the European Union: Luxembourg, 2016; ISBN 978-92-872-5967-7. Available online: https://www.eca.europa.eu/Lists/News/NEWS1610_25/SR_LPIS_EN.pdf (accessed on 1 April 2019).
  28. Müller-Wilm, U. Sentinel-2 MSI—Level-2A Prototype Processor Installation and User Manual. Available online: http://step.esa.int/thirdparties/sen2cor/2.2.1/S2PAD-VEGA-SUM-0001-2.2.pdf (accessed on 1 April 2019).
  29. Hughes, G.P. On the Mean Accuracy of Statistical Pattern Recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63. [Google Scholar] [CrossRef]
  30. Richards, J.A. Remote Sensing Digital Image Analysis: An Introduction; Springer: Berlin/Heidelberg, Germany, 2006; pp. 47–54. [Google Scholar]
  31. Bhattacharyya, A. On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math. Soc. 1943, 35, 99–109. [Google Scholar]
  32. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring vegetation system in the great plains with ERTS. In Proceedings of the Third Earth Resources Technology Satellite-1 Symposium, Greenbelt, MD, USA, 10–14 December 1973; pp. 3010–3017. [Google Scholar]
  33. Delegido, J.; Verrelst, J.; Alonso, L.; Moreno, J. Evaluation of Sentinel-2 red-edge bands for empirical estimation of green lai and chlorophyll content. Sensors 2011, 11, 7063–7081. [Google Scholar] [CrossRef] [PubMed]
  34. Daughtry, C.S.T.; Walthall, C.L.; Kim, M.S.; Brown de Colstoun, E.; McMurtrey, J.E., III. Estimating corn leaf chlorophyll concentration from leaf and canopy reflectance. Remote Sens. Environ. 2000, 74, 229–239. [Google Scholar] [CrossRef]
  35. Merzlyak, M.N.; Gitelson, A.A.; Chivkunova, O.B.; Rakitin, V.Y. Non-destructive optical detection of leaf senescence and fruit ripening. Physiol. Plant. 1999, 106, 135–141. [Google Scholar] [CrossRef]
  36. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107. [Google Scholar] [CrossRef]
  37. Vreugdenhil, M.; Wagner, W.; Bauer-Marschallinger, B.; Pfeil, I.; Teubner, I.; Rüdiger, C.; Strauss, P. Sensitivity of Sentinel-1 Backscatter to Vegetation Dynamics: An Austrian Case Study. Remote Sens. 2018, 10, 1396. [Google Scholar] [CrossRef]
  38. Fukunaga, K. Introduction to Statistical Pattern Recognition; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  39. Prasad, A.M.; Iverson, L.R.; Liaw, A. Newer classification and regression tree techniques: Bagging and random forests for ecological prediction. Ecosystems 2006, 9, 181–199. [Google Scholar] [CrossRef]
  40. Lodha, S.K.; Fitzpatrick, D.M.; Helmbold, D.P. Aerial lidar data classification using adaboost. In Proceedings of the Sixth International Conference on 3-D Digital Imaging and Modeling, Montreal, QC, Canada, 21–23 August 2007; pp. 435–442. [Google Scholar]
  41. Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 36–47. [Google Scholar] [CrossRef]
  42. Belgiu, M.; Drăguţ, L. Random Forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  43. Du, P.; Samat, A.; Waske, B.; Liu, S.; Li, Z. Random forest and rotation forest for fully polarized SAR image classification using polarimetric and spatial features. ISPRS J. Photogramm. Remote Sens. 2015, 105, 38–53. [Google Scholar] [CrossRef]
  44. Stumpf, A.; Kerle, N. Object-oriented mapping of landslides using Random Forests. Remote Sens. Environ 2011, 115, 2564–2577. [Google Scholar] [CrossRef]
  45. Yan, L.; Roy, D.P. Conterminous United States crop field size quantification from multi-temporal Landsat data. Remote Sens. Environ. 2016, 172, 67–86. [Google Scholar] [CrossRef] [Green Version]
  46. Immitzer, M.; Toscani, P.; Atzberger, C. The Utility of Wavelet-based Texture Measures to Improve Object-based Classification of Aerial Images. South.-East. Eur. J. Earth Obs. Geomat. 2014, 3, 79–84. [Google Scholar]
  47. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443. [Google Scholar] [CrossRef]
  48. Kanjir, U.; Đurić, N.; Veljanovski, T. Sentinel-2 Based Temporal Detection of Agricultural Land Use Anomalies in Support of Common Agricultural Policy Monitoring. ISPRS Int. J. Geo-Inf. 2018, 7, 405. [Google Scholar] [CrossRef]
  49. Dong, J.; Xiao, X.; Kou, W.; Qin, Y.; Zhang, G.; Li, L.; Jin, C.; Zhou, Y.; Wang, J.; Biradar, C.; et al. Tracking the dynamics of paddy rice planting area in 1986–2010 through time series Landsat images and phenology-based algorithms. Remote Sens. Environ. 2015, 160, 99–113. [Google Scholar] [CrossRef]
  50. Campos-Taberner, M.; García-Haro, F.J.; Camps-Valls, G.; Grau-Muedra, G.; Nutini, F.; Crema, A.; Boschetti, M. Multitemporal and multiresolution leaf area index retrieval for operational local rice crop monitoring. Remote Sens. Environ. 2016, 187, 102–118. [Google Scholar] [CrossRef]
  51. Campos-Taberner, M.; García-Haro, F.J.; Camps-Valls, G.; Grau-Muedra, G.; Nutini, F.; Busetto, L.; Katsantonis, D.; Stavrakoudis, D.; Minakou, C.; Gatti, L.; et al. Exploitation of SAR and Optical Sentinel Data to Detect Rice Crop and Estimate Seasonal Dynamics of Leaf Area Index. Remote Sens. 2017, 9, 248. [Google Scholar] [CrossRef]
  52. Brown, S.C.; Quegan, S.; Morrison, K.; Bennett, J.C.; Cookmartin, G. High-resolution measurements of scattering in wheat canopies-Implications for crop parameter retrieval. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1602–1610. [Google Scholar] [CrossRef]
  53. Mattia, F.; Le Toan, T.; Picard, G.; Posa, F.I.; D’Alessio, A.; Notarnicola, C.; Gatti, A.M.; Rinaldi, M.; Satalino, G.; Pasquariello, G. Multitemporal C-band radar measurements on wheat fields. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1551–1560. [Google Scholar] [CrossRef]
  54. Lopez-Sanchez, J.M.; Ballester-Berman, J.D.; Hajnsek, I. First results of rice monitoring practice in Spain by means of time series of TerraSAR-X dual-pol images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 412–422. [Google Scholar] [CrossRef]
  55. Canisius, F.; Shang, J.; Liu, J.; Huang, X.; Ma, B.; Jiao, X.; Geng, X.; Kovacs, J.; Walters, D. Tracking crop phenological development using multi-temporal polarimetric Radarsat-2 data. Remote Sens. Environ. 2018, 210, 508–518. [Google Scholar] [CrossRef]
Figure 1. Location of the study area in the Valencian Community (blue shape), East of Spain, and the corresponding Sentinel-1 (green) and Sentinel-2 (red) tiles.
Figure 2. (a) Sistema de Información Geográfica de Parcelas Agrícolas (SIGPAC) over the study area. Mediterranean Sea and classes of non-interest are masked out. (b) Coverage of every class over the area.
Figure 3. Schema of the followed methodology for the derivation of the proposed agreement map.
Figure 4. Influence of number of trees on the overall accuracy of the ensemble of decision trees.
Figure 5. Classification map (a) and associated confidence (b) over the study area.
Figure 6. Cumulative histogram of the confidence map.
Figure 7. (a) Agreement map showing the level of agreement between classification and SIGPAC over the study area, and (b) the corresponding pie chart showing the percentage of pixels for every level of agreement.
Figure 8. Classification (top), SIGPAC (bottom), and agreement (right) over an area identified as very high discrepancy.
Figure 9. Classification (top), SIGPAC (bottom), and agreement (right) over an area identified as high discrepancy.
Table 1. Sentinel-1 and Sentinel-2 acquisition dates in 2017 and 2018.

Sentinel-1 (2017): 12 April, 24 April, 6 May, 18 May, 30 May, 5 June, 17 June, 29 June, 11 July, 23 July, 4 August, 16 August, 28 August, 9 September, 21 September, 3 October, 15 October, 27 October, 8 November, 20 November, 2 December, 14 December, 26 December.
Sentinel-1 (2018): 7 January, 19 January, 31 January, 12 February, 24 February, 8 March, 20 March.
Sentinel-2 (2017): 6 May, 26 May, 15 June, 13 September, 13 October, 2 December, 17 December.
Sentinel-2 (2018): 21 January, 15 February, 7 March, 27 March.
Table 2. Ground truth: number of pixels for every class.

Class               No. Pixels
Shrubs              20,382
Dried Fruit         20,703
Fruits              19,800
Pasture             20,058
Rice                20,157
Forest              20,478
Vineyard            20,295
Pasture with Trees  18,846
Olive Grove         19,434
Citrus              20,385
Table 3. Vegetation indices used in the feature space. Reflectance is denoted by ρλ (λ in nm).

NDVI = (ρ842 − ρ665) / (ρ842 + ρ665)
OSAVI = (ρ842 − ρ665) / (ρ842 + ρ665 + 0.16)
NDVI705 = (ρ842 − ρ705) / (ρ842 + ρ705)
OSAVI705 = (ρ842 − ρ705) / (ρ842 + ρ705 + 0.16)
MCARI = [(ρ705 − ρ665) − 0.2 (ρ705 − ρ560)] (ρ705 / ρ665)
PSRI = (ρ665 − ρ560) / ρ740
Table 4. Agreement levels between classifications and SIGPAC.

Classification map vs. SIGPAC   Classification confidence   Level of agreement
Same class                      >95%                        Very high agreement
Same class                      70–95%                      High agreement
Same class                      50–70%                      Significant agreement
Same class                      <50%                        Low agreement
Different classes               <50%                        Low discrepancy
Different classes               50–70%                      Significant discrepancy
Different classes               70–95%                      High discrepancy
Different classes               >95%                        Very high discrepancy
Table 5. Jeffries–Matusita (JM, above the diagonal) and Bhattacharyya (BH, below the diagonal) distances for the Sentinel-2 image acquired on 6 May 2017. Classes: shrubs (SH), fruits (FR), citrus (CI), forest (FO), rice crops (RI), olive grove (OL), pasture with trees (PAT), vineyards (VI), dried fruit (DFR), and pasture (PA).

      SH     DFR    FR     PA     RI     FO     VI     PAT    OL     CI
SH    -      1.750  1.597  1.508  1.996  0.636  1.870  0.253  1.497  1.879
DFR   2.078  -      0.392  0.698  1.983  1.863  0.466  1.717  0.402  1.439
FR    1.603  0.218  -      0.621  1.985  1.760  0.672  1.538  0.250  1.472
PA    1.403  0.429  0.371  -      1.986  1.653  1.202  1.410  0.505  1.123
RI    6.102  4.776  4.880  4.939  -      1.989  1.989  1.986  1.988  1.995
FO    0.382  2.679  2.120  1.752  5.228  -      1.951  0.296  1.713  1.899
VI    2.731  0.265  0.409  0.919  5.249  3.714  -      1.872  0.792  1.692
PAT   0.135  1.955  1.465  1.221  4.991  0.160  2.752  -      1.446  1.837
OL    1.381  0.225  0.133  0.291  5.135  1.942  0.504  1.283  -      1.382
CI    2.802  1.271  1.331  0.825  6.033  2.988  1.870  2.509  1.174  -
Table 6. Jeffries–Matusita (JM, above the diagonal) and Bhattacharyya (BH, below the diagonal) distances taking into account all the six selected Sentinel-2 images. Classes: shrubs (SH), fruits (FR), citrus (CI), forest (FO), rice crops (RI), olive grove (OL), pasture with trees (PAT), vineyards (VI), dried fruit (DFR), and pasture (PA).

      SH     DFR    FR     PA     RI     FO     VI     PAT    OL     CI
SH    -      1.750  1.597  1.508  1.996  0.636  1.870  0.253  1.497  1.879
DFR   2.078  -      0.392  0.698  1.983  1.863  0.466  1.717  0.402  1.439
FR    1.603  0.218  -      0.621  1.985  1.760  0.672  1.538  0.250  1.472
PA    1.403  0.429  0.371  -      1.986  1.653  1.202  1.410  0.505  1.123
RI    6.102  4.776  4.880  4.939  -      1.989  1.989  1.986  1.988  1.995
FO    0.382  2.679  2.120  1.752  5.228  -      1.951  0.296  1.713  1.899
VI    2.731  0.265  0.409  0.919  5.249  3.714  -      1.872  0.792  1.692
PAT   0.135  1.955  1.465  1.221  4.991  0.160  2.752  -      1.446  1.837
OL    1.381  0.225  0.133  0.291  5.135  1.942  0.504  1.283  -      1.382
CI    2.802  1.271  1.331  0.825  6.033  2.988  1.870  2.509  1.174  -
Table 7. Overall accuracy (OA) and kappa index (κ) obtained on the test set for every classifier. Values are OA (%) with κ in parentheses. Every feature set combines 6 Sentinel-2 dates (12 bands), 30 Sentinel-1 dates (3 bands), and the indicated VI.

Features (optical + SAR)   LDA           QDA           k-NN          SVM           RF            Bagging Trees  Boosting Trees
S2 + S1 + NDVI             69.82 (0.68)  80.36 (0.79)  84.93 (0.83)  86.80 (0.85)  88.69 (0.87)  88.30 (0.87)   92.66 (0.90)
S2 + S1 + OSAVI            70.58 (0.69)  80.45 (0.79)  85.68 (0.84)  86.97 (0.85)  88.86 (0.87)  88.48 (0.87)   93.58 (0.91)
S2 + S1 + NDVI705          67.54 (0.66)  76.41 (0.75)  83.29 (0.82)  85.29 (0.84)  87.64 (0.86)  87.23 (0.86)   90.75 (0.89)
S2 + S1 + OSAVI705         70.95 (0.70)  80.50 (0.79)  85.94 (0.85)  86.99 (0.84)  89.05 (0.88)  88.76 (0.87)   93.96 (0.91)
S2 + S1 + MCARI            68.93 (0.68)  77.83 (0.76)  83.15 (0.82)  85.04 (0.84)  88.15 (0.87)  87.51 (0.86)   90.15 (0.89)
S2 + S1 + PSRI             66.94 (0.66)  71.60 (0.69)  76.83 (0.75)  84.07 (0.83)  87.10 (0.86)  87.08 (0.86)   88.96 (0.88)
S2 + S1 (no VI)            66.73 (0.65)  70.23 (0.69)  75.84 (0.74)  85.59 (0.84)  86.80 (0.85)  86.25 (0.85)   88.19 (0.87)
Table 8. Confusion matrix of the classification obtained for the boosting approach using as predictors Sentinel-1 and Sentinel-2 imagery, and OSAVI705. Classes: shrubs (SH), fruits (FR), citrus (CI), forest (FO), rice crops (RI), olive grove (OL), pasture with trees (PAT), vineyards (VI), dried fruit (DFR), and pasture (PA). [The individual cell counts are garbled in this version; the recoverable per-class figures are summarized below.]

Class  Correct  Classified total  UA (%)  Ground-truth total  PA (%)
SH     5560     6264              88.8    6794                81.8
DFR    6619     6762              97.9    6901                95.9
FR     6450     6783              95.1    6600                97.7
PA     6661     6692              99.5    6686                99.6
RI     6717     6724              99.9    6719                99.9
FO     5845     6731              86.8    6826                85.6
VI     6719     6791              98.9    6765                99.3
PAT    5443     6638              82.0    6282                86.6
OL     6101     6713              90.9    6478                94.2
CI     6696     6748              99.2    6795                98.5

OA = 93.96%, κ = 0.91
Table 9. Overall accuracy (OA) and kappa index (κ) obtained on the test set for every classifier using as predictors only Sentinel-2 data and VIs. [Values not recoverable in this version: the extracted table body duplicates Table 7. As reported in Section 4.2, the best Sentinel-2-only result is obtained with boosting trees and OSAVI705 (OA = 91.18%, κ = 0.88).]
Table 10. Confusion matrix of the classification obtained for the boosting approach using as predictors only the six selected Sentinel-2 images, and OSAVI705. Classes: shrubs (SH), fruits (FR), citrus (CI), forest (FO), rice crops (RI), olive grove (OL), pasture with trees (PAT), vineyards (VI), dried fruit (DFR), and pasture (PA). [The individual cell counts are garbled in this version; the recoverable per-class figures are summarized below.]

Class  Correct  Classified total  UA (%)  Ground-truth total  PA (%)
SH     5050     6066              83.3    6794                74.3
DFR    6519     6724              97.0    6901                94.5
FR     6349     6843              92.8    6600                96.2
PA     6566     6646              98.8    6686                98.2
RI     6714     6729              99.8    6719                99.9
FO     5515     7057              78.1    6826                80.8
VI     6593     6792              97.1    6765                97.5
PAT    5160     6580              78.4    6282                82.1
OL     5962     6790              87.8    6478                92.0
CI     6523     6619              98.5    6795                96.0

OA = 91.18%, κ = 0.88
