Article

Fusion of Multi-Temporal PAZ and Sentinel-1 Data for Crop Classification

by Mario Busquier 1, Rubén Valcarce-Diñeiro 2, Juan M. Lopez-Sanchez 1,*, Javier Plaza 3, Nilda Sánchez 3,4 and Benjamín Arias-Pérez 4

1 Institute for Computer Research (IUII), University of Alicante, 03080 Alicante, Spain
2 School of Natural and Environmental Sciences, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
3 Faculty of Agrarian and Environmental Sciences, University of Salamanca, 37007 Salamanca, Spain
4 Department of Cartographic and Land Engineering, University of Salamanca, 05003 Ávila, Spain
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(19), 3915; https://doi.org/10.3390/rs13193915
Submission received: 19 August 2021 / Revised: 25 September 2021 / Accepted: 27 September 2021 / Published: 30 September 2021

Abstract: The accurate identification of crops is essential to support environmental sustainability and agricultural policies. This study presents the use of the Spanish radar mission PAZ to classify agricultural areas at very high spatial resolution. PAZ was recently launched and operates at X band, joining the synthetic aperture radar (SAR) constellation formed by the TerraSAR-X and TanDEM-X satellites. Given its novelty, its ability to classify crop areas, both using its time series alone and blending it with the Sentinel-1 series, was tested over an agricultural area in the central-western part of Spain during 2020. The random forest algorithm was selected to classify the time series under five alternatives of standalone/fused data. The map accuracy obtained with the PAZ series alone was acceptable, but it highlighted the need for a denser time series of data. The overall accuracy provided by eight PAZ images or by eight Sentinel-1 images was below 60%. The fusion of both sets of eight images improved the overall accuracy by more than 10%. In addition, the exploitation of the whole Sentinel-1 series, with many more observations (up to 40 in the same temporal window), improved the results, reaching an overall accuracy of around 76%. This overall performance was similar to that obtained by the joint use of all the available images of the two frequency bands (C and X).

1. Introduction

The rapid growth of the world population, which is expected to reach 8.5 billion by 2030 according to the United Nations [1], along with the economic and social importance of the agricultural sector and the uncertainty in production changes caused by climate change [2,3], calls for the development of procedures and techniques to control and efficiently manage natural resources. Within this context, the classification and identification of agricultural crops is one of the research topics that help to manage the Earth’s natural vegetation cover. Mapping the crop present on each agricultural field has become an important input for hydrological and ecological purposes, yield estimation [4,5] and water resources management, and it has been at the core of the design, implementation, and verification of the European Common Agricultural Policy (CAP) [6,7,8,9].
For crop classification, one of the most fruitful technologies is based on remote sensing databases. Remote sensing imagery routinely provides coverage of large areas, collects a wide range of observations in a timely manner, and is a cost-effective means to complement or even replace field work. Optical remote sensing at high resolution (e.g., Sentinel-2, WorldView, Landsat, etc.) has become a key data source to create crop-type maps [7,10,11,12,13]. However, acquisitions by this type of sensor are limited to nearly cloud-free conditions. An alternative to optical remote sensing is synthetic aperture radar (SAR) remote sensing, which has the ability to collect data continually regardless of light and weather conditions, therefore providing gap-free time series and information about the interactions between microwaves and crop canopies [14,15].
Over the past few years, many SAR satellite missions, such as Sentinel-1 [16], the RADARSAT Constellation Mission [17], and SAOCOM (Satélite Argentino de Observación Con Microondas), among others, have been launched at different frequencies and polarizations. From the Spanish perspective, a milestone was the launch of the PAZ satellite in 2018, as part of the first generation of X-band SAR satellites and fully compatible with TerraSAR-X and TanDEM-X. PAZ is intended to form a high-resolution constellation with them, which will reduce the revisit time and increase the overall acquisition capacity [18].
To date, time series of SAR images have been successfully applied to crop classification using different classification methods, including random forest (RF), decision trees (DT), neural networks (NN), support vector machines (SVM), maximum likelihood classification (MLC), and many others [19,20,21,22,23,24,25].
The use of time series for crop-type mapping exploits the sensitivity to the growth cycle of the crops and their calendar (dates of sowing, growth, and harvest), which is characteristic of each crop type in the same geographical region. Therefore, the radar response of each crop type changes with time and is different among crop types [6,19]. Regarding performance as a function of the number of images in the time series, the results in [26,27,28] show that the improvement depends in many cases on the time separation of the images (to provide independent information) and on the specific acquisition dates with respect to the crop calendar, since differences among crop types are more evident on specific dates.
In this context of increasing availability of radar sensors operating at different frequency bands, and given the improved classification methodologies, the combination of data gathered at different frequency bands to enhance crop-type mapping becomes a relevant question. Multi-frequency data are expected to contribute to the identification of responses from different crop types at different growth stages, thanks to the sensitivity of the radar wavelength to the size and structure of the scene elements. For instance, fully developed corn or rapeseed may saturate the backscatter at X band but can be separated at C band, whereas short crops can be separated at X band, but not at C band.
The effective combination of radar data acquired at different bands for crop-type mapping has been explored in the past by several groups, with the first experiments carried out back in the 1980s. The early study in [29] evaluated multi-frequency (L, C, and X band) SAR data in the form of the backscattering coefficient for different polarimetric channels, using the maximum likelihood classifier. Results showed that using all the bands jointly provides the best accuracy (on average higher than 90%). Later, in [30], different data configurations were compared and combined for crop-type mapping with a dynamic learning NN, including multi-frequency (P, L, and C band) single-pol data, single-frequency multi-polarization data, and multi-frequency multi-polarization data. Results confirmed that multi-polarization and multi-frequency data produce the best accuracy, which reached 95%, whereas the use of individual bands only provided up to 78%, P band being the best one. The same frequency bands were compared and combined in [31] using full-pol data. Results showed that C band performs better than L band, with overall accuracy (OA) reaching 90.4% and 88.7% for C and L band, respectively. When multiple frequencies are combined, the combination of C and L band provides an accuracy of 96.3%, while the combination of P, L, and C bands reaches 97%. In contrast, the experiment in [26] found that L band provided better results than C band (OA equal to 95.8% and 91.2%, respectively), and again confirmed that the joint use of both bands improved the results, reaching a 98% OA.
To the best of the authors’ knowledge, there are no examples in the literature in which multi-temporal multi-frequency data are effectively combined for crop classification. The use of multi-temporal data at different frequency bands is limited to a comparison in [27,28], but the bands are not merged.
In this work we aim to test PAZ data for crop classification for the first time. For this purpose, we present an experiment in which sets of images acquired by PAZ and Sentinel-1 were employed, both separately and in combination, for crop classification over an agricultural area in Spain. The experiment is designed to check the complementarity of both sensors and the added value of the frequency fusion in this context.
Regarding the relatively low number of images employed (8 and 40 for PAZ and Sentinel-1, respectively), in contrast to massive data sets exploited in other machine learning-based classification problems, the classification approach can be considered as an example of the so-called few-shot classification, for which specific methodologies are described in the literature [32,33,34].
It is important to note that the focus of this work is not placed on the classification methodology, but on the added value of the fusion of images acquired by satellites operating at two different frequency bands. For this purpose, we used a well-known and widely used classification approach (random forest, RF); approaches based on NN, including deep learning [32,33,34], could improve the final scores to some extent, but the contribution of the fusion of frequency bands can be assessed equally well with RF alone.
The main novelty of this work consists in the evaluation of the fusion of two series of SAR images acquired at C and X band for crop-type mapping. As a key contribution, it is shown that the fusion is clearly beneficial when the number and dates of the images at both bands are similar. In contrast, a denser time series composed of many more images at a single band provided better results than the sparse two-band fusion, hence demonstrating the crucial role of the time coordinate in crop classification.

2. Materials and Methods

2.1. Test Site and Ground Campaign

The study area is located in the Iberian Peninsula (Spain), in the Castilla y León region, 20 km away from the city of Salamanca (Figure 1). It is a cropland mosaic mainly devoted to rainfed and irrigated crops mixed with several patches of sparse trees, pastures, and fallow plots. The climate is continental Mediterranean, characterized by scarce rainfall (from 300 to 400 mm/year), hot summers (mean maximum of 30 °C in July) and cold winters (mean minimum of −1 °C in January) (data from the Spanish Meteorological Agency, http://www.aemet.es/en/, accessed on 19 September 2021).
The long cycle of the rainfed crops, from fallow to late spring, is well adapted to the harsh climatic conditions; therefore, the yield and consequent economic performance of these crops are low. Among this group, the most common crops are cereals and legumes. In the search for more productive alternatives, irrigated crops have been increasing dramatically in the last decades. The use of deep aquifers has made water available during the short spring-summer growing cycle of crops such as corn, sugar beet, and potato.
The crops (n = 12) considered in this classification, together with their growing cycle duration and regime, are listed in Table 1. To train and validate the classification, several reference plots were collected, i.e., a number of samples of each crop considered in the classification legend, geolocated with a GPS receiver (Table 1). The number of plots registered is commensurate with the importance of the crop, in order to make the samples representative of the area. Due to the pandemic lockdown during the spring of 2020 in Spain, the field campaign was only possible at the end of June. However, this date was convenient for identifying the rainfed crops at the end of their growing cycle together with the development phase of the irrigated ones (Table 1). A total of 323 plots were recorded.

2.2. Satellite Data and Pre-Processing

The satellite data employed in this study came from two different sensors: PAZ and Sentinel-1 (S1). The main characteristics of the images are shown in Table 2. In both cases, single-look complex (SLC) images were used as input products. The SLC images from PAZ were acquired in stripmap mode with the dual-pol HH-VV combination. The images from the S1 constellation correspond to orbit 154, acquired in interferometric wide swath (IW) mode with the standard dual-pol VV-VH combination. For this work, only the IW2 subswath was used since it completely covers the study area.
The data set of PAZ images is composed of 8 images, acquired from March to October 2020. The acquisition dates are shown in Figure 2. The S1 images have been routinely acquired since 2016 with a six-day revisit period. For this study we restricted the observation interval to the one covered by PAZ, i.e., from March to October 2020. Therefore, 40 S1 images are considered, whose acquisition dates are also shown in Figure 2. In that figure, the S1 images closest to the PAZ images are highlighted in green, since they will be analyzed separately in some of the classification tests described in Section 3.
All the PAZ images were pre-processed with the following steps: (1) co-registration with respect to a reference image, (2) calibration, (3) formation of 2 × 2 polarimetric covariance matrices, (4) speckle filtering using a 7 × 7 boxcar, (5) geocoding, and (6) generation of products. The geocoded products were obtained on a uniform grid in UTM coordinates with 5 m posting. In this study, the products employed as input features for classification purposes were the backscatter coefficient (in dB) of the two available channels (HH and VV) and the normalized correlation between both channels. The use of these input features, directly derived from the polarimetric covariance matrices, has demonstrated an excellent performance for crop-type mapping with TerraSAR-X data [35], and specifically better than using just the backscatter coefficient [36]. The pre-processing was carried out with dedicated software developed at the University of Alicante, but it could have been performed equivalently by means of the ESA SNAP platform.
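As an illustration of steps (3) and (4) and of the feature extraction, the following minimal sketch (not the authors' processing chain; function and variable names are hypothetical) computes the three PAZ input features from two co-registered, calibrated SLC channels:

import numpy as np
from scipy.ndimage import uniform_filter

def paz_features(slc_hh, slc_vv, win=7):
    """slc_hh, slc_vv: complex calibrated SLC channels (NumPy arrays) on a common grid."""
    # Elements of the 2 x 2 covariance matrix, averaged with a win x win boxcar window
    c11 = uniform_filter(np.abs(slc_hh) ** 2, win)   # <|HH|^2>
    c22 = uniform_filter(np.abs(slc_vv) ** 2, win)   # <|VV|^2>
    cross = slc_hh * np.conj(slc_vv)
    c12 = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)  # <HH VV*>
    # Input features: backscatter (dB) of HH and VV, and the normalized HH-VV correlation
    sigma_hh_db = 10 * np.log10(c11 + 1e-12)
    sigma_vv_db = 10 * np.log10(c22 + 1e-12)
    corr = np.abs(c12) / np.sqrt(c11 * c22 + 1e-12)
    return np.stack([sigma_hh_db, sigma_vv_db, corr], axis=-1)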
As for the S1 data, the SLC images were pre-processed using the ESA SNAP software with the following steps: (1) TOPSAR split, (2) apply orbit file, (3) calibration, (4) TOPSAR deburst, (5) subset of the region of interest, (6) speckle filtering using a 10 × 2 boxcar, (7) conversion to dB, and (8) geocoding. In this case, due to the lower spatial resolution of the input data, the geocoding was carried out on a UTM grid with 10 m pixel spacing in both coordinates. The products employed as input features for the classifier were the backscatter coefficient (in dB) of the two available channels (VV and VH).
All S1 images are public and freely accessible. Regarding the images from PAZ, they can be accessed by request to the Spanish Centre for the Development of Industrial Technology (CDTI) in the framework of an approved research project.

2.3. Classification Methodology and Evaluation

The purpose of this work is to analyze the performance of the two available data sets from PAZ and S1 in crop-type mapping, both separately and combined. Therefore, the strategy for both standalone classification and merged classification will be explained in the following subsections. An overall scheme of the classification approach is shown in Figure 3.

2.3.1. Standalone Methodology

The classification was carried out at the pixel level with the random forest algorithm [37], a popular machine learning method available in many different software platforms. It is a supervised classifier well known for its good performance in crop classification. The implementation employed here is provided by the scikit-learn package in Python [38] and was run mainly with default hyperparameters. The number of decision trees was set to 1000, and the number of features considered when looking for the best split was left at the default value (i.e., the square root of the total number of features). The specific strategy followed for training and applying the classifier is explained in the next paragraphs.
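A minimal sketch of this configuration with scikit-learn is shown below; the n_jobs and random_state values are illustrative assumptions, not settings reported in the text.

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=1000,     # 1000 decision trees, as stated above
    max_features="sqrt",   # default: square root of the number of input features
    n_jobs=-1,             # parallelize over all available cores (assumption)
    random_state=0,        # fixed seed for reproducibility (assumption)
)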
One of the issues when working with radar data is the effect of the speckle filtering. The spatial averaging performed by the filter makes the feature values of every pixel correlated with those of the adjacent ones. Thus, to prevent the classifier from being influenced by this pixel correlation, we carried out an initial split at the field level: 50% of the fields of each crop type were selected for training and the remaining 50% for testing.
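The field-level split can be sketched as follows (field identifiers and crop labels are hypothetical stand-ins for the reference parcels):

import numpy as np

def split_fields(field_ids, crop_of_field, train_fraction=0.5, seed=0):
    """field_ids: NumPy array with one entry per field; crop_of_field: its crop label (same length)."""
    rng = np.random.default_rng(seed)
    train_fields, test_fields = [], []
    for crop in np.unique(crop_of_field):
        fields = rng.permutation(field_ids[crop_of_field == crop])
        n_train = int(round(train_fraction * len(fields)))
        train_fields.extend(fields[:n_train])   # whole fields go either to training...
        test_fields.extend(fields[n_train:])    # ...or to testing, never to both
    return np.array(train_fields), np.array(test_fields)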
Another important aspect to address is the strong imbalance in the number of pixels per class in the training dataset. As the amount of ground truth devoted to each crop (number of fields as well as their surface) is different, the number of pixels (i.e., samples) is very different among classes. If not counteracted, the crops with more pixels would be favored by the classifier over the ones with fewer pixels. To solve this issue, we performed over the training data what we call an ‘equal random sampling’. This consists in a random selection of pixels for each class, restricting their number to the total number of pixels of the smallest class in the dataset. By doing so, all pixels of the smallest class are selected for the training dataset, whereas the pixels of the other classes are present in the same amount, but randomly selected.
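A sketch of this balancing step, which undersamples every class in the training set to the size of the smallest one (array names are assumptions):

import numpy as np

def equal_random_sampling(X_train, y_train, seed=0):
    """X_train: (n_pixels, n_features) training features; y_train: (n_pixels,) class labels."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y_train, return_counts=True)
    n_min = counts.min()                       # number of pixels of the smallest class
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y_train == c), size=n_min, replace=False)
        for c in classes
    ])
    return X_train[keep], y_train[keep]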
Once this selection is carried out, the corresponding input features are fed to the classifier to start the training phase. Then, the trained classifier is applied to all the pixels of the testing data (50% of the fields of each class in the whole dataset). The output of the classification is a vector for each pixel, whose size is the number of classes, indicating the likelihood that the pixel belongs to each possible class. As the final decision in the standalone classification methodology, the highest likelihood in the vector of each pixel defines the class assigned to it.
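Putting the standalone steps together, a self-contained sketch follows, with tiny synthetic arrays standing in for the real feature matrices (sizes and names are arbitrary):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(240, 24))        # e.g., 8 dates x 3 PAZ features per pixel
y_train = rng.integers(0, 12, size=240)     # 12 crop classes
X_test = rng.normal(size=(60, 24))

rf = RandomForestClassifier(n_estimators=1000, random_state=0).fit(X_train, y_train)
proba = rf.predict_proba(X_test)            # one likelihood vector per testing pixel
labels = rf.classes_[proba.argmax(axis=1)]  # standalone decision: most likely class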
The whole classification procedure is repeated 10 times (iterations) in order to avoid biased accuracy metrics resulting from a specific split of the reference data. By shuffling the reference data 10 times we ensure different combinations of training and testing fields, which guarantees stability in the final metrics. In all experiments, the maximum standard deviations of the overall accuracy (OA) and the kappa coefficient were 3.3% and 0.036, respectively. At the 10th iteration, the average OA and kappa coefficient changed by less than 0.25% and 0.002, respectively.

2.3.2. Fusion Methodology

The fusion methodology employed in this work was recently used in [39] for combining Sentinel-1 and Sentinel-2 data for crop-type mapping. The approach is based on exploiting the results of the two independent classifications, i.e., the per-pixel vectors with the likelihood of belonging to every possible class. These two vectors (one for PAZ and another for S1) are combined, for each pixel, at the end of the process. However, as the two spatial grids employed for the PAZ and S1 features do not coincide, we devised a specific strategy to make that combination possible, which is detailed in the following.
First, we perform the same field-level 50/50 split for both sensors. Although the split is performed randomly, the same random selection is used for both datasets, so the same polygons are selected for each crop to form the training and testing data for the classification of the data from both satellites. In this manner, we achieve an equivalence in the training data and a direct correspondence between the testing pixels coming from both sensors.
Once this division is finished, we perform the equal random sampling independently over the training data of each dataset. Then, the classification process is run by entering the testing data of each sensor into the corresponding trained classifier. However, the final selection of the testing data is slightly different for the two sensors. As the spatial grids defined in the pre-processing step are different, their pixels are not geographically equivalent, so there is a mismatch between them that prevents a direct comparison and fusion of their results.
The S1 grid has a coarser spatial resolution than the PAZ grid, hence for S1 we make use of all the pixels to be classified. However, for the testing data of PAZ, we select only those pixels which are located within the same geographical position as the ones used in the testing dataset of S1. In this way, one testing pixel of S1 corresponds to several PAZ pixels. This strategy serves to keep the finer spatial resolution of PAZ, since we have several different likelihood vectors at the pixels which are geographically coincident with a single S1 pixel.
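A sketch of this grid matching, assuming axis-aligned UTM grids (the origin and spacing values are placeholders): each 5 m PAZ pixel centre is assigned to the 10 m S1 cell that contains it, so several PAZ likelihood vectors map to one S1 pixel.

import numpy as np

def paz_to_s1_index(easting, northing, s1_origin_e, s1_origin_n, s1_spacing=10.0):
    """easting, northing: UTM coordinates of PAZ pixel centres (arrays, metres)."""
    col = np.floor((easting - s1_origin_e) / s1_spacing).astype(int)
    row = np.floor((s1_origin_n - northing) / s1_spacing).astype(int)  # north-up grid
    return row, col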
After both classification processes are finished, we compute the fusion of the two results by means of the product of experts [39]:
P_{c_i}^{S1,\,PAZ}(x) = \frac{P_{c_i}^{S1}(x)\, P_{c_i}^{PAZ}(x)}{\sum_{i=1}^{N} P_{c_i}^{S1}(x)\, P_{c_i}^{PAZ}(x)}
where P_{c_i}^{y}(x) denotes the probability of pixel x belonging to class c_i according to the result of classifier y, with y being either PAZ or S1, and N is the number of classes. This expression makes use of the class probability vectors given by each classifier, and the result is a combined probability vector whose maximum defines the final class assigned to the pixel. Consequently, three different classification results are available: two obtained independently from each sensor, and one originated from the fusion of both.
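A minimal sketch of this fusion applied to all testing pixels at once; the two probability arrays are assumed to come from predict_proba of the trained PAZ and S1 classifiers, with their rows aligned as described above:

import numpy as np

def product_of_experts(p_s1, p_paz):
    """p_s1, p_paz: (n_pixels, n_classes) class probability arrays from the two classifiers."""
    fused = p_s1 * p_paz                         # element-wise product of experts
    fused /= fused.sum(axis=1, keepdims=True)    # normalize over the N classes
    return fused

# final_class = classes[product_of_experts(p_s1, p_paz).argmax(axis=1)]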

2.3.3. Evaluation

Confusion matrices are computed from the results, and several accuracy metrics are obtained [40]: overall accuracy (OA), kappa coefficient, producer’s accuracy (PA), and user’s accuracy (UA).
OA is the proportion of correctly classified pixels out of the total, and kappa, which is also a global metric, is more conservative since it also takes into account the agreement that would occur by chance.
PA and UA are two common metrics used for assessing the individual performance of each class. Although they both express a percentage of accuracy at class level, their value and interpretation differ due to the way in which they are formulated. PA computes, for each class, the number of correctly classified pixels with respect to the total number of pixels that belong to that class in the ground truth. In short, it indicates how well the pixels of the ground truth are classified in the product map. On the other hand, UA measures the number of correctly classified pixels with respect to the total number of pixels that have been classified as the observed class. Therefore, UA indicates how reliable the product map is.
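These scores can all be derived from the confusion matrix; a sketch with scikit-learn follows, where y_true and y_pred are assumed to hold the reference and predicted labels of the testing pixels.

import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

def classification_scores(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)       # rows: reference class, columns: predicted class
    oa = accuracy_score(y_true, y_pred)         # overall accuracy
    kappa = cohen_kappa_score(y_true, y_pred)   # chance-corrected agreement
    pa = np.diag(cm) / cm.sum(axis=1)           # producer's accuracy per class
    ua = np.diag(cm) / cm.sum(axis=0)           # user's accuracy per class
    return oa, kappa, pa, ua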

3. Results

In order to test the performance of the different data sets and their combination, the classification was carried out for five different situations, considering the different number of images in each series (8 for PAZ vs. 40 for S1), which are detailed in the following subsections.

3.1. Results with 8 PAZ Images

The results obtained by classifying the series of eight PAZ images alone (sparsely acquired from the beginning of March to the end of October) were assessed first. The scores of this experiment are summarized in Table 3.
The overall results are not good, with OA and kappa equal to 59.8% and 0.54, respectively. These low global values are the consequence of having only very few images and of their acquisition dates, which are very irregular over the season.
Regarding the PA, the best results are given by beet (94.4%), followed by corn and rape, which also achieve accuracies above 75%. However, very low PA values are obtained by classes such as rye and fallow, which do not clearly exceed 30%. Specifically, rye and fallow show 30.2% and 18.5%, respectively, which are quite low in comparison with the scores achieved by other crops. Chickpea and potato also stand out for their low accuracy, as their PA is below 50%.
A similar behavior is found in the UA, for which beet, corn, and rape are also the best classified classes whilst chickpea, rye, and fallow remain among the worst ones. In quantitative terms, the best UA is found for corn (84.7%) and wheat (74.6%), while beet and rape reach values above 70% and 55%, respectively. For this analysis, alfalfa is the crop type with the worst UA (10.7%).
Alfalfa is a special case, as there is a large difference between its PA and UA values. Alfalfa’s PA is around 60%; however, its UA is below 11%. This low UA means that many pixels belonging to other classes are wrongly classified as alfalfa in the resulting map.

3.2. Results with 8 Sentinel-1 Images

In this test we limited the input data set to the 8 S1 images acquired on the dates closest to the PAZ acquisitions, in order to perform a fair comparison between sensors, so that the influence of the frequency band (i.e., C band for S1 vs. X band for PAZ) upon the classification could be studied. Still, there is a slight dissimilarity in the acquisition dates of both sensors. The scores of this experiment are included in Table 4.
The overall scores in Table 4 are as poor as they were for PAZ, and there are hardly any differences between them: 0 in kappa and 0.1% in OA. This means that the overall performance of both frequency bands in crop-type mapping is very similar, given that both time series are irregular and contain only eight images in eight months. Therefore, the poor classification results should be attributed to the lack of data rather than to the influence of a given band.
As for PA and UA, they also show mixed results. For PA, beet is again the crop with the highest accuracy (90.9%). However, in this case potato is the second-best class, since it reaches 80.3%, i.e., a notable improvement in comparison with the 49.9% obtained with the PAZ images. With regard to the worst classes, the PA of fallow is the lowest when using the S1 images (28.2%), whilst alfalfa and rye are only slightly better (37%).
The ranking of the UA scores does not change much when compared to PA, but there are clear exceptions. As in the case of the PAZ images, the best UA is for corn (84.5%), followed by rape and beet. Alfalfa and chickpea are again among the worst classified classes, showing UA values of 6.6% and 30.7%, respectively.

3.3. Results with 40 Sentinel-1 Images

In the previous case we restricted the S1 time series to coincide with the available PAZ images. Instead, in this case we considered the same observation period (March to October) but all the S1 images available in that interval (n = 40, acquired every six days). The resulting scores are listed in Table 5.
As expected, results exhibit a clear improvement when the length of the time series is increased. OA and Kappa now reach 76.1% and 0.72, respectively, which means an increase of 16.4% and 0.18 points for these two global scores when compared to the results with only 8 S1 images. This improvement demonstrates the importance of the time dimension for crop classification.
In comparison with the previous experiment, all the crops obtain better classification accuracies (PA and UA) using 40 images. In fact, the PA of 9 out of the 12 classes is above 70%, with 3 of them (beet, corn, and potato) above 90%. The worst classified class is rye (PA and UA around 40%). On the other hand, we may highlight the improvement achieved by alfalfa, which now reaches a PA of 53.7%, in contrast with the 37% obtained in the eight-image experiment. Regarding UA, 7 classes show values above 70%, and the highest accuracies correspond to corn, rape, beet, and potato.

3.4. Fusion Results with 8 Sentinel-1 Images and 8 PAZ Images

The following experiments evaluate the fusion of the data from both sensors to obtain the final combined classification. The first of these fusion scenarios consists of selecting the same number of images for S1 and PAZ (eight for each sensor), using the S1 images closest to the acquisition dates of the PAZ series. Consequently, this alternative combines the results described in Section 3.1 and Section 3.2. This combination is expected to exploit the complementarity of the two frequency bands, without favoring any of them in terms of number of images. The classification scores obtained in this test are shown in Table 6. In Figure 4 we compare the fused results with the ones obtained by each dataset independently.
OA and Kappa present a notable improvement with respect to the results obtained with the individual datasets, achieving 70.2% and 0.66, respectively, which is an increase of more than 10% for OA and 0.1 for Kappa (see Table 3 and Table 4). Therefore, the two frequency bands altogether provide complementary information for crop-type mapping.
This complementarity is better understood by inspecting the PA and UA values for the individual crops, which can be visualized in Figure 4 together with the PA and UA values of the individual experiments.
The most relevant aspect of the PA and UA bars shown in Figure 4 is that 11 of the 12 crops for PA and all crops for UA show better accuracies in the fused case (blue bar) than the results coming from both independent cases (grey and orange).
The only exception is alfalfa for PA, which shows a poor accuracy in all cases. Comparing the results of each sensor independently, PAZ performs better than S1 in 7 out of 12 classes for PA, whilst for UA both sensors behave in a similar fashion. Notably, there are clear examples in which one of the sensors outperforms the other in UA and/or PA by more than 10%, yet the fused results are even better than the individual ones. Moreover, in some cases the performance of PAZ and S1 is similar, but the fused results are much better than either of them.
We can highlight several crops to assess the contribution of the fusion of both classifiers. In terms of PA, sunflower and barley obtain the greatest improvements after fusing the two independent results. The most noticeable crop type is sunflower, which reaches a PA of 74% in the fused case, compared with PAs of only 60.3% and 59% for S1 and PAZ, respectively.
UA behaves similarly to PA according to the results displayed in Figure 4. An accuracy improvement of approximately 10% is achieved, on average, when comparing the fused UA with the independent cases. Compared with PAZ standalone, the most noticeable cases correspond to potato and rape, which display a much better performance after the fusion than with PAZ alone, with UA increments of more than 20%. Similarly, chickpea and sunflower stand out for getting the greatest improvements in the fused scenario when compared to S1 alone, as they reach UA values of 47.8% and 64.3%, respectively (i.e., with improvements of 24.4% and 15.2%).

3.5. Fusion Results with 40 Sentinel-1 Images and 8 PAZ Images

As a final experiment we tested the use of the 8 available PAZ images in conjunction with all 40 available S1 images. In this test, we exploit the rich temporal resolution of the S1 data, which by itself achieved the best results so far (see Section 3.3), and the complementarity of the X-band data, which was evidenced in Section 3.4. The accuracy scores obtained for this test are displayed in Table 7.
The use of this complete dataset yields the best overall accuracies among the five alternatives of data combination. OA and kappa are 76.3% and 0.73, respectively. These values are clearly higher than those found with 8 S1 and 8 PAZ images (70.2% and 0.66), but only slightly higher than the 76.1% and 0.72 obtained by using only the 40 S1 images. In short, the addition of the 8 PAZ images contributes a 0.2% improvement of the OA achieved with the 40 S1 images.
To discuss the PA and UA of the individual classes we make use of the bar charts shown in Figure 5. As in the case analyzed in Section 3.4, after the fusion the trend is to improve the accuracy of all the individual crops, but in this case the improvement with respect to S1 alone (with 40 images) is not that notable, and there are some cases without improvement.
Alfalfa, wasteland, and corn are among the classes that increase their PA after the fusion, whereas chickpea, rye, and potato are among those that do not. For instance, alfalfa achieves a PA of 64.2% with the fusion of the results, whilst it gets just 53.7% and 59.1% when using S1 and PAZ independently. Likewise, wasteland stands out by reaching a PA of 83.3%, which means 7.8 points of improvement with respect to the 75.5% of the second-best result, given by S1.
In the UA bar chart, we observe some classes, such as rape and wasteland, which improve their accuracy, and others, e.g., sunflower and barley, which do not. In the fused case, rape increases its UA by 4.1% with respect to S1 alone, reaching 88.6%. Wasteland, which obtained a UA of only 53.9% in the PAZ case, now increases up to 57.6%. On the other hand, S1 alone is the best dataset for sunflower and barley, providing UA values of 77.3% and 78.9%, in contrast to 73.9% and 71.2%, respectively, in the fused case.

4. Discussion

When comparing examples of crop classification using X-band and C-band SAR data, it should first be mentioned that each study has unique features, including the SAR data available and the number and type of crops present. In this study, and in terms of overall accuracy, the results obtained from the experiments using only eight PAZ and eight S1 images were not as good as expected with regard to previous results found in the literature with time series of X- and C-band SAR data [23,41,42,43,44]. For comparison purposes, the OA and kappa values obtained in all five experiments are depicted in Figure 6. The difference in accuracy between their results and our findings could be related to the scarcity of images. With a larger number of images (40 S1), the accuracy (76.1%) increased compared with the use of only 8 S1 images (59.7%). This finding is in agreement with [45], in which an OA of 76% was achieved with 60 S1 images. This is particularly relevant for the rainfed crops, whose growing cycle is longer than that of the irrigated ones and which therefore need a longer temporal coverage to be characterized. Not surprisingly, then, the rainfed categories (rye, chickpea, wheat, barley, fallow, and wasteland) produced worse results than the irrigated ones (potato, rape, beet, and corn), as seen especially in the standalone experiments with eight images.
Several reasons may also explain this poorer behavior of wheat, barley, and rye. One caveat regarding cereals is that they normally show a similar spectral signature during the growing season due to their similar plant structure and phenology [46], which makes them difficult to classify. Worse still, new hybrid varieties of winter cereals, such as triticale (a hybrid of wheat and rye), make the separability between winter cereal species difficult [46]. Finally, cereals have lower biomass, and thus greater penetration of the signal through the crop canopy, resulting in greater contributions from the underlying soil as well as from vegetation-to-soil interactions [47]. Scattering contributions from the soil therefore appear to contribute to class confusion [47], as also occurred for wasteland and chickpea.
The PA and UA for winter cereals were as low as those found in previous studies [6,35]. However, the fusion experiments clearly improved the results in terms of PA and UA for wheat and barley when compared with the results achieved in [48].
In contrast to the rainfed crops, the irrigated crops showed higher accuracies, owing to their higher and denser biomass. Beet achieved the best PA scores (>90%), followed by corn, with PA ranging from 68.8% to 93.7%, which also showed the best UA of the experiments performed. The scores reported in this study for beet and corn improve those found by [24]. Rape reached a high PA (>82%) in three experiments (40 S1, 8 PAZ and 8 S1, 8 PAZ and 40 S1), and its UA was also high in the same experiments. The PA and UA results obtained for potato in this study were high, with the highest PA score (94.8%) obtained when the whole S1 dataset was used for classification. The accuracy reported in this study with S1 data is higher than the accuracy provided by [24,49]. Moreover, the PA and UA from the fusion experiments also improve the results found by [48].
Special mention should be made of alfalfa. This is, together with rye, the worst classified crop in all of the alternatives, even though it is an irrigated crop with a very dense canopy. The explanation of this behavior is purely agronomic. Alfalfa is a pluriannual crop that is harvested three or even four times a year, mainly in summer [50,51]. Therefore, the backscattering response must have changed dramatically during the observation period, since the coverage suddenly shifted from a dense cover to an almost bare soil several times during the season, leading to a failed classification.
Regarding the multi-frequency SAR data fusion for crop classification, the two frequency bands provide complementary information, since their combination improves the OA of the standalone experiments by more than 10%. The use of the complete S1 dataset provided the best overall accuracy, improving the classification by 6% with respect to the use of 8 S1 and 8 PAZ images. The improvement that a multi-frequency fusion approach provides for crop classification was also found by [48], who achieved an overall accuracy of 77.1% in one of their experiments with a decision-level fusion of S1 and TerraSAR-X. The fusion of C-band radar satellites, such as Sentinel-1 and Radarsat-2, with operational X-band satellites, such as TerraSAR-X, TanDEM-X, and the new PAZ, constitutes an ideal scenario, as recognized in previous works [52].

5. Conclusions

The present study has investigated the suitability of images acquired by the first Spanish radar satellite, PAZ, and S1, both separately and in a fusion scheme, for crop classification. In this way, the added value of the fusion of frequency bands has been assessed.
In the case of the PAZ data, which were tested here for the first time for crop classification, the classification accuracy was the same as that obtained by S1 for an equal number of images. However, when using the enlarged dataset of S1 images, the accuracy improved (16.9%) with respect to the results provided with only eight S1 or PAZ images. Therefore, the importance of the time dimension in crop classification applications became clear.
This experiment with PAZ data gathered during 2020 (the mission was launched in 2018) offered acceptable results when the data were used standalone. In the second part of this study, we presented a fusion approach combining PAZ and S1. With this approach, crop types can be effectively mapped, achieving overall accuracy scores higher than in the standalone experiments, except when comparing the accuracy of the whole S1 dataset standalone with the fusion of 8 PAZ and 8 S1 images; the whole S1 dataset provided a better overall accuracy. Again, the need for dense time series was highlighted. The short series favor the classification of the summer irrigated crops, helped by their clearer backscattering signal. On the contrary, the winter cereals, and the rainfed categories in general, although reasonably well classified, showed more confusion between classes.
One of the limitations of this work is the small number of images available for PAZ compared to S1, which does not allow us to evaluate the fusion of bands in the ideal case of long and dense time series from both sensors. In addition, the acquisition dates of the satellite data should be better adapted to the crop calendar (i.e., gathering information earlier in the year). This was not possible with PAZ in this campaign, hence limiting the comparison of alternatives.
Based on the results achieved in this study, it would be interesting to perform further multi-frequency experiments exploiting data collected by recent (e.g., SAOCOM 1A/1B, L band) or future (e.g., Tandem-L, L band; NISAR, L and S band) SAR sensors. This would help to increase the value of the multi-frequency fusion approach for crop classification.

Author Contributions

Conceptualization, N.S. and J.M.L.-S.; methodology, M.B. and J.M.L.-S.; software, M.B.; validation, N.S., B.A.-P. and R.V.-D.; data curation, N.S. and B.A.-P.; writing—original draft preparation, M.B., R.V.-D., J.P. and J.M.L.-S.; writing—review and editing, all. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Spanish Ministry of Science and Innovation, the State Agency of Research (AEI) and the European Funds for Regional Development (EFRD) under Project TEC2017-85244-C2-1-P.

Acknowledgments

The authors would like to thank the INTA-PAZ Science Team for providing the PAZ data in the framework of the AO-001-015 project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. UNDESA. Population 2030: Demographic Challenges and Opportunities for Sustainable Development Planning; United Nations: New York, NY, USA, 2015. [Google Scholar]
  2. Ray, D.K.; West, P.C.; Clark, M.; Gerber, J.S.; Prishchepov, A.V.; Chatterjee, S. Climate Change Has Likely Already Affected Global Food Production. PLoS ONE 2019, 14, e0217148. [Google Scholar] [CrossRef] [PubMed]
  3. Van Meijl, H.; Havlik, P.; Lotze-Campen, H.; Stehfest, E.; Witzke, P.; Domínguez, I.P.; Bodirsky, B.L.; Van Dijk, M.; Doelman, J.; Fellmann, T.; et al. Comparing Impacts of Climate Change and Mitigation on Global Agriculture by 2050. Environ. Res. Lett. 2018, 13, 064021. [Google Scholar] [CrossRef] [Green Version]
  4. Luciani, R.; Laneve, G.; Jahjah, M. Agricultural Monitoring, an Automatic Procedure for Crop Mapping and Yield Estimation: The Great Rift Valley of Kenya Case. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2196–2208. [Google Scholar] [CrossRef]
  5. Skakun, S.; Vermote, E.; Franch, B.; Roger, J.C.; Kussul, N.; Ju, J.; Masek, J. Winter Wheat Yield Assessment from Landsat 8 and Sentinel-2 Data: Incorporating Surface Reflectance, through Phenological Fitting, into Regression Yield Models. Remote Sens. 2019, 11, 1768. [Google Scholar] [CrossRef] [Green Version]
  6. Arias, M.; Campo-Bescós, M.Á.; Álvarez-Mozos, J. Crop Classification Based on Temporal Signatures of Sentinel-1 Observations over Navarre Province, Spain. Remote Sens. 2020, 12, 278. [Google Scholar] [CrossRef] [Green Version]
  7. Palchowdhuri, Y.; Valcarce-Diñeiro, R.; King, P.; Sanabria-Soto, M. Classification of Multi-Temporal Spectral Indices for Crop Type Mapping: A Case Study in Coalville, UK. J. Agric. Sci. 2018, 156, 24–36. [Google Scholar] [CrossRef]
  8. Schmedtmann, J.; Campagnolo, M.L. Reliable Crop Identification with Satellite Imagery in the Context of Common Agriculture Policy Subsidy Control. Remote Sens. 2015, 7, 9325–9346. [Google Scholar] [CrossRef] [Green Version]
  9. Sitokonstantinou, V.; Papoutsis, I.; Kontoes, C.; Arnal, A.L.; Andrés, A.P.A.; Zurbano, J.A.G. Scalable Parcel-Based Crop Identification Scheme Using Sentinel-2 Data Time-Series for the Monitoring of the Common Agricultural Policy. Remote Sens. 2018, 10, 911. [Google Scholar] [CrossRef] [Green Version]
  10. Azar, R.; Villa, P.; Stroppiana, D.; Crema, A.; Boschetti, M.; Brivio, P.A. Assessing In-Season Crop Classification Performance Using Satellite Data: A Test Case in Northern Italy. Eur. J. Remote Sens. 2016, 49, 361–380. [Google Scholar] [CrossRef] [Green Version]
  11. Inglada, J.; Arias, M.; Tardy, B.; Hagolle, O.; Valero, S.; Morin, D.; Dedieu, G.; Sepulcre, G.; Bontemps, S.; Defourny, P.; et al. Assessment of an Operational System for Crop Type Map Production Using High Temporal and Spatial Resolution Satellite Optical Imagery. Remote Sens. 2015, 7, 12356–12379. [Google Scholar] [CrossRef] [Green Version]
  12. Kobayashi, N.; Tani, H.; Wang, X.; Sonobe, R. Crop Classification Using Spectral Indices Derived from Sentinel-2A Imagery. J. Inf. Telecommun. 2020, 4, 67–90. [Google Scholar] [CrossRef]
  13. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K. Crop Classification from Sentinel-2-Derived Vegetation Indices Using Ensemble Learning. J. Appl. Remote Sens. 2018, 12, 026019. [Google Scholar] [CrossRef] [Green Version]
  14. Liu, C.A.; Chen, Z.X.; Shao, Y.; Chen, J.S.; Hasi, T.; Pan, H. Research Advances of SAR Remote Sensing for Agriculture Applications: A Review. J. Integr. Agric. 2019, 18, 506–525. [Google Scholar] [CrossRef] [Green Version]
  15. Steele-Dunne, S.C.; McNairn, H.; Monsivais-Huertero, A.; Judge, J.; Liu, P.W.; Papathanassiou, K. Radar Remote Sensing of Agricultural Canopies: A Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2249–2273. [Google Scholar] [CrossRef] [Green Version]
  16. Torres, R.; Snoeij, P.; Geudtner, D.; Bibby, D.; Davidson, M.; Attema, E.; Potin, P.; Rommen, B.; Floury, N.; Brown, M.; et al. GMES Sentinel-1 Mission. Remote Sens. Environ. 2012, 120, 9–24. [Google Scholar] [CrossRef]
  17. Thompson, A.A. Overview of the RADARSAT Constellation Mission. Can. J. Remote Sens. 2015, 41, 401–407. [Google Scholar] [CrossRef]
  18. Bach, K.; Kahabka, H.; Fernando, C.; Perez, J.C. The TerraSAR-X / PAZ Constellation: Post-Launch Update. In Proceedings of the EUSAR 2018, 12th European Conference on Synthetic Aperture Radar, Aachen, Germany, 4–7 June 2018; VDE: Aachen, Germany, 2018. [Google Scholar]
  19. Bargiel, D. A New Method for Crop Classification Combining Time Series of Radar Images and Crop Phenology Information. Remote Sens. Environ. 2017, 198, 369–383. [Google Scholar] [CrossRef]
  20. Busquier, M.; Lopez-Sanchez, J.M.; Mestre-Quereda, A.; Navarro, E.; González-Dugo, M.P.; Mateos, L. Exploring TanDEM-X Interferometric Products for Crop-Type Mapping. Remote Sens. 2020, 12, 1774. [Google Scholar] [CrossRef]
  21. Denize, J.; Hubert-Moy, L.; Pottier, E. Polarimetric SAR Time-Series for Identification of Winter Land Use. Sensors 2019, 19, 5574. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Dey, S.; Mandal, D.; Robertson, L.D.; Banerjee, B.; Kumar, V.; McNairn, H.; Bhattacharya, A.; Rao, Y.S. In-Season Crop Classification Using Elements of the Kennaugh Matrix Derived from Polarimetric RADARSAT-2 SAR Data. Int. J. Appl. Earth Obs. Geoinf. 2020, 88, 102059. [Google Scholar] [CrossRef]
  23. Sonobe, R. Parcel-Based Crop Classification Using Multi-Temporal TerraSAR-X Dual Polarimetric Data. Remote Sens. 2019, 11, 1148. [Google Scholar] [CrossRef] [Green Version]
  24. Valcarce-Diñeiro, R.; Arias-Pérez, B.; Lopez-Sanchez, J.M.; Sánchez, N. Multi-Temporal Dual- and Quad-Polarimetric Synthetic Aperture Radar Data for Crop-Type Mapping. Remote Sens. 2019, 11, 1518. [Google Scholar] [CrossRef] [Green Version]
  25. Zhao, H.; Chen, Z.; Jiang, H.; Jing, W.; Sun, L.; Feng, M. Evaluation of Three Deep Learning Models for Early Crop Classification Using Sentinel-1A Imagery Time Series—A Case Study in Zhanjiang, China. Remote Sens. 2019, 11, 2673. [Google Scholar] [CrossRef] [Green Version]
  26. Hoekman, D.H.; Vissers, M.A.M.; Tran, T.N. Unsupervised Full-Polarimetric SAR Data Segmentation as a Tool for Classification of Agricultural Areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 402–411. [Google Scholar] [CrossRef]
  27. Skriver, H.; Mattia, F.; Satalino, G.; Balenzano, A.; Pauwels, V.R.N.; Verhoest, N.E.C.; Davidson, M. Crop Classification Using Short-Revisit Multitemporal SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 423–431. [Google Scholar] [CrossRef]
  28. Skriver, H. Crop Classification by Multitemporal C- and L-Band Single- and Dual-Polarization and Fully Polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2138–2149. [Google Scholar] [CrossRef]
  29. Guindon, B.; Teillet, P.M.; Goodenough, D.G.; Palimaka, J.J.; Sieber, A. Evaluation of the Crop Classification Performance of X, L and C-Band Sar Imagery. Can. J. Remote Sens. 1984, 10, 4–16. [Google Scholar] [CrossRef]
  30. Chen, K.S.; Huang, W.P.; Tsay, D.H.; Amar, F. Classification of Multifrequency Polarimetric SAR Imagery Using a Dynamic Learning Neural Network. IEEE Trans. Geosci. Remote Sens. 1996, 34, 814–820. [Google Scholar] [CrossRef]
  31. Hoekman, D.H.; Vissers, M.A.M. A New Polarimetric Classification Approach Evaluated for Agricultural Crops. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2881–2889. [Google Scholar] [CrossRef] [Green Version]
  32. Li, Y.; Nie, J.; Chao, X. Do we really need deep CNN for plant diseases identification? Comput. Electron. Agric. 2020, 178, 105803. [Google Scholar] [CrossRef]
  33. Argüeso, D.; Picon, A.; Irusta, U.; Medela, A.; San-Emeterio, M.G.; Bereciartua, A.; Álvarez-Gila, A. Few-Shot Learning approach for plant disease classification using images taken in the field. Comput. Electron. Agric. 2020, 175, 105542. [Google Scholar] [CrossRef]
  34. Li, Y.; Chao, X. Semi-supervised few-shot learning approach for plant diseases recognition. Plant Methods 2021, 17, 68. [Google Scholar] [CrossRef]
  35. Busquier, M.; Lopez-Sanchez, J.M.; Bargiel, D. Added Value of Coherent Copolar Polarimetry at X-Band for Crop-Type Mapping. IEEE Geosci. Remote Sens. Lett. 2020, 17, 819–823. [Google Scholar] [CrossRef]
  36. Bargiel, D.; Herrmann, S. Multi-Temporal Land-Cover Classification of Agricultural Areas in Two European Regions with High Resolution Spotlight TerraSAR-X Data. Remote Sens. 2011, 3, 859–877. [Google Scholar] [CrossRef] [Green Version]
  37. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  38. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar] [CrossRef]
  39. Valero, S.; Arnaud, L.; Planells, M.; Ceschia, E.; Dedieu, G. Sentinel’s Classifier Fusion System for Seasonal Crop Mapping. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, 28 July–2 August 2019; pp. 6243–6246. [Google Scholar] [CrossRef]
  40. Stehman, S.V. Selecting and Interpreting Measures of Thematic Classification Accuracy. Remote Sens. Environ. 1997, 62, 77–89. [Google Scholar] [CrossRef]
  41. Luo, C.; Qi, B.; Liu, H.; Guo, D.; Lu, L.; Fu, Q.; Shao, Y. Using Time Series Sentinel-1 Images for Object-Oriented Crop Classification in Google Earth Engine. Remote Sens. 2021, 13, 561. [Google Scholar] [CrossRef]
  42. Ndikumana, E.; Minh, D.H.T.; Baghdadi, N.; Courault, D.; Hossard, L. Deep Recurrent Neural Network for Agricultural Classification Using Multitemporal SAR Sentinel-1 for Camargue, France. Remote Sens. 2018, 10, 1217. [Google Scholar] [CrossRef] [Green Version]
  43. Sonobe, R.; Tani, H.; Wang, X.; Kobayashi, N.; Shimamura, H. Random Forest Classification of Crop Type Using Multi-Temporal TerraSAR-X Dual-Polarimetric Data. Remote Sens. Lett. 2014, 5, 157–164. [Google Scholar] [CrossRef] [Green Version]
  44. Sonobe, R.; Tani, H.; Wang, X.; Kobayashi, N.; Shimamura, H. Discrimination of Crop Types with TerraSAR-X-Derived Information. Phys. Chem. Earth 2015, 83–84, 2–13. [Google Scholar] [CrossRef] [Green Version]
  45. Van Tricht, K.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic Use of Radar Sentinel-1 and Optical Sentinel-2 Imagery for Crop Mapping: A Case Study for Belgium. Remote Sens. 2018, 10, 1642. [Google Scholar] [CrossRef] [Green Version]
  46. Kyere, I.; Astor, T.; Graß, R.; Wachendorf, M. Agricultural Crop Discrimination in a Heterogeneous Low-Mountain Range Region Based on Multi-Temporal and Multi-Sensor Satellite Data. Comput. Electron. Agric. 2020, 179, 105864. [Google Scholar] [CrossRef]
  47. Jiao, X.; Kovacs, J.M.; Shang, J.; McNairn, H.; Walters, D.; Ma, B.; Geng, X. Object-Oriented Crop Mapping and Monitoring Using Multi-Temporal Polarimetric RADARSAT-2 Data. ISPRS J. Photogramm. Remote Sens. 2014, 96, 38–46. [Google Scholar] [CrossRef]
  48. Gella, G.W.; Bijker, W.; Belgiu, M. Mapping Crop Types in Complex Farming Areas Using SAR Imagery with Dynamic Time Warping. ISPRS J. Photogramm. Remote Sens. 2021, 175, 171–183. [Google Scholar] [CrossRef]
  49. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K. ichiro Assessing the Suitability of Data from Sentinel-1A and 2A for Crop Classification. GIScience Remote Sens. 2017, 54, 918–938. [Google Scholar] [CrossRef]
  50. Guo, G.; Shen, C.; Liu, Q.; Zhang, S.L.; Wang, C.; Chen, L.; Xu, Q.F.; Wang, Y.X.; Huo, W.J. Fermentation Quality and in Vitro Digestibility of First and Second Cut Alfalfa (Medicago Sativa L.) Silages Harvested at Three Stages of Maturity. Anim. Feed Sci. Technol. 2019, 257, 114274. [Google Scholar] [CrossRef]
  51. Chandel, A.K.; Khot, L.R.; Yu, L.X. Alfalfa (Medicago Sativa L.) Crop Vigor and Yield Characterization Using High-Resolution Aerial Multispectral and Thermal Infrared Imaging Technique. Comput. Electron. Agric. 2021, 182, 105999. [Google Scholar] [CrossRef]
  52. Hütt, C.; Koppe, W.; Miao, Y.; Bareth, G. Best Accuracy Land Use/Land Cover (LULC) Classification to Derive Crop Types Using Multitemporal, Multisensor, and Multi-Polarization SAR Satellite Images. Remote Sens. 2016, 8, 684. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Location map of the study area and ground-truth dataset.
Figure 2. Acquisition dates of the PAZ and Sentinel-1 products.
Figure 3. Classification methodology flowchart.
Figure 4. PA and UA bar charts with eight Sentinel-1 and eight PAZ images, both independently and with fusion. (a) Producer's accuracy. (b) User's accuracy.
Figure 5. PA and UA bar charts with 40 Sentinel-1 images and 8 PAZ images, both independently and with fusion. (a) Producer's accuracy. (b) User's accuracy.
Figure 6. OA (a) and kappa (b) bar charts comparing all five experiments.
Table 1. Ground-truth dataset for classification purposes.

Crop Type | Number of Fields | Area (ha) | Regime | Growing Cycle
Potato | 31 | 59.38 | Irrigated | April to September
Rape | 10 | 30.43 | Rainfed | September (long cycle) or February (short cycle) to June
Wasteland | 27 | 85.22 | None | None
Sunflower | 20 | 92.46 | Rainfed/Irrigated | April to September
Alfalfa | 4 | 2.70 | Irrigated | Pluriannual
Rye | 21 | 71.63 | Rainfed | September to June
Chickpea | 6 | 15.43 | Rainfed | February to June
Beet | 7 | 23.27 | Irrigated | February to October
Corn | 66 | 217.19 | Irrigated | April to November
Wheat | 64 | 176.94 | Rainfed | September to June
Fallow | 30 | 113.37 | None | None
Barley | 37 | 129.39 | Rainfed | September to June
Table 2. Details of the input SAR images.

Sensor | Centre Frequency | Polarization Channels | Incidence Angle | Spatial Resolution
PAZ | 9.65 GHz | HH, VV | 41 deg. | 2.66 m × 6.6 m
Sentinel-1 A & B | 5.405 GHz | VV, VH | 39 deg. | 2.98 m × 13.92 m
Table 3. Classification scores at pixel level with eight PAZ images.

Overall Accuracy & Kappa Score
Features | OA (%) | Kappa
PAZ | 59.8 | 0.54

Producer's Accuracy (%)
Crop | Potato | Rape | Wasteland | Sunflower | Alfalfa | Rye | Chickpea | Beet | Corn | Wheat | Fallow | Barley
PAZ | 49.9 | 77.4 | 73.8 | 59.0 | 58.7 | 30.2 | 42.0 | 94.4 | 79.6 | 61.0 | 18.5 | 60.6

User's Accuracy (%)
Crop | Potato | Rape | Wasteland | Sunflower | Alfalfa | Rye | Chickpea | Beet | Corn | Wheat | Fallow | Barley
PAZ | 44.5 | 56.9 | 53.9 | 49.0 | 10.7 | 28.3 | 23.4 | 72.3 | 84.7 | 74.6 | 40.3 | 59.1
Table 4. Classification scores at pixel level with eight Sentinel-1 images.

Overall Accuracy & Kappa Score
Features | OA (%) | Kappa
S1 | 59.7 | 0.54

Producer's Accuracy (%)
Crop | Potato | Rape | Wasteland | Sunflower | Alfalfa | Rye | Chickpea | Beet | Corn | Wheat | Fallow | Barley
S1 | 80.3 | 68.2 | 68.0 | 60.3 | 37.0 | 37.0 | 52.0 | 90.9 | 68.8 | 55.8 | 28.2 | 60.3

User's Accuracy (%)
Crop | Potato | Rape | Wasteland | Sunflower | Alfalfa | Rye | Chickpea | Beet | Corn | Wheat | Fallow | Barley
S1 | 58.4 | 69.8 | 45.3 | 57.0 | 6.6 | 33.7 | 30.7 | 69.9 | 84.5 | 66.4 | 48.3 | 59.3
Table 5. Classification scores at pixel level with 40 Sentinel-1 images.

Overall Accuracy & Kappa Score
Features | OA (%) | Kappa
S1 | 76.1 | 0.72

Producer's Accuracy (%)
Crop | Potato | Rape | Wasteland | Sunflower | Alfalfa | Rye | Chickpea | Beet | Corn | Wheat | Fallow | Barley
S1 | 94.8 | 86.3 | 75.5 | 82.3 | 53.7 | 36.2 | 72.2 | 97.6 | 92.9 | 83.2 | 29.9 | 72.4

User's Accuracy (%)
Crop | Potato | Rape | Wasteland | Sunflower | Alfalfa | Rye | Chickpea | Beet | Corn | Wheat | Fallow | Barley
S1 | 81.6 | 84.5 | 50.4 | 77.3 | 24.1 | 45.4 | 62.5 | 87.2 | 95.1 | 79.0 | 60.4 | 78.9
Table 6. Classification scores with eight Sentinel-1 images and eight PAZ images.

Overall Accuracy & Kappa Score
Features | OA (%) | Kappa
Merge | 70.2 | 0.66

Producer's Accuracy (%)
Crop | Potato | Rape | Wasteland | Sunflower | Alfalfa | Rye | Chickpea | Beet | Corn | Wheat | Fallow | Barley
Merge | 81.4 | 82.6 | 81.0 | 74.0 | 50.6 | 39.0 | 56.0 | 96.7 | 86.8 | 67.3 | 30.8 | 71.3

User's Accuracy (%)
Crop | Potato | Rape | Wasteland | Sunflower | Alfalfa | Rye | Chickpea | Beet | Corn | Wheat | Fallow | Barley
Merge | 69.5 | 83.9 | 57.2 | 64.3 | 20.4 | 39.1 | 47.8 | 81.9 | 88.0 | 77.6 | 60.5 | 65.5
Table 7. Classification scores with 40 Sentinel-1 images and 8 PAZ images.

Overall Accuracy & Kappa Score
Features | OA (%) | Kappa
Merge | 76.3 | 0.73

Producer's Accuracy (%)
Crop | Potato | Rape | Wasteland | Sunflower | Alfalfa | Rye | Chickpea | Beet | Corn | Wheat | Fallow | Barley
Merge | 91.5 | 88.5 | 83.3 | 83.6 | 64.2 | 35.0 | 66.9 | 98.1 | 93.7 | 77.7 | 37.0 | 72.2

User's Accuracy (%)
Crop | Potato | Rape | Wasteland | Sunflower | Alfalfa | Rye | Chickpea | Beet | Corn | Wheat | Fallow | Barley
Merge | 84.6 | 88.6 | 57.6 | 73.9 | 36.1 | 44.1 | 66.1 | 87.8 | 92.2 | 79.3 | 67.4 | 71.2
