Article

Classification of Crop Area Using PALSAR, Sentinel-1, and Planet Data for the NISAR Mission

Giovanni Anconitano, Seung-Bum Kim, Bruce Chapman, Jessica Martinez, Paul Siqueira and Nazzareno Pierdicca

1 Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, 00184 Rome, Italy
2 Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA
3 Department of Electrical and Computer Engineering, University of Massachusetts Amherst, Amherst, MA 01003, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(11), 1975; https://doi.org/10.3390/rs16111975
Submission received: 26 March 2024 / Revised: 8 May 2024 / Accepted: 28 May 2024 / Published: 30 May 2024
(This article belongs to the Special Issue NISAR Global Observations for Ecosystem Science and Applications)

Abstract

An algorithm for classifying crop areas using multi-frequency Synthetic Aperture Radar (SAR) and optical data is evaluated for the upcoming NASA ISRO SAR (NISAR) mission and its active crop area products. Two time-series of L-band ALOS-2 and C-band Sentinel-1A images over an agricultural region in the Southern United States are used as input, together with high-resolution Planet optical data. To overcome the at least one-year latency of existing land cover maps, training and validation sets of crop/non-crop polygons are derived from the contemporary Planet images. The classification results show that the 80% NISAR science accuracy requirement is achievable only with L-band HV input at a resolution of 100 m; HH polarized images do not meet this target. Spatial resolution is a key factor: 100 m is necessary to reach the 80% goal, while 10 m does not produce the desired accuracy. Unlike a previous study reporting that C-band performs better than L-band, we found the opposite here, suggesting that performance likely depends on the site of interest and the crop types. As an alternative to the SAR images, the Normalized Difference Vegetation Index (NDVI) from the Planet data is effective neither as an input to the classification algorithm nor as ground truth for training it: NDVI becomes saturated and temporally static, causing crop pixels to be misclassified as non-crop.

1. Introduction

Crop area classification is essential for yield monitoring and food security [1]. Since the characteristics of agricultural fields may change rapidly due to land use and natural events, cloud-penetrating Synthetic Aperture Radar (SAR) and high-resolution optical data have become very useful, especially when capturing the full growth cycle of crops is needed.
Optical images from current and past satellite missions have been widely used for monitoring crop properties [2,3,4,5,6]; the Sentinel-2 and Landsat missions have offered valuable contributions to this end [7,8,9,10]. Unfortunately, optical sensors are affected by weather and sunlight conditions, a significant limitation when frequent monitoring is required. Commonly distributed satellite-derived indices such as the Normalized Difference Vegetation Index (NDVI) are provided at too coarse a resolution for monitoring agricultural fields of relatively small size (~1–2 ha). Other sources of crop maps include the Cropland Data Layer (CDL) [11] and the European Space Agency (ESA) WorldCover [12], which are updated yearly with at least a one-year delay.
SAR data, such as those collected by the C-band Sentinel-1 and L-band ALOS-2 satellites, can image the Earth day and night, regardless of cloud cover, with a frequent revisit time (12 to 14 days) and high spatial resolution (up to 3 m for ALOS-2). Different polarizations are important for studying how the radar signal interacts with crop structure and land cover classes, and SAR polarimetry can reveal how different scattering mechanisms are affected by diverse crop types and vegetation covers [13,14]. Studies on the integration of SAR and optical data have also been carried out [15,16]. The upcoming NASA ISRO SAR (NISAR) mission will collect L-band and S-band data with a revisit time of 12 days (6 days when combining ascending and descending passes) [17]. NISAR science data will have high spatial resolution (up to 6 m) and will be freely distributed to address various scientific needs. The L-band NISAR cropland products will be released at 100 m resolution, with an overall accuracy requirement of 80% on a seasonal interval.
Previous studies have already presented results obtained by applying the NISAR science algorithm for crop area classification [18,19,20,21]. The algorithm is based on the Coefficient of Variation (CV) computed over a dense time-series of data. These studies emphasized the effectiveness of cross-polarization in distinguishing between crop and non-crop areas. Kraatz et al. used C-band Sentinel-1 and L-band ALOS-2 data to classify crop/non-crop areas over an agricultural region in Manitoba, Canada [20]; the overall accuracies were 83.5% for Sentinel-1 and 73.2% for ALOS-2, showing superior performance at C-band. Using Sentinel-1 data acquired over the contiguous United States, the overall accuracy exceeded 80% [18]. L-band Uninhabited Aerial Vehicle SAR (UAVSAR) airborne data were used to simulate NISAR products by modifying the spatial resolution and frequency band [19]; accuracies exceeded 80% except for the 10 m simulated data. These past studies raise the question of the optimal frequency, spatial resolution, and polarization, especially in the context of preparing an operational algorithm for the upcoming NISAR mission.
Another important element for achieving operational quality is the source of ground truth data for calibrating and validating the crop area classification algorithm. Although land cover products such as the Cropland Data Layer (CDL) are widely used as reference datasets, they are released with a delay of at least one year. With this latency, such data are not useful for operational validation at the seasonal interval of the NISAR products. In this study, we explored whether alternative, contemporary optical data could generate a reliable ground reference.
The objectives of this study are: (1) to continue verifying the performance of the NISAR science algorithm for classifying crop and non-crop areas at a new location; (2) to compare the algorithm's performance between L-band and C-band inputs and analyze how the results depend on frequency; (3) to introduce an operationally feasible ground reference for training and validating the algorithm in place of the CDL or other land cover products; (4) to evaluate the classification results for different polarizations (co-pol vs. cross-pol) and spatial resolutions (10 m vs. 100 m); (5) to investigate the capability of a high-resolution optical dataset to classify crop areas as an alternative to the radar input. These investigations will contribute to the development of the NISAR cropland products. The novelty of this study lies in: (1) the provision of contemporaneous ground truth prepared by analyzing Planet optical images for training and validation; (2) the finding that only L-band HV at 100 m resolution satisfied the accuracy goal; (3) the documentation that optical NDVI is not helpful as a ground truth dataset for crop area classification. Section 2 describes the datasets used for the analysis and the methodology followed to achieve the objectives of the study; Section 3 reports and discusses the results.

2. Materials and Methods

2.1. Datasets

Different data sources were used over an agricultural region in the Southern United States, spanning Arkansas and Missouri. Two time-series of SAR images at C-band and L-band were analyzed, in addition to a time-series of high-resolution optical data acquired by the PlanetScope satellite constellation. Five images per dataset were acquired between July 2019 and September 2019. For L-band, ALOS-2 PALSAR-2 images were collected in Stripmap Fine mode with a pixel spacing of 10 m and a revisit time of fourteen days. The original data were processed to simulate NISAR properties in terms of bandwidth, center frequency, and spatial resolution using the InSAR Scientific Computing Environment (ISCE) 3 software (version 0.12.0). Both HH and HV polarizations were analyzed. For C-band, five Sentinel-1A Ground Range Detected (GRD) products (VH, VV polarizations) were used, with a pixel spacing of 10 m × 10 m and a revisit time of twelve days (Relative Orbit 165). Five Planet surface reflectance products with a resolution of 3 m were also selected for the analysis. They are derived from the standard Analytic Product (Radiance), processed to top-of-atmosphere reflectance and then to bottom-of-atmosphere reflectance after atmospheric correction [22]. Table 1 reports the acquisition dates of each dataset.
Taking advantage of their fine resolution, the PlanetScope data were used to provide the ground truth: two sets of crop and non-crop polygons delineated through visual inspection of the images themselves. A total of 78 polygons were drawn to capture crop and non-crop pixels over the area of interest. For crop polygons, dominant land cover classes were selected by inspecting the 2019 CDL. The 2019 CDL product is deemed very accurate through extensive quality checks and provides a ground truth for the crop/non-crop areas. Although the CDL cannot be used for contemporaneous calibration/validation of the NISAR products, within this study's 2019 time-frame it is synchronized with the radar data. The CDL uses decision-tree supervised classification to generate a final product with high accuracy, covering many land cover classes with a wide variety of both crop and non-crop types. Released annually, it covers the Continental United States with a spatial resolution of 30 m × 30 m. Data acquired by the Landsat 8 OLI/TIRS sensor, the Disaster Monitoring Constellation (DMC) DEIMOS-1 and UK2, the ISRO ResourceSat-2 LISS-3, and the ESA Sentinel-2 instruments were processed [23]. Figure 1 shows the 2019 CDL product for the site of interest.
A crop type was considered dominant if it accounted for more than 2% of the pixels within the covered scene: Corn (3.8%), Cotton (46.8%), Soybeans (18.5%), Peanuts (5.2%), and Rice (4.2%). Rice was nevertheless excluded: rice fields are often flooded, and the high temporal variability of a water surface seen by radar can produce anomalous CV values and potential misclassification. Non-crop polygons were likewise not placed over water for the same reason. The polygon size was chosen so that each crop and non-crop region contained a statistically significant number of pixels when using images at 100 m resolution. This procedure resulted in 20 non-crop polygons (8 urban areas, 8 woody wetlands, 4 fallow) and 58 crop polygons (20 cotton, 16 soybeans, 12 peanuts, 10 corn). Of the 78 polygons, 39 (4 urban areas, 4 woody wetlands, 2 fallow, 10 cotton, 8 soybeans, 6 peanuts, 5 corn) were used as the training set for deriving the optimal CV threshold, and the other 39 (same composition) were used for validation. The crop/non-crop breakdown was approximately 70%/30% for the training set and 73%/27% for the validation set. Table 2 summarizes the characteristics of the training and validation sets of polygons.
Figure 2 shows the site of interest (orange), along with the PlanetScope image acquired on 7 July 2019 and the four sets of crop and non-crop polygons that were used as sources for training (crop: green, non-crop: light blue) and validation (crop: yellow, non-crop: red).

2.2. Dataset Preparation and Processing

ALOS-2 PALSAR-2 images were processed with the InSAR Scientific Computing Environment (ISCE) 3 software (version 0.12.0) to generate the NISAR L2 GCOV (Geocoded Polarimetric Covariance Matrix) products with a pixel spacing of 10 m, converted to gamma-naught [17]. The data were reprojected to WGS84 Lat/Lon, co-registered to the geographical grid of the first image in the time-series, and resampled to a coarser resolution (pixel spacing of approximately 100 m × 100 m). Data at two polarizations (HH, HV) and two resolutions (10 m and 100 m) were used as input, and their results were compared. For consistency with the L-band images, the Sentinel-1A data were also converted to gamma-naught. They were geocoded and terrain-corrected using the 30 m Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM), co-registered to the same geographical grid of the first image, and resampled to a pixel spacing of approximately 100 m × 100 m. The Normalized Difference Vegetation Index was derived from the 3 m-resolution PlanetScope surface reflectance products; those images were reprojected to WGS84 Lat/Lon, co-registered, and resampled to both 10 m and 100 m.
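As an illustration of the resampling step, a minimal multilooking sketch is given below (not the ISCE 3 processor; names are illustrative). It assumes a NumPy gamma-naught image in dB and averages over 10 × 10 blocks in linear power units, the usual practice for SAR backscatter:

```python
import numpy as np

def multilook(gamma0_db: np.ndarray, factor: int = 10) -> np.ndarray:
    """Block-average a gamma-naught image in linear power units.

    Averaging in linear units (not dB) is the standard way to multilook
    SAR backscatter; a 10 x 10 block turns 10 m pixels into ~100 m pixels.
    """
    linear = 10.0 ** (gamma0_db / 10.0)                # dB -> linear power
    rows, cols = linear.shape
    r, c = rows // factor * factor, cols // factor * factor
    blocks = linear[:r, :c].reshape(r // factor, factor, c // factor, factor)
    looked = blocks.mean(axis=(1, 3))                  # mean of each block
    return 10.0 * np.log10(looked)                     # back to dB
```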
The NDVI was computed as follows:

$$NDVI = \frac{NIR - RED}{NIR + RED} \quad (1)$$
where NIR and RED are the near-infrared and red bands. Non-significant pixels (i.e., clouds, shadow, light haze, heavy haze) were masked out, and a linear interpolation over time synchronized the NDVI images to the ALOS-2 acquisition days (Figure 3).
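For concreteness, a minimal sketch of the NDVI preparation follows (an illustration consistent with the description above, not the authors' code). All inputs are assumed to be NumPy arrays, acquisition days are day-of-year numbers, and the names are hypothetical:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Equation (1): NDVI from the near-infrared and red reflectance bands."""
    return (nir - red) / (nir + red)

def interp_to_radar_dates(ndvi_stack, optical_days, radar_days, cloud_mask):
    """Linearly interpolate a masked NDVI time series of shape (t, y, x)
    onto the ALOS-2 acquisition days, pixel by pixel.

    cloud_mask is True where a pixel is cloud/shadow/haze and must be
    ignored (the masking step described above)."""
    t, y, x = ndvi_stack.shape
    series = ndvi_stack.reshape(t, -1)
    valid = ~cloud_mask.reshape(t, -1)
    out = np.full((len(radar_days), y * x), np.nan)
    for p in range(series.shape[1]):
        good = valid[:, p]
        if good.sum() >= 2:  # need at least two clear observations
            out[:, p] = np.interp(radar_days, optical_days[good], series[good, p])
    return out.reshape(len(radar_days), y, x)
```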

2.3. Coefficient of Variation

The Coefficient of Variation (CV) was computed at the pixel level from the mean and standard deviation over time, an approach already used in past studies [18,19,20,21]. It relies on the fact that non-crop pixels do not exhibit high CV values; crop pixels, on the contrary, undergo significant changes over time due to agricultural practices and crop maturation, resulting in high CV values over the quarterly period of NISAR's crop area product. The Coefficient of Variation, computed for the three datasets involved in this study (including the NDVI), is given by the following:
$$CV = \frac{\sigma}{\mu} \quad (2)$$
where $\sigma$ is the standard deviation and $\mu$ is the mean. Following the approach reported in [24], an optimal CV threshold $CV_{thr}$ was derived to properly distinguish between crop and non-crop pixels: every pixel with CV greater than or equal to the threshold is classified as crop, and as non-crop otherwise:

$$CV \geq CV_{thr}: \text{crop}, \qquad CV < CV_{thr}: \text{non-crop} \quad (3)$$
$CV_{thr}$ was obtained by applying a Receiver Operating Characteristic (ROC) curve technique, as illustrated in Section 2.4. The threshold was used to generate a binary crop/non-crop map of the area of interest. The performance of the classification was then evaluated for the outputs generated with different input parameters (i.e., resolution, polarization, frequency) and datasets (i.e., SAR and optical).
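A minimal sketch of the per-pixel CV computation and thresholding (Equations (2) and (3)) is given below, assuming a NumPy time-series stack (time as the first axis) with NaNs where pixels were masked:

```python
import numpy as np

def cv_map(stack: np.ndarray) -> np.ndarray:
    """Equation (2): per-pixel Coefficient of Variation over time.

    stack has shape (t, y, x) and holds backscatter (or NDVI) values;
    statistics are computed along the time axis, ignoring NaNs."""
    mu = np.nanmean(stack, axis=0)
    sigma = np.nanstd(stack, axis=0)
    return sigma / mu

def classify(cv: np.ndarray, cv_thr: float) -> np.ndarray:
    """Equation (3): binary map, True (crop) where CV >= CV_thr."""
    return cv >= cv_thr
```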

2.4. Receiver Operating Characteristic Curve

As in previous studies [18,19,20,21], the optimal $CV_{thr}$ was obtained by deriving Youden's J-Index [25] through an iterative procedure based on a Receiver Operating Characteristic (ROC) curve. This approach defines $CV_{thr}$ as the point on the ROC curve where Youden's Index is maximized. The ROC curve was generated by plotting the True Positive Rate (TPR) as a function of the False Positive Rate (FPR), after creating a confusion matrix at $CV_{thr}$ increments of 0.01 (100 thresholds in total). The maximum of the J-Index corresponds to the maximum difference between TPR and FPR, i.e., the maximum vertical distance of the curve from the line of chance (1:1 line). Classification results are generally poor when the points of the curve lie close to the 1:1 line. Mathematically, TPR and FPR are given by (4) and (5):
$$TPR = \frac{TP}{TP + FN} = \text{sensitivity} \quad (4)$$

$$FPR = \frac{FP}{FP + TN} = 1 - \text{specificity} \quad (5)$$
where True Positive (TP), False Negative (FN), False Positive (FP), and True Negative (TN) are the elements of the confusion matrix. The maximum Youden's J-Index value was then obtained as follows:
$$J = \max(TPR - FPR) \quad (6)$$
The 39 training polygons were used to build the ROC curve and derive the optimal threshold $CV_{thr}$ via Youden's J-Index.
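The threshold search can be summarized with a short sketch (consistent with the procedure described above, though not the authors' implementation). Here cv_train holds the CV values of all pixels inside the training polygons and is_crop their boolean labels:

```python
import numpy as np

def optimal_threshold(cv_train: np.ndarray, is_crop: np.ndarray):
    """Sweep CV thresholds in steps of 0.01, build the confusion matrix
    at each step, and return the threshold maximizing Youden's J (Eq. (6))."""
    best_j, best_thr = -1.0, 0.0
    for thr in np.arange(0.01, 1.01, 0.01):   # 100 candidate thresholds
        pred = cv_train >= thr
        tp = np.sum(pred & is_crop)
        fn = np.sum(~pred & is_crop)
        fp = np.sum(pred & ~is_crop)
        tn = np.sum(~pred & ~is_crop)
        tpr = tp / (tp + fn)                  # Eq. (4), sensitivity
        fpr = fp / (fp + tn)                  # Eq. (5), 1 - specificity
        if tpr - fpr > best_j:
            best_j, best_thr = tpr - fpr, thr
    return best_thr, best_j
```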

2.5. Classification Accuracy Assessment

Classification results were evaluated by creating a confusion matrix between the binary crop/non-crop CV map, obtained by applying the optimal $CV_{thr}$ value, and the validation set of 39 polygons. Table 3 defines the confusion matrix.
The overall accuracy (OA) was computed as follows:
$$OA = \frac{TP + TN}{TP + FP + FN + TN} \times 100 \quad (7)$$
Figure 4 shows the classification flowchart.
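A corresponding sketch of the accuracy assessment over the validation-polygon pixels (Equation (7)) follows; pred_crop and true_crop are assumed to be boolean arrays of equal size:

```python
import numpy as np

def overall_accuracy(pred_crop: np.ndarray, true_crop: np.ndarray) -> float:
    """Equation (7): overall accuracy in percent, TP + TN over all pixels."""
    tp = np.sum(pred_crop & true_crop)
    tn = np.sum(~pred_crop & ~true_crop)
    return 100.0 * (tp + tn) / pred_crop.size
```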

3. Results and Discussion

In this section, the results are reported separately for the three input datasets: L-band ALOS-2 PALSAR-2 NISAR-simulated data (Section 3.1), C-band Sentinel-1A data (Section 3.2), and NDVI from PlanetScope data (Section 3.3).

3.1. L-Band ALOS-2 PALSAR-2 NISAR Simulated Data (L2 GCOV)

The algorithm was applied to CV computed from the L-band NISAR L2 GCOV products for two polarizations (HH, HV) and resolutions (10 m, 100 m). Figure 5 and Figure 6 show pixel-based CV values over the site of interest for HH and HV 10 m (Figure 5), and for HH and HV 100 m (Figure 6).
CV from the 10 m resolution images is affected by speckle noise (Figure 5), hindering the distinguishability of details; the consequent high variability is not suitable for crop area classification. As the resolution becomes coarser, speckle effects are mitigated, allowing a clearer delineation of agricultural field boundaries and of other land cover classes such as forests. The mean CV values averaged over the entire image were 0.51 and 0.44 for HH and HV at 10 m resolution, respectively; at 100 m, they were 0.28 and 0.24. Additionally, non-crop pixels (such as forests) are notably characterized by low CV values (typically below 0.2), as expected given their lower likelihood of significant variation over time compared to cropland. The ROC curve approach was then applied to the CV images to derive the optimal CV threshold values, using the training set of 39 crop/non-crop polygons. Figure 7 shows the ROC curves obtained for each polarization and resolution; the red points on the curves correspond to the optimal CV thresholds reported in Table 4. In the 10 m resolution cases, the curves lie very close to the 1:1 line, indicating very poor classification (Figure 7). At 100 m, threshold values of 0.14 and 0.15 were derived for HH and HV, respectively.
Figure 8 shows the 100 m binary crop/non-crop CV maps for the HH and HV inputs with the derived thresholds. HV polarization distinguishes crop from non-crop areas better than HH does. For example, most forest pixels (shown in purple in the center right) are classified as non-crop, unlike with HH, likely because cross-polarization is more sensitive to vegetation than co-polarization.
The set of 39 crop/non-crop validation polygons was used to create the confusion matrix defined in Table 3 and assess the overall accuracy of the classification according to Equation (7). The results are reported in Table 5. HH 10 m and HV 10 m do not meet the 80% NISAR science accuracy requirement, likely due to speckle noise effects. HH 100 m and HV 100 m improved the overall accuracy, but only HV exceeds 80%. These results differ from the findings reported in [19], where L-band airborne UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar) data (10 m) and simulated NISAR products generated from the UAVSAR data (10 m, 30 m, and 100 m) achieved 80% accuracy in all cases except the 10 m simulated data. An explanation may be the lower noise level of the UAVSAR and simulated NISAR data compared to the PALSAR-2 data. In summary, 100 m resolution and HV input are necessary to meet the NISAR science requirement.

3.2. C-Band Sentinel-1A Data

C-band performance was found to be not as good as that of L-band, in contrast to the results obtained in [20] with C-band Sentinel-1B and L-band ALOS-2 inputs. The classification performance most likely depends on the site of interest, dominant crop types, and crop structures. The algorithm was applied to the CV computed from C-band Sentinel-1A images for two polarizations (VH, VV) and resolutions (10 m, 100 m). Figure 9 and Figure 10 show pixel-based CV values over the site of interest for VH and VV at 10 m (Figure 9) and at 100 m (Figure 10).
The CV values averaged over the scene were 0.46, 0.46, 0.19, and 0.18 for VV 10 m, VH 10 m, VV 100 m, and VH 100 m, respectively. The VH 10 m and VV 10 m values were comparable with those obtained for the L-band 10 m images, while the C-band values at 100 m were smaller. The C-band 10 m images were affected by speckle noise, while the 100 m images improved the delineation of various features within the site of interest. L-band discriminated those features better, likely because of its greater ability to penetrate the canopy layers and interact with the soil, whereas C-band penetration is weaker. As also reported in [20], C-band data can saturate earlier in the crop season, especially as crop fields approach the mature stage, resulting in lower CV values during this period.
Classification results obtained with the 10 m images were not reliable: the CV threshold values were very low, causing entire images to be classified as crop (Table 6). This is confirmed by the ROC curves in Figure 11 lying very close to the 1:1 line. For VV 100 m and VH 100 m, the threshold values are 0.15 and 0.12, respectively. Unlike at L-band, $CV_{thr}$ for C-band increases as the resolution becomes coarser. Also, the VH 100 m threshold (0.12) is lower than the L-band HV 100 m threshold (0.15), reflecting the generally larger CV values at L-band.
Co-polarized VV delineates non-crop pixels related to forests better than VH does (Figure 12). The overall accuracy at 10 m resolution is poor, as explained earlier (Table 7). The overall accuracy of 73% exactly matches the crop percentage obtained when counting all pixels within the validation polygons that correspond to crop: as also reported in [19], when $CV_{thr}$ is 0 or 1 and all pixels are classified as either crop or non-crop, the overall accuracy equals the crop/non-crop breakdown. Unlike at L-band, the results for VV 100 m were slightly better than for VH 100 m, because of VH saturation at C-band.

3.3. NDVI from PlanetScope Data

Optical NDVI was used to derive CV and a crop/non-crop classification with two motivations: (1) to explore whether optical or radar input is superior for classification; (2) to evaluate whether an NDVI-based crop/non-crop map can be generated automatically and accurately enough to serve in lieu of ground truth. The algorithm was applied to the NDVI CV computed from the PlanetScope images after downsampling to 10 m and 100 m. NDVI was linearly interpolated over time to the ALOS-2 acquisition dates, and non-significant pixels (i.e., clouds, shadow, light haze, heavy haze) were masked out.
The CV values in Figure 13 are lower than those derived from the L- and C-band radar data. When vegetation approaches a mature growth stage, NDVI is expected to saturate, showing little or no change over time; consequently, it cannot produce high CV values for crop fields. This contrasts with the radar observations, in which the scattered power can keep increasing with the growing canopy and its structure even after NDVI has saturated optically. The saturation of NDVI affects the overall accuracy assessment, since many pixels were classified as False Positive or False Negative.
The ROC curves at 10 m and 100 m are very similar to each other (Figure 14), suggesting that the resolution is not a key factor impacting the results obtained for NDVI. This finding is different from what was observed for L-band and C-band data, where spatial averaging reduced speckle noise.
From Figure 15, many cropland pixels are misclassified as non-crop, due to the low threshold values (Table 8) and the inability of the NDVI CV to produce high values for crop fields once crops reach the mature stage. The results in Table 9 show that the NDVI-based classification does not meet the 80% NISAR science requirement. This reflects the high percentage of crop pixels incorrectly classified as non-crop (False Negative): 22% and 34% for 10 m and 100 m, respectively. The percentage of non-crop pixels incorrectly classified as crop (False Positive) was also not negligible: 10% at 10 m and 8% at 100 m. The results also indicate that the NDVI-based crop/non-crop map cannot be used as ground truth, since its overall accuracy was not high enough (68% and 58% for 10 m and 100 m, respectively).
Table 8. Optimal CV thresholds for NDVI.

| NDVI | $CV_{thr}$ |
|---|---|
| 10 m | 0.07 |
| 100 m | 0.08 |

Figure 15. NDVI-based crop/non-crop map.

Table 9. Overall accuracy for the NDVI crop map.

| NDVI | OA |
|---|---|
| 10 m | 68% |
| 100 m | 58% |

4. Conclusions

The algorithm that uses the Coefficient of Variation (CV) for classifying crop and non-crop areas was validated in preparation for the L-band NISAR crop area products. The NISAR science requirement for the cropland products specifies that the overall accuracy should exceed 80% on a seasonal time interval. The algorithm was applied to the time-series of the L-band NISAR simulated products (L2 GCOV) generated from ALOS-2 PALSAR-2 images. Its performance was compared with those obtained from C-band Sentinel-1A data and from the NDVI derived from high-resolution PlanetScope optical data. The classification results were analyzed for different polarizations (HH, HV at L-band; VH, VV at C-band) and spatial resolutions (10 m, 100 m).
As the first novel feature of this paper, crop and non-crop polygons were delineated manually from a Planet image to train the classification threshold. The polygons were divided into training and validation sets and used, respectively, for deriving the optimal CV threshold through the ROC curve approach and for validating the classification accuracy. Previous studies [18,19,26] instead adopted the CDL as ground truth; a shortcoming of that approach is that the CDL is a yearly map released with a one-year delay. We also tested the Planet-based NDVI, both to compare its classification performance with the radar products and to explore its potential as ground truth. The results revealed that NDVI was not sufficiently effective, since the optical data saturate when the crops are mature. NDVI itself (rather than its CV) was not useful either, in that forests and crop fields were both classified as crop due to their high NDVI values.
Second, our analysis showed that L-band PALSAR-2 data at HV polarization and 100 m resolution were the only case in which the 80% NISAR science requirement was satisfied, with an overall accuracy of 86% and an optimal CV threshold of 0.15. In comparison, L-band HH at 10 m and 100 m resolution did not meet this condition (60% and 76% overall accuracy, respectively), likely because cross-polarization varies more over time than co-polarization does. At 10 m, the speckle noise was too strong for the images to be useful. The optimal CV threshold was found to depend on the spatial resolution, increasing as the resolution became finer.
Third, we found that a coarse resolution (i.e., 100 m) was necessary to meet the 80% requirement. In comparison, a past study [19] using L-band airborne UAVSAR data and NISAR products simulated from them found that the UAVSAR data met the 80% accuracy requirement at 10 m resolution and that the simulated data were effective at 30 m and 100 m. The difference from our work is likely due to the lower noise level of the UAVSAR and simulated data with respect to the PALSAR-2 data, which were strongly affected by speckle at 10 m but not at 100 m.
Fourth, the classification results with C-band Sentinel-1A data at 10 m resolution were unreliable owing to very low CV threshold values, with pixels almost entirely classified as crop. The 100 m products did not meet the 80% requirement either. Unlike the previous study [20], we found that L-band performed better than C-band, suggesting that classification results likely depend on the site of interest, dominant crop types, and crop structures.
Lastly, to demonstrate that the 39 validation polygons provide a reliable estimate of the classification accuracy, we followed the approach proposed in [27] to compute the confidence interval of the estimated accuracy. Given the 80% overall accuracy requirement, for each case study investigated in the paper we derived the limits of the interval at the 95% confidence level. The best result was obtained for L-band HV 100 m: the accuracy estimate of 86% was bounded between 83% and 89%. Notably, the lower limit for L-band HV 100 m (83%) was greater than the upper limit derived for every other analyzed case. This confirms that the validation set was large enough to support our conclusion, i.e., that L-band HV polarized data at 100 m resolution are necessary to satisfy the 80% requirement.
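For reference, a minimal sketch of the normal-approximation interval commonly used for classification accuracy follows (the exact formulation in [27] may differ slightly); the pixel count n below is illustrative, since the paper does not state the number of validation pixels explicitly:

```python
import math

def accuracy_ci(p_hat: float, n: int, z: float = 1.96):
    """Normal-approximation confidence interval for an accuracy p_hat
    estimated from n independent validation pixels (z = 1.96 gives 95%)."""
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# Illustrative: an accuracy of 0.86 from ~2000 pixels gives roughly
# (0.845, 0.875); the interval shrinks as n grows.
print(accuracy_ci(0.86, 2000))
```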

Author Contributions

Conceptualization: G.A., S.-B.K., B.C. and P.S.; methodology: G.A., S.-B.K., J.M., B.C. and P.S.; software: G.A. and J.M.; validation: G.A., S.-B.K., B.C., J.M., P.S. and N.P.; formal analysis: G.A.; investigation: G.A. and S.-B.K.; resources: S.-B.K. and B.C.; data curation: G.A.; writing—original draft preparation: G.A.; writing—review and editing: G.A., S.-B.K., B.C., J.M., P.S. and N.P.; visualization: G.A.; supervision: S.-B.K.; project administration: S.-B.K.; funding acquisition: S.-B.K. and B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the NISAR calibration/validation project funded by the National Aeronautics and Space Administration, grant number 80NM0018F0591.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the Japan Aerospace Exploration Agency (JAXA) for providing the ALOS-2 data and the NASA Jet Propulsion Laboratory for generating the NISAR simulated images. M. Lavalle provided valuable discussions about the topic being investigated in the paper. X. Huang provided the NISAR simulated products generated from ALOS-2 PALSAR-2.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Fritz, S.; See, L.; Bayas, J.C.L.; Waldner, F.; Jacques, D.; Becker-Reshef, I.; Whitcraft, A.; Baruth, B.; Bonifacio, R.; Crutchfield, J.; et al. A comparison of global agricultural monitoring systems and current gaps. Agric. Syst. 2019, 168, 258–272. [Google Scholar] [CrossRef]
  2. Griffiths, P.; Nendel, C.; Hostert, P. Intra-annual reflectance composites from Sentinel-2 and Landsat for national-scale crop and land cover mapping. Remote Sens. Environ. 2019, 220, 135–151. [Google Scholar] [CrossRef]
  3. Liu, L.; Xiao, X.; Qin, Y.; Wang, J.; Xu, X.; Hu, Y.; Qiao, Z. Mapping cropping intensity in China using time series Landsat and Sentinel-2 images and Google Earth Engine. Remote Sens. Environ. 2020, 239, 111624. [Google Scholar] [CrossRef]
  4. Chen, Y.; Lu, D.; Moran, E.; Batistella, M.; Dutra, L.V.; Del’Arco Sanches, I.; Bicudo da Silva, R.F.; Huang, J.; Barreto Luiz, A.J.; Falcão de Oliveira, M.A. Mapping croplands, cropping patterns, and crop types using MODIS time-series data. Int. J. Appl. Earth Obs. Geoinf. 2018, 69, 133–147. [Google Scholar] [CrossRef]
  5. Yan, S.; Yao, X.; Zhu, D.; Liu, D.; Zhang, L.; Yu, G.; Gao, B.; Yang, J.; Yun, W. Large-scale crop mapping from multi-source optical satellite imageries using machine learning with discrete grids. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102485. [Google Scholar] [CrossRef]
  6. Gumma, M.K.; Tummala, K.; Dixit, S.; Collivignarelli, F.; Holecz, F.; Kolli, R.N.; Whitbread, A.M. Crop type identification and spatial mapping using Sentinel-2 satellite data with focus on field-level information. Geocarto Int. 2022, 37, 1833–1849. [Google Scholar] [CrossRef]
  7. Shen, Y.; Zhang, X.; Yang, Z. Mapping corn and soybean phenometrics at field scales over the United States Corn Belt by fusing time series of Landsat 8 and Sentinel-2 data with VIIRS data. ISPRS J. Photogramm. Remote Sens. 2022, 186, 55–69. [Google Scholar] [CrossRef]
  8. Ibrahim, E.S.; Rufin, P.; Nill, L.; Kamali, B.; Nendel, C.; Hostert, P. Mapping Crop Types and Cropping Systems in Nigeria with Sentinel-2 Imagery. Remote Sens. 2021, 13, 3523. [Google Scholar] [CrossRef]
  9. Tariq, A.; Yan, J.; Gagnon, A.S.; Riaz Khan, M.; Mumtaz, F. Mapping of cropland, cropping patterns and crop types by combining optical remote sensing images with decision tree classifier and random forest. Geo-Spat. Inf. Sci. 2023, 26, 302–320. [Google Scholar] [CrossRef]
  10. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K. Mapping crop cover using multi-temporal Landsat 8 OLI imagery. Int. J. Remote Sens. 2017, 38, 4348–4361. [Google Scholar] [CrossRef]
  11. Boryan, C.; Yang, Z.; Mueller, R.; Craig, M. Monitoring US agriculture: The US Department of Agriculture, National Agricultural Statistics Service, Cropland Data Layer Program. Geocarto Int. 2011, 26, 341–358. [Google Scholar] [CrossRef]
  12. ESA WorldCover. Available online: https://esa-worldcover.org/en (accessed on 1 December 2023).
  13. Huang, X.; Reba, M.; Coffin, A.; Runkle, B.R.K.; Huang, Y.; Chapman, B.; Ziniti, B.; Skakun, S.; Kraatz, S.; Siqueira, P.; et al. Cropland mapping with L-band UAVSAR and development of NISAR products. Remote Sens. Environ. 2021, 253, 112180. [Google Scholar] [CrossRef]
  14. Wang, H.; Magagi, R.; Goita, K. Polarimetric Decomposition for Monitoring Crop Growth Status. IEEE Geosci. Remote Sens. Lett. 2016, 13, 870–874. [Google Scholar] [CrossRef]
  15. Van Tricht, K.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic Use of Radar Sentinel-1 and Optical Sentinel-2 Imagery for Crop Mapping: A Case Study for Belgium. Remote Sens. 2018, 10, 1642. [Google Scholar] [CrossRef]
  16. Niantang, L.; Zhao, Q.; Williams, R.; Barrett, B. Enhanced Crop Classification through Integrated Optical and SAR Data: A Deep Learning Approach for Multi-Source Image Fusion. Int. J. Remote Sens. 2023, 1–29. [Google Scholar] [CrossRef]
  17. NASA-ISRO SAR (NISAR) Mission Science Users’ Handbook. Available online: https://nisar.jpl.nasa.gov/documents/26/NISAR_FINAL_9-6-19.pdf (accessed on 10 July 2023).
  18. Rose, S.; Kraatz, S.; Kellndorfer, J.; Cosh, M.H.; Torbick, N.; Huang, X.; Siqueira, P. Evaluating NISAR’s cropland mapping algorithm over the conterminous United States using Sentinel-1 data. Remote Sens. Environ. 2021, 260, 112472. [Google Scholar] [CrossRef]
  19. Kraatz, S.; Rose, S.; Cosh, M.H.; Torbick, N.; Huang, X.; Siqueira, P. Performance evaluation of UAVSAR and simulated NISAR data for crop/noncrop classification over Stoneville, MS. Earth Space Sci. 2021, 8, e2020EA001363. [Google Scholar] [CrossRef] [PubMed]
  20. Kraatz, S.; Torbick, N.; Jiao, X.; Huang, X.; Robertson, L.D.; Davidson, A.; McNairn, H.; Cosh, M.H.; Siqueira, P. Comparison between Dense L-Band and C-Band Synthetic Aperture Radar (SAR) Time Series for Crop Area Mapping over a NISAR Calibration-Validation Site. Agronomy 2021, 11, 273. [Google Scholar] [CrossRef]
  21. Kraatz, S.; Lamb, B.T.; Hively, W.D.; Jennewein, J.S.; Gao, F.; Cosh, M.H.; Siqueira, P. Comparing NISAR (Using Sentinel-1), USDA/NASS CDL, and Ground Truth Crop/Non-Crop Areas in an Urban Agricultural Region. Sensors 2023, 23, 8595. [Google Scholar] [CrossRef]
  22. PlanetScope Product Specifications. Available online: https://www.planet.com (accessed on 10 October 2023).
  23. USDA National Agricultural Statistics Service, Research and Science, CropScape and Cropland Data Layers—FAQs. Available online: https://www.nass.usda.gov/Research_and_Science/Cropland/sarsfaqs2.php#Section3_2.0 (accessed on 1 December 2023).
  24. Whelen, T.; Siqueira, P. Coefficient of variation for use in crop area classification across multiple climates. Int. J. Appl. Earth Obs. Geoinf. 2018, 67, 114–122. [Google Scholar] [CrossRef]
  25. Youden, W.J. Index for rating diagnostic tests. Cancer 1950, 3, 32–35. [Google Scholar] [CrossRef] [PubMed]
  26. Kraatz, S.; Siqueira, P.; Kellndorfer, J.; Torbick, N.; Huang, X.; Cosh, M. Evaluating the robustness of NISAR’s cropland product to time of observation, observing mode, and dithering. Earth Space Sci. 2022, 9, e2022EA002366. [Google Scholar] [CrossRef]
  27. Richards, J.A.; Jia, X. Remote Sensing Digital Image Analysis: An Introduction, 4th ed.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 303–307. [Google Scholar]
Figure 1. The 2019 CDL product for the site of interest.
Figure 2. Site of interest (orange). The PlanetScope image collected on 7 July 2019 is also displayed, along with the four sets of crop and non-crop polygons used for training (crop: green, non-crop: light blue) and validation (crop: yellow, non-crop: red).
Figure 3. 10 m resolution NDVI map for 18 July 2019. White pixels correspond to shadow, light haze, heavy haze, and cloud pixels.
Figure 4. Block diagram summarizing the steps followed in the analysis.
Figure 5. 10 m L-band Coefficient of Variation map for HH (left) and HV (right).
Figure 6. 100 m L-band Coefficient of Variation map for HH (left) and HV (right).
Figure 7. L-band PALSAR-2 ROC curves. The blue points indicate the TPR and FPR obtained for each tested CV threshold value; the red points correspond to the optimal CV thresholds; the red line is the curve's line of chance (1:1 line).
Figure 8. 100 m L-band crop/non-crop map obtained for HH (left) and HV (right).
Figure 9. 10 m C-band Coefficient of Variation map for VH (left) and VV (right).
Figure 10. 100 m C-band Coefficient of Variation map for VH (left) and VV (right).
Figure 11. C-band Sentinel-1A ROC curves. The blue points indicate the TPR and FPR obtained for each tested CV threshold value; the red points correspond to the optimal CV thresholds; the red line is the curve's line of chance (1:1 line).
Figure 12. 100 m C-band crop/non-crop map obtained for VH (left) and VV (right).
Figure 13. NDVI Coefficient of Variation maps at 10 m (left) and 100 m (right).
Figure 14. ROC curves obtained for NDVI. The blue points indicate the TPR and FPR obtained for each tested CV threshold value; the red points correspond to the optimal CV thresholds; the red line is the curve's line of chance (1:1 line).
Table 1. ALOS-2, Sentinel-1A, and PlanetScope acquisitions.

| ALOS-2 | Sentinel-1A | PlanetScope |
|---|---|---|
| 18 July 2019 | 23 July 2019 | 7 July 2019 |
| 1 August 2019 | 4 August 2019 | 23 July 2019 |
| 15 August 2019 | 16 August 2019 | 28 July 2019 |
| 29 August 2019 | 28 August 2019 | 15 August 2019 |
| 12 September 2019 | 9 September 2019 | 14 September 2019 |

Table 2. Characteristics of the 39 training and 39 validation polygons.

| | 20 Non-Crop | 58 Crop |
|---|---|---|
| Training | 10 (4 urban, 4 woody wetlands, 2 fallow) | 29 (10 cotton, 8 soybeans, 6 peanuts, 5 corn) |
| Validation | 10 (4 urban, 4 woody wetlands, 2 fallow) | 29 (10 cotton, 8 soybeans, 6 peanuts, 5 corn) |

Table 3. Definition of the confusion matrix.

| Reference (Polygons) \ Model (CV) | Non-crop | Crop |
|---|---|---|
| Non-crop | TN | FP |
| Crop | FN | TP |

Table 4. Optimal CV thresholds at L-band.

| L-Band | $CV_{thr}$ |
|---|---|
| HH 10 m | 0.37 |
| HV 10 m | 0.30 |
| HH 100 m | 0.14 |
| HV 100 m | 0.15 |

Table 5. Overall accuracy for the L-band crop map.

| L-Band | OA |
|---|---|
| HH 10 m | 60% |
| HV 10 m | 63% |
| HH 100 m | 76% |
| HV 100 m | 86% |

Table 6. Optimal CV thresholds at C-band.

| C-Band | $CV_{thr}$ |
|---|---|
| VV 10 m | 0.08 |
| VH 10 m | 0.03 |
| VV 100 m | 0.15 |
| VH 100 m | 0.12 |

Table 7. Overall accuracy for the C-band crop map.

| C-Band | OA |
|---|---|
| VV 10 m | 73% |
| VH 10 m | 73% |
| VV 100 m | 69% |
| VH 100 m | 66% |