Article

Modifying NISAR’s Cropland Area Algorithm to Map Cropland Extent Globally

1 Earth System Science Center, University of Alabama in Huntsville, Huntsville, AL 35805, USA
2 Earth Science Branch, NASA Marshall Space Flight Center, Huntsville, AL 35812, USA
3 NASA Postdoctoral Program, NASA Marshall Space Flight Center, Huntsville, AL 35805, USA
4 Geophysical Institute, University of Alaska Fairbanks, Fairbanks, AK 99775, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(6), 1094; https://doi.org/10.3390/rs17061094
Submission received: 22 October 2024 / Revised: 5 December 2024 / Accepted: 11 March 2025 / Published: 20 March 2025
(This article belongs to the Special Issue NISAR Global Observations for Ecosystem Science and Applications)

Abstract

Synthetic aperture radar (SAR) is emerging as a valuable dataset for monitoring crops globally. Unlike optical remote sensing, SAR can provide earth observations regardless of solar illumination or atmospheric conditions. Several methods that utilize SAR to identify agriculture rely on computationally expensive algorithms, such as machine learning, that require extensive training datasets, complex data pre-processing, or specialized software. The coefficient of variation (CV) method has been successful in identifying agricultural activity using several SAR sensors and is the basis of the Cropland Area algorithm for the upcoming NASA-Indian Space Research Organization (ISRO) SAR mission. The CV method derives a unique threshold for an area of interest (AOI) by optimizing Youden’s J-Statistic, where pixels above the threshold are classified as crop and pixels below are classified as non-crop, producing a binary crop/non-crop classification. Training this optimization process requires at least some existing cropland classification as an external reference dataset. In this paper, general CV thresholds are derived that can discriminate active agriculture (i.e., fields in use) from other land cover types without requiring a cropland reference dataset. We demonstrate the validity of our approach for three crop types: corn/soybean, wheat, and rice. Using data from the European Space Agency’s (ESA) Sentinel-1, a C-band SAR instrument, nine global AOIs, three for each crop type, were evaluated. Optimal thresholds were calculated and averaged for two AOIs per crop type for 2018–2022, resulting in average thresholds of 0.53, 0.31, and 0.26 for corn/soybean, wheat, and rice regions, respectively. The crop type average thresholds were then applied to an additional AOI of the same crop type, where they achieved 92%, 84%, and 83% accuracy for corn/soybean, wheat, and rice, respectively, when compared to ESA’s 2021 land cover product, WorldCover. The results of this study indicate that the use of the CV, along with the average crop type thresholds presented, is a fast, simple, and reliable technique to detect active agriculture in areas where either corn/soybean, wheat, or rice is the dominant crop type and where outdated or no reference datasets exist.

1. Introduction

Accurate and timely global agriculture datasets are essential tools in monitoring food security [1], economic development [2], global health [3], and environmental integrity [4]. Such global datasets rely heavily on satellite remote sensing data that routinely observe Earth’s land surface. Agricultural activity is frequently monitored using optical remote sensing instruments, which typically observe the land surface in the visible through thermal infrared portions (400 nm–1 mm) of the electromagnetic spectrum at a high revisit frequency (~1–3 days). These instruments are used to monitor near-real-time crop conditions, develop crop classification models, and practice precision agriculture [5]. Optical remote sensing has also been used for early detection of crop stress from drought conditions [6,7,8] or other environmental factors (e.g., diseases) [9,10] and to assess the impacts of severe weather on mature crops [11,12]. Though optical remote sensing has long proven successful in monitoring agriculture globally, it is dependent upon atmospheric conditions and solar illumination.
In recent years, the access to and availability of active remote sensing instruments, such as synthetic aperture radar (SAR), have increased. SAR can provide routine collections of the land surface regardless of atmospheric conditions or time of day [13]. SAR is advantageous for agricultural applications, especially in regions more susceptible to frequent cloud cover (e.g., tropical latitudes) during their primary growing seasons. SAR is also sensitive to changes in the land surface’s dielectric constant (e.g., soil moisture) and structural properties (e.g., crop volume or height) [14], making SAR useful in monitoring soil moisture, crop health, and crop type [15,16,17,18]. SAR has also proven useful in complementing optical remote sensing datasets in identifying crops damaged by severe weather events that bring damaging winds and hail [19,20].
Several methods utilizing SAR have been successful in identifying agriculture, such as using Interferometric SAR (InSAR) coherence pairs [21,22], polarimetric SAR [18,23], or SAR backscatter time series [24,25]. However, many of these methods rely on computationally expensive algorithms, such as machine learning, that require extensive training datasets, complex data pre-processing, or specialized software. Whelen and Siqueira introduced and tested a methodology for mapping active agricultural areas (i.e., fields in use) across the United States using the coefficient of variation (CV) to produce binary crop/non-crop classifications [26]. The CV method separates crop pixels from non-crop pixels using simple statistical measurements, requiring only a SAR time series and an agricultural reference or land use/land cover dataset. CV is a unitless measurement of the variability within a stack of data as it relates to the mean [27]. The CV is defined as
$$\mathrm{CV} = \frac{\text{std}(X)}{\text{mean}(X)},$$
where X refers to the input variable, in this case, the SAR backscatter values per pixel location. The CV values and ranges are dependent on the scale of the input variable. Although the CV is unbounded, the CV of the SAR backscatter has a typical range of 0–1. The CV of a pixel will be closer to 1 if there is greater variation in the backscatter across the time series and closer to 0 if there is less variation in the backscatter. Because SAR is sensitive to the structural properties of vegetation (e.g., crop volume or height), SAR backscatter values will change as the crops grow throughout the season. The highest backscatter values occur when the crops are near maturity and the lowest values after harvest (Figure 1). The wide distribution of backscatter values in agricultural areas across the growing season results in high CV values, while forested or urban areas have relatively low CV values as their backscatter values do not change much throughout the year. The binary crop/non-crop classifications can be created using a thresholding approach, where pixels with CV values above or equal to the CV threshold are classified as crop, and pixels with CV values below the threshold are classified as non-crop.
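For illustration, the per-pixel CV of a backscatter time series and the resulting binary classification can be computed with a few lines of NumPy. The following sketch assumes the time series has already been assembled as a (time, rows, cols) array of gamma0 power values; the array name, the input file, and the example threshold of 0.5 are illustrative assumptions rather than values prescribed by the algorithm.

```python
import numpy as np

def coefficient_of_variation(stack):
    """Per-pixel CV of a SAR backscatter time series.

    stack: 3-D array of gamma0 backscatter in power units,
           shaped (time, rows, cols).
    """
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    # Guard against division by zero over constant or empty pixels
    return np.divide(std, mean, out=np.full_like(mean, np.nan), where=mean > 0)

# Illustrative use: pixels at or above the threshold are classified as crop
# stack = np.load("sentinel1_vh_stack.npy")          # hypothetical input file
# crop_mask = coefficient_of_variation(stack) >= 0.5
```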
Whelen and Siqueira tested several thresholding approaches to create crop/non-crop classifications using L-band SAR data from the Japanese Aerospace Exploration Agency’s (JAXA) Phased Array L-band SAR (PALSAR) sensor onboard the Advanced Land Observing Satellite (ALOS-1), using images collected between 2007 and 2010 [26]. The CV of the SAR time series was calculated for 11 Areas of Interest (AOIs) across the United States. Crop/non-crop classifications were conducted in two phases. In the first phase, two unique optimal thresholds were generated for each AOI using (1) the receiver operating characteristic (ROC) curve, where the threshold was derived from the greatest distance from the line of no discrimination, and (2) the best separation of histograms between the crop and non-crop pixels. The ROC curve displays the performance of all tested thresholds, and the optimal threshold is determined by the point on the curve that is furthest from the line of no discrimination [28]. For the histogram method, the ranges of CV values for both crop and non-crop pixels were plotted in a histogram, and the point of least overlap was chosen as the optimal threshold. The ROC curve and histogram methods generated similar thresholds ranging from 0.35 to 0.60. In the second phase, a generic threshold of 0.5 was applied to every AOI. The first phase achieved accuracies between 66% and 81%, while the second phase achieved accuracies between 62% and 84% across the AOIs. The overall methodology outlined by Whelen and Siqueira demonstrated that using SAR and the CV method can classify agriculture with over 80% accuracy, proving to be a viable tool to monitor agricultural areas globally [26].
Rose et al. expanded the use of the CV method outlined by Whelen and Siqueira [26] by using routinely collected 12-day C-band SAR data from the European Space Agency’s (ESA’s) Sentinel-1 [29]. Rose et al. used acquisitions collected during 2017 to evaluate 100 1° by 1° (~111 km by 111 km) tiles across the United States [29]. The CV was calculated for each tile, and unique CV thresholds were generated by optimizing Youden’s J-statistic (YJS) [30]. The J-statistic is a summary measurement of the ROC curve that represents the distance from the line of no discrimination to the curve [31]. This method is equivalent to the ROC curve method employed by Whelen and Siqueira but without the computational expense of generating curves for each tile. The J-statistic optimal thresholds demonstrated a geographic dependence, with thresholds ranging from 0.2 on the coasts to 0.6 in the Midwest. These optimal thresholds achieved an average accuracy of 86.8%. In addition, a generic threshold of 0.5 was evaluated on every tile and achieved an average accuracy of 81.5%. The performance of the 0.5 threshold also varied by geographic region: accuracy in the West improved from 73.5% using the optimal thresholds to 76.1%, while accuracy in the Midwest and South decreased from 90.2% and 90.0% using the optimal thresholds to 80.2% and 89.1%, respectively. The observed relationship between geographic region, optimal thresholds, and performance suggests that while a threshold of 0.5 may be sufficient to achieve 80% accuracy across the U.S., region-specific thresholds may be more reliable.
Although the CV method is less computationally expensive than many other existing algorithms, generating unique thresholds for specific areas can be time-consuming. To avoid generating unique thresholds, Kraatz et al. [32] used the thresholds already generated by Rose et al. [29]. Kraatz et al. calculated the CV using Sentinel-1 data from 2017 to 2021 covering a ~2670 ha agricultural area in Maryland and selected a CV threshold of 0.25 [32]. Rose et al. reported CV thresholds between 0.2 and 0.3 for nearby sites in North Carolina and Pennsylvania [29]. The resulting crop/non-crop CV classification was compared to detailed ground truth information from U.S. Department of Agriculture (USDA) research fields. The USDA’s Cropland Data Layer (CDL) [33], which reports >80% accuracy across the U.S., was also compared to the ground truth data as a benchmark for the performance of the CV method. Compared to the ground truth information, the CV method crop/non-crop classification and the CDL achieved accuracies, respectively, of 96% and 77% for cropland alone and 94% and 86% for overall accuracy. Kraatz et al. demonstrated that using the CV method combined with a region-specific threshold can give accuracies close to or better than those of datasets derived from optical imagery, such as the CDL [32].
The CV method combined with a thresholding approach has proven to be very useful in mapping cropland area extent in SAR and was chosen as the basis for the NASA-Indian Space Research Organization (ISRO) SAR (NISAR) mission’s Level-2 Cropland Area algorithm. The NISAR mission (2025-) will collect 12-day, global, L-band imagery, and one of its Level-2 Science Requirements is to classify croplands with a minimum of 80% accuracy [34]. After one year of calibration, NISAR will produce crop area maps at 1 ha resolution every three months with four classes: active crop area, newly active crop area, inactive crop area, and not crop. Although the NISAR crop area maps are expected to be very useful, further development in mapping cropland area extent using SAR is necessary to ensure that global crop/non-crop classifications remain available regardless of NISAR’s duration [35].
The NISAR Cropland Area algorithm, which was designed to identify active agricultural areas and is less computationally intensive than other proposed algorithms and classifying schemes [18,21,22,23,24,25], is the base algorithm used in this study. This study further streamlines the CV threshold optimization process, or bypasses optimization altogether, by identifying general thresholds for areas with certain dominant crop types, a capability not currently addressed by the NISAR Cropland Area algorithm. Rose et al. [29] found that optimal CV thresholds in the United States were regionally dependent, where nearby tiles shared similar crop types, climate, and management practices. Kraatz et al. [32] demonstrated that an optimal CV threshold from one location can be applied to another nearby location within the same region of the United States with high accuracy. These studies suggest that regionally specific thresholds may provide an acceptable trade-off between accuracy and computational intensity. Such thresholds may be general enough to be applied to large areas without needing to derive optimal thresholds for each SAR time series and still provide greater overall accuracy than a single country-wide threshold, such as 0.5 [29]. To extend the concept of general CV thresholds to a global scale, we propose shifting the focus from regions defined by geographic proximity to regions defined by shared crop types.
This study compares the optimal thresholds derived across several years in areas with the same dominant crop type (corn/soybean, wheat, or rice) in different geographic regions worldwide with likely varied agricultural practices (e.g., irrigation and tilling) and climates (e.g., precipitation and temperatures). This study demonstrates that areas that grow the same crops have similar optimal CV thresholds for identifying active agriculture. A general crop type threshold is derived from these optimal thresholds and then evaluated on a new area with the same dominant crop type. These general thresholds are only intended to generate binary crop/non-crop classifications, not to classify crop types. The proper application of these thresholds requires the user to know what crop types are generally grown in the area. This technique is aimed at end-users and stakeholders, such as the USDA Foreign Agricultural Service (FAS), who have some awareness of the region’s agricultural variety but would benefit from a rapidly derived, low-computational-cost crop/non-crop classification. This binary classification can estimate the active cropland area from year to year to help identify changes due to conflict and environmental factors, especially in regions where on-the-ground information is limited.

2. Materials and Methods

2.1. Data

2.1.1. Sentinel-1 Data and Processing

ESA’s Sentinel-1 constellation was the primary data source for this study. Sentinel-1A and -1B each host a C-band SAR instrument with a phase-preserving dual-polarization system. The current orbit gives each Sentinel-1 satellite a twelve-day global repeat cycle, with the primary collection mode over land being the Interferometric Wide (IW) swath mode, yielding frames of approximately 250 km by 250 km. Since Sentinel-1B was decommissioned in mid-2022, only data from Sentinel-1A were used for this research. Sentinel-1 acquisitions were obtained and processed through the Alaska Satellite Facility (ASF), one of NASA’s twelve Distributed Active Archive Centers (DAACs). Level-1 Ground Range Detected High-Resolution (GRD-HD) Sentinel-1 products were processed to 30 m resolution Radiometrically Terrain Corrected (RTC) products using ASF’s On-Demand HyP3 (Hybrid Pluggable Processing Pipeline) processing platform [36,37]. HyP3 uses the workflow in Small [38] and the Copernicus GLO-30 digital elevation model (DEM) to generate its RTC output. An output resolution of 30 m was chosen because it matches the native GLO-30 DEM resolution and has a smaller relative file size than the 10 or 20 m products, reducing processing time. RTC products were used because the geometric and radiometric distortions inherent to side-looking instruments have been removed. RTCs provide an analysis-ready data (ARD) product that is easy to incorporate into time-series workflows [39]. Each image was speckle filtered using an Enhanced Lee filter, coregistered to the Copernicus GLO-30 DEM, and delivered as a gamma0 image in power units. Gamma0 radiometry was selected as it is the backscatter coefficient normalized by the illuminated area in the look direction of the satellite, with the local topography taken into account [38]. Gamma0 is preferred due to its reduced sensitivity to topographic effects, and the power scale was used as it is optimal for statistical analysis. Only the cross-polarized (‘VH’) channel from each acquisition was used in the analysis, as cross-polarized data are more effective than co-polarized data for crop monitoring in both C-band SAR data [40,41] and L-band SAR data [42,43].
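The RTC processing described above can be requested programmatically through ASF’s HyP3 service. The sketch below uses the hyp3_sdk Python client with a placeholder granule name and job name; the keyword arguments reflect the RTC options described in this section (30 m resolution, gamma0 radiometry, power scale, speckle filtering) but are an assumption on our part and should be verified against the current hyp3_sdk documentation.

```python
from hyp3_sdk import HyP3, Batch

# Authenticate with NASA Earthdata Login credentials
hyp3 = HyP3(prompt=True)

# Placeholder Sentinel-1 GRD-HD granule names for one frame location
granules = [
    "S1A_IW_GRDH_1SDV_20210101TXXXXXX_EXAMPLE_GRANULE",
]

jobs = Batch()
for granule in granules:
    jobs += hyp3.submit_rtc_job(
        granule,
        name="cv-cropland-rtc",
        resolution=30,         # 30 m output, matching the GLO-30 DEM
        radiometry="gamma0",   # gamma0 backscatter coefficient
        scale="power",         # power units for statistical analysis
        speckle_filter=True,   # apply the speckle filter option
    )

jobs = hyp3.watch(jobs)        # wait for processing to finish
jobs.download_files()          # download the RTC GeoTIFFs locally
```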

2.1.2. Reference Agricultural Data

Two globally derived land cover classification datasets were used as references: ESA’s Copernicus Global Land Service: Land Cover (CGLS-LC) [44,45] and ESA’s WorldCover [46,47]. These two datasets provide different land classes, but both include the same basic land cover classes of open water, urban, forests, and cropland. The CGLS-LC 100 m product was trained on high-resolution optical datasets from Google Maps and Bing and 100 m multi-spectral data from the vegetation instrument on ESA’s PROBA satellite (PROBA-V). PROBA-V was launched in 2013 as a precursor to Sentinel-3 and acquired an image of Earth’s surface every two days. The CGLS-LC achieved 80.6 ± 0.4% accuracy and was used as reference data for the 2018 and 2019 growing seasons, as it was only produced for 2015–2019. ESA produced a higher resolution (10 m) global land cover map, WorldCover, that uses a similar algorithm to the CGLS-LC but incorporates Sentinel-2 multi-spectral data and Sentinel-1 C-band SAR data instead of PROBA-V data. WorldCover was generated for only two years, 2020 and 2021. The 2021 WorldCover incorporated more training data than 2020 and used an improved classification algorithm, improving overall accuracy from 74.4% in 2020 to 76.7%. Neither the CGLS-LC nor WorldCover was produced for 2022, so the 2021 WorldCover, which had the best overall accuracy, was used as the reference data for 2022. These two products were chosen as reference datasets due to their comparable algorithms, high accuracy, and relatively high spatial resolution compared to other global land cover maps. The goal of this research is to generate binary crop/non-crop classifications, not to classify pixels by crop type. In addition, country-specific reference datasets that include detailed crop type classifications on par with the CDL are not publicly available for every country and therefore were not used in this study.

2.2. Methodology

2.2.1. Areas of Interest

The selected study areas were based on the global production of corn/soybean, wheat, or rice. These crop types were chosen because they represent the top three crops produced worldwide [48]. Corn and soybean are grouped because they are often grown in an annual rotation [49,50,51,52]. For the Midwest U.S. AOI, the cropland area was split between corn and soybean, with ~5% more corn than soybean. For the AOIs outside of the U.S., detailed crop type reference datasets can be difficult to find, especially ones produced annually. The USDA FAS’s Commodity Explorer was used to select countries that are major producers of either corn/soybean, wheat, or rice, and specific AOIs within one country or across two neighboring countries were selected using USDA FAS’s Crop Production Maps [53]. These maps show the states/provinces where the cultivation of a certain crop is concentrated, but only for select commodity crops. AOIs were selected with minimal overlap between other major crop types, but some crop diversity is unavoidable.
The AOIs were selected based on geographic diversity, percentage of world crop production, and variation in the percentage of the region dedicated to cropland (Figure 2). At least 25% of the land area was devoted to agriculture in each AOI. The corn/soybean AOIs selected were the Midwest United States, covering portions of several states (Nebraska, Iowa, Kansas, and Missouri); Ukraine, covering several provinces in Central Ukraine (Cherkasy, Poltava, and Kirovohrad); and Brazil, covering the central portion of one state (Mato Grosso). The wheat AOIs selected were France/Belgium, covering several regions of Eastern France (Champagne-Ardenne, Bourgogne, and Île-de-France) and most of Belgium; Morocco, covering most of the non-desert land; and East China, covering portions of multiple provinces (Henan, Shandong, Jiangsu, and Anhui). The rice AOIs were Myanmar, covering areas in Central Myanmar (Magway, Mandalay, and Bago); Thailand, covering several provinces in Central Thailand (Nakhon Ratchasima, Phetchabun, and Chaiyaphum); and East India/Bangladesh, covering portions of several states (West Bengal, Jharkhand, and Bihar) and most of Bangladesh.
For each crop type, these AOIs were then split into two “base case” AOIs and one “test case” AOI. The base case AOIs are so called because they are used as the basis for deriving generic thresholds for identifying active agriculture in areas growing each crop type. The test case AOI was used to assess the generic threshold’s performance in a new area. Five years of data (2018–2022) were used for each base case, while data from only one year, 2021, were used for each test case. The year 2021 was selected because the 2021 WorldCover reference dataset had the overall highest accuracy for validation.

2.2.2. Data Preparation

Five adjacent Sentinel-1 frames were selected for each of the AOIs, covering approximately 190,000 km2 (Figure 2). Sentinel-1’s 12-day repeat cycle resulted in approximately 30 images for each frame location per year, or approximately 150 images per year per AOI. ASF’s OpenSARLab (OSL) was used to download and analyze the Sentinel-1 RTC images for each AOI [54]. OSL is a cloud-based JupyterHub computing environment designed for SAR workflows using Jupyter Notebooks. All the images for each frame were stacked in chronological order from January to December for each year. Within the stack, no-data values were often caused by misalignment between images (coregistration) or by SAR distortions over rugged terrain. If one or more no-data values were present at a pixel location, that pixel was masked and not considered when calculating the CV. Removing pixels with no-data values ensures that the CV is calculated uniformly across space and time. After masking the no-data pixels, the CV was calculated on the SAR stack. The CV layers for the five frames in the AOI for a given year were then merged. Merging the SAR imagery by date first would have obviated some of the above data preparation steps, but due to computing limitations within OSL, the CV layers of the five frames were merged instead (Figure 3).
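A minimal sketch of this stacking and masking step is shown below, assuming the RTC VH GeoTIFFs for one frame location and year have been downloaded locally and share a common grid; the directory layout, file naming, and function names are illustrative assumptions.

```python
import glob
import numpy as np
import rasterio

def load_masked_stack(vh_paths):
    """Stack co-registered VH RTC GeoTIFFs for one frame location in
    chronological order, masking any pixel location that has a no-data
    value in one or more acquisitions."""
    layers = []
    for path in sorted(vh_paths):            # January-December, assuming dated file names
        with rasterio.open(path) as src:
            band = src.read(1, masked=True).astype(np.float32)
            layers.append(band.filled(np.nan))
    stack = np.stack(layers)

    # A single no-data value anywhere in the time series masks that pixel
    invalid = np.isnan(stack).any(axis=0)
    stack[:, invalid] = np.nan
    return stack

# Hypothetical layout: one directory of VH GeoTIFFs per frame per year
# stack = load_masked_stack(glob.glob("frame_042/2021/*_VH.tif"))
# cv = coefficient_of_variation(stack)       # helper sketched in the Introduction
```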
All agricultural reference datasets were reprojected, cropped, and resampled using the nearest neighbor method to match the coordinate reference system, extent, and 30 m resolution of the merged CV layer for each AOI. The CGLS-LC and WorldCover products provide several discrete land cover classes including one cropland class. Because the objective of this research is to produce crop/non-crop classifications, all the reference datasets were reclassified into binary crop/non-crop maps. The reference datasets were also reclassified into binary water/no-water maps to exclude water pixels in determining the optimal thresholds. Water is excluded because the surface roughness of water varies considerably across the year (e.g., smooth, calm water vs. rough, choppy water), resulting in high CV values and frequent misclassification of water as crop.
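As a concrete example of this reclassification, the snippet below reduces an already reprojected, cropped, and resampled reference map to binary crop and water masks. The class codes shown (40 for cropland and 80 for permanent water bodies in both WorldCover and the CGLS-LC discrete classification) should be confirmed against the product documentation, and the file path and function name are illustrative.

```python
import rasterio

# Class codes used for the reclassification; confirm against the
# documentation for the land cover product and year in use.
CROPLAND_CLASS = 40
WATER_CLASS = 80

def reclassify_reference(path):
    """Reduce a reference land cover map (already reprojected, cropped, and
    resampled to the 30 m CV grid) to binary crop and water masks."""
    with rasterio.open(path) as src:
        classes = src.read(1)
    crop_ref = classes == CROPLAND_CLASS     # True = crop, False = non-crop
    water_ref = classes == WATER_CLASS       # used to exclude water pixels
    return crop_ref, water_ref

# crop_ref, water_ref = reclassify_reference("worldcover_2021_aoi.tif")  # hypothetical path
```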

2.2.3. Deriving Crop Type Thresholds

The crop type thresholds are the result of averaging the identified optimal CV thresholds that best discriminate crop pixels from non-crop pixels in each of the base cases. In this study, crop type refers to the primary crop grown in the base cases according to the USDA FAS Crop Production Maps. To account for temporal and geographic variation in the CV, optimal thresholds for the two base cases associated with each crop type were generated for each calendar year from 2018 to 2022 (Figure 3). The optimal threshold for each base case was determined by the J-Statistic.
For each year in a base case, a random 0.01% sample (~20,000 pixels) of the valid, non-water pixels in the CV layer was selected to determine the optimal threshold. In previous testing, a 0.01% sample generated the same thresholds as using all the pixels within an AOI while reducing computation time. Thresholds 0.00 through 0.99 with a step size of 0.01 were tested on each CV sample. For each tested threshold, pixels with a CV value greater than or equal to the threshold were classified as crop, and the remaining pixels were classified as non-crop. These crop/non-crop classifications were compared to the corresponding reference data, CGLS-LC for 2018–2019 and WorldCover for 2020–2022. The J-Statistic was calculated from the number of true positives, false positives, true negatives, and false negatives. The J-Statistic is defined as
$$\mathrm{YJS} = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}} + \frac{\text{true negatives}}{\text{true negatives} + \text{false positives}} - 1$$
A positive classification is a pixel that is classified as crop after a threshold is applied to the CV layer. A negative classification is a pixel that is classified as non-crop after a threshold is applied to the CV layer. The true/false qualifier refers to the pixel’s comparison to the reference data. A pixel’s classification is true if it matches the reference data and false if it does not (Table 1). The threshold with the highest J-Statistic was determined to be the optimal classification threshold for the given base case and year.
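A hedged sketch of this threshold optimization step is given below. The sampling fraction, the 0.00–0.99 sweep in 0.01 steps, and the crop/non-crop rule follow the description above, while the function and variable names are ours; the sketch also assumes that both crop and non-crop pixels are present in the sample so that the J-Statistic denominators are non-zero.

```python
import numpy as np

def optimal_threshold(cv_layer, crop_ref, water_ref, sample_frac=0.0001, seed=0):
    """Sweep CV thresholds from 0.00 to 0.99 (0.01 steps) on a random
    0.01% sample of valid, non-water pixels and return the threshold
    that maximizes Youden's J-Statistic against the binary crop reference."""
    rng = np.random.default_rng(seed)

    usable = np.isfinite(cv_layer) & ~water_ref          # valid, non-water pixels
    cv = cv_layer[usable]
    truth = crop_ref[usable]

    sample_size = max(1, int(cv.size * sample_frac))     # ~20,000 pixels per AOI
    idx = rng.choice(cv.size, size=sample_size, replace=False)
    cv, truth = cv[idx], truth[idx]

    best_threshold, best_j = 0.0, -np.inf
    for t in np.arange(0.0, 1.0, 0.01):
        pred = cv >= t                                   # CV >= threshold -> crop
        tp = np.sum(pred & truth)
        fn = np.sum(~pred & truth)
        tn = np.sum(~pred & ~truth)
        fp = np.sum(pred & ~truth)
        j = tp / (tp + fn) + tn / (tn + fp) - 1          # Youden's J-Statistic
        if j > best_j:
            best_threshold, best_j = t, j
    return best_threshold, best_j

# Illustrative use for one base case and year:
# threshold, j = optimal_threshold(cv_layer, crop_ref, water_ref)
```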
Two base case AOIs per crop type across five years yielded ten optimal thresholds per crop type. These ten base case optimal thresholds were then averaged to generate the crop type threshold. This new crop type threshold was then applied to the test case’s CV layer for the corresponding crop type. As stated above, a CV pixel greater than or equal to the crop type threshold is classified as a crop. The resulting binary crop/non-crop classification identifies areas of active agriculture and does not classify pixels as a certain crop type. For each test case, the average crop type threshold was used, thereby removing the need to have a cropland extent reference dataset.

2.2.4. Performance Metrics

Accuracy, sensitivity, and specificity were calculated to assess the base case optimal CV thresholds derived for 2018–2022, as well as each crop type threshold applied to the test case. Accuracy is the percentage of classifications that match the reference data (true positives and true negatives). Sensitivity (1 − false negative rate) is the percentage of crop pixels in the reference data that were classified as crop. Specificity (1 − false positive rate) is the percentage of non-crop pixels in the reference data that were classified as non-crop. Sensitivity and specificity are inversely related as the classification threshold changes, meaning that as sensitivity increases, specificity will decrease.
$$\text{Accuracy} = \frac{\text{true positives} + \text{true negatives}}{\text{true positives} + \text{false positives} + \text{true negatives} + \text{false negatives}}$$
$$\text{Sensitivity} = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}}$$
$$\text{Specificity} = \frac{\text{true negatives}}{\text{true negatives} + \text{false positives}}$$
While accuracy can provide a general understanding of a classification’s performance, comparing sensitivity and specificity can reveal if there is an imbalance in the classification’s performance on crop (positive class) or non-crop (negative class). For example, two cases could both have an 80% accuracy, but one case could have a sensitivity of 90% and a specificity of 70%, while the other case could be 70% and 90%, respectively. The first case, with a sensitivity of 90%, would be better at classifying crops, while the second case, with a higher specificity, would be better at classifying non-crop. This imbalance is particularly applicable to cases where the ratio of crop to non-crop pixels in the reference data is weighted one way or the other because misclassified pixels will carry a greater weight in the underrepresented class. In cases where cropland accounts for less than one-third of the area, sensitivity and specificity should be considered when evaluating the performance.
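These metrics can be computed directly from the binary classification and the reference masks, as in the short sketch below; the variable names and the example use of the corn/soybean threshold are illustrative.

```python
import numpy as np

def classification_metrics(pred_crop, ref_crop):
    """Accuracy, sensitivity, and specificity of a binary crop/non-crop
    classification (boolean array) against a binary reference map."""
    tp = np.sum(pred_crop & ref_crop)
    fp = np.sum(pred_crop & ~ref_crop)
    tn = np.sum(~pred_crop & ~ref_crop)
    fn = np.sum(~pred_crop & ref_crop)

    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)    # share of reference crop pixels labeled crop
    specificity = tn / (tn + fp)    # share of reference non-crop pixels labeled non-crop
    return accuracy, sensitivity, specificity

# Illustrative use on a test case, excluding water pixels:
# pred = cv_layer >= 0.53                      # corn/soybean crop type threshold
# mask = np.isfinite(cv_layer) & ~water_ref
# acc, sen, spe = classification_metrics(pred[mask], crop_ref[mask])
```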

3. Results

All six base cases approached or exceeded 80% accuracy on average using the optimal CV thresholds. Most of the six base cases also routinely approached or exceeded 80% in sensitivity and specificity (Table 2). The Ukraine (corn/soybean) and France/Belgium (wheat) base cases performed the best overall, with 90% accuracy, sensitivity, and specificity in 2021 and 2022. The Midwest U.S. (corn/soybean) and Morocco (wheat) base cases had several metrics below 80% in 2018–2019. The Midwest U.S. performed poorly in 2018 and 2019, with accuracy and sensitivity below 80%. The accuracy and sensitivity of the Midwest U.S. improved in 2020 by approximately 6% while keeping specificity close to 80%. Morocco only surpassed 80% accuracy in 2021, mainly driven by correctly classifying non-crop pixels, with specificity near 90% and sensitivity near 70%. Morocco’s highest sensitivity was ~74% in 2022 and its lowest ~60% in 2019, but its specificity surpassed 80% for four of the five years.
The two base case AOIs for each of the three crop types produced similar CV threshold ranges, with an average threshold of 0.53, 0.31, and 0.26 for corn/soybean, wheat, and rice, respectively (Figure 4). The CV thresholds for wheat and rice overlapped between 0.28 and 0.3 for some years, but most wheat thresholds were above 0.3, while most rice thresholds were below 0.3. The corn/soybean CV thresholds were significantly higher than wheat or rice, with most thresholds above 0.5 and the lowest threshold being 0.48.
The crop type thresholds for corn/soybean (0.53), wheat (0.31), and rice (0.26), applied to the corresponding test cases, performed as well as or better than the optimized thresholds applied to the base cases. The three test cases all had above 80% accuracy, with Central Brazil (corn/soybean) surpassing 90% (Table 3). All test cases also achieved over 80% sensitivity. China (wheat) and East India/Bangladesh (rice) had a higher sensitivity than specificity, while Central Brazil had a higher specificity. For each of the three crop types, the lowest base case thresholds and accuracies occurred in 2018–2019, while the highest thresholds and accuracies occurred in 2020–2022 (Table 2).

4. Discussion

While the CV method can produce binary crop/non-crop classifications with >80% accuracy, it has some inherent limitations due to the properties of SAR. Because of the side-looking nature of SAR, geometric distortions can result in brighter (foreshortening) or darker (radar shadow) backscatter values than expected in areas with rugged terrain [55]. While most of these distortions are removed during the RTC process, some no data values, DEM artifacts (i.e., mountainous areas), and geolocation errors can still be present in the processed RTC image, contributing to misclassification of pixels. This resulted in many false positive pixels in areas of mountainous terrain in Myanmar and Thailand, where the land cover was classified as tree cover, as well as in Morocco, where the land cover was classified as bare/sparse vegetation (Figure 5). Airports, multi-lane highways, and other large areas of smooth asphalt or concrete can also result in false positives. These features usually reflect the SAR signal away from the satellite, resulting in darker areas, but the orientation of the road in relation to the sensor, as well as the interference of buildings, may cause brightening [56]. In addition, slight errors in geolocation between SAR images of the same area can result in the edges of fields being misclassified.
If fields are bordered by windbreak trees, slight geometric offsets of the SAR data can cause a pixel location to “shift” between cropland and forest. This shifting can cause false positives if a tree pixel shifts into a cropland area and false negatives if the opposite occurs. This was particularly an issue in Myanmar and Thailand, where small groupings of trees next to or bordering fields were most prevalent. These false positives, combined with the false positives in mountainous regions, led to tree cover contributing significantly to the false positive counts for Myanmar and Thailand. If not masked, water would also result in false positives due to continuous changes in surface roughness from precipitation and wind [57,58]. This study removed the need for cropland reference data by providing crop type thresholds for corn/soybean, wheat, and rice regions; however, a permanent water reference dataset would still be needed to produce the most accurate crop/non-crop classification map. Water might still be misclassified if flooding occurred during one of the SAR acquisitions, as the resulting high CV values would trigger false positives for flooded pixels that are not included in the permanent water mask.
The CV method is most applicable to crops that grow substantial foliage throughout the growing season and are harvested to bare ground. Land cover classes like grasslands and pastures mimic this growth and harvest cycle but are difficult to classify using SAR [29,59]. The grassland land cover class accounted for the highest percentage of misclassified pixels and produced the majority of the false positives (Figure 5). Crops that retain foliage for most of the year (i.e., tree crops) are also very challenging to differentiate using the CV method and C-band SAR (~5.6 cm wavelength), as the signal does not penetrate beyond the tree canopy. Fruit tree crops (e.g., almonds, apples, and olives) produce a backscatter signal similar to that of a tree without fruit [60]. Agricultural practices such as tilling and irrigation can also trigger a change in backscatter and thereby increase the CV [61,62,63,64].
Each base case was impacted by the differences between the CGLS-LC (2018–2019) and WorldCover (2020–2022) reference layers. The coarser resolution of the CGLS-LC (100 m) appears to have led to an overestimation of cropland extent in the CGLS-LC as cropland area dropped by ~1.85 million hectares between 2019 and 2020 across all base cases (Figure 6). In addition, every base case had its highest accuracy when using the higher resolution WorldCover (10 m) dataset between 2020 and 2022. The overall accuracy increased by an average of 4% between 2019 and 2020 when transitioning from the CGLS-LC to WorldCover.
To further investigate the impact of the reference data on the overall accuracy, the Midwest U.S. corn/soybean base case was also evaluated against the CDL and compared to its performance against the CGLS-LC and WorldCover. The CDL is a large-scale agriculture map produced by the USDA National Agricultural Statistics Service using satellite imagery and agricultural ground reference data. The CDL is generated annually over the entire continental U.S. and reports ~82% accuracy [33]. Using the CDL as the reference, the generated thresholds were within 0.03 of those generated with the CGLS-LC and WorldCover. When comparing accuracy, the CDL resulted in an increase of ~5% for 2018 and ~6% for 2019 compared to the CGLS-LC. For 2020–2022, WorldCover and the CDL achieved accuracies above 80% and within less than 1% of each other. The similarities in accuracy between WorldCover and the CDL suggest that WorldCover is comparable to the CDL, at least for one AOI in the U.S., while the CGLS-LC underperforms. Such differences in accuracy between the two reference datasets highlight the need for a yearly, high-resolution, accurate global land cover dataset, as the CGLS-LC and WorldCover were not produced after 2019 and 2021, respectively.
Cropland extent appears to be underestimated in every base case for 2018–2019, most likely because it is being compared to the CGLS-LC (Figure 6). Because cropland extent is overestimated in the CGLS-LC, it is difficult to discern whether what is classified as cropland in 2018–2019 is under- or overestimated in comparison to the true extent. For most base cases, the 2019 classified cropland extent would be considered an overestimation compared to the 2020 WorldCover cropland extent. With WorldCover as the reference layer in 2020–2022, the over/underpredicted area was reduced, and the accuracy rose to over 80%.
However, Myanmar’s cropland extent was overpredicted by ~2 million hectares in 2020 (WorldCover) compared to ~50,000 hectares in 2019 (CGLS-LC). Most of the false positives occurred in either the Central Dry Zone (CDZ), a semi-arid steppe dominated by grasslands, or the mountain ranges bordering the CDZ to the east and west. Cropland extent in Morocco, another semi-arid region with rugged terrain, was also overpredicted in 2020 and 2022 compared to WorldCover. This suggests that the CV method may be more likely to overpredict cropland extent in dry, mountainous locations due to a greater prevalence of grasslands and terrain-induced SAR distortions. In the tropical southern region of Myanmar, where the cropland is surrounded by trees, more false negatives than false positives occurred. Similarly, in Ukraine, the Midwest U.S., France/Belgium, and Thailand, areas of trees bordering fields often triggered false negatives, as the trees shadowed the cropland and blocked the SAR signal. However, these areas still performed better overall, with France/Belgium achieving almost 90% accuracy and Ukraine achieving 88% accuracy. This suggests that although trees can trigger false negatives, the CV method performs better overall in temperate or tropical climates, where cropland is bordered by trees, than in arid climates, where it is bordered by grasslands or bare ground.
The Central Brazil case, used to test the corn/soybean threshold, was the best-performing test case, with ~92% accuracy (Figure 7). The Central Brazil test case outperformed both the Ukraine and Midwest U.S. base cases’ performance metrics (Table 2 and Table 3). Central Brazil was an ideal case for the SAR-based CV method, as most of the farms across the AOI are large (over 1000 hectares on average) and located in flat regions surrounded by forests. Corn/soybean pixels produce a relatively high CV (>0.5) compared to wheat and rice, while forest pixels produce very low CV values (<0.1). The phenology of the corn/soybean changes drastically throughout the growing season compared to the surrounding forest, allowing for a greater degree of separation between the two land cover classes when using the CV approach. Most of the false positives (~95%) were classified as grassland in the reference data, similar to the two base cases (Figure 5). Grassland was the only land cover class that consistently produced CV values on par with corn/soybean (>0.5). Many areas of false negatives occurred where circular agricultural fields used center-pivot irrigation systems (Figure 8). These fields could be growing different crops with a lower CV that cannot be captured by the corn/soybean threshold, such as cotton, which is often grown in rotation with corn and soybeans in Brazil [65].
The East China case, used to test the threshold for wheat, had an accuracy of ~84%. This accuracy was impacted by the numerous cities and villages surrounding the cropland (Figure 7). As China’s urbanization rate has more than tripled in the last fifty years [66], the edges of many urban areas had higher CV values. As a result, ~33% of East China’s false positives were built-up areas, while built-up areas comprised less than 5% of false positives in most other AOIs. This test case AOI also had a large area of false negatives in southern Henan Province (Figure 9). These false negatives are likely rice fields that are not being captured because the threshold calibrated for wheat is too high, with rice pixels usually producing CV values less than 0.3. Henan Province is indicated as a rice-growing region by the USDA, and the southern portion of the AOI in particular contains rice fields, as mapped by Shen et al. [67]. This suggests that even though the optimal CV threshold ranges for wheat and rice overlapped (Figure 4), the 0.05 difference between the thresholds may be enough to separate wheat and rice. Classifying croplands and other land cover classes based on a range of CV values rather than a single binary threshold could improve cropland extent mapping in areas that grow multiple crops.
An AOI over East India and Bangladesh was used to test the rice threshold and produced an accuracy of ~83% (Figure 7). This accuracy was most likely impacted by the misclassification of non-cropland areas, especially water, in the WorldCover reference dataset, as evidenced by a sensitivity of over 80% and specificity closer to 75%. The large rivers and the deltaic areas in the southern portion of Bangladesh are masked as permanent water, but they are outlined by false positive pixels, potentially because of changing water levels during the monsoon season. There are many small tributaries that are partially or completely unresolved by the reference data, resulting in false positives (Figure 7). Wetlands were not masked as permanent water, resulting in large areas of false positives (Figure 10). Potential solutions could be masking wetlands and including a buffer around water bodies or using a custom water detection algorithm to generate water masks. This would further decrease the need for external reference data in producing binary crop/non-crop classifications [38].
Although the crop type thresholds produced accuracies >80% in the test cases, these thresholds are unlikely to be the thresholds that produce the maximum possible accuracy for each test case. To determine how close each crop type threshold is to the maximum accuracy threshold (i.e., the true optimal threshold), thresholds within ±0.1 of the crop type thresholds, with a step size of 0.01, were applied to the test cases. This also assesses how sensitive each test case is to small changes in the threshold. Accuracies for the CV method are expected to follow a unimodal distribution, peaking at the optimal threshold and decreasing monotonically with distance from it. Therefore, if a local maximum is observed within ±0.1 of the crop type threshold, then that maximum is the true optimal threshold for that AOI. Given that the range of the base case optimal thresholds for each crop type was ~0.1 (Table 2), testing only ±0.1 of the crop type threshold was deemed sufficient to find the true optimal threshold for the test cases.
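A brief sketch of this sensitivity test is shown below; it reuses the accuracy definition from Section 2.2.4, and the function name, margin, and example threshold are illustrative assumptions.

```python
import numpy as np

def accuracy_sweep(cv_layer, crop_ref, water_ref, crop_type_threshold, margin=0.1):
    """Evaluate overall accuracy at thresholds within +/- margin of a crop
    type threshold (0.01 steps) to locate the local accuracy maximum."""
    usable = np.isfinite(cv_layer) & ~water_ref
    cv, truth = cv_layer[usable], crop_ref[usable]

    accuracies = {}
    for t in np.arange(crop_type_threshold - margin,
                       crop_type_threshold + margin + 0.005, 0.01):
        pred = cv >= t
        accuracies[round(t, 2)] = np.mean(pred == truth)   # overall accuracy
    best = max(accuracies, key=accuracies.get)
    return best, accuracies

# Illustrative use on the wheat test case:
# best_t, curve = accuracy_sweep(cv_layer, crop_ref, water_ref, 0.31)
```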
The crop type thresholds were each within 0.02 of the thresholds that produced the peak accuracy for each test case AOI (Figure 11). This demonstrates that the crop type thresholds, derived from the base cases via the optimization process, are very close to the true optimal thresholds for the test cases. The corn/soybean crop type threshold itself produced the peak accuracy for Central Brazil. The Central Brazil AOI was not very sensitive to small threshold changes, with accuracy around 90% at both 0.1 below and 0.1 above the crop type threshold. For East China, the peak-accuracy threshold was 0.01 away from the wheat crop type threshold. The East China AOI was more sensitive to changes in the threshold, with accuracy dipping below 80% as the thresholds neared 0.1 below and 0.1 above the crop type threshold. For East India/Bangladesh, the peak-accuracy threshold was 0.02 away from the rice crop type threshold. This AOI was the most sensitive to changes in the threshold, with accuracy below 75% at 0.1 above the crop type threshold. Due to the sensitivity of some AOIs to small changes in the threshold, a constrained threshold range of ±0.02 around the crop type threshold is most likely to capture the optimal threshold for a given AOI while minimizing the chance that accuracy dips below 80% at the ends of the range (Table 4). Providing a range of thresholds may also help to account for some crop type diversity present in an AOI, which could require the threshold to be slightly lower or higher.
The recommended threshold ranges are intended to be applied to areas where crop diversity is generally low, as determined by an end-user who is familiar with the area’s dominant crop types. Future work will investigate how much crop diversity is tolerable; this would require a crop type classification map and may be limited to the United States and a few other countries. The wheat threshold did seem to capture mainly wheat fields and not rice fields in the East China test case, but the recommended threshold ranges are not intended to delineate between corn/soybean, wheat, and rice in a given area. Classifying pixels by crop type would require additional data, such as optical imagery, and/or more advanced techniques such as machine learning [68,69]. Additional SAR wavelengths, such as L-band from NISAR, or optical imagery and derived products, such as NDVI, could be added to the algorithm without significant computational costs. L-band (~23.5 cm wavelength) SAR may increase accuracy in high-biomass crops such as corn [70]. L-band SAR may also provide better discrimination between irrigated and non-irrigated crops, as the longer wavelength is more sensitive to irrigation and soil moisture in well-developed Gramineae crops (which include corn, wheat, and rice) [63,64,71,72]. NDVI products may help reduce the number of false positives triggered by land cover types (e.g., roads and airports) that can have high CV values but often low NDVI values.

5. Conclusions

Three distinct CV thresholds for corn/soybean, wheat, and rice regions were identified to reduce the computational cost of the CV method and its reliance on reference data when classifying cropland extent using SAR. These thresholds were derived by averaging the optimal CV thresholds for multiple base cases, each with over 25% cropland in its AOI, over five years for regions where either corn/soybean, wheat, or rice is the dominant crop type. When the averaged thresholds were applied to the corresponding test cases, the resulting binary crop/non-crop classifications all achieved an overall accuracy exceeding 80%. The total cropland areal extent from these classifications was within ±3% of the total reference dataset cropland areal extent. The identified thresholds were also within ±0.02 of the CV threshold that would produce the maximum accuracy for the corresponding AOI. The binary crop/non-crop classifications for each test case using the crop type thresholds achieved accuracies as good as or better than those of the optimized thresholds for each base case, with lower computational cost and no cropland reference data. When using the methodology outlined in this paper, a margin of ±0.02 around the crop type threshold is recommended for corn/soybean, wheat, and rice regions to allow for flexibility and slight differences between years and locations. The CV method used in conjunction with the recommended thresholds is a fast and simple technique for identifying active agriculture in areas where corn/soybean, wheat, or rice is the dominant crop type and where up-to-date cropland reference data are unavailable.

Author Contributions

Conceptualization, K.G.S., J.R.B. and H.G.P.; methodology, K.G.S., J.R.B. and H.G.P.; software, K.G.S.; validation, K.G.S. and H.G.P.; formal analysis, K.G.S.; investigation, K.G.S., J.R.B. and H.G.P.; resources, F.J.M.; data curation, K.G.S.; writing—original draft preparation, K.G.S.; writing—review and editing, J.R.B., H.G.P., L.A.S., R.L. and A.L.M.; visualization, K.G.S.; supervision, K.G.S.; project administration, F.J.M.; funding acquisition, F.J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NASA Science Mission Directorate under grant numbers #80NSSC20K0164 and #80NSSC19K1109.

Data Availability Statement

All data presented and used in the analysis for this submission are freely available from the Alaska Satellite Facility.

Acknowledgments

Sentinel-1 data and its products are derived from Copernicus Sentinel data, processed by ESA. Sentinel-1 data are attributed to the services of the NASA Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC). The authors would like to thank colleagues within the Earth Science Branch at the NASA Marshall Space Flight Center and the Earth System Science Center at the University of Alabama in Huntsville who provided a review of this manuscript prior to submission. We thank the ASF HyP3 team for their support in providing access to a cloud processing environment and technical support in keeping the environment running.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Karthikeyan, L.; Chawla, I.; Mishra, A.K. A Review of Remote Sensing Applications in Agriculture for Food Security: Crop Growth and Yield, Irrigation, and Crop Losses. J. Hydrol. 2020, 586, 124905. [Google Scholar] [CrossRef]
  2. Economic Importance of Agriculture for Poverty Reduction; OECD Food, Agriculture and Fisheries Papers; OECD: Paris, France, 2010; Volume 23.
  3. Gillespie, S.; Van Den Bold, M. Agriculture, Food Systems, and Nutrition: Meeting the Challenge. Glob. Chall. 2017, 1, 1600002. [Google Scholar] [CrossRef] [PubMed]
  4. Robertson, G.P.; Swinton, S.M. Reconciling Agricultural Productivity and Environmental Integrity: A Grand Challenge for Agriculture. Front. Ecol. Environ. 2005, 3, 38–46. [Google Scholar] [CrossRef]
  5. Khanal, S.; Kc, K.; Fulton, J.P.; Shearer, S.; Ozkan, E. Remote Sensing in Agriculture—Accomplishments, Limitations, and Opportunities. Remote Sens. 2020, 12, 3783. [Google Scholar] [CrossRef]
  6. Benedict, T.D.; Brown, J.F.; Boyte, S.P.; Howard, D.M.; Fuchs, B.A.; Wardlow, B.D.; Tadesse, T.; Evenson, K.A. Exploring VIIRS Continuity with MODIS in an Expedited Capability for Monitoring Drought-Related Vegetation Conditions. Remote Sens. 2021, 13, 1210. [Google Scholar] [CrossRef]
  7. Ahmad, U.; Alvino, A.; Marino, S. A Review of Crop Water Stress Assessment Using Remote Sensing. Remote Sens. 2021, 13, 4155. [Google Scholar] [CrossRef]
  8. Hain, C.R.; Anderson, M.C. Estimating Morning Change in Land Surface Temperature from MODIS Day/Night Observations: Applications for Surface Energy Balance Modeling. Geophys. Res. Lett. 2017, 44, 9723–9733. [Google Scholar] [CrossRef]
  9. Abdullah, H.M.; Mohana, N.T.; Khan, B.M.; Ahmed, S.M.; Hossain, M.; Islam, K.S.; Redoy, M.H.; Ferdush, J.; Bhuiyan, M.A.H.B.; Hossain, M.M.; et al. Present and Future Scopes and Challenges of Plant Pest and Disease (P&D) Monitoring: Remote Sensing, Image Processing, and Artificial Intelligence Perspectives. Remote Sens. Appl. Soc. Environ. 2023, 32, 100996. [Google Scholar] [CrossRef]
  10. Debeurs, K.; Townsend, P. Estimating the Effect of Gypsy Moth Defoliation Using MODIS. Remote Sens. Environ. 2008, 112, 3983–3990. [Google Scholar] [CrossRef]
  11. Molthan, A.; Burks, J.; McGrath, K.; LaFontaine, F. Multi-Sensor Examination of Hail Damage Swaths for near Real-Time Applications and Assessment. J. Oper. Meteorol. 2013, 1, 144–156. [Google Scholar] [CrossRef]
  12. Bell, J.; Molthan, A. Evaluation of Approaches to Identifying Hail Damage to Crop Vegetation Using Satellite Imagery. J. Oper. Meteorol. 2016, 04, 142–159. [Google Scholar] [CrossRef]
  13. Flores-Anderson, A.; Herndon, K.; Cherrington, E.; Thapa, R. The SAR Handbook: Comprehensive Methodologies for Forest Monitoring and Biomass Estimation; NASA: Washington, DC, USA, 2019; 307p. [Google Scholar]
  14. Prudente, V.H.R.; Martins, V.S.; Vieira, D.C.; Silva, N.R.D.F.E.; Adami, M.; Sanches, I.D. Limitations of Cloud Cover for Optical Remote Sensing of Agricultural Areas across South America. Remote Sens. Appl. Soc. Environ. 2020, 20, 100414. [Google Scholar] [CrossRef]
  15. McNairn, H.; Brisco, B. The Application of C-Band Polarimetric SAR for Agriculture: A Review. Can. J. Remote Sens. 2004, 30, 525–542. [Google Scholar] [CrossRef]
  16. Cable, J.; Kovacs, J.; Jiao, X.; Shang, J. Agricultural Monitoring in Northeastern Ontario, Canada, Using Multi-Temporal Polarimetric RADARSAT-2 Data. Remote Sens. 2014, 6, 2343–2371. [Google Scholar] [CrossRef]
  17. Forkuor, G.; Conrad, C.; Thiel, M.; Ullmann, T.; Zoungrana, E. Integration of Optical and Synthetic Aperture Radar Imagery for Improving Crop Mapping in Northwestern Benin, West Africa. Remote Sens. 2014, 6, 6472–6499. [Google Scholar] [CrossRef]
  18. Canisius, F.; Shang, J.; Liu, J.; Huang, X.; Ma, B.; Jiao, X.; Geng, X.; Kovacs, J.M.; Walters, D. Tracking Crop Phenological Development Using Multi-Temporal Polarimetric Radarsat-2 Data. Remote Sens. Environ. 2018, 210, 508–518. [Google Scholar] [CrossRef]
  19. Bell, J.R.; Gebremichael, E.; Molthan, A.L.; Schultz, L.A.; Meyer, F.J.; Hain, C.R.; Shrestha, S.; Payne, K.C. Complementing Optical Remote Sensing with Synthetic Aperture Radar Observations of Hail Damage Swaths to Agricultural Crops in the Central United States. J. Appl. Meteorol. Climatol. 2020, 59, 665–685. [Google Scholar] [CrossRef]
  20. Bell, J.R.; Bedka, K.M.; Schultz, C.J.; Molthan, A.L.; Bang, S.D.; Glisan, J.; Ford, T.; Lincoln, W.S.; Schultz, L.A.; Melancon, A.M.; et al. Satellite-Based Characterization of Convection and Impacts from the Catastrophic 10 August 2020 Midwest U.S. Derecho. Bull. Am. Meteorol. Soc. 2022, 103, E1172–E1196. [Google Scholar] [CrossRef]
  21. Khabbazan, S.; Vermunt, P.; Steele-Dunne, S.; Ratering Arntz, L.; Marinetti, C.; Van Der Valk, D.; Iannini, L.; Molijn, R.; Westerdijk, K.; Van Der Sande, C. Crop Monitoring Using Sentinel-1 Data: A Case Study from The Netherlands. Remote Sens. 2019, 11, 1887. [Google Scholar] [CrossRef]
  22. Nikaein, T.; Iannini, L.; Molijn, R.A.; Lopez-Dekker, P. On the Value of Sentinel-1 InSAR Coherence Time-Series for Vegetation Classification. Remote Sens. 2021, 13, 3300. [Google Scholar] [CrossRef]
  23. Nasirzadehdizaji, R.; Balik Sanli, F.; Abdikan, S.; Cakir, Z.; Sekertekin, A.; Ustuner, M. Sensitivity Analysis of Multi-Temporal Sentinel-1 SAR Parameters to Crop Height and Canopy Coverage. Appl. Sci. 2019, 9, 655. [Google Scholar] [CrossRef]
  24. Abdikan, S.; Sekertekin, A.; Ustuner, M.; Balik Sanli, F.; Nasirzadehdizaji, R. Backscatter Analysis Using Multi-Temporal Sentinel-1 SAR Data for Crop Growth of Maize in Konya Basin, Turkey. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII–3, 9–13. [Google Scholar] [CrossRef]
  25. Dingle Robertson, L.; Davidson, A.M.; McNairn, H.; Hosseini, M.; Mitchell, S.; De Abelleyra, D.; Verón, S.; Le Maire, G.; Plannells, M.; Valero, S.; et al. C-Band Synthetic Aperture Radar (SAR) Imagery for the Classification of Diverse Cropping Systems. Int. J. Remote Sens. 2020, 41, 9628–9649. [Google Scholar] [CrossRef]
  26. Whelen, T.; Siqueira, P. Coefficient of Variation for Use in Crop Area Classification across Multiple Climates. Int. J. Appl. Earth Obs. Geoinf. 2018, 67, 114–122. [Google Scholar] [CrossRef]
  27. Kendall, M.G.; Stuart, A.; Ord, J.K. The Advanced Theory of Statistics. 1: Distribution Theory, 3rd ed.; Griffin: London, UK, 1969; ISBN 978-0-85264-141-5. [Google Scholar]
  28. Fawcett, T. An Introduction to ROC Analysis. Pattern Recognit. Lett. 2006, 27, 861–874. [Google Scholar] [CrossRef]
  29. Rose, S.; Kraatz, S.; Kellndorfer, J.; Cosh, M.H.; Torbick, N.; Huang, X.; Siqueira, P. Evaluating NISAR’s Cropland Mapping Algorithm over the Conterminous United States Using Sentinel-1 Data. Remote Sens. Environ. 2021, 260, 112472. [Google Scholar] [CrossRef]
  30. Youden, W.J. Index for Rating Diagnostic Tests. Cancer 1950, 3, 32–35. [Google Scholar] [CrossRef] [PubMed]
  31. Habibzadeh, F.; Habibzadeh, P.; Yadollahie, M. On Determining the Most Appropriate Test Cut-off Value: The Case of Tests with Continuous Results. Biochem. Med. 2016, 26, 297–307. [Google Scholar] [CrossRef]
  32. Kraatz, S.; Lamb, B.T.; Hively, W.D.; Jennewein, J.S.; Gao, F.; Cosh, M.H.; Siqueira, P. Comparing NISAR (Using Sentinel-1), USDA/NASS CDL, and Ground Truth Crop/Non-Crop Areas in an Urban Agricultural Region. Sensors 2023, 23, 8595. [Google Scholar] [CrossRef]
  33. Boryan, C.; Yang, Z.; Mueller, R.; Craig, M. Monitoring US Agriculture: The US Department of Agriculture, National Agricultural Statistics Service, Cropland Data Layer Program. Geocarto Int. 2011, 26, 341–358. [Google Scholar] [CrossRef]
  34. NASA-ISRO SAR (NISAR) Mission Science Users’ Handbook, NASA, 9 April 2018, Version 1. Available online: https://nisar.jpl.nasa.gov/system/documents/files/26_NISAR_FINAL_9-6-19.pdf (accessed on 31 July 2024).
  35. Kraatz, S.; Siqueira, P.; Kellndorfer, J.; Torbick, N.; Huang, X.; Cosh, M. Evaluating the Robustness of NISAR’s Cropland Product to Time of Observation, Observing Mode, and Dithering. Earth Space Sci. 2022, 9, e2022EA002366. [Google Scholar] [CrossRef]
  36. Small, D. Flattening Gamma: Radiometric Terrain Correction for SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3081–3093. [Google Scholar] [CrossRef]
37. Hogenson, K.; Arko, S.A.; Buechler, B.; Hogenson, R.; Herrmann, J.; Geiger, A. Hybrid Pluggable Processing Pipeline (HyP3): A cloud-based infrastructure for generic processing of SAR data. In Proceedings of the AGU Fall Meeting Abstracts, San Francisco, CA, USA, 12–16 December 2016; Volume 2016, p. IN21B-1740. [Google Scholar]
  38. Meyer, F.J.; Schultz, L.A.; Osmanoglu, B.; Kennedy, J.H.; Jo, M.; Thapa, R.B.; Bell, J.R.; Pradhan, S.; Shrestha, M.; Smale, J.; et al. HydroSAR: A Cloud-Based Service for the Monitoring of Inundation Events in the Hindu Kush Himalaya. Remote Sens. 2024, 16, 3244. [Google Scholar] [CrossRef]
  39. Loew, A.; Mauser, W. Generation of Geometrically and Radiometrically Terrain Corrected SAR Image Products. Remote Sens. Environ. 2007, 106, 337–349. [Google Scholar] [CrossRef]
  40. Vreugdenhil, M.; Wagner, W.; Bauer-Marschallinger, B.; Pfeil, I.; Teubner, I.; Rüdiger, C.; Strauss, P. Sensitivity of Sentinel-1 Backscatter to Vegetation Dynamics: An Austrian Case Study. Remote Sens. 2018, 10, 1396. [Google Scholar] [CrossRef]
  41. Wang, J.; Dai, Q.; Shang, J.; Jin, X.; Sun, Q.; Zhou, G.; Dai, Q. Field-Scale Rice Yield Estimation Using Sentinel-1A Synthetic Aperture Radar (SAR) Data in Coastal Saline Region of Jiangsu Province, China. Remote Sens. 2019, 11, 2274. [Google Scholar] [CrossRef]
  42. Huang, X.; Reba, M.; Coffin, A.; Runkle, B.R.K.; Huang, Y.; Chapman, B.; Ziniti, B.; Skakun, S.; Kraatz, S.; Siqueira, P.; et al. Cropland Mapping with L-Band UAVSAR and Development of NISAR Products. Remote Sens. Environ. 2021, 253, 112180. [Google Scholar] [CrossRef]
  43. McNairn, H.; Shang, J.; Jiao, X.; Champagne, C. The Contribution of ALOS PALSAR Multipolarization and Polarimetric Data to Crop Classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3981–3992. [Google Scholar] [CrossRef]
  44. Buchhorn, M.; Smets, B.; Bertels, L.; Roo, B.D.; Lesiv, M.; Tsendbazar, N.-E.; Herold, M.; Fritz, S. Copernicus Global Land Service: Land Cover 2018 (Raster 100 m), Global, Yearly–Version 3, Epoch 2018, European Union’s Copernicus Land Monitoring Service Information. Available online: https://land.copernicus.eu/en/products/global-dynamic-land-cover/copernicus-global-land-service-land-cover-100m-collection-3-epoch-2018-globe (accessed on 21 August 2024).
  45. Buchhorn, M.; Smets, B.; Bertels, L.; Roo, B.D.; Lesiv, M.; Tsendbazar, N.-E.; Herold, M.; Fritz, S. Copernicus Global Land Service: Land Cover 2019 (Raster 100 m), Global, Yearly–Version 3, Epoch 2019, European Union’s Copernicus Land Monitoring Service Information. Available online: https://land.copernicus.eu/en/products/global-dynamic-land-cover/copernicus-global-land-service-land-cover-100m-collection-3-epoch-2019-globe (accessed on 24 August 2024).
  46. Zanaga, D.; Van De Kerchove, R.; De Keersmaecker, W.; Souverijns, N.; Brockmann, C.; Quast, R.; Wevers, J.; Grosu, A.; Paccini, A.; Vergnaud, S.; et al. ESA WorldCover 10 m 2020 v100. 2021. Available online: https://zenodo.org/records/5571936 (accessed on 27 August 2024).
  47. Zanaga, D.; Van De Kerchove, R.; Daems, D.; De Keersmaecker, W.; Brockmann, C.; Kirches, G.; Wevers, J.; Cartus, O.; Santoro, M.; Fritz, S.; et al. ESA WorldCover 10 m 2021 v200. 2022. Available online: https://zenodo.org/records/7254221 (accessed on 30 August 2024).
48. Food and Agriculture Organization of the United Nations (FAO). Agricultural Production Statistics 2020–2021. 2021. Available online: https://openknowledge.fao.org/server/api/core/bitstreams/58971ed8-c831-4ee6-ab0a-e47ea66a7e6a/content (accessed on 20 September 2024).
  49. Bullock, D.G. Crop Rotation. Crit. Rev. Plant Sci. 1992, 11, 309–326. [Google Scholar] [CrossRef]
  50. Sindelar, A.J.; Schmer, M.R.; Jin, V.L.; Wienhold, B.J.; Varvel, G.E. Long-Term Corn and Soybean Response to Crop Rotation and Tillage. Agron. J. 2015, 107, 2241–2252. [Google Scholar] [CrossRef]
  51. Rotundo, J.L.; Rech, R.; Cardoso, M.M.; Fang, Y.; Tang, T.; Olson, N.; Pyrik, B.; Conrad, G.; Borras, L.; Mihura, E.; et al. Development of a Decision-Making Application for Optimum Soybean and Maize Fertilization Strategies in Mato Grosso. Comput. Electron. Agric. 2022, 193, 106659. [Google Scholar] [CrossRef]
  52. Kussul, N.; Deininger, K.; Lavreniuk, M.; Ali, D.A.; Nivievskyi, O. Using Machine Learning to Assess Yield Impacts of Crop Rotation: Combining Satellite and Statistical Data for Ukraine; World Bank: Washington, DC, USA, 2020. [Google Scholar]
  53. USDA FAS Crop Production Maps, USDA. Available online: https://ipad.fas.usda.gov/ogamaps/cropproductionmaps.aspx (accessed on 31 July 2024).
54. Meyer, F.J.; Rosen, P.A.; Flores, A.; Anderson, E.R.; Cherrington, E.A. Making SAR Accessible: Education & Training in Preparation for NISAR. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11 July 2021; pp. 6–9. [Google Scholar]
  55. Cigna, F.; Bateson, L.B.; Jordan, C.J.; Dashwood, C. Simulating SAR Geometric Distortions and Predicting Persistent Scatterer Densities for ERS-1/2 and ENVISAT C-Band SAR and InSAR Applications: Nationwide Feasibility Assessment to Monitor the Landmass of Great Britain with SAR Imagery. Remote Sens. Environ. 2014, 152, 441–466. [Google Scholar] [CrossRef]
  56. Tupin, F.; Houshmand, B.; Datcu, M. Road Detection in Dense Urban Areas Using SAR Imagery and the Usefulness of Multiple Views. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2405–2414. [Google Scholar] [CrossRef]
  57. Reschke, J.; Bartsch, A.; Schlaffer, S.; Schepaschenko, D. Capability of C-Band SAR for Operational Wetland Monitoring at High Latitudes. Remote Sens. 2012, 4, 2923–2943. [Google Scholar] [CrossRef]
  58. Bartsch, A.; Trofaier, A.M.; Hayman, G.; Sabel, D.; Schlaffer, S.; Clark, D.B.; Blyth, E. Detection of Open Water Dynamics with ENVISAT ASAR in Support of Land Surface Modelling at High Latitudes. Biogeosciences 2012, 9, 703–714. [Google Scholar] [CrossRef]
  59. Lark, T.J.; Mueller, R.M.; Johnson, D.M.; Gibbs, H.K. Measuring Land-Use and Land-Cover Change Using the U.S. Department of Agriculture’s Cropland Data Layer: Cautions and Recommendations. Int. J. Appl. Earth Obs. Geoinf. 2017, 62, 224–235. [Google Scholar] [CrossRef]
  60. Wang, Y.; Hess, L.L.; Filoso, S.; Melack, J.M. Canopy Penetration Studies: Modeled Radar Backscatter from Amazon Floodplain Forests at C-, L-, and P-Band. In Proceedings of the IGARSS ’94—1994 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 8–12 August 1994; Volume 2, pp. 1060–1062. [Google Scholar] [CrossRef]
  61. McNairn, H.; Boisvert, J.B.; Major, D.J.; Gwyn, Q.H.J.; Brown, R.J.; Smith, A.M. Identification of Agricultural Tillage Practices from C-Band Radar Backscatter. Can. J. Remote Sens. 1996, 22, 154–162. [Google Scholar] [CrossRef]
  62. Zheng, B.; Campbell, J.B.; Serbin, G.; Galbraith, J.M. Remote Sensing of Crop Residue and Tillage Practices: Present Capabilities and Future Prospects. Soil Tillage Res. 2014, 138, 26–34. [Google Scholar] [CrossRef]
  63. Bazzi, H.; Baghdadi, N.; Charron, F.; Zribi, M. Comparative Analysis of the Sensitivity of SAR Data in C and L Bands for the Detection of Irrigation Events. Remote Sens. 2022, 14, 2312. [Google Scholar] [CrossRef]
  64. Ranjbar, S.; Akhoondzadeh, M.; Brisco, B.; Amani, M.; Hosseini, M. Soil Moisture Change Monitoring from C and L-Band SAR Interferometric Phase Observations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7179–7197. [Google Scholar] [CrossRef]
  65. Ajadi, O.A.; Barr, J.; Liang, S.-Z.; Ferreira, R.; Kumpatla, S.P.; Patel, R.; Swatantran, A. Large-Scale Crop Type and Crop Area Mapping across Brazil Using Synthetic Aperture Radar and Optical Imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 97, 102294. [Google Scholar] [CrossRef]
  66. Wang, X.; Zhang, X. A Regional Comparative Study on the Mismatch between Population Urbanization and Land Urbanization in China. PLoS ONE 2023, 18, e0287366. [Google Scholar] [CrossRef] [PubMed]
  67. Shen, R.; Pan, B.; Peng, Q.; Dong, J.; Chen, X.; Zhang, X.; Ye, T.; Huang, J.; Yuan, W. High-Resolution Distribution Maps of Single-Season Rice in China from 2017 to 2022. Earth Syst. Sci. Data 2023, 15, 3203–3222. [Google Scholar] [CrossRef]
  68. Moumni, A.; Lahrouni, A. Machine Learning-Based Classification for Crop-Type Mapping Using the Fusion of High-Resolution Satellite Imagery in a Semiarid Area. Scientifica 2021, 2021, 1–20. [Google Scholar] [CrossRef]
  69. Tufail, R.; Ahmad, A.; Javed, M.A.; Ahmad, S.R. A Machine Learning Approach for Accurate Crop Type Mapping Using Combined SAR and Optical Time Series Data. Adv. Space Res. 2022, 69, 331–346. [Google Scholar] [CrossRef]
  70. Kraatz, S.; Rose, S.; Cosh, M.H.; Torbick, N.; Huang, X.; Siqueira, P. Performance Evaluation of UAVSAR and Simulated NISAR Data for Crop/Noncrop Classification Over Stoneville, MS. Earth Space Sci. 2021, 8, e2020EA001363. [Google Scholar] [CrossRef]
  71. El Hajj, M.; Baghdadi, N.; Bazzi, H.; Zribi, M. Penetration Analysis of SAR Signals in the C and L Bands for Wheat, Maize, and Grasslands. Remote Sens. 2018, 11, 31. [Google Scholar] [CrossRef]
  72. Rosenqvist, A.; Killough, B. A Layman’s Interpretation Guide to L-Band and C-Band Synthetic Aperture Radar Data; Committee on Earth Observation Satellites (CEOS): Washington, DC, USA, 2023. [Google Scholar]
Figure 1. Sentinel-1 amplitude images displayed in power units over the Midwest United States (a) and Central Brazil (b), showing backscatter change throughout one year from each hemisphere. Backscatter is highest (white) when the crops reach maturity and lowest (black) after harvest. The coefficient of variation from one year is also displayed, where a value of 1 indicates high backscatter variation and a value of 0 indicates low backscatter variation.
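For readers reproducing a CV image like those in Figure 1, the sketch below shows one way to compute the per-pixel coefficient of variation (standard deviation divided by mean) from a year-long stack of backscatter images in power units. It is a minimal illustration only; the array name `backscatter_stack` and the synthetic data are assumptions, not part of the authors' processing chain.

```python
import numpy as np

def coefficient_of_variation(backscatter_stack):
    """Per-pixel coefficient of variation (std / mean) over time.

    backscatter_stack: 3-D array (time, rows, cols) of SAR backscatter in
    power (linear) units, e.g., one year of Sentinel-1 acquisitions.
    """
    mean = np.nanmean(backscatter_stack, axis=0)
    std = np.nanstd(backscatter_stack, axis=0)
    # Avoid division by zero over pixels with no valid mean backscatter.
    cv = np.divide(std, mean, out=np.zeros_like(mean), where=mean > 0)
    return cv

# Illustrative example with synthetic data: 30 acquisitions over a 100 x 100 pixel tile.
rng = np.random.default_rng(0)
stack = rng.gamma(shape=2.0, scale=0.05, size=(30, 100, 100))
cv = coefficient_of_variation(stack)
```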
Figure 2. Sentinel-1 frame locations and path numbers for each base and test case AOI. The color of the Sentinel-1 frames indicates the primary crop type in the region, pink (corn/soybean), orange (wheat), and blue (rice). If a majority of the country was included in the AOI, individual states/provinces were not labeled. States/provinces were included for the Midwest U.S., Central Brazil, East China and East India. The percentage of the total area that is classified as cropland in the 2021 WorldCover is also displayed.
Figure 3. Workflow diagram detailing the inputs and processes used to create crop/non-crop classifications for both the base cases and the test cases, with an example of each shown from corn/soybean AOIs. The arrows indicate each step in the workflow, which progresses from left to right across the diagram. The base cases were completed first to generate the crop type average thresholds, which were then applied to the test case of the same crop type.
Figure 4. Optimal CV threshold ranges for the corn/soybean, wheat, and rice base case AOIs for 2018–2022. The base case average CV threshold is displayed along with the crop type average CV threshold that is later applied to the test case AOIs.
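The yearly optima summarized in Figure 4 are obtained by sweeping candidate CV thresholds against a reference cropland layer and keeping the value that maximizes Youden's J statistic (sensitivity + specificity − 1) [30]. The sketch below is a minimal illustration of that search and of averaging the yearly optima into a crop type threshold; the function name, the candidate range, and the use of a simple mean over the ten corn/soybean base case values (listed later in Table 2) are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def optimal_cv_threshold(cv_image, reference_crop, candidates=np.arange(0.05, 1.0, 0.01)):
    """Return the CV threshold that maximizes Youden's J statistic.

    Assumes the reference layer contains both crop and non-crop pixels.
    """
    best_threshold, best_j = None, -np.inf
    for threshold in candidates:
        predicted = cv_image >= threshold
        tp = np.sum(predicted & reference_crop)
        fn = np.sum(~predicted & reference_crop)
        tn = np.sum(~predicted & ~reference_crop)
        fp = np.sum(predicted & ~reference_crop)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        j = sensitivity + specificity - 1.0  # Youden's J statistic
        if j > best_j:
            best_threshold, best_j = threshold, j
    return best_threshold

# Crop type threshold as a simple mean of the yearly optima from both
# corn/soybean base cases (Ukraine and Midwest U.S., 2018-2022; see Table 2).
yearly_optima = [0.56, 0.50, 0.53, 0.57, 0.59, 0.50, 0.48, 0.51, 0.55, 0.55]
crop_type_threshold = round(float(np.mean(yearly_optima)), 2)  # ~0.53
```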
Figure 5. False positives for each base case in 2021 broken down by land cover classification. The percentages displayed are in relation to the total number of false positive pixels (e.g., ~91% of the false positive pixels in the Midwest U.S. base case were classified as grassland in the 2021 WorldCover). Percentages less than 2% are not labeled.
Figure 6. Total area mapped as cropland in the crop/non-crop classifications versus in the corresponding reference data, CGLS-LC for 2018–2019 and WorldCover for 2020–2022.
Figure 7. Maps showing classification performance and water in all three test cases from 2021. Black boxes denote areas that are highlighted in Figure 8, Figure 9 and Figure 10. The stacked bar graph shows the percentage of classified pixels (water excluded) that are true positive, false positive, true negative, and false negative.
Figure 8. Circular false negative fields in Central Brazil, perhaps representing a crop other than corn/soybean.
Figure 9. Rice fields resulting in false negatives in the Henan Province of East China when using the wheat threshold of 0.31.
Figure 10. Wetlands resulting in false positives in Bangladesh.
Figure 11. Accuracy for thresholds within ±0.1 of the crop type threshold (μ), evaluated with a step size of 0.01, for each test case.
Table 1. Descriptions of true positive, false positive, true negative, and false negative in terms of this application.
Positive Classification:
√ True Positive: Pixel classified as “crop” by CV threshold, and it is cropland in the reference layer.
× False Positive: Pixel classified as “crop” by CV threshold, but it is not cropland in the reference layer.
Negative Classification:
√ True Negative: Pixel classified as “non-crop” by CV threshold, and it is non-cropland in the reference layer.
× False Negative: Pixel classified as “non-crop” by CV threshold, but it is cropland in the reference layer.
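As a companion to Table 1, the sketch below computes the accuracy, sensitivity, and specificity reported in Tables 2 and 3 from a binary crop/non-crop classification and a binary cropland reference layer. The function and array names are placeholders, and water pixels are assumed to have been masked out beforehand (as in Figure 7).

```python
import numpy as np

def crop_classification_metrics(predicted_crop, reference_crop):
    """Accuracy, sensitivity, and specificity for a crop/non-crop map.

    predicted_crop, reference_crop: boolean arrays of the same shape,
    True where a pixel is crop (prediction) or cropland (reference).
    """
    tp = np.sum(predicted_crop & reference_crop)    # crop correctly mapped
    tn = np.sum(~predicted_crop & ~reference_crop)  # non-crop correctly mapped
    fp = np.sum(predicted_crop & ~reference_crop)   # non-crop mapped as crop
    fn = np.sum(~predicted_crop & reference_crop)   # crop mapped as non-crop

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return accuracy, sensitivity, specificity
```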
Table 2. Optimal CV thresholds, accuracy, sensitivity, and specificity for the corn/soybean, wheat, and rice base case AOIs for 2018–2022. The specific states/provinces within each AOI are listed in Section 2.2.1.
AOI | Year | Optimal CV Threshold | Accuracy (%) | Sensitivity (%) | Specificity (%)
Ukraine | 2018 | 0.56 | 80.49 | 78.71 | 85.88
Ukraine | 2019 | 0.50 | 81.40 | 80.09 | 85.33
Ukraine | 2020 | 0.53 | 86.45 | 88.27 | 82.26
Ukraine | 2021 | 0.57 | 86.74 | 88.44 | 83.10
Ukraine | 2022 | 0.59 | 87.99 | 87.93 | 88.11
Midwest United States | 2018 | 0.50 | 76.39 | 76.75 | 75.63
Midwest United States | 2019 | 0.48 | 77.53 | 75.13 | 82.65
Midwest United States | 2020 | 0.51 | 82.37 | 83.53 | 90.76
Midwest United States | 2021 | 0.55 | 83.83 | 84.40 | 83.10
Midwest United States | 2022 | 0.55 | 80.02 | 86.95 | 71.11
Morocco | 2018 | 0.33 | 76.71 | 66.50 | 83.13
Morocco | 2019 | 0.28 | 74.17 | 60.13 | 83.02
Morocco | 2020 | 0.28 | 77.06 | 67.32 | 80.97
Morocco | 2021 | 0.36 | 82.82 | 71.38 | 87.83
Morocco | 2022 | 0.28 | 78.21 | 73.85 | 80.12
France/Belgium | 2018 | 0.33 | 82.16 | 78.58 | 85.71
France/Belgium | 2019 | 0.29 | 82.30 | 79.70 | 84.87
France/Belgium | 2020 | 0.33 | 88.03 | 88.56 | 87.68
France/Belgium | 2021 | 0.31 | 89.23 | 89.23 | 89.02
France/Belgium | 2022 | 0.32 | 89.47 | 89.47 | 89.02
Thailand | 2018 | 0.21 | 81.74 | 80.91 | 83.52
Thailand | 2019 | 0.22 | 83.53 | 83.93 | 82.67
Thailand | 2020 | 0.29 | 86.88 | 88.55 | 84.63
Thailand | 2021 | 0.25 | 85.97 | 88.02 | 83.53
Thailand | 2022 | 0.24 | 84.76 | 86.78 | 82.39
Myanmar | 2018 | 0.25 | 80.38 | 76.67 | 83.47
Myanmar | 2019 | 0.26 | 80.69 | 78.47 | 82.54
Myanmar | 2020 | 0.30 | 82.50 | 85.74 | 80.83
Myanmar | 2021 | 0.29 | 83.87 | 87.62 | 81.91
Myanmar | 2022 | 0.27 | 84.23 | 85.02 | 83.82
Table 3. Crop type thresholds, accuracy, sensitivity, and specificity for the corn/soybean, wheat, and rice test case AOIs for 2021.
Test Case AOI | Crop Type Threshold | Accuracy (%) | Sensitivity (%) | Specificity (%)
Central Brazil (Corn/soybean) | 0.53 | 92.23 | 86.16 | 94.52
East China (Wheat) | 0.31 | 84.43 | 89.36 | 73.99
East India/Bangladesh (Rice) | 0.26 | 83.67 | 84.54 | 82.44
Table 4. Recommended threshold ranges for corn/soybean, wheat, and rice.
Crop Type | Recommended Threshold
Corn/soybean | 0.53 ± 0.02
Wheat | 0.31 ± 0.02
Rice | 0.26 ± 0.02
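Applying a recommended threshold from Table 4 then reduces to a single comparison against the CV image. The snippet below is a minimal sketch, assuming a corn/soybean AOI, a precomputed CV array, and a separate water mask; the variable names and synthetic data are placeholders rather than part of the published workflow.

```python
import numpy as np

rng = np.random.default_rng(1)
cv = rng.uniform(0.0, 1.0, size=(100, 100))      # placeholder CV image for the AOI
water = np.zeros(cv.shape, dtype=bool)           # placeholder water mask (True = water)

corn_soy_threshold = 0.53                        # Table 4 recommends 0.53 ± 0.02
crop_mask = (cv >= corn_soy_threshold) & ~water  # True = classified as active cropland
```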