Article

A Hybrid RTM-Informed Machine Learning Framework with Crop-Specific Canopy Structural Parameterization for Crop Fractional Vegetation Cover Estimation

1
Key Laboratory for Geographical Process Analysis & Simulation of Hubei Province/College of Urban and Environmental Sciences, Central China Normal University, Wuhan 430079, China
2
National Engineering and Technology Center for Information Agriculture, MOE Engineering Research Center of Smart Agriculture, MARA Key Laboratory of Crop System Analysis and Decision Making, Jiangsu Key Laboratory for Information Agriculture, Nanjing Agricultural University, 666 Binjiang Blvd, Nanjing 210095, China
3
State Key Laboratory of Remote Sensing and Digital Earth, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(5), 751; https://doi.org/10.3390/rs18050751
Submission received: 26 January 2026 / Revised: 23 February 2026 / Accepted: 25 February 2026 / Published: 2 March 2026

Highlights

What are the main findings?
  • A physically consistent crop FVC retrieval framework was developed by dynamically parameterizing the G(0) projection function within a PROSAIL-based inversion scheme.
  • Large-scale validation across China demonstrates that the proposed method significantly improves FVC estimation accuracy for four major crops compared with SNAP and GEOV3.
What are the implications of the main findings?
  • Dynamic treatment of canopy structural parameters reduces structural uncertainty in RTM-based crop FVC retrieval.
  • The proposed approach provides a robust and scalable solution for high-resolution crop monitoring using Sentinel-2 imagery.

Abstract

Fractional vegetation cover of crops (CropFVC) is a critical indicator for remote sensing-based crop monitoring. However, existing inversion models are largely developed for general vegetation types, limiting their effectiveness for crop-specific applications. Here, we developed a gap-fraction-refined hybrid CropFVC model that integrates crop-specific PROSAIL calibration, a dynamic projection function based on the average leaf angle (ALA), and a Random Forest model. The model was validated with 43,343 CropFVC samples of four major crops (winter wheat, rice, maize, and soybean) across China from March to August 2024, spanning key phenological stages, and was further compared against the SNAP (10 m) and GEOV3 (300 m) products. Results showed that (1) the proposed model achieved stable performance across diverse canopy structures, with average RMSE < 9.3% for wheat, rice, maize, and soybean; (2) compared with SNAP (10 m), RMSE decreased by 4.83%, 3.10%, 7.51%, and 8.63% for wheat, rice, maize, and soybean, respectively; compared with GEOV3 (300 m), the reductions reached 7.88%, 9.49%, 13.63%, and 19.75%, respectively. Further observations showed that the model-derived CropFVC captured intra-field variability and abnormal crop conditions well, enabling more accurate monitoring of crop-specific FVC dynamics across phenological stages. The proposed operational framework enhances CropFVC estimation by improving canopy structural representation and reducing retrieval bias. By enabling more accurate 10 m CropFVC mapping at the field scale, the crop-specific approach provides practical support for precision agriculture and crop-related food security monitoring.

1. Introduction

Fractional vegetation cover (FVC) is defined as the percentage of vertically projected area of green vegetation elements within a unit ground area [1]. As a key canopy parameter of vegetation, it plays a significant role in multiple Earth system processes [2]. The FVC of crops (hereafter referred to as CropFVC) is an important indicator of crop growth and a critical input for crop growth models [3]; therefore, it provides essential support for precise and smart agriculture.
Traditional CropFVC observation using in situ measurements is time-consuming, costly, and unsuitable for large-scale, dynamic CropFVC monitoring [4]. With advances in remote sensing, multi-temporal, multi-scale, and multi-source data acquisition has become feasible, enabling large-scale and continuous monitoring of CropFVC. Recent advances have explored vegetation fraction extraction through RGB-based classification strategies, such as adaptive thresholding applied to camera or Unmanned Aerial Vehicle (UAV) imagery [5]; these methods enable fine-scale vegetation delineation but operate primarily through image segmentation. In contrast, CropFVC retrieval from multispectral satellite reflectance observations involves different spatial scales and radiometric conditions and is typically addressed through reflectance-driven inversion approaches. These methods are generally classified into five categories: empirical models, spectral mixture analysis (SMA), physical models, machine learning models, and hybrid inversion approaches [6,7].
Empirical models relate spectral indices (e.g., the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI)) to field-measured FVC. They are simple and efficient, and can achieve high accuracy in specific regions; however, their lack of physical interpretability limits their transferability across regions and sensors [8]. SMA is based on physical assumptions of spectral mixing and has been widely applied in large-scale products such as the LSA SAF FVC [9]. A commonly used method for FVC estimation is the pixel dichotomy model [10], which assumes a two-endmember (vegetation–soil) linear mixture. Various techniques have been developed to determine Vv and Vs, the vegetation index values corresponding to full vegetation cover and bare soil, respectively, including the observational data approach [4,11], image statistical analysis [12], and multi-angle observations [13]. More recently, three-endmember models [14] and crop-specific five-endmember models [15] have been developed. Nevertheless, accurate endmember extraction remains a major challenge, particularly for high-resolution imagery.
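As a concrete illustration, the pixel dichotomy model interpolates a vegetation index linearly between its bare-soil and full-cover endmember values. A minimal sketch (the endmember values in the usage comment are illustrative, not from the paper):

```python
def fvc_dichotomy(vi: float, vi_soil: float, vi_veg: float) -> float:
    """Pixel dichotomy model: linear unmixing of a vegetation index between the
    bare-soil (Vs = vi_soil) and full-cover (Vv = vi_veg) endmembers,
    clipped to the physically valid range [0, 1]."""
    fvc = (vi - vi_soil) / (vi_veg - vi_soil)
    return min(max(fvc, 0.0), 1.0)

# Illustrative endmembers: NDVI_soil = 0.1, NDVI_veg = 0.9
fvc = fvc_dichotomy(0.5, 0.1, 0.9)  # a pixel halfway between the endmembers
```

The clipping step matters in practice because observed index values can fall outside the endmember range due to noise or atypical surfaces.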
Radiative Transfer Models (RTMs) establish a quantitative relationship between canopy reflectance and FVC based on radiative transfer theory. PROSAIL is the most widely used model for crop parameter retrieval [16], including FVC estimation. However, traditional RTM-based inversion suffers from ill-posedness, meaning that multiple parameter combinations can produce similar reflectance spectra [17]. More recently, data-driven approaches such as machine learning, particularly deep learning, have emerged as powerful tools for FVC inversion and are now widely used in the generation of global and regional FVC products, such as GLASS FVC [18] and HiGLASS FVC [19]. Compared with traditional physical or empirical approaches, machine learning models are capable of capturing complex, nonlinear relationships between multi-source remote sensing datasets and vegetation structural parameters [20]. To integrate the strengths of both physical and data-driven approaches, hybrid inversion strategies have been proposed, combining physically constrained samples or RTM-simulated datasets with machine learning models to improve the accuracy of FVC estimation [21]. Notable examples of hybrid-based operational products include CYCLOPES FVC [22], the GEOV3 FVC product series [23,24], and Sentinel-2-based FVC generated using the Sentinel Application Platform (SNAP).
Hybrid FVC estimation methods often incorporate gap fraction models to establish a physically based mapping in which canopy structural and physiological parameters determine gap fraction and canopy reflectance and, consequently, FVC [25,26]. In practice, RTMs are used to simulate canopy reflectance under varying structural and physiological conditions, and gap fraction models are then applied to derive the corresponding FVC values, thereby generating labeled datasets that serve as training data for machine learning algorithms. While hybrid inversion strategies are widely adopted for estimating FVC for general vegetation at regional to global scales, their performance often degrades when applied to crop-specific retrievals [27]. This limitation can arise from two structural inconsistencies: (1) Most hybrid frameworks rely on generic or fixed canopy parameterization schemes that fail to capture crop-specific variations in canopy architecture and phenological dynamics, leading to systematic mismatches between modeled radiative transfer behavior and actual crop canopies. For instance, the average leaf angle (ALA), a key parameter influencing canopy spectral characteristics [28], is typically allowed to range from 0° to 90° in general models, whereas actual crop canopies exhibit much narrower leaf-angle distributions under typical management, making extreme angles uncommon. Calibrating ALA to crop-specific distributions improves structural realism and better captures inter-species structural heterogeneity [29]. (2) The projection function governing canopy gap fraction and extinction processes is commonly assumed to be constant (G(0) = 0.5), implicitly assuming a spherical leaf-angle distribution. However, many crop canopies deviate from this distribution [30], which alters the functional relationship between gap fraction and Leaf Area Index (LAI) and consequently degrades inversion accuracy.
To address these issues, we developed a gap-fraction-refined hybrid CropFVC inversion framework that integrates crop-differentiated parameterization and a pixel-based dynamic G(0) within an RTM-informed learning scheme. The objective is to improve the physical consistency and robustness of large-scale CropFVC estimation under Sentinel-2 observations. By explicitly accounting for crop-specific canopy structures, the proposed framework enables practical 10 m CropFVC mapping for large-scale and operational agricultural applications.

2. Data and Methods

2.1. Study Area and Field Sampling

From March to August 2024, our research team conducted ten province-scale field sampling campaigns for FVC measurement across four major crops, i.e., winter wheat, rice, maize, and soybean, covering major grain-producing regions in China (Figure 1). In total, we surveyed 38 experimental sites, each consisting of a homogeneous crop area exceeding 400 m × 400 m. Sampling was conducted using low-altitude unmanned aerial vehicles (UAVs), specifically the DJI Mavic 3E (M3E) and DJI Mavic 3M (M3M) platforms (DJI, Shenzhen, China). To ensure optimal image quality, UAV flights were conducted under clear, cloud-free conditions between 10 a.m. and 2 p.m., with a front overlap of 80% and a side overlap of 70%. Flight altitudes were selected to achieve centimeter-level spatial resolution, with RGB imagery ranging from 0.85 cm to 1.9 cm, and multispectral imagery ranging from 1.44 cm to 3.25 cm.
To obtain representative CropFVC samples, surveys were conducted in Heilongjiang, Henan, and Hubei provinces. Each experimental site was visited at least twice to capture FVC measurements across key phenological stages of winter wheat, rice, maize, and soybean. Specifically, Henan Province included 20 winter wheat experimental sites and eight maize sites; Heilongjiang Province included five soybean sites, two maize sites, and two rice sites; and Hubei Province included one rice site (Figure 1). The sampling schedule for each specific crop was as follows: (i) Winter wheat: Sites W1~4 in Huaxian (28 March, 27 April and 15 May), W5~8 in Xiayi (18 April and 18 May), W9~12 in Taikang (22 March, 1 May and 21 May), W13~W16 in Zhengyang and W17~20 in Dengzhou (21 March, 26 April and 14 May); (ii) Maize: sites C1~4 in Taikang (14 July, 23 July), C5~8 in Shangshui (23 July, 15 August), and C9~10 in Hailun (18 June, 23 July); (iii) Rice: site R1 in Huanggang (17 July, 1 August, 23 August), R2 in Hailun and R3 in Beilin (18 June, 23 July); (iv) Soybean: sites S1~S4 in Hailun and S5 in Beilin (18 June, 23 July).

2.2. Construction of 10 m FVC Samples

For each experimental site, the original centimeter-level resolution UAV imagery was preprocessed using DJI Terra software (V4.2.5). Then, the imagery was classified using a supervised Support Vector Machine classifier trained with numerous visually interpreted samples for crop classification, which has been widely adopted in high-resolution crop mapping studies [31,32]. Mapping accuracy was assessed through visual interpretation, in which 40 circular plots (10 m diameter) were randomly selected within each 400 m × 400 m site. In cases of obvious misclassification or omission, the workflow was repeated from the first step to re-optimize the training samples. To match the spatial resolution of Sentinel-2 imagery (10 m), grids with 10 m × 10 m cells were generated based on the corresponding Sentinel-2 scene. The ground-truth CropFVC for each grid cell was then calculated as the proportion of centimeter-level crop pixels within that cell (Figure 2).
To supplement CropFVC samples for branching-stage soybeans, 0.75 m Jilin-1 satellite imagery acquired on 1 July 2024 was used for the S4 and S5 soybean experimental areas. Preprocessing of this imagery was performed using the PIE-Ortho 7.0 platform. The 0.75 m CropFVC was then calculated using the pixel dichotomy model. Finally, 10 m × 10 m grids were generated from the corresponding Sentinel-2 imagery, and the 10 m ground-truth CropFVC values were obtained by aggregating the proportion of 0.75 m crop pixels within each grid cell (Figure 2). The quality control process used to further integrate the 10 m CropFVC samples from UAV and Jilin-1 satellite imagery included (i) visual interpretation and imagery quality assessment, (ii) Sentinel-2 NDVI-based calibration to remove outliers, and (iii) crop-specific noise removal using statistical residual analysis. In total, the ten province-scale field sampling campaigns conducted in 2024 produced a database of 43,343 10 m pure CropFVC samples, comprising 8662 wheat, 8074 soybean, 5945 rice, and 20,662 maize samples.
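The aggregation from fine-resolution crop/non-crop classification maps to coarse ground-truth FVC grids described above amounts to a block mean over a binary crop mask. A minimal sketch (the function name and edge-trimming behavior are illustrative):

```python
import numpy as np

def gridded_fvc(crop_mask: np.ndarray, block: int) -> np.ndarray:
    """Aggregate a fine-resolution binary crop classification to a coarse grid:
    each output cell is the fraction of fine-resolution crop pixels inside it,
    i.e., the ground-truth FVC of that cell. Partial edge cells are trimmed."""
    h, w = crop_mask.shape
    h, w = h - h % block, w - w % block
    m = crop_mask[:h, :w].astype(float)
    # Reshape into (rows, block, cols, block) tiles and average each tile
    return m.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

# Toy 4x4 classification aggregated into 2x2 cells (block = 2)
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
fvc_grid = gridded_fvc(mask, 2)
```

In the study's workflow, the block size corresponds to the number of centimeter-level (or 0.75 m) pixels spanning one 10 m Sentinel-2 cell.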

2.3. Remote Sensing Imagery

In this study, the SNAP FVC and GEOV3 FVC products at the experiment sites were used for indirect validation of the proposed model, as they adopted an RTM-based inversion framework comparable to our framework and provide relatively high-resolution FVC for crop monitoring [33].
Cloud-free Level-2A Sentinel-2 multispectral imagery covering the 38 experimental sites served as the main input for both the proposed model and the Sentinel-2 Biophysical Processor (S2BP) used to estimate CropFVC. The Sentinel-2 imagery, with acquisition dates as close as possible to the field sampling dates, was obtained from the Copernicus Data Space Ecosystem (https://browser.dataspace.copernicus.eu/, accessed on 10 April 2025). The acquisition dates and corresponding tiles were as follows: (i) Winter wheat: 26 April 2024 (50SKE); 19 March 2024 and 18 May 2024 (50SKC); 19 March 2024, 26 April 2024, and 13 May 2024 (50SKB); 20 March 2024 and 24 April 2024 (49SES); 20 March 2024 (49SFS); (ii) Rice: 16 June 2024 and 16 July 2024 (52TCT); 6 August 2024 and 16 August 2024 (50RKU); (iii) Maize: 16 June 2024 and 16 July 2024 (52TCT); 30 July 2024 and 11 August 2024 (50SCK); (iv) Soybean: 16 June 2024 and 16 July 2024 (52TCT).
GEOV3 FVC products were obtained from the Copernicus Global Land Service portal (https://land.copernicus.eu/, accessed on 27 April 2025), with a spatiotemporal resolution of 10 days and 300 m. In this study, GEOV3 FVC products with acquisition dates closest to the corresponding Sentinel-2 imagery were selected: (i) Henan Province: winter wheat (20240320, 20240420, 20240430, 20240510, 20240520), maize (20240731, 20240810); (ii) Rice in Huangzhou, Hubei Province (20240801, 20240820); (iii) Soybean, maize, and rice in Hailun, Suihua, and Beilin, Heilongjiang Province (20240620, 20240720).

2.4. A Gap Fraction-Refined Hybrid FVC Inversion Framework

The overall workflow of the proposed model is illustrated in Figure 3. First, differentiated PROSAIL-D input-parameter tables were established for four major crops, i.e., winter wheat, rice, soybean, and maize, based on a systematic literature review of crop-specific physical models and expert agronomic knowledge. A global sensitivity analysis was then conducted within these predefined parameter ranges to support the refinement of crop-specific parameterization (see Section 2.4.1). The PROSAIL-D model (Python version PROSAIL-2.0.alpha, released 9 September 2022) was subsequently used to simulate canopy spectral reflectance across the 400–2500 nm range. These spectra were then resampled with the Sentinel-2 MSI A/B spectral response functions to generate a canopy reflectance dataset consistent with Sentinel-2 band characteristics (see Section 2.4.2). Next, using a refined Beer–Lambert formulation (see Section 2.4.3), the simulated LAI values were converted into theoretical FVC values, forming a simulated training dataset consisting of reflectance in 12 spectral bands and the corresponding FVC labels. Finally, this dataset was used to train an RF regression model on the Google Earth Engine (GEE) platform for CropFVC inversion based on Sentinel-2 imagery (see Section 2.4.4). Details on each step are described in the following subsections.

2.4.1. Crop-Specific Parameterization of the PROSAIL-D Model

Following the initial establishment of the crop-specific PROSAIL-D input-parameter tables, a global sensitivity-based parameter relevance assessment was conducted using SimLab 2.2 software to evaluate both the individual and interactive effects of each parameter on the model output. Together with agronomic prior knowledge and previous studies [16,29,34], this analysis supported the refinement of crop-specific input ranges summarized in Table 1. It is important to note that we selected the ellipsoidal option (LIDFtype = 2) to parameterize the leaf angle distribution (LAD) using ALA, rather than the two-parameter LIDF formulation (LIDFa and LIDFb). This choice is a pragmatic compromise and follows most operational retrieval schemes in large-scale PROSAIL applications and the Sentinel-2 biophysical product chains [35,36].
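Operationally, a crop-specific parameter table of this kind is realized by drawing random input combinations within the calibrated ranges before running the forward model. A minimal sketch (the range values below are illustrative placeholders, not the calibrated ranges of Table 1):

```python
import numpy as np

# Hypothetical crop-specific PROSAIL-D input ranges (illustrative values only;
# the calibrated crop-specific ranges used in the study are those in Table 1)
WHEAT_RANGES = {
    "LAI": (0.1, 7.0),    # leaf area index
    "ALA": (40.0, 75.0),  # average leaf angle (degrees)
    "Cab": (20.0, 80.0),  # chlorophyll a+b content (ug/cm^2)
}

def sample_inputs(ranges: dict, n: int, seed: int = 0) -> dict:
    """Draw a uniform random PROSAIL input table within crop-specific ranges,
    one array of n values per parameter."""
    rng = np.random.default_rng(seed)
    return {name: rng.uniform(lo, hi, n) for name, (lo, hi) in ranges.items()}

table = sample_inputs(WHEAT_RANGES, 1000)
```

Restricting each crop to its own ranges, rather than the generic 0°–90° ALA span, is what keeps the simulated dataset structurally realistic for that crop.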

2.4.2. Canopy Reflectance Simulation

The PROSAIL-D model couples the PROSPECT-D model [37] and the SAIL canopy model [38]. The PROSPECT-D simulates leaf reflectance and transmittance as functions of structural and biochemical parameters, including leaf structure, chlorophyll content, water content, and dry matter content. The SAIL model is a bidirectional canopy reflectance model that assumes the canopy is a horizontally homogeneous, infinitely extended layer of leaves with isotropically distributed azimuth. The raw spectra simulated by PROSAIL-D cannot be directly applied for parameter inversion and were therefore convolved with the spectral response functions of Sentinel-2 MSI to match its band characteristics. This step generated crop-specific sample datasets, comprising 33,600 winter wheat, 48,000 rice, 43,200 maize, and 50,400 soybean samples.
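The spectral resampling step above reduces to an SRF-weighted mean of the 1 nm spectrum for each band. A minimal sketch, using a hypothetical Gaussian response in place of the published MSI A/B response functions:

```python
import numpy as np

# 1 nm spectral grid matching the PROSAIL-D output range (400-2500 nm)
wl = np.arange(400, 2501)
refl = 0.3 + 0.1 * np.sin(wl / 200.0)  # placeholder simulated canopy spectrum

def band_reflectance(refl: np.ndarray, srf: np.ndarray) -> float:
    """Convolve a 1 nm spectrum with one band's spectral response function:
    the SRF-weighted mean reflectance over the spectral grid."""
    return float(np.sum(refl * srf) / np.sum(srf))

# Hypothetical Gaussian SRF centered at 665 nm (roughly Sentinel-2 B4);
# operational resampling uses the published MSI A/B response functions instead.
srf_b4 = np.exp(-0.5 * ((wl - 665) / 15.0) ** 2)
b4 = band_reflectance(refl, srf_b4)
```

Repeating this over all bands turns each simulated hyperspectral spectrum into one Sentinel-2-like reflectance vector for the training dataset.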

2.4.3. Refinement of the Gap Fraction Model

Since FVC is not a direct input or output variable of the PROSAIL model, it is commonly derived using the gap fraction model [39], which is based on Beer–Lambert’s law:
$$P_0(\theta) = \exp\!\left(-\frac{G(\theta)\cdot\Omega\cdot \mathrm{LAI}}{\cos\theta}\right)$$
where $P_0(\theta)$ is the canopy gap fraction in a given direction $\theta$; LAI is the leaf area index; $\Omega$ is the clumping index, typically taken as one, as per prior studies [40]; and $G(\theta)$ is the projection function that describes the average projected leaf area in the direction $\theta$. According to the definition of FVC, the calculation is usually performed at the nadir direction ($\theta = 0^{\circ}$) [25], therefore yielding
$$FVC = 1 - P_0(0)$$
$$FVC = 1 - \exp\!\left(-G(0)\cdot \mathrm{LAI}\right)$$
Thus, the key to accurate FVC estimation lies in the proper determination of $G(0)$. The extinction coefficient $K$, a key parameter that characterizes the attenuation of solar radiation within the vegetation canopy, physically represents the ratio of the canopy's projected area on a horizontal plane to the total leaf area [41] and is defined as:
$$K(\theta) = \frac{G(\theta)}{\cos\theta}$$
Campbell [42,43] proposed an ellipsoidal distribution model, in which K is defined as the ratio of the projected shadow area of an ellipsoid to its surface area. The formula and description are as follows: assuming the vertical semi-axis is a and the horizontal semi-axis of the ellipsoid is b, the projected shadow area at a zenith angle θ (SA) is given by:
$$SA = \pi b^{2}\left(1 + \frac{a^{2}\tan^{2}\theta}{b^{2}}\right)^{1/2}$$
And the surface area of an ellipsoid is given by the following formulas:
$$S_{1} = 2\pi b^{2}\left[1 + \frac{a}{b\,\varepsilon_{1}}\sin^{-1}\varepsilon_{1}\right], \quad \chi < 1$$
$$S_{2} = 2\pi b^{2}\left[1 + \frac{a^{2}}{2b^{2}\varepsilon_{2}}\ln\frac{1+\varepsilon_{2}}{1-\varepsilon_{2}}\right], \quad \chi \geq 1$$
where S1 and S2 refer to the surface areas of a prolate spheroid and an oblate spheroid. Once the projected shadow area and the ellipsoid surface area are determined, K is calculated as follows:
$$K = \frac{2\,SA}{S_{1}} = \frac{\sqrt{\chi^{2}+\tan^{2}\theta}}{\chi + \sin^{-1}\varepsilon_{1}/\varepsilon_{1}}, \quad \chi < 1$$
$$K = \frac{2\,SA}{S_{2}} = \frac{\sqrt{\chi^{2}+\tan^{2}\theta}}{\chi + \dfrac{1}{2\varepsilon_{2}\chi}\ln\dfrac{1+\varepsilon_{2}}{1-\varepsilon_{2}}}, \quad \chi \geq 1$$
Here, $\varepsilon$ is the eccentricity of the ellipsoid, and $\chi$ is the ratio of the horizontal semi-axis to the vertical semi-axis of the ellipsoid, defined as $\chi = b/a$. The expressions for $\varepsilon$ are:
$$\varepsilon_{1} = \left(1 - \chi^{2}\right)^{1/2}, \quad \chi < 1$$
$$\varepsilon_{2} = \left(1 - \chi^{-2}\right)^{1/2}, \quad \chi \geq 1$$
Accordingly, the expression for $G(\theta)$ is:
$$G(\theta) = \frac{\sqrt{\chi^{2}+\tan^{2}\theta}}{\chi + \sin^{-1}\varepsilon_{1}/\varepsilon_{1}}\cos\theta, \quad \chi < 1$$
$$G(\theta) = \frac{\sqrt{\chi^{2}+\tan^{2}\theta}}{\chi + \dfrac{1}{2\varepsilon_{2}\chi}\ln\dfrac{1+\varepsilon_{2}}{1-\varepsilon_{2}}}\cos\theta, \quad \chi \geq 1$$
Generally, the value of $\chi$ ranges from 0.1 to 10. When $\chi < 1$, the ellipsoid's horizontal axis is shorter than its vertical axis, indicating a larger average leaf inclination angle. When $\chi = 1$, the ellipsoid reduces to a sphere. When $\chi > 1$, the horizontal axis is longer than the vertical axis, indicating a smaller average leaf inclination angle. The parameter $\chi$ is strongly related to the mean leaf inclination angle (MLIA), and their empirical relationship is expressed as follows [30,44]:
$$\chi = -3 + \left(\frac{\alpha}{9.65}\right)^{-0.6061}$$
where $\alpha$ refers to the MLIA (expressed in radians). In this study, the PROSAIL-retrieved ALA at the pixel level was treated as a proxy for the MLIA in the Campbell model to parameterize the ellipsoidal LAD, under a unimodal ellipsoidal LAD assumption and negligible explicit clumping effects. This is an explicit approximation that is considered reasonable when LADs are unimodal, as typically assumed in ellipsoidal models [42,43]. The distinction between arithmetic averages of leaf angles (ALAs) and distribution-based means (e.g., MLIA) has been well documented in canopy architecture studies [45]. We emphasize that ALA ≃ MLIA is not a universal identity but a practical simplification adopted here because Sentinel-2 provides single-view multispectral observations with limited sensitivity to independent LAD shape parameters, leading to equifinality with LAI and leaf optical traits (see Section 2.4.1). To ensure internal consistency, we verified the use of consistent angular conventions (leaf angle defined relative to zenith) and unit conversions (degrees to radians). By substituting ALA into Equation (10), $\chi$ can be estimated, which in turn allows the calculation of $G(0)$ (Equations (8) and (9)) and, ultimately, FVC when combined with PROSAIL-derived LAI values (Equation (3)). Although approximate, this substitution enables a pixel-wise, dynamic, and crop-specific $G(0)$ rather than a fixed spherical value of 0.5. Through this simple yet physically consistent chain (ALA → $\chi$ → $G(0)$ → FVC), we bridged physical consistency with operational feasibility, providing a more realistic representation of crop canopy projection geometry, which directly benefits accurate CropFVC estimation when combined with LAI.
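The ALA → χ → G(0) → FVC chain described above can be sketched in a few lines. This is a minimal illustration assuming ALA is supplied in degrees and converted to radians for the Campbell relation (function names are illustrative):

```python
import math

def chi_from_ala(ala_deg: float) -> float:
    """Equation (10): Campbell's empirical chi from the mean leaf inclination
    angle, with the angle converted from degrees to radians."""
    alpha = math.radians(ala_deg)
    return (alpha / 9.65) ** (-0.6061) - 3.0

def g_nadir(chi: float) -> float:
    """Equations (8)-(9) evaluated at theta = 0 (tan 0 = 0, cos 0 = 1)."""
    if abs(chi - 1.0) < 1e-9:
        return 0.5  # spherical leaf angle distribution
    if chi < 1.0:   # erect-leaved canopy (prolate spheroid)
        eps = math.sqrt(1.0 - chi ** 2)
        return chi / (chi + math.asin(eps) / eps)
    # planophile canopy (oblate spheroid)
    eps = math.sqrt(1.0 - chi ** -2)
    return chi / (chi + math.log((1.0 + eps) / (1.0 - eps)) / (2.0 * eps * chi))

def fvc_from_lai(lai: float, ala_deg: float) -> float:
    """Equation (3) with a pixel-wise dynamic G(0) and clumping index of one."""
    return 1.0 - math.exp(-g_nadir(chi_from_ala(ala_deg)) * lai)
```

As a sanity check, a near-spherical canopy (ALA ≈ 57.3°) yields G(0) close to the conventional 0.5, while planophile canopies (χ > 1) yield G(0) > 0.5 and erect-leaved canopies (χ < 1) yield G(0) < 0.5.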
Based on the refined gap fraction model introduced above, the crop-specific datasets derived in Section 2.4.2 from PROSAIL-D were integrated with FVC labels, providing large-scale crop-specific theoretical FVC simulations. Here, the total number of simulated training samples reached 33,600 for winter wheat, 43,200 for rice, 43,200 for maize, and 50,400 for soybeans.

2.4.4. RTM-Informed Random Forest Training for CropFVC Retrieval

The RF model was selected for CropFVC inversion due to its suitability for handling high-dimensional Sentinel-2 spectral data, capturing nonlinear relationships between reflectance and FVC, and maintaining robustness against noise and overfitting. Here, the canopy reflectance data from Sentinel-2 bands B3, B4, B5, B6, B7, B8, B8A, B11, and B12 in the constructed training set were used as input features, while the theoretical FVC values served as labels for training the RF model. The trained model was then applied to real Sentinel-2 imagery at experimental sites to estimate crop-specific FVC.
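A minimal sketch of this RTM-informed training step is below. The random arrays stand in for the PROSAIL-D simulated training set, and the hyperparameters are illustrative assumptions (the paper trains its RF on Google Earth Engine and does not report them here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Input features: the nine Sentinel-2 bands used in the study
BANDS = ["B3", "B4", "B5", "B6", "B7", "B8", "B8A", "B11", "B12"]

# Placeholder stand-ins for the simulated training set: in the actual workflow
# X_sim holds PROSAIL-simulated band reflectance and y_sim the theoretical
# FVC labels derived from the refined gap fraction model.
rng = np.random.default_rng(0)
X_sim = rng.uniform(0.0, 0.6, size=(500, len(BANDS)))
y_sim = rng.uniform(0.0, 1.0, size=500)

rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=2,
                           random_state=42, n_jobs=-1)
rf.fit(X_sim, y_sim)

# Apply the trained model to a (pixels x bands) reflectance matrix
fvc_pred = rf.predict(X_sim[:5])
```

Because RF regression averages over training labels, predictions remain within the [0, 1] range of the theoretical FVC labels without explicit clipping.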

2.5. Model Evaluation

The estimated crop-specific FVC was directly validated against the 10 m pure CropFVC sample database collected in 2024. A stratified random sampling strategy was employed to construct the validation sample set. For each crop type, the full set of CropFVC samples was first divided into 10 gradient intervals, from which 8 samples were randomly selected per interval. An additional 70 samples were then randomly drawn from the remaining data, yielding a total of 150 validation points per crop. Model performance was evaluated using standard metrics (Equations (11)–(13)). To enhance robustness and stability, the stratified random sampling procedure was repeated 10 times, and the final inversion accuracy for each crop was expressed as the average of the evaluation metrics across all iterations.
$$RMSE = \sqrt{\frac{\sum_{i=1}^{n}\left(y_{i} - x_{i}\right)^{2}}{n}} \times 100$$
$$R^{2} = \frac{\left[\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)\left(y_{i}-\bar{y}\right)\right]^{2}}{\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}\sum_{i=1}^{n}\left(y_{i}-\bar{y}\right)^{2}}$$
$$Bias = \frac{\sum_{i=1}^{n}\left(y_{i} - x_{i}\right)}{n}$$
where $x_{i}$ is the field-measured FVC; $y_{i}$ is the FVC from model inversion; $\bar{x}$ is the mean of the field-measured FVC; $\bar{y}$ is the mean of the FVC from model inversion; and $n$ is the sample size.
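The three evaluation metrics can be computed directly from the paired samples; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def evaluate(y_pred, x_obs):
    """Accuracy metrics of Equations (11)-(13): RMSE (in percent of FVC),
    squared correlation R^2, and mean bias (inversion minus field measurement)."""
    y = np.asarray(y_pred, dtype=float)
    x = np.asarray(x_obs, dtype=float)
    rmse = np.sqrt(np.mean((y - x) ** 2)) * 100.0
    r2 = (np.sum((x - x.mean()) * (y - y.mean())) ** 2
          / (np.sum((x - x.mean()) ** 2) * np.sum((y - y.mean()) ** 2)))
    bias = float(np.mean(y - x))
    return rmse, r2, bias

# A constant +0.1 FVC offset gives RMSE = 10%, R^2 = 1, bias = +0.1
rmse, r2, bias = evaluate([0.2, 0.6, 1.0], [0.1, 0.5, 0.9])
```

Note that this R² is the squared Pearson correlation, so a constant offset leaves it at 1 while the bias term captures the systematic error, which is why all three metrics are reported together.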
In addition to direct validation, indirect validation was conducted by comparing the model-inverted CropFVC with existing RTM-based operational retrieval chains. Specifically, SNAP-based FVC values for experimental sites were used for comparison. To ensure consistency and avoid potential biases from different sample batches, the same 40 sets of measured samples (10 iterations per crop) used for method validation in Section 3.1 were also employed for validating the SNAP FVC. Furthermore, the GEOV3 FVC product was compared with the model-inverted FVC across the sampling areas, enabling performance assessment from both temporal and spatial perspectives.

3. Results

3.1. Model Validation Using Field Samples of 2024

The results of ten independent random sampling trials for the four crop types are summarized in Table S1. Overall, the proposed method exhibited consistently high accuracy in 10 m FVC retrieval across different crops. For winter wheat, the RMSE ranged from 6.34% to 6.98% (average: 6.61%), R2 from 0.81 to 0.85 (average: 0.83), and the average bias was 0.01. For rice, the RMSE ranged from 6.16% to 7.12% (average: 6.61%), R2 from 0.90 to 0.92 (average: 0.91), and the average bias was −0.01. For maize, the RMSE ranged from 8.78% to 9.92% (average: 9.30%), R2 from 0.90 to 0.92 (average: 0.91), and the average bias was approximately −0.02. For soybean, the RMSE ranged from 5.94% to 6.65% (average: 6.26%), R2 consistently reached approximately 0.93, and the average bias was 0.01. These results indicate that the model provides crop-specific FVC estimates with strong agreement with reference data across the four crop types.
Taking the best-performing trial (with the lowest RMSE) as an example (Figure 4), the model-inverted FVC exhibited strong agreement with observations across all four crops, with R2 consistently above 0.84 and RMSE below 8.8%. Specifically, RMSE values were 6.34% for winter wheat, 6.16% for rice, 8.78% for maize, and 5.94% for soybeans. The average bias for all crops remained within ±0.02, indicating minimal systematic error and further demonstrating the robustness and reliability of the proposed method across diverse crop types.

3.2. Performance Comparison

3.2.1. Comparison with SNAP FVC

To compare the accuracy of the proposed model with that of the SNAP-based FVC product at 10 m resolution, the latter was validated using the same 40 sets of measured samples (150 samples per set) described in Section 3.1. This ensured consistency in evaluation and avoided potential biases caused by differences in sample batches. The results (Table S2) show that the average RMSE values for the four crops (winter wheat, rice, maize, and soybean) were 11.17%, 9.26%, 16.29%, and 14.57%, respectively, and the average bias values were approximately 0.09, −0.01, 0.11, and 0.12, respectively. Using the same 600 samples as in Figure 4, the accuracy of the SNAP-derived FVC for winter wheat, rice, maize, and soybean is shown in Figure 5. Compared with Figure 4, the FVC values estimated by the proposed method show markedly higher consistency with the field measurements, especially for maize and soybean. Moreover, the proposed method effectively mitigates the overestimation tendency of SNAP FVC in high-value regions and yields more reliable estimates during the later growth stages of the crops.

3.2.2. Comparison with GEOV3 FVC

To further evaluate robustness, the model-inverted FVC was also compared with the GEOV3 FVC product from a temporal and spatial perspective. Here, spatiotemporal matching was performed by aggregating dense field-measured 10 m FVC sampling points into 300 m resolution sample units. Temporally, the “closest-date” principle was applied, whereby the FVC product observation closest to the sample acquisition date was selected. Spatially, the GEOV3 grids were first overlaid with experimental sites and only those grids where field samples covered more than 30% of the area were retained. To ensure reliability, grids with high uncertainty were excluded based on visual interpretation and a neighborhood statistical consistency check. For the remaining valid grids, the average field-measured FVC and model-inverted FVC were calculated and then compared against the corresponding GEOV3 FVC values.
The accuracy comparison of the four major crops at the 300 m scale (Figure 6) indicates that the GEOV3 FVC consistently shows a systematic overestimation, which becomes more pronounced in the later stages of crop growth. Specifically, the RMSE values for winter wheat, rice, maize, and soybean were 12.77%, 13.91%, 22.24%, and 25.15%, respectively, all substantially higher than those of the proposed model. Moreover, the proposed method outperformed GEOV3 in terms of R2, RMSE, and bias across all four crops, demonstrating superior accuracy and stability. It is worth noting that, compared to the point-based evaluation with 150 individual 10 m samples, the grid-based aggregation reduced the impact of noise and local outliers, thereby slightly improving the overall accuracy.
Two representative GEOV3 grids were selected for further spatiotemporal comparison of CropFVC dynamics between the model-derived and GEOV3 products. Both grids correspond to winter wheat–maize rotation areas, located entirely within 400 m Site W11/C3 and Site W16, respectively. These two grids were chosen because they fully capture a complete winter wheat–maize rotation cycle and are representative of typical crop management practices in Henan, the major grain-producing province of China. Figure 7 presents the time series trajectories of model-derived FVC and GEOV3 FVC at the 300 m scale from October 2023 to November 2024. In addition, the top panels of Figure 7a,b provide visual comparisons, with the upper rows showing model-derived FVC tiles at 10 m resolution and the lower rows presenting the corresponding GEOV3 FVC images at 300 m resolution on the closest date.
The results show that the model-derived FVC exhibits overall consistent temporal dynamics with the GEOV3 FVC products across both rotation sites (Figure 7a,b). However, GEOV3 FVC values are systematically higher than those of the proposed model, particularly during rapid growth phases and around peak FVC periods in both crop seasons. This overestimation trend is consistent with the findings reported in the previous section.
Compared with GEOV3, the proposed method based on Sentinel-2 imagery provides FVC estimates at a finer spatial resolution (10 m), allowing more detailed capture of within-field variation and spatial heterogeneity. This higher resolution enables the model to better represent local differences during crop development, particularly around critical growth stages, where coarse-resolution products such as GEOV3 (300 m) tend to smooth or overestimate canopy density. For example, in Figure 7a, the GEOV3 FVC at the peak growth stage of maize is noticeably higher than that derived from the proposed method. Field verification with UAV imagery confirms that lodging occurred in this site, which reduced the actual canopy cover. While the coarse-resolution GEOV3 product failed to detect this localized phenomenon, the proposed method successfully captured the reduced FVC, demonstrating its advantage in accurately reflecting fine-scale crop conditions.

4. Discussion

4.1. Impact of Canopy Structural Refinement on CropFVC Estimation

By combining the strengths of physical modeling, machine learning, and crop-specific refinements, the proposed framework improves large-scale CropFVC estimation at 10 m resolution. This improvement can be attributed to two complementary canopy-structure refinements:
(1) Crop-specific parameterization of the RTM. This study constructed differentiated PROSAIL input parameter tables for each crop by incorporating prior agronomic knowledge and systematically reviewing previous research. Earlier studies have demonstrated that a tailored physical-model parameterization improves the representation of forward-modeled canopy variables for specific vegetation covers. For example, Berger et al. [16] summarized typical PROSAIL-D parameter ranges for wheat, rice, maize, soybean, and sugar beet, providing important reference values for input settings. Jiao et al. [29] optimized ALA through field spectral measurements and random forest (RF) regression, identifying mean angles of 62° for wheat and 45° for soybean, which markedly improved chlorophyll inversion accuracy. These findings underline the importance of crop-specific parameterization in producing more realistic synthetic datasets while reducing redundancy. Therefore, the PROSAIL-D parameters in our study were calibrated in a crop-specific manner to ensure that canopy structure and optical traits were more accurately represented for each individual crop type. In this context, LAI and ALA are introduced as simulation constraints within the PROSAIL-based training process rather than as operational input variables. Here, ALA serves as a proxy for canopy angular structure, ensuring that the framework remains applicable at large scales without requiring direct structural measurements.
(2) Crop-specific refinement of the projection function with a dynamic G (0). The canopy projection function G (0) plays a significant role in FVC estimation [46] but is often oversimplified to a constant value of 0.5, corresponding to a spherical LAD in which leaf normals are uniformly distributed and the mean leaf inclination angle (MLIA) is approximately 57.3° [46,47]. However, studies have shown that this spherical assumption may significantly underestimate canopy transmittance [48]. In crop-specific applications, such an assumption may be invalid given the diversity, seasonality, and heterogeneity of crop canopies [30].
This study refined a 10 m pixel-based dynamic G (0) for crops through a simple but physically consistent chain (PROSAIL-ALA → χ → G (0)), allowing improved CropFVC retrieval compared with the conventional assumption of G (0) = 0.5. To further assess the influence of G (0) on model accuracy, a reference hybrid model identical to the proposed one was constructed, except that the dynamic crop-specific G (0) was replaced with the constant value of 0.5. Using the same 40 sample sets from four crop types (150 samples per set, Section 3.1) and the same 10-fold cross-validation strategy, the reference model with a fixed projection function, G (0) = 0.5, exhibited reduced accuracy compared with the proposed framework (Table S3). Specifically, the average RMSE increased by 5.14% (0.83–10.07%) across all crops, and the mean bias increased from −0.02 to 0.15. These results indicate that refining G (0) contributes to improved CropFVC retrieval performance when integrated into the proposed crop-specific inversion framework.
The magnitude of improvement varied among crop types and did not follow a strictly crop-dependent pattern, despite theoretical expectations that erectophile (e.g., maize) or planophile (e.g., soybean) canopy structures would benefit more from dynamic G (0). This reflects the fact that the performance gains reported in Section 3 arise from the combined effects of crop-specific PROSAIL parameterization and dynamic refinement of the projection function, rather than from dynamic G (0) alone. In practice, dynamic G (0) enhances the structural consistency between simulated and observed canopies, while residual uncertainties related to canopy clumping and sub-pixel heterogeneity, particularly in row-structured crops, may attenuate its observable impact. Given the exponential sensitivity of FVC to G (0) (Equation (3)), incorporating per-pixel G (0) corrections derived from inverted ALA provides a physically consistent and numerically influential, yet targeted, improvement over the oversimplified assumption of a constant spherical leaf angle distribution.
The above experiment also demonstrated that the explicit operational chain (ALA → χ → G (0) → FVC) used in our study can disentangle the effect of the fixed spherical assumption (G (0) = 0.5) from that of a dynamic, canopy-dependent G (0). Making this chain explicit outside the PROSAIL black box not only benefits interpretability and transparency but also provides flexibility for further optimization of the canopy extinction process. It offers an ‘interface’ for integrating additional structural corrections, such as canopy clumping indices, row orientation, or non-random foliage distributions, that are known to affect extinction and gap fraction but are not represented in the classical random-canopy hypothesis of SAIL [43].
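Under the single-parameter ellipsoidal LAD [42,45], this chain can be sketched as below. The Campbell approximations used here are the standard published formulas, not necessarily the paper's exact implementation, and the ALA and LAI values are illustrative only.

```python
import numpy as np

def chi_from_ala(ala_deg: float) -> float:
    """Ellipsoidal LAD parameter chi from the average leaf angle, inverting
    Campbell's approximation ALA[rad] ~ 9.65 * (3 + chi)**-1.65."""
    return (np.deg2rad(ala_deg) / 9.65) ** (-1.0 / 1.65) - 3.0

def g_nadir(chi: float) -> float:
    """Nadir projection function G(0) for an ellipsoidal LAD
    (Campbell & Norman extinction coefficient evaluated at theta = 0)."""
    return chi / (chi + 1.774 * (chi + 1.182) ** -0.733)

def fvc_from_gap_fraction(lai: float, g0: float) -> float:
    """FVC as the complement of the nadir gap fraction: 1 - exp(-G(0) * LAI)."""
    return 1.0 - np.exp(-g0 * lai)

# Illustrative erectophile canopy (assumed ALA = 65 deg, LAI = 3):
chi = chi_from_ala(65.0)                           # chi < 1 for erect leaves
fvc_dynamic = fvc_from_gap_fraction(3.0, g_nadir(chi))
fvc_spherical = fvc_from_gap_fraction(3.0, 0.5)    # fixed spherical G(0)
# The spherical assumption yields a higher FVC for this erect canopy,
# mirroring the overestimation discussed above.
```

Note that for a spherical canopy (χ ≈ 1), `g_nadir` returns approximately 0.5, recovering the conventional assumption as a special case; the exponential form also makes clear why per-pixel G (0) corrections are numerically influential at moderate to high LAI.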

4.2. Toward Crop-Specific High-Resolution FVC Products

High-resolution, crop-specific FVC products are increasingly required to support smart and precision agricultural management. However, existing global or generic FVC products, such as SNAP and GEOV, often show limited accuracy when applied to individual crop types, particularly at the field scale.
SNAP- and GEOV3-based FVC products are primarily designed for large-scale, multi-biome applications, where robustness and global consistency are prioritized over crop-specific structural representation. In this study, these generic FVC products exhibited a systematic overestimation under moderate to high FVC conditions across all four crops, consistent with earlier studies [49]. This bias constrains the ability of these products to capture strong phenological dynamics and complex canopy structures.
Moreover, the magnitude and manifestation of these biases vary across crop types. Our comparison reveals a consistent crop-dependent pattern in both SNAP FVC (10 m) and GEOV3 FVC (300 m): wheat and rice are estimated more accurately than maize and soybean, consistent with previous findings [50]. The relatively higher RMSE values observed for maize (16.29% in SNAP; 22.24% in GEOV3) and soybean (14.57% in SNAP; 25.15% in GEOV3) may be attributed to the generalized parameterization schemes and canopy structure assumptions embedded in the training databases of these products, which do not fully represent the complex and heterogeneous canopy architectures of row crops.
The limitations noted above highlight the need for crop-specific FVC products that explicitly account for canopy structural variability and phenological dynamics, rather than relying on generic parameterizations optimized for global land surface monitoring. Developing such crop-specific, high-precision FVC products is therefore essential for advancing large-scale agricultural monitoring and precision farming applications.

4.3. Limitations and Outlooks

Several limitations should be acknowledged and addressed in future work. First, the explicit operational chain (PROSAIL-ALA → χ → G (0) → FVC) is based on a single-parameter ellipsoidal LAD assumption. Despite its operational feasibility, the ellipsoidal assumption cannot always fully represent the actual LADs of row crops [51,52]. Future work should therefore explore two-parameter LAD formulations, supported by either multi-angle satellite observations or dedicated LAD field campaigns, to better capture crop- and stage-specific LAD dynamics. Second, factors such as canopy clumping (Ω), crop row structure, and canopy heterogeneity also affect extinction processes. Future improvements could incorporate Ω derived from direct LAD measurements or high-resolution canopy reconstructions to achieve a more realistic characterization of LAD and canopy aggregation. Third, although UAV imagery provides high spatial resolution, uncertainties associated with image acquisition, radiometric inconsistencies, and classification-based vegetation extraction remain unavoidable [5,53]. Future studies could validate UAV-derived FVC against ground-based photography to better quantify these influences and their associated uncertainties.

5. Conclusions

CropFVC is a key indicator of crop growth, yet existing RTM-based FVC retrieval models and products, largely developed for general vegetation types, often exhibit reduced accuracy when applied to crop-specific FVC estimations. To address this limitation, we proposed a gap fraction-refined hybrid FVC inversion model to enhance CropFVC estimation.
Leveraging ten province-scale field-sampling campaigns across four major crops (winter wheat, rice, maize, and soybean) spanning March to August 2024, the proposed model was validated against a CropFVC database covering key phenological stages (e.g., jointing, heading, and grain filling) across major grain-producing regions in China. Results showed that (1) the model achieved stable performance across all crops, with R2 > 0.85 and RMSE < 10% for wheat, rice, maize, and soybean; and (2) compared with SNAP FVC (10 m), the proposed method reduced RMSE by 4.83% (winter wheat), 3.10% (rice), 7.51% (maize), and 8.63% (soybean). Compared with GEOV3 FVC (300 m), the RMSE reductions were even greater, reaching 7.88%, 9.49%, 13.63%, and 19.75%, respectively. Moreover, the model better captured intra-field variability and abnormal crop conditions, providing a more accurate representation of CropFVC dynamics across growth stages.
By introducing crop-specific calibration and a dynamic gap fraction formulation, the proposed model enhances CropFVC estimation and outperforms existing RTM-based FVC inversion products across scales, enabling high-resolution (10 m) CropFVC mapping over large areas. These advances point to a promising pathway for developing next-generation FVC products tailored to agricultural applications, with important implications for precision agriculture and food security monitoring.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs18050751/s1, Table S1: Ten independent random sampling trials for the four crop types (Proposed model); Table S2: Ten independent random sampling trials for the four crop types (SNAP); Table S3: Ten independent random sampling trials for the four crop types.

Author Contributions

Conceptualization, L.X. and H.W.; Methodology, L.X., J.Z., T.C., Q.J. and Y.Q.; Software, J.Z. and Y.Q.; Validation, L.X., J.Z., Y.Q. and H.M.; Formal analysis, L.X., J.Z., Y.Q. and H.W.; Investigation, L.X., J.Z., Y.Q. and H.M.; Resources, L.X., T.C., Q.J. and H.W.; Data curation, L.X.; Writing—original draft, L.X. and J.Z.; Writing—review & editing, L.X., T.C., Q.J. and H.W.; Visualization, L.X. and J.Z.; Supervision, L.X., T.C. and H.W.; Project administration, T.C. and H.W.; Funding acquisition, L.X. and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the National Key R&D Program of China (2023YFB3906202), the National Natural Science Foundation of China (41701474 and U23A2020), and the Hubei Provincial Natural Science Foundation of China (2024AFA032).

Data Availability Statement

Sentinel-2 MSI imagery used in this study is publicly available from the Copernicus Open Access Hub. GEOV3 FVC products were obtained from the Copernicus Global Land Service portal. The authors constructed the 10 m FVC validation samples, which are available from the corresponding author upon reasonable request.

Acknowledgments

The authors are grateful to the anonymous reviewers for their constructive criticism and comments. We sincerely thank Xu Ma (Xinjiang University) for his valuable discussions on the FVC retrieval, which provided helpful insights for this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87. [Google Scholar]
  2. Dashpurev, B.; Dorj, M.; Phan, T.N.; Bendix, J.; Lehnert, L.W. Estimating fractional vegetation cover and aboveground biomass for land degradation assessment in eastern Mongolia steppe: Combining ground vegetation data and remote sensing. Int. J. Remote Sens. 2023, 44, 452–468. [Google Scholar]
  3. Fang, H.; Liu, W.; Wei, S.; Liu, X.; Myneni, R.B. New insights of global vegetation structural properties through an analysis of canopy clumping index, fractional vegetation cover, and leaf area index. Sci. Remote Sens. 2021, 4, 100027. [Google Scholar]
  4. Jiapaer, G.; Chen, X.; Bao, A. A comparison of methods for estimating fractional vegetation cover in arid regions. Agr. Forest Meteorol. 2011, 151, 1698–1710. [Google Scholar]
  5. Ye, X.; Zhu, W.; Liu, R.; He, B.; Yang, X.; Zhao, C. An automated method for estimating fractional vegetation cover from camera-based field measurements: Saturation-adaptive threshold for ExG (SATE). ISPRS J. Photogramm. Remote Sens. 2025, 229, 170–187. [Google Scholar] [CrossRef]
  6. Jia, K.; Yao, Y.; Wei, X.; Gao, S.; Jiang, B.; Zhao, X. A review on fractional vegetation cover estimation using remote sensing. Adv. Earth Sci. 2013, 28, 774–782. [Google Scholar]
  7. Verrelst, J.; Malenovský, Z.; Van der Tol, C.; Camps-Valls, G.; Gastellu-Etchegorry, J.P.; Lewis, P.; Moreno, J. Quantifying vegetation biophysical variables from imaging spectroscopy data: A review on retrieval methods. Surv. Geophys. 2019, 40, 589–629. [Google Scholar] [CrossRef]
  8. Yue, J.; Guo, W.; Yang, G.; Zhou, C.; Feng, H.; Qiao, H. Method for accurate multi-growth-stage estimation of fractional vegetation cover using unmanned aerial vehicle remote sensing. Plant Methods 2021, 17, 51. [Google Scholar]
  9. García-Haro, F.J.; Camacho, F.; Verger, A.; Meliá, J. Current status and potential applications of the LSA SAF suite of vegetation products. In Proceedings of the 29th EARSeL Symposium, Chania, Greece, 15–18 June 2009; pp. 15–18. [Google Scholar]
  10. Mu, X.; Song, W.; Gao, Z.; McVicar, T.R.; Donohue, R.J.; Yan, G. Fractional vegetation cover estimation by using multi-angle vegetation index. Remote Sens. Environ. 2018, 216, 44–56. [Google Scholar]
  11. Zhang, X.; Liao, C.; Li, J.; Sun, Q. Fractional vegetation cover estimation in arid and semi-arid environments using HJ-1 satellite hyperspectral data. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 506–512. [Google Scholar]
  12. Zeng, X.; Dickinson, R.E.; Walker, A.; Shaikh, M.; DeFries, R.S.; Qi, J. Derivation and evaluation of global 1-km fractional vegetation cover data for land modeling. J. Appl. Meteorol. 2000, 39, 826–839. [Google Scholar] [CrossRef]
  13. Song, W.; Mu, X.; McVicar, T.R.; Knyazikhin, Y.; Liu, X.; Wang, L.; Yan, G. Global quasi-daily fractional vegetation cover estimated from the DSCOVR EPIC directional hotspot dataset. Remote Sens. Environ. 2022, 269, 112835. [Google Scholar] [CrossRef]
  14. Filipponi, F.; Valentini, E.; Nguyen Xuan, A.; Guerra, C.A.; Wolf, F.; Andrzejak, M.; Taramelli, A. Global MODIS fraction of green vegetation cover for monitoring abrupt and gradual vegetation changes. Remote Sens. 2018, 10, 653. [Google Scholar] [CrossRef]
  15. Ma, X.; Lu, L.; Ding, J.; Zhang, F.; He, B. Estimating fractional vegetation cover of row crops from high spatial resolution image. Remote Sens. 2021, 13, 3874. [Google Scholar] [CrossRef]
  16. Berger, K.; Atzberger, C.; Danner, M.; D’Urso, G.; Mauser, W.; Vuolo, F.; Hank, T. Evaluation of the PROSAIL model capabilities for future hyperspectral model environments: A review study. Remote Sens. 2018, 10, 85. [Google Scholar] [CrossRef]
  17. Atzberger, C. Object-based retrieval of biophysical canopy variables using artificial neural nets and radiative transfer models. Remote Sens. Environ. 2004, 93, 53–67. [Google Scholar] [CrossRef]
  18. Tu, Y.; Jia, K.; Wei, X.; Yao, Y.; Xia, M.; Zhang, X.; Jiang, B. A time-efficient fractional vegetation cover estimation method using the dynamic vegetation growth information from time series Glass FVC product. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1672–1676. [Google Scholar] [CrossRef]
  19. Song, D.X.; Wang, Z.; He, T.; Wang, H.; Liang, S. Estimation and validation of 30 m fractional vegetation cover over China through integrated use of Landsat 8 and Gaofen 2 data. Sci. Remote Sens. 2022, 6, 100058. [Google Scholar] [CrossRef]
  20. Yang, L.; Jia, K.; Liang, S.; Liu, J.; Wang, X. Comparison of four machine learning methods for generating the GLASS fractional vegetation cover product from MODIS data. Remote Sens. 2016, 8, 682. [Google Scholar] [CrossRef]
  21. Wu, Z.; Zheng, X.; Ding, Y.; Tao, Z.; Sun, Y.; Li, B.; Li, X. A Method for Retrieving Maize Fractional Vegetation Cover by Combining 3D Radiative Transfer Model and Transfer Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 15671–15684. [Google Scholar] [CrossRef]
  22. Baret, F.; Hagolle, O.; Geiger, B.; Bicheron, P.; Miras, B.; Huc, M.; Leroy, M. LAI, fAPAR and fCover CYCLOPES global products derived from VEGETATION: Part 1: Principles of the algorithm. Remote Sens. Environ. 2007, 110, 275–286. [Google Scholar] [CrossRef]
  23. Baret, F.; Weiss, M.; Verger, A.; Smets, B. Atbd for Lai, Fapar and Fcover from Proba-V Products at 300 m Resolution (Geov3); Imagines_rp2. 1_atbd-lai, 300; INRAE: Paris, France, 2016. [Google Scholar]
  24. Verger, A.; Sánchez-Zapero, J.; Weiss, M.; Descals, A.; Camacho, F.; Lacaze, R.; Baret, F. GEOV2: Improved smoothed and gap filled time series of LAI, FAPAR and FCover 1 km Copernicus Global Land products. Int. J. Appl. Earth Obs. Geoinf. 2023, 123, 103479. [Google Scholar]
  25. Liu, D.; Jia, K.; Jiang, H.; Xia, M.; Tao, G.; Wang, B.; Li, J. Fractional vegetation cover estimation algorithm for FY-3B reflectance data based on random forest regression method. Remote Sens. 2021, 13, 2165. [Google Scholar] [CrossRef]
  26. Wang, X.; Jia, K.; Liang, S.; Li, Q.; Wei, X.; Yao, Y.; Tu, Y. Estimating fractional vegetation cover from landsat-7 ETM+ reflectance data based on a coupled radiative transfer and crop growth model. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5539–5546. [Google Scholar] [CrossRef]
  27. Wan, L.; Zhu, J.; Du, X.; Zhang, J.; Han, X.; Zhou, W.; Cen, H. A model for phenotyping crop fractional vegetation cover using imagery from unmanned aerial vehicles. J. Exp. Bot. 2021, 72, 4691–4707. [Google Scholar] [CrossRef]
  28. Jacquemoud, S.; Verhoef, W.; Baret, F.; Bacour, C.; Zarco-Tejada, P.J.; Asner, G.P.; Ustin, S.L. PROSPECT+ SAIL models: A review of use for vegetation characterization. Remote Sens. Environ. 2009, 113, S56–S66. [Google Scholar] [CrossRef]
  29. Jiao, Q.; Sun, Q.; Zhang, B.; Huang, W.; Ye, H.; Zhang, Z.; Qian, B. A random forest algorithm for retrieving canopy chlorophyll content of wheat and soybean trained with PROSAIL simulations using adjusted average leaf angle. Remote Sens. 2021, 14, 98. [Google Scholar] [CrossRef]
  30. Li, S.; Fang, H. Mapping global leaf inclination angle (LIA) based on field measurement data. Earth Syst. Sci. Data Discuss. 2024, 17, 1347–1366. [Google Scholar] [CrossRef]
  31. Liu, B.; Shi, Y.; Duan, Y.; Wu, W. UAV-based crops classification with joint features from orthoimage and DSM data. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2018, XLII-3, 1023–1028. [Google Scholar] [CrossRef]
  32. Eskandari, R.; Mahdianpari, M.; Mohammadimanesh, F.; Salehi, B.; Brisco, B.; Homayouni, S. Meta-analysis of Unmanned Aerial Vehicle (UAV) Imagery for Agro-environmental Monitoring Using Machine Learning and Statistical Models. Remote Sens. 2020, 12, 3511. [Google Scholar] [CrossRef]
  33. Weiss, M.; Baret, F.; Jay, S. S2ToolBox Level 2 Products: LAI, FAPAR, FCOVER (Version 2.1); INRAE/ESA: Paris, France, 2020. [Google Scholar]
  34. Li, D.; Chen, J.M.; Yu, W.; Zheng, H.; Yao, X.; Zhu, Y.; Cao, W.; Cheng, T. A chlorophyll-constrained semi-empirical model for estimating leaf area index using a red-edge vegetation index. Comput. Electron. Agric. 2024, 220, 108891. [Google Scholar] [CrossRef]
  35. Weiss, M.; Baret, F. S2ToolBox Level-2 Products: LAI, FAPAR, FCOVER (ATBD); European Space Agency: Paris, France, 2016. [Google Scholar]
  36. Verrelst, J.; Rivera, J.P.; Veroustraete, F.; Muñoz-Marí, J.; Clevers, J.G.P.W.; Camps-Valls, G.; Moreno, J. Optical remote sensing and the retrieval of terrestrial vegetation bio-geophysical properties—A review. ISPRS J. Photogramm. Remote Sens. 2015, 108, 273–290. [Google Scholar] [CrossRef]
  37. Féret, J.B.; Gitelson, A.A.; Noble, S.D.; Jacquemoud, S. PROSPECT-D: Towards modeling leaf optical properties through a complete lifecycle. Remote Sens. Environ. 2017, 193, 204–215. [Google Scholar] [CrossRef]
  38. Verhoef, W. Light scattering by leaf layers with application to canopy reflectance modeling: The SAIL model. Remote Sens. Environ. 1984, 16, 125–141. [Google Scholar] [CrossRef]
  39. Weiss, M.; Baret, F.; Myneni, R.; Pragnère, A.; Knyazikhin, Y. Investigation of a model inversion technique to estimate canopy biophysical variables from spectral and directional reflectance data. Agronomie 2000, 20, 3–22. [Google Scholar] [CrossRef]
  40. Yang, R.; Li, S.; Zhang, B.; Jiao, Q.; Peng, D.; Yang, S.; Yu, R. A Multispectral Feature Selection Method Based on a Dual-Attention Network for the Accurate Estimation of Fractional Vegetation Cover in Winter Wheat. Remote Sens. 2024, 16, 4441. [Google Scholar] [CrossRef]
  41. Monteith, J.L.; Unsworth, M.H.; Webb, A. Principles of environmental physics. Q. J. R. Meteorol. Soc. 1994, 120, 1699. [Google Scholar] [CrossRef]
  42. Campbell, G.S. Extinction coefficients for radiation in plant canopies calculated using an ellipsoidal inclination angle distribution. Agr. Forest Meteorol. 1986, 36, 317–321. [Google Scholar] [CrossRef]
  43. Campbell, G.S. Derivation of an angle distribution of foliage from a distribution of leaf inclination. Agr. Forest Meteorol. 1990, 49, 173–182. [Google Scholar]
  44. Wang, W.M.; Li, Z.L.; Su, H.B. Comparison of leaf angle distribution functions: Effects on extinction coefficient and fraction of sunlit foliage. Agr. Forest Meteorol. 2007, 143, 106–122. [Google Scholar] [CrossRef]
  45. Campbell, G.S.; Norman, J.M. Environmental Biophysics, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
  46. Zhao, J.; Li, J.; Liu, Q.; Xu, B.; Yu, W.; Lin, S.; Hu, Z. Estimating fractional vegetation cover from leaf area index and clumping index based on the gap probability theory. Int. J. Appl. Earth Obs. Geoinf. 2020, 90, 102112. [Google Scholar] [CrossRef]
  47. Zhao, J.; Li, J.; Liu, Q.; Xu, B.; Mu, X.; Dong, Y. Generation of a 16 m/10-day fractional vegetation cover product over China based on Chinese GaoFen-1 observations: Method and validation. Int. J. Digit. Earth. 2023, 16, 4229–4246. [Google Scholar] [CrossRef]
  48. Stadt, K.J.; Lieffers, V.J. MIXLIGHT: A flexible light transmission model for mixed-species forest stands. Agr. Forest Meteorol. 2000, 102, 235–252. [Google Scholar]
  49. Liu, D.; Jia, K.; Wei, X.; Xia, M.; Zhang, X.; Yao, Y.; Wang, B. Spatiotemporal comparison and validation of three global-scale fractional vegetation cover products. Remote Sens. 2019, 11, 2524. [Google Scholar]
  50. Hu, Q.; Yang, J.; Xu, B.; Huang, J.; Memon, M.S.; Yin, G.; Liu, K. Evaluation of global decametric-resolution LAI, FAPAR and FVC estimates derived from Sentinel-2 imagery. Remote Sens. 2020, 12, 912. [Google Scholar] [CrossRef]
  51. Jiang, Q.; Wang, Y. Leaf angle regulation toward a maize smart canopy. Plant J. 2025, 117, 447–460. [Google Scholar] [CrossRef]
  52. Elli, E.F.; Edwards, J.; Yu, J.; Trifunovic, S.; Eudy, D.M.; Kosola, K.R.; Schnable, P.S.; Lamkey, K.R.; Archontoulis, S.V. Maize leaf angle genetic gain is slowing down in the last decades. Crop Sci. 2023, 63, 3520–3533. [Google Scholar] [CrossRef]
  53. Du, M.; Li, M.; Noguchi, N.; Ji, J.; Ye, M. Retrieval of fractional vegetation cover from remote sensing image of unmanned aerial vehicle based on mixed pixel decomposition method. Drones 2023, 7, 43. [Google Scholar] [CrossRef]
Figure 1. Study area and sampling design in 2024. (a1,a2) UAV platforms and field sampling setup; (b1–b4) examples of Unmanned Aerial Vehicle (UAV) imagery collected at sample sites; (c1–c4) ground photographs of representative sampling locations; (d) overview of the study area; (e–g) spatial distribution of sample sites across three provinces in China.
Figure 2. Workflow for generating 10 m UAV-derived CropFVC.
Figure 3. The gap fraction-refined hybrid FVC inversion framework.
Figure 4. Validation of the proposed model based on CropFVC samples collected in 2024. Results are shown separately for four major crops, with each panel representing a specific crop (winter wheat, rice, maize, and soybean).
Figure 5. Comparison of SNAP-based FVC and model-derived FVC using samples collected in 2024. Results are shown separately for four major crops, with each panel representing a specific crop (winter wheat, rice, maize, and soybean).
Figure 6. Comparison between the model-derived and GEOV3 FVC values at the 300 m scale. Results are shown separately for four major crops, with each panel representing a specific crop (winter wheat, rice, maize, and soybean).
Figure 7. Spatiotemporal comparison of FVC dynamics in typical croplands: (a) 400 m × 400 m site W11/C3 (34.0685°N, 114.6815°E) and (b) site W16 (32.3394°N, 114.2679°E).
Table 1. Parameter setup for four major crops.

| Parameter (Units) | Wheat | Rice | Maize | Soybean | Step |
|---|---|---|---|---|---|
| N | 1–1.5 | 1–1.5 | 1.2–1.8 | 1.2–2 | 0.5/0.5/0.6/0.8 |
| Cab (μg/cm²) | 10–80 | 10–80 | 10–80 | 10–80 | 20/10/10/20 |
| Car (μg/cm²) | 25% of Cab | 25% of Cab | 25% of Cab | 25% of Cab | – |
| Cw (cm) | 0.02 | 0.02 | 0.02 | 0.02 | – |
| Cm (g/cm²) | 0.002–0.008 | 0.002–0.008 | 0.004–0.02 | 0.004–0.032 | 0.006/0.003/0.016/0.014 |
| Canth (μg/cm²) | 2 | 2 | 2 | 2 | – |
| Cb | 0 | 0 | 0 | 0 | – |
| LAI | 0–7 | 0–7 | 0–7 | 0–7 | 0.5 |
| ALA (°) | 40–70 | 50–70 | 50–70 | 30–60 | 5 |
| Psoil | 0–1 | 0–0.3 | 0–1 | 0–1 | 0.25 |
| Hots | 0.05 | 0.05 | 0.05 | 0.05 | – |
| tts (°) | 0–60 | 0–60 | 0–60 | 0–60 | 20 |
| tto (°) | 0 | 0 | 0 | 0 | – |
| psi (°) | 0 | 0 | 0 | 0 | – |

Where the Step column lists four values, they apply to wheat/rice/maize/soybean, respectively; “–” denotes a fixed value.

Share and Cite

MDPI and ACS Style

Xu, L.; Zhang, J.; Cheng, T.; Jiao, Q.; Qin, Y.; Ma, H.; Wu, H. A Hybrid RTM-Informed Machine Learning Framework with Crop-Specific Canopy Structural Parameterization for Crop Fractional Vegetation Cover Estimation. Remote Sens. 2026, 18, 751. https://doi.org/10.3390/rs18050751
