Article

Mapping Crop Evapotranspiration by Combining the Unmixing and Weight Image Fusion Methods

State Key Laboratory of Water Resources Engineering and Management, Wuhan University, Wuhan 430072, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(13), 2414; https://doi.org/10.3390/rs16132414
Submission received: 31 May 2024 / Revised: 28 June 2024 / Accepted: 30 June 2024 / Published: 1 July 2024

Abstract: The demand for freshwater is increasing with population growth and rapid socio-economic development. Crop evapotranspiration (ET) data with a high spatiotemporal resolution are increasingly important for refined irrigation water management in agricultural regions. We propose the unmixing–weight ET image fusion model (UWET), which integrates the advantages of the unmixing method in spatial downscaling and the weight-based method in temporal prediction to produce daily ET maps with a high spatial resolution. The Landsat-ET and MODIS-ET datasets for the UWET fusion are retrieved from Landsat and MODIS images based on the surface energy balance model. The UWET model considers the effects of crop phenology, precipitation, and land cover in the ET image fusion process. The precision of the UWET results is evaluated against ET measured by eddy covariance at the Luancheng station, with an average MAE of 0.57 mm/day. The UWET images show fine spatial details and capture dynamic ET changes. The seasonal ET of winter wheat mainly ranges from 350 to 660 mm in 2019–2020 and from 300 to 620 mm in 2020–2021, with seasonal averages of 499.89 mm and 459.44 mm, respectively. The performance of UWET is compared with two other fusion models: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Spatial and Temporal Reflectance Unmixing Model (STRUM). UWET reproduces spatial details better than the STARFM and temporal characteristics better than the STRUM. The results indicate that UWET is suitable for generating ET products with a high spatial–temporal resolution in agricultural regions.

1. Introduction

Agriculture is the largest water consumer, and 70% of global freshwater withdrawal is used for agricultural irrigation [1]. In Asia and the Pacific, with population growth and rapid socio-economic development, increasing domestic and industrial water use will further deplete available freshwater resources and threaten agricultural production and food security [2,3,4]. Extensive and inefficient water resource utilization is no longer suitable for agricultural production that is becoming intensive, standardized, and informatized, and irrigation water management is moving toward refinement and precision [5]. To meet the need for fine management of irrigation water, it is important to generate high spatial–temporal resolution ET estimates by remote sensing inversion [6].
There are two main types of remote sensing data applied to crop ET estimation at a regional scale: high temporal resolution data and high spatial resolution data. Satellite images with high temporal resolution, such as those from the National Oceanic and Atmospheric Administration (NOAA) satellites, the Moderate Resolution Imaging Spectroradiometer (MODIS), and the FengYun (FY) series meteorological satellites of China, have been applied to the inversion of regional crop evapotranspiration, and daily ET products are extracted from these images [7,8]. However, their low spatial resolution, generally at the kilometer or hectometer level, makes it difficult to show detailed spatial information [9]. Other satellites, such as Landsat, meet the demand for medium or high spatial resolution but have a long revisit period and provide few daily ET images during the crop growth period [10,11,12,13]. To resolve this contradiction between temporal and spatial resolution, many scholars have applied multi-source remote sensing spatiotemporal fusion technology to obtain data with a high spatiotemporal resolution [14,15,16,17,18,19,20,21,22]. However, there are few studies on image fusion methods for crop ET with a high spatiotemporal resolution, so obtaining high-spatiotemporal-resolution ET data is worth studying further.
The spatiotemporal fusion of remote sensing images has developed rapidly in the past decades [23]. There are four main types of methods: unmixing-based models [14], weight-based models [15], learning-based models [24], and hybrid models [18,25]. Learning-based models need a long training time and lack a strong theoretical foundation, so they are not widely applied in the spatiotemporal fusion of images [25]. Unmixing-based models, derived from hyperspectral unmixing technology, are based on linear mixing theory. The Multisensor Multiresolution Technique (MMT) proposed by Zhukov et al. [14] is the classical unmixing-based model, which unmixes remote sensing images with a lower spatial resolution by introducing a classification map with a higher spatial resolution. Unmixing-based models improve the spatial resolution of images and produce more spatial details by importing land cover information, but the unmixed pixel values of the same land cover type are identical at different locations. This may not be consistent with actual conditions; for example, ET values of winter wheat differ under different microclimate and soil conditions [26]. The STARFM algorithm, a weight-based model, is a widely used data fusion algorithm for Landsat and MODIS images; it predicts fine pixel values by combining medium- and high-resolution images through weight functions within a moving window [15,27]. The STARFM also improves the spatial resolution, but the accuracy of central pixel values decreases if the coarse pixels contain a mixture of different land cover types. As the representative weight-based model, the STARFM finely portrays the temporal changes of long time series images in the fusion process, whereas unmixing-based models are less sensitive to temporal change [18].
Hybrid methods combine unmixing-based and weight-based methods, such as Flexible Spatiotemporal Data Fusion (FSDAF) [21], the Spatial and Temporal Reflectance Unmixing Model (STRUM) [18], and Virtual Image Pair-based Spatial–Temporal Fusion (VIPSTF) [28]. The fused image results by the hybrid methods have high fusion accuracy, but the above hybrid methods are mostly applied in the fusion of surface reflectance images.
In recent years, some spatiotemporal fusion methods have been used in the remote sensing inversion of crop ET [29]. Many scholars have obtained daily ET maps at a 30 m spatial resolution using the STARFM method [30,31,32]. Cammalleri et al. [33] fused MODIS and Landsat ET maps using the STARFM method and precipitation data and obtained daily ET maps at a 30 m spatial resolution. Wang et al. [25] proposed a hybrid model, the classification-based spatiotemporal adaptive fusion model (CSAFM), which unmixes the coarse pixels of daily ET images using a land cover map and then inputs the unmixed results into the weight-based process of the STARFM method. The CSAFM considers the effects of soil moisture and land cover on ET rates but does not include the phenological periods in the fusion process. Many studies do not consider precipitation, phenology, and land cover during fusion. Precipitation, phenology, and land cover are contributing factors of crop ET: variations in phenology and land cover influence the remote sensing inversion accuracy of crop ET [34,35], and precipitation is the dominant driver of crop ET in water-scarce areas [36]. In this paper, we propose an unmixing–weight ET image fusion model (UWET), which integrates the advantages of the unmixing method in spatial downscaling and the weight-based method in temporal prediction. In UWET, the high-resolution land cover map is used both for unmixing the low- and medium-resolution ET images and for selecting the initial similar pixels during the temporal prediction process. The phenological period and precipitation are used to determine the base dates of the image pairs on which the Landsat-ET and MODIS-ET images are available for the weight-based fusion method.
Winter wheat is an important grain crop in China. The scarcity of water resources is one of the main constraints to achieving more production of winter wheat, and supplementary irrigation is needed to ensure the yield. Evapotranspiration (ET) is the main water consumption method during the growth period of winter wheat. Integrating spatiotemporal characteristics of different satellite data through a multi-source remote sensing fusion model is an effective way to build a high spatiotemporal resolution ET dataset and has great significance for guiding irrigation planning. The remainder of this paper is organized into five sections. Section 2 introduces the data used in this study and information on the study area. Section 3 introduces the basic algorithms of ET estimation and remote sensing image fusion and the UWET model, which combines the main features of the unmixing-based and weight-based fusion methods. Section 4 and Section 5 present the results of UWET and compare them with other fusion models. Section 6 is the conclusion of this paper.

2. Study Area and Data

2.1. Study Area and Ground Test Station

The study area is Luancheng County (37°47′N–37°59′N, 114°28′E–114°47′E) with an area of 345 km² in the southeast of Shijiazhuang City (Figure 1). Luancheng County has good light conditions and fertile soil. It is flat and has an elevation from 45 m to 66 m. The climate of Luancheng County is a semi-humid monsoon climate, with an average annual temperature of 12.8 °C and precipitation of 474 mm. Winter wheat is the most widely grown crop, and it is sown in autumn and harvested in summer. Luancheng County, located in the piedmont plain of Taihang Mountain, is a representative area with high-intensity agricultural production of winter wheat in the Northern Plain of China.
The meteorological data and measured ET data of the croplands are obtained from the Luancheng Agroecosystem Experimental Station (37°53′N, 114°41′E) [37]. The meteorological data include air temperature, wind speed, and precipitation. The measured ET data are obtained using eddy covariance (EC). The meteorological data and measured ET data are collected every half hour during the wheat growing season. The daily air temperature and wind speed are the average values of the data every half hour. The daily precipitation and measured ET are the accumulated values of the data every half hour.
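The aggregation from half-hourly records to daily values described above can be sketched as follows; the record keys and units are illustrative assumptions, not the station's actual file format:

```python
def daily_aggregates(records):
    """Aggregate one day of half-hourly station records into daily values.

    `records` is a list of 48 dicts with illustrative keys 'temp' (deg C),
    'wind' (m/s), 'precip' (mm) and 'et' (mm). Temperature and wind speed
    are averaged; precipitation and measured ET are accumulated.
    """
    n = len(records)
    return {
        'temp': sum(r['temp'] for r in records) / n,
        'wind': sum(r['wind'] for r in records) / n,
        'precip': sum(r['precip'] for r in records),
        'et': sum(r['et'] for r in records),
    }
```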
The network observation, quality control, and storage process of this dataset strictly abide by the ChinaFLUX data management technology system to ensure data reliability. The regression slope of turbulent energy (sum of sensible heat flux and latent heat flux) and available energy (difference of net radiation and soil heat flux) indicates that the closure of energy balance is 85% and the data quality is high.
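The energy balance closure can be checked with a zero-intercept regression of turbulent energy on available energy; a minimal sketch (a slope near 0.85 would match the reported closure):

```python
def closure_slope(turbulent, available):
    """Slope of the regression through the origin of turbulent energy
    (H + LE) on available energy (Rn - G): slope = sum(t*a) / sum(a*a)."""
    num = sum(t * a for t, a in zip(turbulent, available))
    den = sum(a * a for a in available)
    return num / den
```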

2.2. Satellite Data

The description of the satellite data used in this study is shown in Table 1. Sentinel-2 data products are available online (https://dataspace.copernicus.eu/, accessed on 20 December 2023) and are used for the extraction of land cover maps and crop phenology. A total of fifty-four cloudless Sentinel-2 images during two growing periods of winter wheat, from 5 September 2019 to 21 June 2020 and from 4 September 2020 to 26 June 2021, are used to generate NDVI time series data for the extraction of land cover maps and crop phenology. Landsat 8 and Landsat 9 images are available online (https://glovis.usgs.gov/app/, accessed on 22 December 2023) and are used for ET estimation at a medium spatial and low temporal resolution. From September 2019 to June 2021, twenty-one cloudless Landsat images are available, and the specific dates are shown in Table 1. The MODIS data are available online (https://ladsweb.modaps.eosdis.nasa.gov/, accessed on 15 December 2023) and are used for ET estimation at a low spatial and high temporal resolution. Three MODIS products for the ET inversion, MCD43A3, MOD09GA, and MOD11A1, are used for albedo, surface reflectance, and surface temperature, respectively. All the remote sensing images are reprojected to the UTM zone 50 N coordinate system with the WGS84 reference ellipsoid and resampled to a 500 m resolution by the nearest neighbor algorithm. Then, the null values of the images are filled using a sliding window, and all the images are cropped to a uniform size.

2.3. Land Cover Map

The land cover map is the input data for the UWET model, used for calculating abundance in the unmixing-based spatial downscaling process and for filtering similar pixels in the weight-based temporal prediction process. Traditional classification methods use a single image for land use extraction, which makes it difficult to distinguish vegetation types accurately. Machine learning algorithms are widely applied to extracting land use types from remote sensing images [38,39], and the decision tree method is widely used for distinguishing crop types based on multi-temporal vegetation index images [40,41]. We combine the support vector machine method and the decision tree method to extract the land cover map from the NDVI time series curves. Figure 2 is the flowchart for the extraction of land types.
Field survey data were collected during two growing periods of winter wheat using a GPS in Luancheng County for the training and validation data of classification. GPS sampling points include 200 for winter wheat, 50 for bare soil, 50 for buildings, 30 for other vegetation, and 10 for water. Figure 3 shows the spatial distribution of GPS sampling points in a field survey, in which eighty percent of GPS field data is training data and twenty percent is validation data.
Support vector machines (SVMs) are non-parametric supervised machine learning techniques originally designed to solve binary classification problems [39]. Before extracting the winter wheat area, we used the SVM method to extract the initial classification results from cloudless Sentinel-2 images on 22 May 2020 and 2 May 2021. The initial classification map including bare soil, buildings, water, and vegetation area is used to mask non-vegetation land cover, and then the non-vegetation mask prior to winter wheat extraction reduces confusion between winter wheat and surrounding surface features.
After that, winter wheat is extracted from the vegetation area in the initial classification map. We use the Normalized Difference Vegetation Index (NDVI) to distinguish winter wheat from other vegetation. Winter wheat is the only crop from October to the following June in Luancheng County, a northern area, and other vegetation is mostly grass or trees. From January to May, winter wheat and other vegetation are in the growing stage, and chlorophyll content increases, resulting in higher NDVI values. After May, winter wheat enters the late growth stage, and its chlorophyll content decreases, causing NDVI values to fall. In contrast, from March to June, the chlorophyll content of grass and trees keeps increasing, and their NDVI values remain high. Figure 4 shows the NDVI variation curve during the growing stage of winter wheat. These change features of the NDVI curves are used to differentiate and extract winter wheat from other vegetation by the decision tree method. All cloudless Sentinel-2 images from 2019 to 2021 are used to create NDVI images during the wheat growing season. The NDVI images are interpolated by cubic splines and smoothed by S-G filtering to obtain smooth NDVI curves.
Two land cover maps from 2019 to 2020 and from 2020 to 2021 are extracted from Sentinel-2 images based on the classification method (Figure 5). The producer accuracy and user accuracy of training data are shown in Table 2, and Table 3 shows the accuracy of validation data. The values of producer and user accuracy are larger than 80% in Table 2 and Table 3. The overall training accuracy is 93.38%, and the training kappa coefficient is 0.89. The overall validation accuracy is 94.12%, and the validation kappa coefficient is 0.90. The values of overall accuracy are larger than 90%, and the kappa coefficient is larger than 0.80, indicating that the classification results meet the UWET model input data requirements [42].

2.4. Crop Phenology

NDVI time series data have been widely used for phenological characterization [43]. In this paper, the crop phenology extraction method is based on the mathematical features of the smooth NDVI curves [44]. Section 2.3 shows the extraction method of the smooth NDVI curves. Figure 4 shows mathematical feature points and corresponding phenological features on the smooth NDVI time series curve. In Figure 4, three mathematical feature points, including D1, D2, and D3, are defined as the minimum point, the maximum point, and the point with the minimum first derivative when the NDVI value decreases. The corresponding phenological features of winter wheat are the sowing date (D1), heading date (D2), and maturity date (D3), which are the important phenological feature points when the growth state of winter wheat changes significantly. Finally, the growing season of winter wheat is divided into four parts: the sowing period, elongation period, heading and milky period, and the maturity and harvest period. The results of the phenological periods are shown in Table 4.
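The extraction of the three feature points from a smoothed NDVI curve can be sketched as below. This is a simplified reading of the rule (curve minimum, curve maximum, steepest decline after the maximum) and ignores complications such as secondary autumn NDVI peaks:

```python
def phenology_dates(doy, ndvi):
    """Locate D1 (sowing, curve minimum), D2 (heading, curve maximum) and
    D3 (maturity, most negative first difference after the maximum) on a
    smoothed NDVI curve. `doy` and `ndvi` are parallel lists."""
    i_min = min(range(len(ndvi)), key=lambda i: ndvi[i])
    i_max = max(range(len(ndvi)), key=lambda i: ndvi[i])
    # first differences on the descending limb after the maximum
    diffs = [ndvi[i + 1] - ndvi[i] for i in range(i_max, len(ndvi) - 1)]
    i_d3 = i_max + min(range(len(diffs)), key=lambda i: diffs[i]) + 1
    return doy[i_min], doy[i_max], doy[i_d3]
```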

3. Methods

In this study, Landsat-ET and MODIS-ET data are estimated from Landsat and MODIS images based on the Surface Energy Balance Algorithm for Land (SEBAL) model. The UWET (unmixing–weight ET image fusion) model is used to obtain high spatiotemporal resolution daily ET datasets. Landsat-ET and MODIS-ET results by SEBAL are input data of the UWET model, which combines the unmixing- and weight-based image fusion methods.

3.1. UWET Description

The UWET model integrates the advantages of the unmixing method in spatial downscaling and the weight-based method in temporal prediction (Figure 6). During the unmixing-based spatial downscaling process, the daily 500 m MODIS-ET images and the 30 m Landsat-ET images on cloudless dates are unmixed to a 10 m resolution using a 10 m land cover map. The unmixing results are MS ET and LS ET images with a 10 m spatial resolution. Before the temporal prediction process, base dates are selected among the MS ET and LS ET images to form MS-LS ET pairs using the phenology and precipitation data. Then, the MS ET and LS ET images are fused to produce daily ET maps with a 10 m resolution using the weight-based temporal prediction method. In the UWET model, the land cover map determines the abundance of each category in the coarse pixel unmixing process and filters the similar pixels in the weight-based fusion process. The UWET framework is shown in Figure 6.
UWET includes three parts: (1) the unmixing-based spatial downscaling method, (2) the date selection of MS-LS ET pairs, and (3) the weight-based temporal prediction method results.

3.1.1. The Unmixing-Based Spatial Downscaling Method

In the unmixing process of UWET, 30 m Landsat-ET images and 500 m MODIS-ET images are unmixed using the high-resolution land cover map extracted from 10 m Sentinel-2 images. The Landsat-ET and MODIS-ET images are retrieved from Landsat and MODIS images based on SEBAL. The unmixing-based spatial downscaling method includes three steps: (1) calculating the abundances of land types within each Landsat or MODIS ET pixel (Equation (1)), (2) linearly unmixing coarse ET pixels at the Landsat or MODIS resolution (Equation (2)), and (3) assigning the ET results to the pixels at the land cover map resolution. The linear unmixing process is shown in Figure 7.
The linear unmixing theory of coarse pixels assumes that the pixel value of a low resolution is a linear combination of the pixel values of different land cover types, including the wheat type. The abundance of different land cover types within the coarse pixel is calculated as follows:
C_i = \frac{s_i}{S} \quad (1)
where Ci is the abundance for the i’th land cover type; si is the area of class i within the coarse pixel; S is the area of the coarse pixel; and i is the land cover class.
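Equation (1) amounts to counting fine land-cover pixels inside the coarse-pixel footprint; a dependency-free sketch:

```python
def class_abundance(window):
    """Abundance C_i = s_i / S for each land cover class within one coarse
    pixel (Eq. (1)). `window` is a 2-D list of class labels from the
    fine-resolution land cover map covering the coarse pixel footprint."""
    counts, total = {}, 0
    for row in window:
        for c in row:
            counts[c] = counts.get(c, 0) + 1
            total += 1
    return {c: n / total for c, n in counts.items()}
```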
The linear unmixing method is conducted in a sliding window of n coarse pixels:
Y = \begin{bmatrix} ET_{coarse,1} \\ \vdots \\ ET_{coarse,n} \end{bmatrix} = Ax + \sigma = \begin{bmatrix} C_{11} & \cdots & C_{1k} \\ \vdots & \ddots & \vdots \\ C_{n1} & \cdots & C_{nk} \end{bmatrix} \begin{bmatrix} ET_{1} \\ \vdots \\ ET_{k} \end{bmatrix} + \begin{bmatrix} \sigma_{1} \\ \vdots \\ \sigma_{n} \end{bmatrix} \quad (2)
where Y is an [n × 1] vector containing the ET values of each coarse pixel in the sliding window; x is a [k × 1] column vector containing the fine pixel ET results of the k land cover types; A is the [n × k] abundance matrix; σ is the residual, which represents the system errors encountered during the unmixing process, primarily sensor and classification errors for planting structures; n is the number of coarse pixels in the sliding window; and k is the number of land cover types.
Equation (2) is solved by minimizing the residual σ, and the following is the objective function:
D_{min} = \frac{1}{n}\left(\sigma_1^2 + \sigma_2^2 + \sigma_3^2 + \cdots + \sigma_n^2\right) \quad (3)
The setting of constraint boundaries is vital to the solution accuracy of Equation (2). In order to ensure the rationality of the results, the minimum and maximum values of the land cover class are used as the constraint boundary.
Min_i \le H_i \le \min(Max_i, 10) \quad (4)
where Mini is the minimum of class i; Maxi is the maximum of class i; and Hi is the solution result of class i in the central pixel of the sliding window.
According to Equations (1)–(4), the fine pixel ET values of winter wheat are acquired. Then, the ET results are assigned to every fine pixel in the winter wheat region with a 10 m spatial resolution. Finally, the MS and LS ET images are obtained by unmixing, respectively, the MODIS-ET and Landsat-ET images.
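The bounded least-squares problem of Equations (2)–(4) could be sketched as below. A production implementation would use a bounded solver such as SciPy's `lsq_linear`; the projected-gradient loop here is a dependency-free stand-in, and the step size and iteration count are illustrative choices:

```python
def unmix_window(A, y, lo, hi, iters=5000):
    """Minimize ||A x - y||^2 subject to lo[j] <= x[j] <= hi[j]
    (Eqs. (2)-(4)) by projected gradient descent.

    A      : n x k abundance matrix (list of lists), one row per coarse pixel
    y      : n observed coarse ET values in the sliding window
    lo, hi : per-class lower/upper bounds on the unmixed ET values
    Returns the k per-class fine ET values."""
    n, k = len(A), len(A[0])
    s = sum(abs(v) for row in A for v in row)
    lr = 1.0 / max(1e-9, s * s)          # conservative, provably stable step
    x = [(a + b) / 2.0 for a, b in zip(lo, hi)]
    for _ in range(iters):
        # residual r = A x - y, gradient g = A^T r, then project onto bounds
        r = [sum(A[i][j] * x[j] for j in range(k)) - y[i] for i in range(n)]
        g = [sum(A[i][j] * r[i] for i in range(n)) for j in range(k)]
        x = [min(hi[j], max(lo[j], x[j] - lr * g[j])) for j in range(k)]
    return x
```

For a window with two classes, `unmix_window([[1, 0], [0, 1], [0.5, 0.5]], [2, 4, 3], [0, 0], [10, 10])` converges to per-class ET values close to 2 and 4.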

3.1.2. The Date Selection of the MS-LS Image Pairs

The unmixed MS ET results are daily, while the LS ET results are available only on cloudless dates. Before conducting the weight-based temporal prediction, we match each MS ET image with an LS ET image on a cloudless date and select the prediction dates for every MS-LS ET pair. The base date selection of MS-LS ET image pairs is based on the phenological period, the temporal distribution of rainfall, and the dates of cloud-free LS ET images. The MS-LS ET image pairs on the base dates are the inputs for the weight-based temporal prediction process of the UWET model.
The basic matching principle is that the base date of an MS-LS ET pair and its corresponding prediction dates fall within the same phenological period and the same rainfall cycle. Figure 8 illustrates this principle. The whole winter wheat growth season spans DOY 274 to 176 of the next year in 2019–2020 (Figure 8). In the upper section of Figure 8, the base dates are red lines, and the corresponding prediction dates lie in the gray boxes surrounding the red lines. During the sowing period, there is no available cloudless LS ET image, so the base date DOY 300 is selected for the prediction dates from 274 to 310. During the elongation period, from DOY 292 to 103 of the next year, the available cloudless LS ET images are continuous and uniform, and the second to eighth base dates are selected for the corresponding prediction dates based on the temporal distribution of rainfall and the cloudless LS ET image dates. There are two cloudless LS ET images, on DOY 111 and 143, during the heading and milky period. DOY 123 is just at the beginning of the next rainfall, so the ninth base date, DOY 111, is selected for the prediction dates from DOY 105 to 123. During the maturity and harvest period, heavy rainfall occurs, and the tenth base date, DOY 143, is selected for the prediction dates from DOY 124 to 176.
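A simplified version of the base-date matching could look like this; it pairs each prediction date with the nearest base date in the same phenological period, while the paper additionally splits periods at rainfall cycles (omitted here), and the period boundaries and DOY handling are illustrative:

```python
def assign_base_dates(pred_doys, base_doys, period_starts):
    """Pair each prediction date with a base date from the same
    phenological period (nearest in time). `period_starts` lists the DOYs
    at which each phenological period begins, in increasing order; dates
    are assumed not to wrap across the year end for simplicity."""
    def period(d):
        return sum(1 for s in period_starts if d >= s)
    pairing = {}
    for d in pred_doys:
        same = [b for b in base_doys if period(b) == period(d)]
        candidates = same or base_doys   # fall back to any base date
        pairing[d] = min(candidates, key=lambda b: abs(b - d))
    return pairing
```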

3.1.3. The Weight-Based Temporal Prediction Method

Using the weight-based image fusion method, we predict ET images on the dates without LS ET images in order to improve the unmixed MS ET results and acquire daily high spatiotemporal resolution ET results. As described in Section 3.1.1, before the temporal prediction, the Landsat-ET and MODIS-ET images are spatially downscaled to a 10 m resolution by the unmixing-based method to obtain the LS and MS ET results. As described in Section 3.1.2, the phenology and precipitation data are applied to determine the base dates tk of the LS-MS ET image pairs used to predict high spatial resolution ET images at the prediction dates t0. Figure 9 shows the weight-based temporal prediction process for daily ET images, which includes three steps. In Figure 9, the "+" symbols denote central pixels, the circles denote similar pixels, and the numbers 1–4 denote the different land types.
➀ The first step is filtering the similar pixels from the LS ET image on the base date tk. The filtering includes an initial selection using the land cover map and a final selection based on thresholds on the difference between the central and neighboring pixels. The 10 m land cover map from Sentinel-2 images is used to search for initial similar pixels by selecting pixels of the same land cover class within the moving window. The final similar pixels are then filtered using threshold information derived from the initial similar pixels.
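The two-stage filtering could be sketched as follows; the fixed threshold argument is a simplification, since the paper derives the threshold from the statistics of the initial similar pixels:

```python
def similar_pixels(et, classes, center, thresh):
    """Select similar pixels within a window around the central pixel:
    (1) keep pixels with the central pixel's land cover class, then
    (2) keep those whose ET differs from the central value by < thresh.
    `et` and `classes` are 2-D lists over the window; `center` is (row, col).
    Returns the (row, col) positions of the similar pixels."""
    ci, cj = center
    c_class, c_val = classes[ci][cj], et[ci][cj]
    out = []
    for i in range(len(et)):
        for j in range(len(et[0])):
            if (i, j) == (ci, cj):
                continue
            if classes[i][j] == c_class and abs(et[i][j] - c_val) < thresh:
                out.append((i, j))
    return out
```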
➁ The weight of the similar pixel to the central pixel is calculated from the LS ET image on the base date tk and two MS ET images on the base date tk and prediction date t0. The weight information is determined by the spatiotemporal information and ET values of neighbor pixels.
➂ The ET images on the prediction dates t0 are predicted using the weight-based image fusion method (Equation (5)), and 10 m daily ET maps are finally acquired.
The LS image value at the central pixel on the date t0 is predicted as follows:
LS(x_{central}, y_{central}, t_0) = \sum_{i=1}^{n} \sum_{j=1}^{n} W_{ij} \times \left( LS(x_i, y_j, t_k) - MS(x_i, y_j, t_k) + MS(x_i, y_j, t_0) \right) \quad (5)
where the (xcentral, ycentral) coordinate is the central pixel location of the sliding window, the (xi, yj) coordinate is the pixel location of the coregistered LS and MS ET images within the window, n is the size of the sliding window, tk is the base date for both MS and LS data, t0 is the prediction date, and the Wij is the weight function, which is assigned to each similar neighbor based on the spectral difference, the temporal difference, and the spatial distance.
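For one central pixel, Equation (5) reduces to a weighted sum over the similar pixels; a minimal sketch, taking the normalized weights as given since the full STARFM-style weighting (spectral, temporal, and distance terms) is not reproduced here:

```python
def predict_central_et(ls_tk, ms_tk, ms_t0, weights):
    """Eq. (5): predicted fine ET at the central pixel on date t0.
    `ls_tk`, `ms_tk`, `ms_t0` are 2-D lists over the window (LS at the base
    date, MS at the base and prediction dates); `weights` maps similar-pixel
    positions (i, j) to W_ij and is assumed normalized to sum to 1."""
    return sum(w * (ls_tk[i][j] - ms_tk[i][j] + ms_t0[i][j])
               for (i, j), w in weights.items())
```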

3.2. SEBAL Model

The Surface Energy Balance Algorithm for Land (SEBAL) is the basic model for crop ET estimation. The SEBAL model predicts ET and other energy exchanges based on the energy balance, using remote sensing images with visible, near-infrared, and thermal infrared bands [45,46]. ET is calculated for each pixel of the image according to the surface energy balance formula as follows:
\lambda ET = R_n - G - H \quad (6)
where Rn is the net radiation flux, G is the soil heat flux, H is the sensible heat flux, and λET is the latent heat flux (W·m−2).
The net radiation flux (Rn) is estimated using albedo, transmittance, and long wave emission. Albedo is calculated by integrating surface reflectivity from all bands, and weighting coefficients are applied to each band for albedo estimation [45,47]. The soil heat flux (G) is calculated from the parameters of the NDVI and the net radiation flux. The sensible heat flux (H) is calculated from several factors: surface temperature, wind speed, surface roughness, and surface-to-air temperature differences [45]. In order to account for the effects generated by surface heating, the Monin–Obukhov theory is applied in the iterative process for computing H. In this paper, the surface temperature is estimated using the mono-window algorithm [48].
The 24 h ET images on the cloudless dates are calculated by assuming that the value for the evaporation rate Λ is constant over the full 24 h period [45] as follows:
ET_{image\text{-}cloudless} = \frac{86400\,\Lambda \times (R_{n24} - G_{24})}{\lambda} \quad (7)
where Λ is the instantaneous evaporation rate, λ is the latent heat of vaporization (MJ·kg−1), Rn24 is the daily net radiation flux, and G24 is the daily soil heat flux. Rn24 and G24 are computed by integrating Rn and G for 24 h.
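Combining Equations (6) and (7): the evaporative fraction Λ = λET/(Rn − G) from the overpass moment is held constant over the day and applied to the daily fluxes. A sketch under the assumption that the daily fluxes are supplied as 24 h means in W m−2 (this unit convention is ours; the paper integrates Rn and G over 24 h):

```python
def sebal_daily_et_mm(lam_et, rn, g, rn24_mean, g24_mean, lam=2.45e6):
    """Daily ET (mm/day) from the SEBAL evaporative fraction.
    lam_et, rn, g      : instantaneous latent heat, net radiation and soil
                         heat fluxes at overpass time (W m-2), Eq. (6)
    rn24_mean, g24_mean: 24 h mean net radiation and soil heat flux (W m-2)
    lam                : latent heat of vaporization (J kg-1)
    1 kg of evaporated water per m^2 equals 1 mm of ET."""
    ef = lam_et / (rn - g)                     # evaporative fraction Lambda
    return 86400.0 * ef * (rn24_mean - g24_mean) / lam
```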
Crop reference ET is calculated using the FAO Penman–Monteith method [49], and the ratio of the MODIS-derived ET image to the crop reference ET on a cloudless day is obtained. ET images on cloud-covered dates are estimated by applying this ratio to the crop reference ET on the cloudy dates. ETimage-cloud is calculated as follows:
ET_{image\text{-}cloud} = ET_{0\text{-}cloud} \times \frac{ET_{image\text{-}cloudless}}{ET_{0\text{-}cloudless}} \quad (8)
where ET_{0\text{-}cloud} and ET_{0\text{-}cloudless} are the crop reference ET on the cloudy and cloudless dates, and ET_{image\text{-}cloudless} is the 24 h ET calculated from the MODIS data on the cloudless date.
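Equation (8) is a simple ratio scaling; a one-line sketch:

```python
def et_cloudy_day(et0_cloud, et_image_cloudless, et0_cloudless):
    """Eq. (8): assume the clear-day ratio ET_image / ET0 also holds on a
    cloudy date, and scale the cloudy-day reference ET by that ratio."""
    return et0_cloud * et_image_cloudless / et0_cloudless
```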

4. Results

4.1. Evaluation of Daily ET Time Series by the UWET Model

The daily ET time series during the growth stage of winter wheat are acquired from Landsat and MODIS images by the UWET model. Figure 10 shows the wheat ET variation trend in the different phenological periods and rainfall during one growth stage. Figure 11 presents the comparison of measured ET, Landsat-ET, and UWET at the Luancheng station.
In Figure 10, the change in the daily UWET results during the growing season is consistent with the phenological characteristics of winter wheat and the daily precipitation. The fitted line of the daily ET time series shows an "M" shape with two peaks. The first ET peak appears in the early stage of the elongation period. In the following period, with declining temperature and rainfall, daily ET also declines. The second ET peak appears in the early stage of the heading and milky period, when the winter wheat becomes vigorous with rising temperature and rainfall. During the sowing and harvest periods, daily ET does not rise with rising rainfall because winter wheat may not yet have emerged in the sowing period and stops growing in the harvest period. Figure 10 also shows that precipitation has a delayed effect on ET in the elongation period: a rapid increase in daily ET occurs after a precipitation peak.
Figure 11 shows that the daily UWET results near the base dates of the input Landsat-ET are closer to the measured ET than those farther from the base dates. The base dates of the Landsat-ET in the orange oval are 31 January 2020 (DOY 31) and 16 February 2020 (DOY 47), and the base date of the Landsat-ET in the red oval is 19 May 2020 (DOY 143). The prediction dates in the orange oval are closer to their base dates than those in the red oval, and the daily ET values predicted by the UWET model in the orange oval are correspondingly closer to the measured ET. This indicates that the accuracy of the UWET predictions is affected by the sparsity of available Landsat images: the closer the prediction date is to the base date, the more accurate the prediction.

4.2. Evaluation of ET Spatial Patterns by the UWET Model

4.2.1. The Spatial Pattern Comparison between UWET and Landsat-ET

In Table 5 and Figure 12, two fields (Field 1 and Field 2) are selected for the spatial pattern comparison between the UWET and Landsat-ET results on 28 November 2019, 20 April 2020, and 22 March 2020. The coordinate range of Field 1 is from 37°52′2.43″N, 114°42′48.48″E to 37°51′24.72″N, 114°43′23.23″E, and that of Field 2 is from 37°54′32.72″N, 114°43′20.83″E to 37°53′59.63″N, 114°44′2.25″E. Table 5 presents three quality indicators for the spatial pattern comparison: the correlation coefficient (R), the Root Mean Square Error (RMSE), and the Mean Absolute Error (MAE). In Figure 12, the UWET results on the three dates show more details than Landsat-ET and are consistent with the corresponding land cover and Landsat-ET images in their spatial patterns. Figure 12 and Table 5 show that the UWET results on 20 April 2020 and 22 March 2020 agree better with Landsat-ET than those on 28 November 2019. The reason for this may be the phenological difference between the prediction and base dates: the prediction date of 28 November 2019 is in the middle elongation period, with most wheat already emerged, but its base date of 27 October 2019 is in the early elongation period, with more bare soil. Table 5 also shows that on 20 April 2020 and 22 May 2020, the MAE values in Field 2 are lower than those in Field 1, which means that the UWET results agree better with Landsat-ET in Field 2. The reason may be that in spring and summer, Field 1 has more bare soil and other vegetation than Field 2, and the bare soil and other vegetation may disturb the fusion process. However, due to low winter temperatures, the ET values on 28 November 2019 are low, so the MAE values differ little between Field 1 and Field 2.
At the winter wheat points ➀ in Figure 12a,d,e, the winter wheat ET value of UWET (Figure 12e) differs from the surrounding winter wheat pixels, whereas that of Landsat-ET (Figure 12d) is the same as its surroundings. This indicates that UWET can capture differences in the growth and development of winter wheat. At the building points ➁, the ET values of UWET (Figure 12g) are near zero, while the values of Landsat-ET (Figure 12f) are near 6 mm/day; the UWET map thus more truly reflects the spatial distribution of ET on the ground. At the bare soil points ➂, the bare soil ET value of UWET (Figure 12l,n) is lower than the surrounding vegetation pixels, whereas that of Landsat-ET (Figure 12k,m) is the same as the surrounding vegetation pixels. Therefore, the UWET map exhibits the characteristics of bare soil, while the Landsat-ET map does not, owing to its mixed pixels. On the whole, UWET shows more spatial detail than Landsat-ET.
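The R, RMSE, and MAE indicators used in this comparison can be computed from any pair of co-registered ET maps. The following is a minimal NumPy sketch (the function name is hypothetical, and NaN pixels are assumed to mark invalid data):

```python
import numpy as np

def compare_maps(pred, ref):
    """R, RMSE, and MAE between a fused ET map and a reference ET map,
    computed over the valid (non-NaN) pixels of both arrays."""
    mask = ~np.isnan(pred) & ~np.isnan(ref)
    p, r = pred[mask], ref[mask]
    R = np.corrcoef(p, r)[0, 1]                     # Pearson correlation
    rmse = float(np.sqrt(np.mean((p - r) ** 2)))    # mm/day
    mae = float(np.mean(np.abs(p - r)))             # mm/day
    return R, rmse, mae
```

Applied to a UWET map and the corresponding Landsat-ET map over one field, this yields the per-date indicators of the kind reported in Table 5.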

4.2.2. The ET Spatial Distribution by UWET

Figure 13 shows the spatial distribution of the accumulated ET. The accumulated ET of winter wheat in 2019–2020 (Figure 13a) mainly ranges from 350 to 660 mm, with an average of 499.89 mm, and the values in the northern and southern regions are higher than elsewhere. The accumulated ET of winter wheat in 2020–2021 (Figure 13b) mainly ranges from 300 to 620 mm, with an average of 459.44 mm, and the values in the southern and western regions are higher than elsewhere. The accumulated ET in 2019–2020 is higher than in 2020–2021, probably because the accumulated precipitation in 2019–2020 (301 mm on average) was higher than in 2020–2021 (206 mm on average).
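Accumulating the daily ET maps into a seasonal total and a wheat-region average, as done for Figure 13, can be sketched as follows (an illustrative helper; the array names and the boolean-mask convention are assumptions, not the paper's code):

```python
import numpy as np

def seasonal_et(daily_et_stack, wheat_mask):
    """Sum a (days, rows, cols) stack of daily ET maps (mm/day) into a
    seasonal total (mm) and report the mean over winter-wheat pixels."""
    total = np.nansum(daily_et_stack, axis=0)          # mm over the season
    mean_wheat = float(np.nanmean(total[wheat_mask]))  # wheat-region average
    return total, mean_wheat
```

With the 10 m daily UWET maps of a growing season stacked along the first axis and a wheat mask from the land cover map, this produces an accumulated-ET map and a seasonal average of the kind reported above.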

4.3. UWET Accuracy Evaluation with the In Situ Station Data

Figure 14 compares the MODIS-ET and UWET results with the daily measured ET at the Luancheng station (Figure 1). In Figure 14, the dotted lines are 1:1 lines, and the red lines are fitted lines. The correlation coefficient (R), the Root Mean Square Error (RMSE), and the Mean Absolute Error (MAE) are used to evaluate the accuracy of the UWET model. Against the measured ET, the UWET results achieve average R, RMSE, and MAE values of 0.93, 0.76 mm/day, and 0.57 mm/day, while the MODIS-ET results achieve 0.82, 1.18 mm/day, and 0.84 mm/day. UWET therefore outperforms MODIS-ET in the accuracy evaluation.

5. Discussion

There are many image fusion models, but only a few have been used for ET mapping. Landsat and MODIS are most often selected as the input data for the image fusion models. The unmixing-based STRUM model directly unmixes the residual image, defined as the difference between the two MODIS images, and adds the unmixed residual image to the Landsat image on the base date, which preserves more spatial features [9]. The weight-based STARFM model blends Landsat and MODIS surface reflectance and applies a weight function to each similar neighbor based on the spectral difference, the temporal difference, and the spatial distance, which better incorporates the temporal signatures of MODIS [49]. The UWET model combines the spatial feature extraction of the unmixing-based models with the temporal signature acquisition of the weight-based models. To predict crop ET more properly, the land cover map, phenology, and precipitation are also taken into account in UWET. The following is a discussion of the ET spatiotemporal characteristics produced by the STARFM, STRUM, and UWET models.
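The unmixing step shared by STRUM-style models can be illustrated as a least-squares problem: each coarse pixel's ET is modeled as the fraction-weighted sum of per-class ET values, and the class values are solved from a window of coarse pixels. This is a simplified sketch under those assumptions, not the exact STRUM or UWET formulation:

```python
import numpy as np

def unmix_coarse_et(fractions, coarse_et):
    """Estimate one ET value per land-cover class by least squares.

    fractions : (n_coarse_pixels, n_classes) class-area fractions of
                each coarse pixel, taken from the fine land cover map
    coarse_et : (n_coarse_pixels,) coarse-resolution ET values
    """
    class_et, *_ = np.linalg.lstsq(fractions, coarse_et, rcond=None)
    return class_et
```

For example, three coarse pixels that are pure class A, pure class B, and an even mixture, with ET values 2, 4, and 3 mm/day, unmix exactly to class values of 2 and 4 mm/day. The unmixed class values are then assigned to the fine pixels of each class, which is why the unmixing-based models inherit the spatial detail of the land cover map.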

5.1. Comparison of ET Spatial Characteristics by Three Fusion Models

A wheat field (Field 2 in Section 4.2.1) is selected for the evaluation of the spatial characteristics of the STARFM, STRUM, and UWET models on 22 May 2020. The field comprises 100 × 106 pixels at 10 m resolution, where the land cover types are mostly croplands of winter wheat or other vegetation. Figure 15 shows the land cover map and the ET maps, including Landsat-ET, MODIS-ET, STARFM, STRUM, and UWET. The Landsat-ET and MODIS-ET maps are retrieved from Landsat and MODIS images by the crop ET estimation model, and the last three ET maps are produced, respectively, by the STARFM, STRUM, and UWET. The MODIS-ET map is coarser than the other four ET maps because of the low spatial resolution of MODIS images. The STRUM-ET and UWET maps, with more spatial detail, are finer than the STARFM-ET map, as seen by comparing the details in the blue oval of Figure 15a,c–f. The likely reason is that the coarse ET maps are unmixed using the fine land cover map in the fusion process of the unmixing-based STRUM and UWET models. Comparing the details in the blue oval of Figure 15e,f, the STRUM-ET map has the same fine spatial features as the UWET map, but the STRUM-ET values at the southern tip of the bare soil are higher than those of UWET. The reason may be that the STRUM model unmixes the bare soil improperly from the surrounding vegetation, whereas the UWET model incorporates the temporal weight after the unmixing step, which brings the UWET values closer to the actual ET of each land cover type. On the whole, the unmixing-based models, UWET and STRUM, perform better in spatial resolution than the weight-based STARFM.
The R, RMSE, and MAE indicators of the daily ET images from the three fusion models are calculated against the Landsat-ET map for the field in Figure 15 on 22 May 2020. Table 6 shows the indicators for the three methods in the whole region and in the wheat planting region. In the whole region, the R values of the three methods are similarly good, while the RMSE values exceed 1 mm/day owing to the low spatial resolution of MODIS images. In the wheat region, the R values of UWET and the STRUM are better than that of the STARFM. The RMSE and MAE values of the three models in the wheat region are lower than those in the whole region.

5.2. Comparison of ET Temporal Characteristics by the Three Fusion Models

Figure 16 shows the temporal variation of the average daily ET in the wheat region for the three models (STARFM, STRUM, and UWET). The daily ET results of all three fusion models capture the dynamic phenological changes of winter wheat. The temporal variation of UWET and the STARFM is similar. However, the STRUM-ET values show little fluctuation compared with the other two methods in the red dotted ovals (Figure 16). Although the STRUM achieves a high spatial resolution by unmixing the difference between the base and prediction MODIS-ET, it is difficult to set effective phenological and temporal constraints on these difference values, which leads to the weak fluctuation in its temporal behavior. The STARFM and UWET incorporate the temporal weight in the fusion process, so their ET results match the temporal characteristics of winter wheat better than those of the STRUM. On the whole, the weight-based models, UWET and the STARFM, perform better in the temporal characteristics than the unmixing-based STRUM.
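The temporal-weight idea shared by the weight-based models can be sketched as follows: each similar neighboring pixel receives a weight inversely related to the product of its spectral difference, temporal difference, and spatial distance, normalized so the weights sum to one. This is a simplified illustration of the concept, not the exact STARFM or UWET weight formula:

```python
import numpy as np

def similar_pixel_weights(spectral_diff, temporal_diff, spatial_dist, eps=1e-6):
    """Normalized inverse weights for similar neighboring pixels:
    smaller spectral/temporal differences and shorter spatial
    distances yield larger weights (eps avoids division by zero)."""
    d = (spectral_diff + eps) * (temporal_diff + eps) * (spatial_dist + eps)
    w = 1.0 / d
    return w / w.sum()
```

A neighbor that is spectrally similar, observed close to the prediction date, and spatially near the target pixel dominates the weighted sum, which is how the weight-based step transfers the MODIS temporal signal onto the fine-resolution prediction.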

6. Conclusions

This paper proposes the UWET method to fuse ET images of different spatiotemporal resolutions, integrating the advantages of the unmixing method in spatial downscaling and of the weight-based method in temporal prediction. During the fusion process of UWET, the land cover map is introduced for the unmixing of coarse pixels and the filtering of similar pixels, and the crop phenology and precipitation are used to determine the base and prediction dates of the LS-MS ET image pairs, which distinguishes UWET from other unmixing- or weight-based models. The UWET model is capable of capturing the temporal changes in crop ET and the phenological characteristics throughout the whole growing season and improves the spatial resolution to 10 m.
UWET is applied to the Landsat-ET and MODIS-ET fusion process, resulting in a daily ET dataset with a spatial resolution of 10 m for Luancheng County. The performance of UWET for winter wheat is validated against the measured ET, with R, RMSE, and MAE values of 0.78, 1.46 mm/day, and 0.97 mm/day, respectively. The accumulated ET of winter wheat in 2019–2020 mainly ranges from 350 to 660 mm, with an average of 499.89 mm, and that in 2020–2021 mainly ranges from 300 to 620 mm, with an average of 459.44 mm. Compared with the Landsat-ET results, the UWET results are consistent in spatial detail at the regional scale and in the wheat region. The unmixing-based models, UWET and the STRUM, perform better in the spatial characteristics than the purely weight-based STARFM, while the weight-based models, UWET and the STARFM, perform better in the temporal characteristics than the purely unmixing-based STRUM. During the growing season, the variation of daily UWET is consistent with the phenological characteristics of winter wheat. UWET performance in more varied situations will be examined in further research in order to facilitate applications in other study areas with more complex surface conditions.

Author Contributions

Conceptualization, X.Z. and H.G.; methodology, X.Z., H.G., and L.S.; software, H.G. and X.Z.; validation, X.Z., H.G., L.S., and J.B.; formal analysis, X.Z., H.G., L.S., X.H., L.Z., and J.B.; investigation, X.Z. and H.G.; resources, X.Z., H.G., and L.S.; data curation, X.Z. and H.G.; writing—original draft preparation, X.Z. and H.G.; writing—review and editing, X.Z., H.G., L.S., X.H., and L.Z.; visualization, X.Z. and H.G.; supervision, X.Z.; project administration, X.Z.; funding acquisition, X.Z. and L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (2021YFC3201203) and the National Natural Science Foundation of China (51209163).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank all of the researchers from Luancheng Agro-ecosystem Experimental Station for supporting the continued operation, maintenance, collection, and processing of the eddy covariance flux tower systems used in this study. We are truly grateful to the Reviewers and Editors’ constructive comments and thoughtful suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. FAO. The State of the World’s Land and Water Resources for Food and Agriculture—Systems at breaking point (SOLAW 2021). Available online: https://www.fao.org/documents/card/en/c/cb7654en (accessed on 19 October 2023).
  2. UN. The United Nations World Water Development Report 2021: Valuing Water. Available online: https://www.unwater.org/publications/un-world-water-development-report-2021 (accessed on 19 October 2023).
  3. Long, F.; Yang, W.; Sun, Z.; Cui, Y.; Zhang, C.; Cui, Y. Gravity satellite inversion and watershed water balance of groundwater storage changes in the Haihe Plain. J. Water Resour. 2023, 54, 255–267. [Google Scholar] [CrossRef]
  4. Wang, X.W.; Lei, H.M.; Li, J.D.; Huo, Z.L.; Zhang, Y.Q.; Qu, Y.P. Estimating evapotranspiration and yield of wheat and maize croplands through a remote sensing-based model. Agric. Water Manag. 2023, 282, 108294. [Google Scholar] [CrossRef]
  5. Xu, H.; Yang, R. Does agricultural water conservation policy necessarily reduce agricultural water extraction? Evidence from China. Agric. Water Manag. 2022, 274, 107987. [Google Scholar] [CrossRef]
  6. Zhang, C.J.; Long, D.; Zhang, Y.C.; Anderson, M.C.; Kustas, W.P.; Yang, Y. A decadal (2008–2017) daily evapotranspiration data set of 1 km spatial resolution and spatial completeness across the North China Plain using TSEB and data fusion. Remote Sens. Environ. 2021, 262, 22. [Google Scholar] [CrossRef]
  7. Liou, Y.A.; Kar, S.K. Evapotranspiration Estimation with Remote Sensing and Various Surface Energy Balance Algorithms—A Review. Energies 2014, 7, 2821–2849. [Google Scholar] [CrossRef]
  8. Allies, A.; Olioso, A.; Cappelaere, B.; Boulet, G.; Etchanchu, J.; Barral, H.; Bouzou Moussa, I.; Chazarin, J.-P.; Delogu, E.; Issoufou, H.B.-A.; et al. A remote sensing data fusion method for continuous daily evapotranspiration mapping at kilometric scale in Sahelian areas. J. Hydrol. 2022, 607, 127504. [Google Scholar] [CrossRef]
  9. Zhang, K.; Kimball, J.S.; Running, S.W. A review of remote sensing based actual evapotranspiration estimation. Wiley Interdiscip. Rev. Water 2016, 3, 834–853. [Google Scholar] [CrossRef]
  10. Anderson, M.C.; Yang, Y.; Xue, J.; Knipper, K.R.; Yang, Y.; Gao, F.; Hain, C.R.; Kustas, W.P.; Cawse-Nicholson, K.; Hulley, G.; et al. Interoperability of ECOSTRESS and Landsat for mapping evapotranspiration time series at sub-field scales. Remote Sens. Environ. 2021, 252, 112189. [Google Scholar] [CrossRef]
  11. Song, E.Z.; Zhu, X.Y.; Shao, G.C.; Tian, L.J.; Zhou, Y.H.; Jiang, A.; Lu, J. Multi-Temporal Remote Sensing Inversion of Evapotranspiration in the Lower Yangtze River Based on Landsat 8 Remote Sensing Data and Analysis of Driving Factors. Remote Sens. 2023, 15, 2887. [Google Scholar] [CrossRef]
  12. Mokhtari, A.; Sadeghi, M.; Afrasiabian, Y.; Yu, K. OPTRAM-ET: A novel approach to remote sensing of actual evapotranspiration applied to Sentinel-2 and Landsat-8 observations. Remote Sens. Environ. 2023, 286, 113443. [Google Scholar] [CrossRef]
  13. Yao, Y.J.; Liang, S.L.; Li, X.L.; Zhang, Y.H.; Chen, J.Q.; Jia, K.; Zhang, X.T.; Fisher, J.B.; Wang, X.Y.; Zhang, L.L.; et al. Estimation of high-resolution terrestrial evapotranspiration from Landsat data using a simple Taylor skill fusion method. J. Hydrol. 2017, 553, 508–526. [Google Scholar] [CrossRef]
  14. Zhukov, B.; Oertel, D.; Lanzl, F.; Reinhackel, G. Unmixing-based multisensor multiresolution image fusion. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1212–1226. [Google Scholar] [CrossRef]
  15. Feng, G.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218. [Google Scholar] [CrossRef]
  16. Zhu, X.; Chen, J.; Gao, F.; Chen, X.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ. 2010, 114, 2610–2623. [Google Scholar] [CrossRef]
  17. Huang, B.; Zhang, H. Spatio-temporal reflectance fusion via unmixing: Accounting for both phenological and land-cover changes. Int. J. Remote Sens. 2014, 35, 6213–6233. [Google Scholar] [CrossRef]
  18. Gevaert, C.M.; García-Haro, F.J. A comparison of STARFM and an unmixing-based algorithm for Landsat and MODIS data fusion. Remote Sens. Environ. 2015, 156, 34–44. [Google Scholar] [CrossRef]
  19. Xie, D.; Zhang, J.; Zhu, X.; Pan, Y.; Liu, H.; Yuan, Z.; Yun, Y. An Improved STARFM with Help of an Unmixing-Based Method to Generate High Spatial and Temporal Resolution Remote Sensing Data in Complex Heterogeneous Regions. Sensors 2016, 16, 207. [Google Scholar] [CrossRef]
  20. Zhu, X.; Helmer, E.H.; Gao, F.; Liu, D.; Chen, J.; Lefsky, M.A. A flexible spatiotemporal method for fusing satellite images with different resolutions. Remote Sens. Environ. 2016, 172, 165–177. [Google Scholar] [CrossRef]
  21. Guo, D.; Shi, W.; Hao, M.; Zhu, X. FSDAF 2.0: Improving the performance of retrieving land cover changes and preserving spatial details. Remote Sens. Environ. 2020, 248, 111973. [Google Scholar] [CrossRef]
  22. Li, X.; Foody, G.M.; Boyd, D.S.; Ge, Y.; Zhang, Y.; Du, Y.; Ling, F. SFSDAF: An enhanced FSDAF that incorporates sub-pixel class fraction change information for spatio-temporal image fusion. Remote Sens. Environ. 2020, 237, 111537. [Google Scholar] [CrossRef]
  23. Zhu, X.; Cai, F.; Tian, J.; Williams, T.K. Spatiotemporal Fusion of Multisource Remote Sensing Data: Literature Survey, Taxonomy, Principles, Applications, and Future Directions. Remote Sens. 2018, 10, 527. [Google Scholar] [CrossRef]
  24. Huang, B.; Song, H. Spatiotemporal Reflectance Fusion via Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3707–3716. [Google Scholar] [CrossRef]
  25. Wang, S.; Wang, C.Z.; Zhang, C.L.; Xue, J.Y.; Wang, P.; Wang, X.W.; Wang, W.S.; Zhang, X.; Li, W.C.; Huang, G.H.; et al. A classification-based spatiotemporal adaptive fusion model for the evaluation of remotely sensed evapotranspiration in heterogeneous irrigated agricultural area. Remote Sens. Environ. 2022, 273, 16. [Google Scholar] [CrossRef]
  26. Mingquan, W.; Zheng, N.; Changyao, W.; Chaoyang, W.; Li, W. Use of MODIS and Landsat time series data to generate high-resolution temporal synthetic Landsat data using a spatial and temporal reflectance fusion model. J. Appl. Remote Sens. 2012, 6, 063507. [Google Scholar] [CrossRef]
  27. Emelyanova, I.V.; McVicar, T.R.; Van Niel, T.G.; Li, L.T.; van Dijk, A.I.J.M. Assessing the accuracy of blending Landsat–MODIS surface reflectances in two landscapes with contrasting spatial and temporal dynamics: A framework for algorithm selection. Remote Sens. Environ. 2013, 133, 193–209. [Google Scholar] [CrossRef]
  28. Wang, Q.; Tang, Y.; Tong, X.; Atkinson, P.M. Virtual image pair-based spatio-temporal fusion. Remote Sens. Environ. 2020, 249, 112009. [Google Scholar] [CrossRef]
  29. Yang, Z.; Pan, X.; Liu, Y.B.; Tansey, K.; Yuan, J.; Wang, Z.C.; Liu, S.Y.; Yang, Y.B. Evaluation of spatial downscaling for satellite retrieval of evapotranspiration from the nonparametric approach in an arid area. J. Hydrol. 2024, 628, 130538. [Google Scholar] [CrossRef]
  30. Cammalleri, C.; Anderson, M.C.; Gao, F.; Hain, C.R.; Kustas, W.P. Mapping daily evapotranspiration at field scales over rainfed and irrigated agricultural areas using remote sensing data fusion. Agric. For. Meteorol. 2014, 186, 1–11. [Google Scholar] [CrossRef]
  31. Semmens, K.A.; Anderson, M.C.; Kustas, W.P.; Gao, F.; Alfieri, J.G.; McKee, L.; Prueger, J.H.; Hain, C.R.; Cammalleri, C.; Yang, Y.; et al. Monitoring daily evapotranspiration over two California vineyards using Landsat 8 in a multi-sensor data fusion approach. Remote Sens. Environ. 2016, 185, 155–170. [Google Scholar] [CrossRef]
  32. Carpintero, E.; Anderson, M.C.; Andreu, A.; Hain, C.; Gao, F.; Kustas, W.P.; González-Dugo, M.P. Estimating Evapotranspiration of Mediterranean Oak Savanna at Multiple Temporal and Spatial Resolutions. Implications for Water Resources Management. Remote Sens. 2021, 13, 3701. [Google Scholar] [CrossRef]
  33. Cammalleri, C.; Anderson, M.C.; Gao, F.; Hain, C.R.; Kustas, W.P. A data fusion approach for mapping daily evapotranspiration at field scale. Water Resour. Res. 2013, 49, 4672–4686. [Google Scholar] [CrossRef]
  34. Qiu, R.J.; Katul, G.G.; Wang, J.T.; Xu, J.Z.; Kang, S.Z.; Liu, C.W.; Zhang, B.Z.; Li, L.G.; Cajucom, E.P. Differential response of rice evapotranspiration to varying patterns of warming. Agric. For. Meteorol. 2021, 298, 108293. [Google Scholar] [CrossRef]
  35. Yang, Y.; Roderick, M.L.; Guo, H.; Miralles, D.G.; Zhang, L.; Fatichi, S.; Luo, X.; Zhang, Y.; McVicar, T.R.; Tu, Z.; et al. Evapotranspiration on a greening Earth. Nat. Rev. Earth Environ. 2023, 4, 626–641. [Google Scholar] [CrossRef]
  36. Li, S.J.; Wang, G.J.; Sun, S.L.; Hagan, D.F.T.; Chen, T.X.; Dolman, H.; Liu, Y. Long-term changes in evapotranspiration over China and attribution to climatic drivers during 1980–2010. J. Hydrol. 2021, 595, 126037. [Google Scholar] [CrossRef]
  37. Liu, F.; Shen, Y.; Cao, J.; Zhang, Y. A dataset of water, heat, and carbon fluxes over the winter wheat-summer maize croplands in Luancheng during 2013–2017. Sci. Data Bank 2023, 8, 1–10. [Google Scholar] [CrossRef]
  38. Adam, E.; Mutanga, O.; Odindi, J.; Abdel-Rahman, E.M. Land-use/cover classification in a heterogeneous coastal landscape using RapidEye imagery: Evaluating the performance of random forest and support vector machines classifiers. Int. J. Remote Sens. 2014, 35, 3440–3458. [Google Scholar] [CrossRef]
  39. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef]
  40. Zhang, X.; Xiong, Q.; Di, L.; Tang, J.; Yang, J.; Wu, H.; Qin, Y.; Su, R.; Zhou, W. Phenological metrics-based crop classification using HJ-1 CCD images and Landsat 8 imagery. Int. J. Digit. Earth 2018, 11, 1219–1240. [Google Scholar] [CrossRef]
  41. Zhang, X.; Cao, Z.; Yang, D.; Wang, Q.; Wang, X.; Xiong, Q. Extraction and spatio-temporal analysis of county-level crop planting patterns based on HJ-1 CCD. Trans. Chin. Soc. Agric. Eng. 2021, 37, 168–181. [Google Scholar] [CrossRef]
  42. Landis, J.R.; Koch, G.G. The Measurement of Observer Agreement for Categorical Data. Biometrics 1977, 33, 159–174. [Google Scholar] [CrossRef]
  43. Zeng, L.L.; Wardlow, B.D.; Xiang, D.X.; Hu, S.; Li, D.R. A review of vegetation phenological metrics extraction using time-series, multispectral satellite data. Remote Sens. Environ. 2020, 237, 111511. [Google Scholar] [CrossRef]
  44. Gao, H.S.; Zhang, X.C.; Wang, X.G.; Zeng, Y.H. Phenology-Based Remote Sensing Assessment of Crop Water Productivity. Water 2023, 15, 329. [Google Scholar] [CrossRef]
  45. Bastiaanssen, W.G.M.; Menenti, M.; Feddes, R.A.; Holtslag, A.A.M. A remote sensing surface energy balance algorithm for land (SEBAL)—1. Formulation. J. Hydrol. 1998, 212, 198–212. [Google Scholar] [CrossRef]
  46. Bastiaanssen, W.G.M.; Pelgrum, H.; Wang, J.; Ma, Y.; Moreno, J.F.; Roerink, G.J.; van der Wal, T. A remote sensing surface energy balance algorithm for land (SEBAL)—2. Validation. J. Hydrol. 1998, 212, 213–229. [Google Scholar] [CrossRef]
  47. Liang, S.L. Narrowband to broadband conversions of land surface albedo I Algorithms. Remote Sens. Environ. 2001, 76, 213–238. [Google Scholar] [CrossRef]
  48. Qin, Z.; Karnieli, A.; Berliner, P. A mono-window algorithm for retrieving land surface temperature from Landsat TM data and its application to the Israel-Egypt border region. Int. J. Remote Sens. 2001, 22, 3719–3746. [Google Scholar] [CrossRef]
  49. Allen, R.G.; Pereira, L.S.; Raes, D.; Smith, M. Crop Evapotranspiration—Guidelines for Computing Crop Water Requirements; FAO Irrigation and Drainage Paper 56; FAO: Rome, Italy, 1998; Volume 56. [Google Scholar]
Figure 1. Location of the study area and ground test station.
Figure 2. The flowchart for the extraction of land types.
Figure 3. The spatial distribution of sampling points.
Figure 4. The NDVI curve of winter wheat and corresponding feature points from 2019 to 2020.
Figure 5. Land cover map. (a) 2019–2020; (b) 2020–2021.
Figure 6. The UWET framework.
Figure 7. Unmixing-based spatial downscaling of Landsat-ET and MODIS-ET coarse pixels.
Figure 8. The base dates and prediction dates matching process of LS-MS ET image pairs.
Figure 9. The framework of the weight-based temporal prediction process.
Figure 10. The variation of daily UWET.
Figure 11. The comparison of measured ET, Landsat-ET, and UWET at the Luancheng station.
Figure 12. The spatial pattern comparison between UWET and Landsat-ET. (a) Land cover map of Field 1; (b–g) Landsat-ET and UWET maps on different dates in Field 1; (h) Land cover map of Field 2; (i–n) Landsat-ET and UWET maps on different dates in Field 2.
Figure 13. Spatial distribution of wheat ET. (a) Accumulated ET between 2019 and 2020; (b) accumulated ET between 2020 and 2021.
Figure 14. Validation of crop ET. (a) MODIS-ET in 2019–2020; (b) MODIS-ET in 2020–2021; (c) UWET in 2019–2020; (d) UWET in 2020–2021.
Figure 15. The spatial characteristics comparison of three fusion models on 22 May 2020. (a) Land cover map; (b) MODIS-ET map; (c) Landsat-ET map; (d) STARFM-ET map; (e) STRUM-ET map; (f) UWET map.
Figure 16. Daily ET of the three models during the growing season. (a) 2019–2020; (b) 2020–2021.
Table 1. Description of the satellite datasets.

| Dataset | Product | Spatial Resolution | DOY (2019) | DOY (2020) | DOY (2021) | Application |
|---|---|---|---|---|---|---|
| Sentinel-2 | | 10 m | 248–365 | 1–177, 248–365 | 1–177 | Land cover map; crop phenology |
| Landsat | Landsat 8, Landsat 9 | 30 m | 300, 332, 364 | 31, 47, 63, 79, 95, 111, 143, 287, 351 | 1, 17, 33, 49, 81, 97, 129, 145, 177 | ET estimation |
| MODIS | MOD09GA | 500 m | 274–365 | 1–176, 275–365 | 1–170 | ET estimation |
| | MCD43A3 | 500 m | | | | |
| | MOD11A1 | 1 km | | | | |
Table 2. The producer and user accuracy of training data.

| | Bare Soil | Building | Water | Other Vegetation | Winter Wheat | Total | User Accuracy |
|---|---|---|---|---|---|---|---|
| Bare soil | 34 | 5 | 0 | 0 | 0 | 39 | 87.18% |
| Building | 6 | 35 | 0 | 0 | 0 | 41 | 85.37% |
| Water | 0 | 0 | 8 | 0 | 0 | 8 | 100% |
| Other vegetation | 0 | 0 | 0 | 21 | 4 | 25 | 84.00% |
| Winter wheat | 0 | 0 | 0 | 3 | 156 | 159 | 98.11% |
| Total | 40 | 40 | 8 | 24 | 160 | 272 | |
| Producer accuracy | 85.00% | 87.50% | 100% | 87.50% | 97.50% | | 93.38% |
Table 3. The producer and user accuracy of validation data.

| | Bare Soil | Building | Water | Other Vegetation | Winter Wheat | Total | User Accuracy |
|---|---|---|---|---|---|---|---|
| Bare soil | 8 | 1 | 0 | 0 | 0 | 9 | 88.89% |
| Building | 2 | 9 | 0 | 0 | 0 | 11 | 81.82% |
| Water | 0 | 0 | 2 | 0 | 0 | 2 | 100% |
| Other vegetation | 0 | 0 | 0 | 6 | 1 | 7 | 85.71% |
| Winter wheat | 0 | 0 | 0 | 0 | 39 | 39 | 100% |
| Total | 10 | 10 | 2 | 6 | 40 | 68 | |
| Producer accuracy | 80.00% | 90.00% | 100% | 100% | 97.50% | | 94.12% |
Table 4. Main phenological periods of winter wheat.

| Phenological Period | Sowing Period | Elongation Period | Heading and Milky Period | Maturity and Harvest Period |
|---|---|---|---|---|
| DOY (2019–2020) | 274–291 | 292–103 | 104–156 | 157–172 |
| DOY (2020–2021) | 275–298 | 299–109 | 110–121 | 122–171 |
Table 5. Quality indicators of UWET and Landsat-ET at a regional scale.

| Field | Indicator | 28 November 2019 | 20 April 2020 | 22 May 2020 |
|---|---|---|---|---|
| Field 1 | R | 0.31 | 0.78 | 0.87 |
| | RMSE (mm/day) | 0.50 | 1.52 | 1.69 |
| | MAE (mm/day) | 0.33 | 1.33 | 1.58 |
| Field 2 | R | 0.37 | 0.74 | 0.80 |
| | RMSE (mm/day) | 1.06 | 0.97 | 1.36 |
| | MAE (mm/day) | 0.89 | 0.74 | 1.07 |
Table 6. Quality indicators of the three methods at a regional scale and in the wheat region.

| Region | Indicator | STARFM | STRUM | UWET |
|---|---|---|---|---|
| The whole region | R | 0.83 | 0.80 | 0.80 |
| | RMSE (mm/day) | 1.18 | 1.41 | 1.36 |
| | MAE (mm/day) | 0.94 | 1.13 | 1.07 |
| The wheat region | R | 0.31 | 0.62 | 0.60 |
| | RMSE (mm/day) | 0.83 | 0.87 | 0.85 |
| | MAE (mm/day) | 0.68 | 0.77 | 0.75 |

Share and Cite

Zhang, X.; Gao, H.; Shi, L.; Hu, X.; Zhong, L.; Bian, J. Mapping Crop Evapotranspiration by Combining the Unmixing and Weight Image Fusion Methods. Remote Sens. 2024, 16, 2414. https://doi.org/10.3390/rs16132414
