Article

Fusion of MODIS and Landsat-Like Images for Daily High Spatial Resolution NDVI

by Roberto Filgueiras 1,*, Everardo Chartuni Mantovani 1, Elpídio Inácio Fernandes-Filho 2, Fernando França da Cunha 1, Daniel Althoff 1 and Santos Henrique Brant Dias 3
1 Department of Agricultural Engineering, Federal University of Viçosa (UFV), Viçosa 36570-900, Brazil
2 Department of Soil and Plant Nutrition, Federal University of Viçosa (UFV), Viçosa 36570-900, Brazil
3 Department of Soils and Agricultural Engineering, State University of Ponta Grossa (UEPG), Ponta Grossa 84030-900, Brazil
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(8), 1297; https://doi.org/10.3390/rs12081297
Submission received: 17 February 2020 / Revised: 30 March 2020 / Accepted: 14 April 2020 / Published: 20 April 2020
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract

One of the obstacles in monitoring agricultural crops is the difficulty in understanding and mapping their rapid changes. To address this issue, this study aimed to model and fuse the Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) using Landsat-like images to achieve daily high spatial resolution NDVI. The study was performed for the period of 2017 on a commercial farm of irrigated maize-soybean rotation in the western region of the state of Bahia, Brazil. To achieve the objective, the following procedures were performed: (i) Landsat-like images were upscaled to match the Landsat-8 spatial resolution (30 m); (ii) the reflectance of the Landsat-like images was intercalibrated using Landsat-8 as a reference; (iii) the Landsat-like reflectance images were upscaled to match the MODIS sensor spatial resolution (250 m); (iv) regression models were trained daily to model the MODIS NDVI using the upscaled Landsat-like reflectance images (250 m) of the closest day as the input; and (v) the intercalibrated version of the Landsat-like images (30 m) used in the previous step was used as the input for the trained model, resulting in a downscaled MODIS NDVI (30 m). To determine the best-fitting model, we used the following statistical metrics: coefficient of determination (r2), root mean square error (RMSE), Nash–Sutcliffe efficiency index (NSE), mean bias error (MBE), and mean absolute error (MAE). Among the assessed regression models, the Cubist algorithm was sensitive to changes in agriculture and performed best in modeling the MODIS NDVI from Landsat-like images. The results obtained in the present research are promising and can enable the monitoring of dynamic phenomena with images available free of charge, changing the way in which decisions are made using satellite images.

Graphical Abstract

1. Introduction

The techniques and technologies available through the science of remote sensing have become indispensable for monitoring changes on the terrestrial surface [1,2,3,4,5,6,7]. Such techniques are necessary when the objective is to follow agricultural development and make decisions about crop-related issues [8,9]. Sensors aboard satellites capture images of the Earth’s surface at various spatiotemporal resolutions and are often divided into sensors that capture images at low, medium, and high spatial resolutions [10]. Generally, high spatial resolution images have smaller footprints, covering smaller portions of the Earth’s surface and resulting in a lower temporal resolution [4,11,12,13]. On the other hand, high temporal resolution images tend to cover larger portions of the Earth’s surface, resulting in a coarser spatial resolution. This technical limitation is known as the trade-off between temporal and spatial resolution [14,15], and is due to the relationship between the scanning swath and image pixel size [11]. As a consequence, there is currently no single orbital constellation that captures images with a high/medium temporal resolution and a high spatial resolution, at least not free of charge [16].
The lower temporal resolution hampers the monitoring of phenomena such as crop phenology and crop growth, which require frequent observations [12,17]. These situations are commonly faced by researchers who use the Landsat platforms, which have a 30 m spatial resolution and a 16-day temporal resolution when the sky is free of clouds [18,19]. On the other hand, the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor, aboard the AQUA and TERRA satellites, provides images with a temporal resolution of one to two days, but with a lower spatial resolution (250, 500, and 1000 m) [20,21]. This limits the use of this sensor for precision agriculture, which requires detailed information on farm processes at a high spatial resolution [10].
To overcome the absence of a single sensor that provides free images with both characteristics (high frequency and detailed monitoring) [10,13], it is necessary to develop techniques that combine images with different resolution characteristics and generate hybrid products with a high/medium spatial resolution and a high temporal resolution [22]. Such hybrid products would be able to meet the expectations of monitoring phenomena with rapid transitions and aid decision-making in a timely manner.
The integration of data with complementary characteristics has been frequently addressed in research. Researchers have developed methodologies seeking to solve the difficulties of acquiring images with high frequency at detailed resolutions [3,4,11,12,16,23,24,25,26,27]. Such information can be generated by techniques that use multiple sensors and downscale images [10,11,23,24,26,28]. In addition, data fusion techniques have also been proposed to combine images from multiple sensors with different spatial and temporal characteristics [29,30,31]. In most studies, images captured by MODIS have been used as the source of data at a coarse spatial and high temporal resolution, and Landsat-like images have been employed as the source of data at a high/medium spatial and low temporal resolution [31,32,33,34].
Data fusion methods present a range of approaches and bases, which makes some methods more difficult to apply or more efficient than others [22]. According to Zhu et al. [22], these methods can be stratified into five groups: weight function-based, unmixing-based, Bayesian-based, learning-based, and hybrid methods. They also reported that all of these methods require at least one pair of coarse and medium/high spatial resolution images acquired on temporally close days, plus coarse images on the prediction days. For the weight function-based group, Gao et al. [17] proposed a method called the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), fusing the MODIS daily 500 m surface reflectance product (MOD09GHK) and Landsat images with the aim of producing synthetic daily Landsat surface reflectance images. This method is essentially a pair-based approach and represents one of the first models developed for this purpose [22,30]. Along the same lines, Hilker et al. [35] also presented a data fusion model to produce hybrid images and detect changes on the surface, denominated the Spatial Temporal Adaptive Algorithm for mapping Reflectance Change (STAARCH). Another important contribution is the work of Zhu et al. [15], who developed the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM) based on the work of Gao et al. [17].
Regarding unmixing-based data fusion, the work of Atzberger et al. [36] should be noted; it proposed unmixing approaches for combining datasets with different spatial resolutions, making it possible to use historical (long-term average) crop profiles derived from the Satellite Pour l’Observation de la Terre-VEGETATION (SPOT-VGT) together with endmembers derived from the Project for On-Board Autonomy-Vegetation (PROBA-V).
Huang et al. [37], representing the Bayesian-based group, proposed a unified fusion framework that simultaneously generates high-resolution synthetic spatial–temporal–spectral Earth observations. With respect to learning-based methods, Zhu et al. [22] used machine learning algorithms to understand the relationship between observed coarse–fine image pairs and then predicted the non-observed fine images. Another example of a learning-based method can be seen in the work of Liu et al. [38], who proposed a spatiotemporal fusion method using a powerful learning technique—the extreme learning machine (ELM)—with the aim of improving the quality of data fusion with a fast computational model.
In terms of the hybrid group, Xie et al. [39] developed an improved STARFM with the help of an unmixing-based method, which they called unmixing-based STARFM (USTARFM). The main objective of this model is to improve the performance of STARFM in heterogeneous areas. Another hybrid method was presented in the work of Xu et al. [40], who proposed a method based on the traditional spatial unmixing technique but with a regularization term, in order to ensure that the spectral classes did not deviate strongly from the previously defined classes, thus ensuring an accurate unmixing result. Gevaert and García-Haro [41], also within the hybrid group, proposed a Bayesian formulation to constrain the unmixing process.
Previous work has shown that data fusion techniques are an important alternative for crop monitoring with spatial and temporal detail, more specifically for monitoring crop phenology [42,43] and biophysical parameters [44,45]. Data fusion models have been tested for various agricultural crops, e.g., rice [46], soybean [29], maize [29], and cotton [31], among others. Indeed, many studies have been published with the purpose of making surface monitoring more detailed and making more information available. Also seeking a greater availability of surface information, the research of Vuolo et al. [47] dealt with the generation of high-revisit, high-spatial-resolution Landsat time series without the requirement for coarse data. We can also consider the papers of Immitzer et al. [48] and Atzberger and Rembold [49], where the use of data fusion was completely avoided and the fractional coverage of crops was obtained using high-resolution training data fed with low-resolution predictor variables.
Multi-sensor coupling and data fusion are essential in monitoring the condition of vegetation and the dynamics of agricultural crops. The vegetation index is an important indicator for obtaining crop status information and it is widely used in monitoring the growth of crops [13]. The vegetation index estimated with multi-sensor coupling and data fusion provides a closer look at the variability of vegetation vigor, thus facilitating several decision-making processes and leading to more efficient and precise management for farmers.
Although much research has made use of multi-sensor coupling and data fusion approaches to increase the spatiotemporal resolution of monitoring, the two approaches have rarely been combined in the same study. Combining them could greatly increase the chances of success in cases where high spatiotemporal monitoring is needed. Therefore, the possibility of generating hybrid products becomes paramount for obtaining ideal monitoring characteristics, especially in countries where agriculture is the main economic activity.
Due to the relevance of improving the spatial resolution and temporal frequency of images for agricultural monitoring, the objective of the present study was to model and fuse the MODIS Normalized Difference Vegetation Index (NDVI) using images from the Sentinel-2A, Sentinel-2B, Landsat-8, and Landsat-7 satellites (denominated here as Landsat-like) to achieve daily high spatial resolution NDVI. To achieve this, we tested seven regression algorithms on a commercial farm of irrigated maize-soybean rotation, using the intercalibrated Landsat-like images as the input for the regression approach and the MODIS NDVI as the target variable.

2. Materials and Methods

2.1. Study Area

The study area is located in the western region of the state of Bahia, Brazil (Figure 1). The area has the following coordinate pairs as boundaries: X1-411,958; Y1-8,630,320 (upper left corner); X2-441,487; and Y2-8,613,684 (lower right corner), WGS-84 datum, UTM (Universal Transverse Mercator) projection, zone 23 S.
The entire area covers 49,123.09 ha and consists of different land uses: native forest, pastures, rainfed crops, and irrigated crops, with the last being represented by the center pivots in Figure 2. We chose this area for the study because of its land use and because it is part of a large grain production region (Western Bahia), which could potentially benefit from the results of our research. In the 2017/2018 crop season survey, this region planted 2.415 million hectares, with the main crops being soybean, cotton, maize, and coffee [50]. The present work considered the whole area in the process of training and validating the models; however, we gave prominence to the results for the irrigated crops.
The entire irrigated area (2842.45 ha) is irrigated using center pivot systems (Figure 2). The area has 17 center pivots, of which 16 are towable; however, the equipment is only moved from one crop season to another. For example, a pivot positioned in the area denominated A in the main crop season is positioned in the area denominated B in the following season, according to Figure 2. The crops cultivated during the monitoring period of this research were maize (seed) and soybean.

2.2. Orbital Images

The images used in this research correspond to five orbital platforms: one with a lower spatial resolution and a higher temporal resolution (1 to 2 days) (TERRA/MODIS), and others with a lower temporal and better spatial resolution (Landsat-8/OLI, Landsat-7/ETM+, Sentinel-2A/MSI, and Sentinel-2B/MSI), which were denominated Landsat-like. The platforms from the Landsat constellation have a temporal resolution of 16 days, and those belonging to the Sentinel constellation have a temporal resolution of 10 days each [19,21,51,52,53].

2.2.1. Preprocessing Landsat-Like Images

The conversion of digital numbers into reflectance is required to compare instruments from a physical reference [54]. In order to use the images from Landsat-8 and Landsat-7, we converted the digital numbers into physical values (reflectance) and, concomitantly, we corrected the atmosphere’s influence on these images. The images of Sentinel-2A and 2B were already in reflectance at the top of the atmosphere, so only atmospheric correction was necessary. After these processes, the spectral bands from different sensors were intercalibrated using linear regression to make them compatible.
The purpose of intercalibrating the Enhanced Thematic Mapper Plus (ETM+), MultiSpectral Instrument (MSI), and Operational Land Imager (OLI) sensors is to obtain images that match the Landsat-8 spectral resolution (Table 1) and are mutually compatible, thus providing an equivalent input for the model [55,56,57,58,59]. First, we resampled the MSI sensor data to match the 30 m spatial resolution of the OLI sensor (Table 1).
Second, two pairs of images were used for the intercalibration: the first pair from 19 August 2017, when the overpass of the MSI sensor coincided with that of the OLI sensor, and the second pair from 10 August 2017 (ETM+) and 18 August 2017 (OLI), since these sensors do not overlap temporally. The OLI image of 18 August 2017 was used only for intercalibration, while the ETM+ image of 10 August 2017 was also used in the data fusion procedure (Section 2.5). The Landsat-8/OLI images were used as the standard. To calibrate the Landsat-like images, simple linear regression models were fitted for both the ETM+ and MSI (independent variables) as a function of the OLI (dependent variable), as shown in Equations (1) and (2):
$\rho_{OLI} = \beta_{o} + \beta_{i}\,\rho_{ETM}$,    (1)
$\rho_{OLI} = \beta_{o} + \beta_{i}\,\rho_{MSI}$,    (2)
where $\rho_{ETM}$ and $\rho_{MSI}$ are the reflectances of the ETM+ and MSI bands, respectively, and $\beta_{o}$ and $\beta_{i}$ are the intercept and angular (slope) coefficient of the equation, respectively.
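As an illustration of how this band-wise intercalibration could be carried out, a minimal sketch in R (the language used for the modeling in this study) with the terra package follows; the file names, layer names, and the use of all overlapping pixels to fit the regression are our assumptions and are not prescribed by the paper.
```r
library(terra)

# Hypothetical 30 m reflectance stacks for the coincident calibration dates
oli <- rast("oli_20170818_30m.tif")   # reference: Landsat-8/OLI
etm <- rast("etm_20170810_30m.tif")   # Landsat-7/ETM+, coregistered to the OLI grid

calibrate_band <- function(target, reference) {
  # Fit reflectance_OLI = beta_o + beta_i * reflectance_sensor (Equations (1) and (2))
  df  <- na.omit(data.frame(ref = as.vector(values(reference)),
                            x   = as.vector(values(target))))
  fit <- lm(ref ~ x, data = df)
  # Apply the fitted coefficients to the whole band
  list(model = fit,
       calibrated = fit$coefficients[1] + fit$coefficients[2] * target)
}

# Example: intercalibrate the ETM+ red band against the OLI red band
red_cal <- calibrate_band(etm[["red"]], oli[["red"]])
summary(red_cal$model)$r.squared   # intercalibration r2, as reported in Section 3
```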

2.2.2. Processing MODIS Images

The MODIS images used were obtained from the MOD09GQ product, which has a spatial resolution of 250 m and temporal resolution of one to two days, depending on the latitude of the image. Of the 36 bands available for this sensor, the bands used referred to the red (Band 1) and near infrared (Band 2) wavelengths—the only two bands related to the surface reflectance found in the MOD09GQ product [54,60,61].
Subsequently, we proceeded to remove the images that could not be used because of cloudy days and gaps in the study region. After the selection of images, the NDVI was calculated according to Equation (3) [60]. Then, the daily modeling was processed to perform the downscaling procedure.
$NDVI = \dfrac{\rho_{nir} - \rho_{r}}{\rho_{nir} + \rho_{r}}$,    (3)
where $\rho_{nir}$ is the reflectance of the near-infrared band and $\rho_{r}$ is the reflectance of the red band. The NDVI ranges from −1.0 to 1.0. For further information, consult [58,62,63].
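A minimal sketch of Equation (3) applied to the two MOD09GQ bands in R with terra is shown below; the file name and band order are illustrative assumptions.
```r
library(terra)

mod09gq  <- rast("MOD09GQ_2017-07-29_250m.tif")   # band 1 = red, band 2 = near infrared
ndvi_250 <- (mod09gq[[2]] - mod09gq[[1]]) / (mod09gq[[2]] + mod09gq[[1]])   # Equation (3)
names(ndvi_250) <- "NDVI_MODIS_250m"
```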

2.3. Landsat-Like Data Inputs

Landsat-like bands referring to the blue, green, red, and near infrared electromagnetic spectrum were used to model the NDVI of MODIS (NDVIMODIS-250) (Table 1). These bands were also used to generate six covariates using the Normalized Ratio Procedure between Bands (NRPB) as suggested by Filgueiras et al. [62]. This procedure calculated all the possibilities of normalized ratios between the spectral bands of the intercalibrated images, as shown in Equation (4):
$NRPB = \dfrac{\rho_{x} - \rho_{y}}{\rho_{x} + \rho_{y}}$,    (4)
where $\rho_{x}$ and $\rho_{y}$ are the surface reflectances of different bands of the Landsat-like images.
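A short sketch in R of how the six NRPB covariates could be derived from the four intercalibrated bands with terra; layer and file names are illustrative assumptions.
```r
library(terra)

landsat_like <- rast("landsat_like_30m.tif")   # intercalibrated blue, green, red, nir bands
bands <- c("blue", "green", "red", "nir")
pairs <- combn(bands, 2)                        # the six possible band pairs

nrpb <- rast(lapply(seq_len(ncol(pairs)), function(i) {
  x <- landsat_like[[pairs[1, i]]]
  y <- landsat_like[[pairs[2, i]]]
  r <- (x - y) / (x + y)                        # Equation (4)
  names(r) <- paste0("NRPB_", pairs[1, i], "_", pairs[2, i])
  r
}))

covariates <- c(landsat_like, nrpb)             # 4 bands + 6 NRPB covariates = 10 predictors
```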

2.4. Regression Algorithms

In order to find a model with a greater ability to fit and estimate NDVIMODIS-250, seven regression algorithms were tested: support vector machine with linear kernel (SVMLinear) regression, linear regression (LM), Ridge regression, Cubist regression, partial least squares (PLS) regression, principal components regression (PCR), and Generalized Boosted Regression (GBM). Methodological information on these models can be found in the publications of Kuhn and Johnson [63]. All models tested were implemented using the R programming language and environment [64].
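A minimal sketch of how the seven regression algorithms could be fitted with the caret package in R; the data frame, the target/predictor names, and the cross-validation settings are our assumptions, as the paper does not report the tuning scheme.
```r
library(caret)

# train_df: one row per sampled pixel, with the NDVIMODIS-250 target ("ndvi") and the
# ten Landsat-like predictors (four bands plus six NRPB covariates)
methods <- c("svmLinear", "lm", "ridge", "cubist", "pls", "pcr", "gbm")

fits <- lapply(methods, function(m) {
  set.seed(42)   # reproducible resampling (assumption)
  train(ndvi ~ ., data = train_df, method = m,
        trControl = trainControl(method = "cv", number = 5))
})
names(fits) <- methods
```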

2.5. Daily Data Fusion Modeling

The data fusion procedure was performed for images from 29 July 2017 to 13 August 2017. For this, 16 images of MODIS tile h13v10 were downloaded, as well as four Landsat-like images, one from each platform (Landsat-7, Landsat-8, Sentinel-2A, and Sentinel-2B); the Landsat-like image of 10 August 2017 (ETM+) was also the one used in the intercalibration. Therefore, 16 MODIS images and four Landsat-like images were initially separated for the daily data fusion modeling.
When analyzing the availability of information for the agricultural property, four MODIS images could not be used because of the presence of clouds (1 August 2017 and 7 August 2017) or because of the absence of information resulting from gaps in the MODIS orbit (2 August 2017 and 9 August 2017). Therefore, 12 NDVI images from the MODIS sensor (MOD09GQ product) and three Landsat-like images were used in the daily data fusion modeling (Figure 3), the latter corresponding to 29 July 2017 (MSI), 3 August 2017 (MSI), and 10 August 2017 (ETM+). As it was not possible to acquire the MODIS image of the study area on 2 August 2017, data fusion could not be run on that day, so the OLI image was not used.
For each modeling procedure, 5000 points were extracted from the data set and separated into two groups: a training set (70%) and validation set (30%). The modeling was performed for all cloud-free days when MODIS images were available. We started the modeling on a day (29 July 2017) when both sources of information were available, that is, MODIS images and Landsat-like images. Firstly, we upscaled the Landsat-like images to make them compatible with the MODIS images (250 m) and trained the models to predict NDVIMODIS-250 with the regression algorithms. The best-performing model during validation was chosen to predict the NDVIMODIS-30 using the Landsat-like image (30 m) as the input.
On the following day (30 July 2017), the MODIS image of the day and the Landsat-like image of the previous day (29 July 2017) were used, assuming that the vegetation in the study area was known and the land use had not changed. We used the closest Landsat-like image to predict the NDVIMODIS-30, i.e., for 30 July 2017, the models were trained with a Landsat-like image from 29 July 2017. This approach was performed experimentally over 16 days (12 days with images), as exemplified in Figure 4.
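To make the daily procedure concrete, the sketch below outlines one iteration of the fusion loop in R with terra and caret, under the assumptions noted in the comments (bilinear resampling to the MODIS grid, random point sampling, and caret defaults); these implementation choices are ours and are not specified in the paper.
```r
library(terra)
library(caret)

fuse_one_day <- function(ndvi_modis_250, covariates_30m, n_points = 5000) {
  # (i) Upscale the 30 m Landsat-like covariates to the 250 m MODIS grid
  covariates_250 <- resample(covariates_30m, ndvi_modis_250, method = "bilinear")

  # (ii) Sample 5000 pixels and split them 70%/30% into training and validation sets
  samp <- spatSample(c(ndvi_modis_250, covariates_250), size = n_points,
                     method = "random", na.rm = TRUE, as.df = TRUE)
  names(samp)[1] <- "ndvi"
  idx      <- createDataPartition(samp$ndvi, p = 0.7, list = FALSE)
  train_df <- samp[idx, ]
  valid_df <- samp[-idx, ]

  # (iii) Train the regression model at 250 m (Cubist shown, the best performer here)
  fit <- train(ndvi ~ ., data = train_df, method = "cubist")

  # (iv) Apply the fitted model to the 30 m covariates -> downscaled NDVI (NDVIMODIS-30)
  ndvi_30 <- predict(covariates_30m, fit, na.rm = TRUE)

  list(model = fit, validation = valid_df, ndvi_30 = ndvi_30)
}
```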
We also highlight two situations which need attention in the data fusion procedure. First, preference should always be given to the Landsat-like image closest to the modeling date. Second, if two Landsat-like images are available on the same date, preference should be given to the OLI sensor. If there is no image from the OLI sensor available, preference should be given to the alternative sensor which presented higher r2 during intercalibration.
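These two selection rules could be encoded in a small helper such as the hypothetical sketch below; the catalog structure and column names are illustrative.
```r
pick_landsat_like <- function(model_date, catalog) {
  # catalog: data.frame with columns date (class Date), sensor, calib_r2 and path
  catalog$lag <- abs(as.numeric(catalog$date - model_date))
  cand <- catalog[catalog$lag == min(catalog$lag), ]    # closest acquisition(s)
  if (nrow(cand) > 1) {
    if (any(cand$sensor == "OLI")) {
      cand <- cand[cand$sensor == "OLI", ]              # prefer the OLI sensor
    } else {
      cand <- cand[order(-cand$calib_r2), ]             # else the highest intercalibration r2
    }
  }
  cand[1, ]
}
```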

Statistical Analyses

After obtaining the fitted models, they were compared with the NDVI values observed by the MODIS sensor on samples not used to train the models (validation set). From the predicted (NDVIMODIS-250 estimated with Landsat-like images) and observed (MODIS NDVI) values, the following statistical metrics were calculated: coefficient of determination (r2), root mean square error (RMSE), Nash–Sutcliffe efficiency index (NSE) [65], mean bias error (MBE), and mean absolute error (MAE), as shown in Equations (5)–(9):
$r^{2} = \dfrac{\left[\sum (P_{i} - \bar{P})(O_{i} - \bar{O})\right]^{2}}{\sum (P_{i} - \bar{P})^{2} \sum (O_{i} - \bar{O})^{2}}$,    (5)
$RMSE = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}(O_{i} - P_{i})^{2}}$,    (6)
$NSE = 1 - \dfrac{\sum_{i=1}^{n}(O_{i} - P_{i})^{2}}{\sum_{i=1}^{n}(O_{i} - \bar{O})^{2}}$,    (7)
$MAE = \dfrac{1}{n}\sum_{i=1}^{n}\left|P_{i} - O_{i}\right|$,    (8)
$MBE = \dfrac{1}{n}\sum_{i=1}^{n}(P_{i} - O_{i})$,    (9)
where $P_{i}$ is the value estimated by the model, $\bar{P}$ is the mean of the estimated values, $O_{i}$ represents the observed values, $\bar{O}$ is the mean of the observed values, and $n$ is the number of observation pairs.
The r2 value indicates the extent to which the independent variables explain the variance of the dependent variable. The RMSE is an accuracy metric of the model obtained through the quadratic difference between the estimated and observed data, while the MAE provides a mean value of the absolute errors, differing from the RMSE, which gives greater importance to larger errors. This means that RMSE should be most useful when large errors are particularly undesirable; the MBE indicates possible underestimation or overestimation trends. The NSE is used to evaluate the predictive power of the model, varying from -∞ to 1; the value 1 corresponds to a perfect fit between the data estimated by the model and the data observed. Values between 0 and 1 are generally considered to indicate an acceptable level of performance, whereas values below 0 indicate that the observed mean value predicts better than the model, indicating an unacceptable performance [66,67,68,69,70,71,72,73].
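For reference, the sketch below shows how these metrics could be computed in plain R on the validation sample; note that Equation (5) is equivalent to the squared Pearson correlation.
```r
# O: observed NDVI (MODIS); P: NDVI predicted by the fitted model, both numeric vectors
r2_fun <- function(O, P) cor(O, P)^2                               # Equation (5)
rmse   <- function(O, P) sqrt(mean((O - P)^2))                     # Equation (6)
nse    <- function(O, P) 1 - sum((O - P)^2) / sum((O - mean(O))^2) # Equation (7)
mae    <- function(O, P) mean(abs(P - O))                          # Equation (8)
mbe    <- function(O, P) mean(P - O)                               # Equation (9)
```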

3. Results

The intercalibration of the spectral bands of the ETM+ and MSI sensors against the OLI sensor yielded the following r2 values: ETM+ blue band (r2 = 0.97); MSI blue band (r2 = 0.96); ETM+ green band (r2 = 0.98); MSI green band (r2 = 0.96); ETM+ red band (r2 = 0.99); MSI red band (r2 = 0.97); ETM+ near-infrared band (r2 = 0.93); and MSI near-infrared band (r2 = 0.94). It should be noted that all p-values were significant at 0.1%.
Figure 5 shows the results of the statistical metrics obtained for the validation set by the prediction models. A set of metrics was applied taking into account the findings highlighted in Chai and Draxler [74], who concluded that a composite of statistical indices is usually required to evaluate the performance of the models. It can be observed from a general analysis that the regression models applied to the prediction of the NDVIMODIS-250 showed satisfactory performances. The results of the statistical metrics show that it is feasible to estimate the NDVIMODIS-30 using the covariates, that is, information derived from the Landsat-like images.
The mean values found for the MAE are shown in Figure 5A. When considering all the dates, the Cubist regression presented the lowest mean MAE, as well as the lowest MAE over the entire study period, indicating the greatest capacity to reproduce the observed data. The MBE (Figure 5B) shows that the models, in general, lie close to zero, the boundary between negative and positive values. The MBE was not used to assess predictive accuracy; in this case, it indicates whether the models tend to underestimate (negative values) or overestimate (positive values).
The values found for r2 (Figure 5C) were high, which indicates that the independent variables have the capacity to explain the variance of the dependent variable. The highest mean r2 was that of the Cubist model (0.9093), indicating that it was the model that best explained the variability of the data observed by the MODIS sensor [75]. Figure 5D shows that the RMSE values were low, which indicates that all the models fit the observed values well. The RMSE was mainly used to give greater weight to larger errors, making it an interesting metric for revealing differences in performance [74]. The highest NSE was also found for the Cubist model, the only one tested with an average NSE above 0.90 (0.9008). This index indicates how much of the variance of the dependent variable is explained by the model [65], and it is an important metric for deciding which model should be adopted for the prediction of NDVIMODIS-30. From the analysis presented in Figure 5, we conclude that the Cubist model was superior to the others for the prediction of NDVIMODIS-30, since it showed the best result in all metrics evaluating the fit of the models to the observed data. For this reason, Cubist regression was the model adopted to generate the NDVI images with a 30 m spatial resolution (NDVIMODIS-30).
In Figure 6, we also highlight the spatial resolution of the MODIS images (250 m). This means that each pixel of an image of these bands has an area of 62,500 m2 and does not allow the nuances of the surface to be observed in detail, especially when the purpose is to observe agricultural areas such as center pivots.
Figure 7 shows the relationship between the predicted (NDVIMODIS-250 predicted with upscaled Landsat-like images) and observed NDVIMODIS-250 values for each monitoring day using the Cubist model. The predictions fit the observed values well for all the days on which NDVIMODIS-250 was modeled with the parameters derived from Landsat-like images, as can be seen from the high density of points (light green) near the line that serves as a reference for the ideal fit. The error bars show how small the deviation of the estimated data from the observed data was on the whole.
Figure 8 shows the results of NDVIMODIS-30 estimated by bands and products derived from Landsat-like images with a 30 m spatial resolution. This technique, which enables monitoring of the surface to be continuous in time with spatial detailing, is only possible due to certain factors, which can be considered as the premises of this modeling.
The first of these assumptions is the coupling of sensors with a detailed resolution (Landsat-like), since this procedure substantially increases the frequency of images with a more detailed spatial resolution, as done in the current study. The mean frequency of one image every four days was achieved by coupling multi-sensors (ETM+, MSI, and OLI). This premise allows for a greater frequency of detailed images (Landsat-like images), which are the independent variables of the model. In order to perform the image coupling procedure, it is necessary to take into account factors that may hamper the intercalibration of these sensors, as reported in D’Odorico et al. [55], Fan and Liu [56], Mao et al. [57], Teillet et al. [76], and Yan et al. [77].
A second condition (premise) for performing this image data fusion procedure is that abrupt land-use changes from one day to the next are known by the user of the monitored site. If, for example, a crop is harvested one day after the sensor with the better resolution (Landsat-like) passes, the next day’s downscaled image will display a drop in the NDVIMODIS-30 values, driven by the response of the NDVIMODIS-250 images. However, the drop in the resulting NDVIMODIS-30 image (downscaling procedure) will appear smoother, because the Landsat-like image is from the day on which the crop had not yet been harvested. Therefore, it is essential that the user of this methodology knows the monitored surface and has updated information about the management and land use of that region. Ke et al. [45] pointed out a similar premise.
Monitoring of the downscaled images showed that the crop of the pivot denominated 1A (Figure 2) had been entirely harvested by 8 August 2017 (Figure 7H). By observing Figure 7, it can be noted that the area of this center pivot showed a subtle decrease in the NDVI value between 29 July 2017 (Figure 7A) and 8 August 2017 (Figure 7H) and, after that date, there was a more pronounced fall in NDVI values, indicating that harvesting had occurred. This fact can also be observed in the images that underwent the downscaling process (Figure 8).
The NDVI value of center pivot 1A between 4 August 2017 and 8 August 2017 (Figure 8E–H) showed a noticeable drop, which underscores the robustness of the data fusion methodology applied in the present study. The model fitted using previous Landsat-like images can satisfactorily model the information of the MODIS observation day, thus offering a product with high-frequency images and spatial detail. The average MODIS NDVI for center pivot 1A on 29 July 2017 was 0.56, while the average NDVI for 8 August 2017 (already harvested) was 0.32. Comparing the drop of 0.24 between those dates with the mean MAE of the Cubist model over the monitoring period (0.03) highlights how small this error is relative to the temporal variability of the NDVI, illustrating the potential of the suggested model.

4. Discussion

The validation metrics of the Cubist and GBM models stood out against the others for the 10th image (11 August 2017) (Figure 5), with the Cubist model outperforming the GBM. On that day, the metrics obtained by all the models were farther from the study period average. We relate this to the distortion in the MODIS image for this date (11 August 2017), which can be seen in Figure 6J. Figure 6 shows the NDVI calculated with bands one and two of the MODIS sensor, from the MOD09GQ product. We believe that the deterioration in performance for the 10th image (11 August 2017) was mainly due to the viewing angle with which the sensor captured the image and the proximity of the gap to the study area, since this image is more distorted than the others (Figure 6J).
Although the Cubist model performed better, the merit of the other models cannot be disregarded, since all of them were able to predict this parameter under the circumstances of the study with similar performance metrics. The performance of the LM model can also be highlighted; it was the simplest model and had the highest degree of interpretability. This model, like the others, was able to model the NDVI patterns on all of the dates analyzed. However, in situations where abrupt changes in vegetation occur, it is expected that this model, as well as the other linear models (PCR, Ridge, PLS, and SVMLinear), will present a lower performance than models such as Cubist and GBM. Because we analyzed the area as a whole, changes in the crops within the center pivots did not noticeably affect the performance on the validation set.
Ke et al. [78] proposed a method for downscaling the MODIS 8-day 1 km evapotranspiration product based on Landsat-8 images and machine learning techniques. Their methodology is similar to that of the current research: upscaling was performed for the information derived from the Landsat-8 satellite and, from this, the evapotranspiration of the MODIS product was modeled using machine learning. Subsequently, they applied the fitted model to the covariates with a 30 m spatial resolution. That paper tested three machine learning models: Random Forest (RF), Cubist, and SVM. Among these, for the studied variable, RF obtained the smallest errors. However, the authors point out that RF showed results that were similar to, but only slightly better than, those of the Cubist model.
As found by Ke et al. [78], RF results are generally similar to those of the Cubist model; because of this, and the time demanded by RF to process the data, this algorithm was excluded from the analysis in the current research. The approach of the present study aimed to generate daily models throughout the monitoring period, requiring not only acceptable metrics but also fast processing. This is an important point that must be considered in the creation of methodologies, and in this respect the Cubist model stands out against RF.
Muhling et al. [79] used the Cubist model to predict surface temperature and salinity in the Chesapeake Bay, United States of America. According to these authors, the Cubist model is a regressor similar to other regression tree models in the way it divides the training data into increasingly homogeneous subsets. However, what makes this model different is the way in which the value is predicted at the final node; instead of returning a fixed value, as in other decision tree-based models, this algorithm fits multiple linear regressions at the terminal nodes. According to these authors, the final regression at the nodes gives the Cubist model a greater capacity to generalize, with the possibility of extrapolating predictions beyond the range of the training set.
The research conducted by Ke et al. [78] did not include a downscaling methodology that was both temporal and spatial, but these authors highlighted the importance of such a methodology for combining the spatial resolution of Landsat with the high temporal frequency of MODIS. In subsequent research, Ke et al. [45] proposed machine learning techniques integrated with a spatiotemporal downscaling approach to generate an evapotranspiration product every 8 days with a resolution of 30 m. This product was generated based on MODIS information and the Landsat-8 orbital platform, using the methodology of Ke et al. [78] as the basis. After the generation of the downscaled evapotranspiration results, it was verified that they were more precise than the original product, demonstrating the potential of these techniques for generating information with greater spatial detail over time and reinforcing the importance of studies of this nature.
Gu and Wylie [3] developed a methodology to integrate the growing season NDVI (GSN) product calculated with the NDVI of the MODIS sensor (spatial resolution of 250 m) with multiple dates of the Landsat constellation (spatial resolution of 30 m), with the proposal to generate a GSN product integrating these sources of information. These researchers found a strong correlation between the predicted data and the observed data (r = 0.97), and the MAE was 0.026, which is in agreement with the present study, since Gu and Wylie [3] used the Cubist model to obtain these results.
Boyte et al. [28] integrated weekly NDVI data from the MODIS sensor and Landsat-8 platform using four regression tree models, with the purpose of downscaling the MODIS images and applying them to the monitoring. The results show that the correlation coefficient was strong for all predictions (r ≥ 0.89) and, in addition to these results, the researchers emphasized the visual quality of the synthetic product MODIS NDVI of 30 m compared with the original product of 250 m. Boyte et al. [28] stated that these products with a 30 m spatial resolution contribute to understanding seasonal processes and provide important resources for land managers.

5. Conclusions

In this study, an NDVI downscaling methodology was developed with images from the Sentinel-2A, Sentinel-2B, Landsat-7, and Landsat-8 orbital platforms, and an NDVI product with a 30 m spatial resolution and the temporal frequency of the MODIS sensor (1 to 2 days) was generated.
Among the regression models used in the modeling of the daily NDVI at a 30 m resolution, the Cubist model was the one that showed the best fit, being the model recommended to proceed with this methodology in the study area. However, the other models also showed satisfactory results and could be used to generate the final product.
Due to the performance of the methodology developed in this study, agricultural monitoring in higher detail (30 m) with a daily temporal frequency (the temporal resolution of the MOD09GQ product) could be feasible.
The images with a daily 30 m resolution were sensitive to changes in land use, even on dates for which only NDVI information at a 250 m spatial resolution was available, as occurred during the harvest of pivot 1A, or even when the MODIS images presented distortion, as shown in Figure 6J.
The methodology applied in this study can be used to generate other variables, such as evapotranspiration, the surface temperature, and soil moisture, which can be employed to make irrigation decisions at the farm level.

Author Contributions

R.F. conceived the idea, designed and performed the experiments, produced the results, and drafted the manuscript; E.C.M. contributed to the conceptualization, supervision, and content; D.A. contributed to revising the English, the discussion of the results, and the revision of the paper; F.F.d.C., S.H.B.D., and E.I.F.-F. contributed to the discussion of the results and the revision of the article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Conselho Nacional de Desenvolvimento Científico e Tecnológico—Brazil (CNPq), grant number: 148636/2016-0, and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brazil (CAPES), finance code: 001.

Acknowledgments

We thank the Department of Agriculture Engineering (DEA), the Center of Reference in Water Resources (CRRH), and the Group of Studies and Solutions for Irrigated Agriculture (GESAI) of the Federal University of Viçosa for supporting the researchers.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Atzberger, C. Advances in Remote Sensing of Agriculture: Context Description, Existing Operational Monitoring Systems and Major Information Needs. Remote Sens. 2013, 5, 949–981. [Google Scholar] [CrossRef] [Green Version]
  2. Feilhauer, H.; Somers, B.; van der Linden, S. Optical trait indicators for remote sensing of plant species composition: Predictive power and seasonal variability. Ecol. Indic. 2017, 73, 825–833. [Google Scholar] [CrossRef]
  3. Gu, Y.; Wylie, B. Downscaling 250-m MODIS Growing Season NDVI Based on Multiple-Date Landsat Images and Data Mining Approaches. Remote Sens. 2015, 7, 3489–3506. [Google Scholar] [CrossRef] [Green Version]
  4. Hilker, T.; Wulder, M.A.; Coops, N.C.; Seitz, N.; White, J.C.; Gao, F.; Masek, J.G.; Stenhouse, G. Generation of dense time series synthetic Landsat data through data blending with MODIS using a spatial and temporal adaptive reflectance fusion model. Remote Sens. Environ. 2009, 113, 1988–1999. [Google Scholar] [CrossRef]
  5. Joshi, N.; Baumann, M.; Ehammer, A.; Fensholt, R.; Grogan, K.; Hostert, P.; Jepsen, M.; Kuemmerle, T.; Meyfroidt, P.; Mitchard, E.; et al. A Review of the Application of Optical and Radar Remote Sensing Data Fusion to Land Use Mapping and Monitoring. Remote Sens. 2016, 8, 70. [Google Scholar] [CrossRef] [Green Version]
  6. Xu, Y.; Smith, S.E.; Grunwald, S.; Abd-Elrahman, A.; Wani, S.P. Incorporation of satellite remote sensing pan-sharpened imagery into digital soil prediction and mapping models to characterize soil property variability in small agricultural fields. ISPRS J. Photogramm. Remote Sens. 2017, 123, 1–19. [Google Scholar] [CrossRef] [Green Version]
  7. Lv, Z.; Shi, W.; Zhou, X.; Benediktsson, J. Semi-Automatic System for Land Cover Change Detection Using Bi-Temporal Remote Sensing Images. Remote Sens. 2017, 9, 1112. [Google Scholar] [CrossRef] [Green Version]
  8. Bastiaanssen, W.G.M.; Steduto, P. The water productivity score (WPS) at global and regional level: Methodology and first results from remote sensing measurements of wheat, rice and maize. Sci. Total Environ. 2017, 575, 595–611. [Google Scholar] [CrossRef]
  9. Ribeiro, R.B.; Filgueiras, R.; Ramos, M.C.A.; de Almeida, L.T.; Generoso, T.N.; Monteiro, L.I.B. Variabilidade espaço-temporal da condição da vegetação na agricultura irrigada por meio de imagens sentinel-2a. Rev. Bras. Agric. Irrig. 2017, 11, 1884–1893. [Google Scholar]
  10. Mahour, M.; Tolpekin, V.; Stein, A.; Sharifi, A. A comparison of two downscaling procedures to increase the spatial resolution of mapping actual evapotranspiration. ISPRS J. Photogramm. Remote Sens. 2017, 126, 56–67. [Google Scholar] [CrossRef]
  11. Bisquert, M.; Sanchez, J.M.; Caselles, V. Evaluation of Disaggregation Methods for Downscaling MODIS Land Surface Temperature to Landsat Spatial Resolution in Barrax Test Site. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1430–1438. [Google Scholar] [CrossRef]
  12. Hong, S.-H.; Hendrickx, J.M.H.; Borchers, B. Down-scaling of SEBAL derived evapotranspiration maps from MODIS (250 m) to Landsat (30 m) scales. Int. J. Remote Sens. 2011, 32, 6457–6477. [Google Scholar] [CrossRef]
  13. Meng, J.; Du, X.; Wu, B. Generation of high spatial and temporal resolution NDVI and its application in crop biomass estimation. Int. J. Digit. Earth 2013, 6, 203–218. [Google Scholar] [CrossRef]
  14. Liu, W.; Zeng, Y.; Li, S.; Huang, W. Spectral unmixing based spatiotemporal downscaling fusion approach. Int. J. Appl. Earth Obs. Geoinf. 2020, 88, 102054. [Google Scholar] [CrossRef]
  15. Zhu, X.; Chen, J.; Gao, F.; Chen, X.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ. 2010, 114, 2610–2623. [Google Scholar] [CrossRef]
  16. Yang, D.; Su, H.; Yong, Y.; Zhan, J. MODIS-Landsat Data Fusion for Estimating Vegetation Dynamics—A Case Study for Two Ranches in Southwestern Texas. In Proceedings of the 1st International Electronic Conference on Remote Sensing, 22 June–5 July 2015; Volume 22. [Google Scholar]
  17. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218. [Google Scholar]
  18. Knight, E.J.; Kvaran, G. Landsat-8 operational land imager design, characterization and performance. Remote Sens. 2014, 6, 10286–10305. [Google Scholar] [CrossRef] [Green Version]
  19. Roy, D.P.; Wulder, M.A.; Loveland, T.R.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Helder, D.; Irons, J.R.; Johnson, D.M.; Kennedy, R.; et al. Landsat-8: Science and product vision for terrestrial global change research. Remote Sens. Environ. 2014, 145, 154–172. [Google Scholar] [CrossRef] [Green Version]
  20. Barnes, W.L.; Pagano, T.S.; Salomonson, V.V. Prelaunch characteristics of the moderate resolution imaging spectroradiometer (MODIS) on EOS-AM1. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1088–1100. [Google Scholar] [CrossRef] [Green Version]
  21. Masuoka, E.; Fleig, A.; Wolfe, R.E.; Patt, F. Key characteristics of MODIS data products. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1313–1323. [Google Scholar] [CrossRef]
  22. Zhu, X.; Cai, F.; Tian, J.; Williams, T.K.-A. Spatiotemporal Fusion of Multisource Remote Sensing Data: Literature Survey, Taxonomy, Principles, Applications, and Future Directions. Remote Sens. 2018, 10, 527. [Google Scholar]
  23. Hwang, T.; Song, C.; Bolstad, P.V.; Band, L.E. Downscaling real-time vegetation dynamics by fusing multi-temporal MODIS and Landsat NDVI in topographically complex terrain. Remote Sens. Environ. 2011, 115, 2499–2512. [Google Scholar] [CrossRef]
  24. Mukherjee, S.; Joshi, P.K.; Garg, R.D. A comparison of different regression models for downscaling Landsat and MODIS land surface temperature images over heterogeneous landscape. Adv. Space Res. 2014, 54, 655–669. [Google Scholar] [CrossRef]
  25. Rodriguez-Galiano, V.; Pardo-Iguzquiza, E.; Sanchez-Castillo, M.; Chica-Olmo, M.; Chica-Rivas, M. Downscaling Landsat 7 ETM+ thermal imagery using land surface temperature and NDVI images. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 515–527. [Google Scholar] [CrossRef]
  26. Singh, R.K.; Senay, G.B.; Velpuri, N.M.; Bohms, S.; Verdin, J.P. On the downscaling of actual evapotranspiration maps based on combination of MODIS and Landsat-based actual evapotranspiration estimates. Remote Sens. 2014, 6, 10483–10509. [Google Scholar] [CrossRef] [Green Version]
  27. Weng, Q.; Fu, P.; Gao, F. Generating daily land surface temperature at Landsat resolution by fusing Landsat and MODIS data. Remote Sens. Environ. 2014, 145, 55–67. [Google Scholar] [CrossRef]
  28. Boyte, S.P.; Wylie, B.K.; Rigge, M.B.; Dahal, D. Fusing MODIS with Landsat 8 data to downscale weekly normalized difference vegetation index estimates for central Great Basin rangelands, USA. Gisci. Remote Sens. 2018, 55, 376–399. [Google Scholar] [CrossRef]
  29. Gao, F.; Anderson, M.; Daughtry, C.; Johnson, D. Assessing the Variability of Corn and Soybean Yields in Central Iowa Using High Spatiotemporal Resolution Multi-Satellite Imagery. Remote Sens. 2018, 10, 1489. [Google Scholar] [CrossRef] [Green Version]
  30. Wang, Q.; Zhang, Y.; Onojeghuo, A.O.; Zhu, X.; Atkinson, P.M. Enhancing Spatio-Temporal Fusion of MODIS and Landsat Data by Incorporating 250 m MODIS Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4116–4123. [Google Scholar] [CrossRef]
  31. Wu, M.; Yang, C.; Song, X.; Hoffmann, W.C.; Huang, W.; Niu, Z.; Wang, C.; Li, W.; Yu, B. Monitoring cotton root rot by synthetic Sentinel-2 NDVI time series using improved spatial and temporal data fusion. Sci. Rep. 2018, 8, 2016. [Google Scholar] [CrossRef] [Green Version]
  32. Chen, J.; Sciusco, P.; Ouyang, Z.; Zhang, R.; Henebry, G.M.; John, R.; Roy, D.P. Linear downscaling from MODIS to landsat: Connecting landscape composition with ecosystem functions. Landsc. Ecol. 2019, 34, 2917–2934. [Google Scholar] [CrossRef]
  33. Fu, Y.; Xu, S.; Zhang, C.; Sun, Y. Spatial downscaling of MODIS Chlorophyll-a using Landsat 8 images for complex coastal water monitoring. Estuar. Coast. Shelf Sci. 2018, 209, 149–159. [Google Scholar] [CrossRef]
  34. Zhou, X.; Wang, P.; Tansey, K.; Zhang, S.; Li, H.; Wang, L. Developing a fused vegetation temperature condition index for drought monitoring at field scales using Sentinel-2 and MODIS imagery. Comput. Electron. Agric. 2020, 168, 105144. [Google Scholar] [CrossRef]
  35. Hilker, T.; Wulder, M.A.; Coops, N.C.; Linke, J.; McDermid, G.; Masek, J.G.; Gao, F.; White, J.C. A new data fusion model for high spatial- and temporal-resolution mapping of forest disturbance based on Landsat and MODIS. Remote Sens. Environ. 2009, 113, 1613–1627. [Google Scholar] [CrossRef]
  36. Atzberger, C.; Formaggio, A.R.; Shimabukuro, Y.E.; Udelhoven, T.; Mattiuzzi, M.; Sanchez, G.A.; Arai, E. Obtaining crop-specific time profiles of NDVI: The use of unmixing approaches for serving the continuity between SPOT-VGT and PROBA-V time series. Int. J. Remote Sens. 2014, 35, 2615–2638. [Google Scholar] [CrossRef]
  37. Huang, B.; Zhang, H.; Song, H.; Wang, J.; Song, C. Unified fusion of remote-sensing imagery: Generating simultaneously high-resolution synthetic spatial–temporal–spectral earth observations. Remote Sens. Lett. 2013, 4, 561–569. [Google Scholar] [CrossRef]
  38. Liu, X.; Deng, C.; Wang, S.; Huang, G.-B.; Zhao, B.; Lauren, P. Fast and Accurate Spatiotemporal Fusion Based Upon Extreme Learning Machine. IEEE Geosci. Remote Sens. Lett. 2016, 13, 2039–2043. [Google Scholar] [CrossRef]
  39. Xie, D.; Zhang, J.; Zhu, X.; Pan, Y.; Liu, H.; Yuan, Z.; Yun, Y. An Improved STARFM with Help of an Unmixing-Based Method to Generate High Spatial and Temporal Resolution Remote Sensing Data in Complex Heterogeneous Regions. Sensors 2016, 16, 207. [Google Scholar] [CrossRef] [Green Version]
  40. Xu, Y.; Huang, B.; Xu, Y.; Cao, K.; Guo, C.; Meng, D. Spatial and Temporal Image Fusion via Regularized Spatial Unmixing. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1362–1366. [Google Scholar]
  41. Gevaert, C.M.; García-Haro, F.J. A comparison of STARFM and an unmixing-based algorithm for Landsat and MODIS data fusion. Remote Sens. Environ. 2015, 156, 34–44. [Google Scholar] [CrossRef]
  42. Zheng, Y.; Wu, B.; Zhang, M.; Zeng, H. Crop Phenology Detection Using High Spatio-Temporal Resolution Data Fused from SPOT5 and MODIS Products. Sensors 2016, 16, 2099. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Zhu, X.; Helmer, E.H.; Gao, F.; Liu, D.; Chen, J.; Lefsky, M.A. A flexible spatiotemporal method for fusing satellite images with different resolutions. Remote Sens. Environ. 2016, 172, 165–177. [Google Scholar] [CrossRef]
  44. Cammalleri, C.; Anderson, M.C.; Gao, F.; Hain, C.R.; Kustas, W.P. A data fusion approach for mapping daily evapotranspiration at field scale: Data Fusion Approach for Mapping Daily ET. Water Resour. Res. 2013, 49, 4672–4686. [Google Scholar] [CrossRef]
  45. Ke, Y.; Im, J.; Park, S.; Gong, H. Spatiotemporal downscaling approaches for monitoring 8-day 30 m actual evapotranspiration. ISPRS J. Photogramm. Remote Sens. 2017, 126, 79–93. [Google Scholar] [CrossRef]
  46. Son, N.-T.; Chen, C.-F.; Chang, L.-Y.; Chen, C.-R.; Sobue, S.-I.; Minh, V.-Q.; Chiang, S.-H.; Nguyen, L.-D.; Lin, Y.-W. A logistic-based method for rice monitoring from multitemporal MODIS-Landsat fusion data. Eur. J. Remote Sens. 2016, 49, 39–56. [Google Scholar] [CrossRef] [Green Version]
  47. Vuolo, F.; Ng, W.-T.; Atzberger, C. Smoothing and gap-filling of high resolution multi-spectral time series: Example of Landsat data. Int. J. Appl. Earth Obs. Geoinf. 2017, 57, 202–213. [Google Scholar] [CrossRef]
  48. Immitzer, M.; Böck, S.; Einzmann, K.; Vuolo, F.; Pinnel, N.; Wallner, A.; Atzberger, C. Fractional cover mapping of spruce and pine at 1 ha resolution combining very high and medium spatial resolution satellite imagery. Remote Sens. Environ. 2018, 204, 690–703. [Google Scholar] [CrossRef] [Green Version]
  49. Atzberger, C.; Rembold, F. Mapping the Spatial Distribution of Winter Crops at Sub-Pixel Level Using AVHRR NDVI Time Series and Neural Nets. Remote Sens. 2013, 5, 1335–1354. [Google Scholar] [CrossRef] [Green Version]
  50. AIBA. Levantamento Safra Oeste da Bahia 2017–2018; Associação de Agricultores Irrigantes da Bahia: Barreiras, Brazil, 2018; p. 1. [Google Scholar]
  51. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  52. Loveland, T.R.; Dwyer, J.L. Landsat: Building a strong future. Remote Sens. Environ. 2012, 122, 22–29. [Google Scholar] [CrossRef]
  53. Rudorff, B.F.R. Sensor Modis e Suas Aplicações Ambientas no Brasil; Editora Parêntese: São José dos Campos, Brazil, 2007; ISBN 85-60507-00-0. [Google Scholar]
54. Ponzoni, F.; Shimabukuro, Y.; Kuplich, T. Sensoriamento Remoto da Vegetação. 2a Edição Atualizada e Ampliada; Oficina de Textos: São Paulo, Brazil, 2012.
55. D’Odorico, P.; Gonsamo, A.; Damm, A.; Schaepman, M.E. Experimental Evaluation of Sentinel-2 Spectral Response Functions for NDVI Time-Series Continuity. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1336–1348.
56. Fan, X.; Liu, Y. A Generalized Model for Intersensor NDVI Calibration and Its Comparison with Regression Approaches. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1842–1852.
57. Mao, D.; Wang, Z.; Luo, L.; Ren, C. Integrating AVHRR and MODIS data to monitor NDVI changes and their relationships with climatic parameters in Northeast China. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 528–536.
58. Latorre, M.; Anderson, L.O.; Shimabukuro, Y.E.; de Carvalho Júnior, O.A. Sensor MODIS: Características gerais e aplicações. Rev. Espaço E Geogr. 2003, 6, 91–121.
59. Mas, J.F. Aplicaciones del Sensor MODIS Para el Monitoreo del Territorio, Primera ed.; Mas, J.-F., Ed.; Secretaría de Medio Ambiente y Recursos Naturales: Mexico City, Mexico, 2011; ISBN 978-607-7908-55-5.
60. Rouse, J., Jr.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS; NASA Special Publication; NASA: Washington, DC, USA, 1974.
61. Formaggio, A.R.; Sanches, I.D. Sensoriamento Remoto em Agricultura; Oficina de Textos: São Paulo, Brazil, 2017; ISBN 978-85-7975-277-3.
62. Filgueiras, R.; Mantovani, E.C.; Dias, S.H.B.; Fernandes Filho, E.I.; Cunha, F.F.d.; Neale, C.M.U. New approach to determining the surface temperature without thermal band of satellites. Eur. J. Agron. 2019, 106, 12–22.
63. Kuhn, M.; Johnson, K. Applied Predictive Modeling; Springer: New York, NY, USA, 2013; ISBN 978-1-4614-6848-6.
64. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2017.
65. Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models: Part I—A discussion of principles. J. Hydrol. 1970, 10, 282–290.
66. Bakšić, N.; Bakšić, D.; Jazbec, A. Hourly fine fuel moisture model for Pinus halepensis (Mill.) litter. Agric. For. Meteorol. 2017, 243, 93–99.
67. Keshtegar, B.; Mert, C.; Kisi, O. Comparison of four heuristic regression techniques in solar radiation modeling: Kriging method vs. RSM, MARS and M5 model tree. Renew. Sustain. Energy Rev. 2018, 81, 330–341.
68. Keshtegar, B.; Allawi, M.F.; Afan, H.A.; El-Shafie, A. Optimized River Stream-Flow Forecasting Model Utilizing High-Order Response Surface Method. Water Resour. Manag. 2016, 30, 3899–3914.
69. Keshtegar, B.; Piri, J.; Kisi, O. A nonlinear mathematical modeling of daily pan evaporation based on conjugate gradient method. Comput. Electron. Agric. 2016, 127, 120–130.
70. Keshtegar, B.; Heddam, S. Modeling daily dissolved oxygen concentration using modified response surface method and artificial neural network: A comparative study. Neural Comput. Appl. 2018, 30, 2995–3006.
71. Leach, J.A.; Moore, D. Insights on stream temperature processes through development of a coupled hydrologic and stream temperature model for forested coastal headwater catchments. Hydrol. Process. 2017, 31, 3160–3177.
72. Lorenzo, A.T.; Morzfeld, M.; Holmgren, W.F.; Cronin, A.D. Optimal interpolation of satellite and ground data for irradiance nowcasting at city scales. Sol. Energy 2017, 144, 466–474.
73. Moriasi, D.N.; Arnold, J.G.; Van Liew, M.W.; Bingner, R.L.; Harmel, R.D.; Veith, T.L. Model evaluation guidelines for systematic quantification of accuracy in watershed simulations. Trans. ASABE 2007, 50, 885–900.
74. Chai, T.; Draxler, R.R. Root mean square error (RMSE) or mean absolute error (MAE)? Arguments against avoiding RMSE in the literature. Geosci. Model Dev. 2014, 7, 1247–1250.
75. Nagelkerke, N.J. A note on a general definition of the coefficient of determination. Biometrika 1991, 78, 691–692.
76. Teillet, P.M.; Barker, J.L.; Markham, B.L.; Irish, R.R.; Fedosejevs, G.; Storey, J.C. Radiometric cross-calibration of the Landsat-7 ETM+ and Landsat-5 TM sensors based on tandem data sets. Remote Sens. Environ. 2001, 78, 39–54.
77. Yan, L.; Roy, D.; Zhang, H.; Li, J.; Huang, H. An Automated Approach for Sub-Pixel Registration of Landsat-8 Operational Land Imager (OLI) and Sentinel-2 Multi Spectral Instrument (MSI) Imagery. Remote Sens. 2016, 8, 520.
78. Ke, Y.; Im, J.; Park, S.; Gong, H. Downscaling of MODIS One Kilometer Evapotranspiration Using Landsat-8 Data and Machine Learning Approaches. Remote Sens. 2016, 8, 215.
79. Muhling, B.A.; Gaitán, C.F.; Stock, C.A.; Saba, V.S.; Tommasi, D.; Dixon, K.W. Potential Salinity and Temperature Futures for the Chesapeake Bay Using a Statistical Downscaling Spatial Disaggregation Framework. Estuaries Coasts 2018, 41, 349–372.
Figure 1. Location of the study area in relation to Brazil and the state of Bahia.
Figure 2. Arrangement of pivots in the study area.
Figure 3. Dates of the sensor images used in the daily data fusion procedure.
Figure 4. Processing and generation of the NDVI models with daily frequency and detailed spatial resolution, shown for 5 of the 12 days of analysis as an example.
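To make the daily fusion step summarized in Figure 4 concrete, the sketch below illustrates the general logic: for each day, a regression model is trained between the upscaled (250 m) Landsat-like reflectance bands and the MODIS NDVI of the closest date, and the fitted model is then applied to the intercalibrated 30 m bands to produce a downscaled NDVI. This is a minimal illustration only; the array layout, the function name, and the use of scikit-learn's GradientBoostingRegressor as a stand-in for the Cubist algorithm are assumptions for demonstration, not the implementation used in the study.

```python
# Minimal sketch (assumed workflow, not the study's code) of the per-day fusion step:
# train a regressor at 250 m and apply it to the 30 m predictors.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for the Cubist algorithm


def downscale_ndvi(bands_250m, ndvi_modis_250m, bands_30m):
    """bands_250m: (n_bands, H, W) Landsat-like reflectance upscaled to 250 m;
    ndvi_modis_250m: (H, W) MODIS NDVI of the closest date;
    bands_30m: (n_bands, h, w) intercalibrated Landsat-like reflectance at 30 m."""
    # Flatten the rasters into a (pixels x bands) table for regression.
    X_train = bands_250m.reshape(bands_250m.shape[0], -1).T
    y_train = ndvi_modis_250m.ravel()
    valid = np.isfinite(y_train) & np.all(np.isfinite(X_train), axis=1)

    model = GradientBoostingRegressor().fit(X_train[valid], y_train[valid])

    # Apply the coarse-resolution model to the fine-resolution predictors.
    X_fine = bands_30m.reshape(bands_30m.shape[0], -1).T
    ndvi_30m = model.predict(X_fine).reshape(bands_30m.shape[1:])
    return np.clip(ndvi_30m, -1.0, 1.0)  # constrain predictions to the valid NDVI range
```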
Figure 5. Statistical metrics of the MODIS-Landsat-like downscaling model for NDVI: (A) mean absolute error (MAE); (B) mean bias error (MBE); (C) coefficient of determination (r2); (D) root mean square error (RMSE); and (E) Nash–Sutcliffe efficiency index (NSE).
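For reference, the metrics reported in Figure 5 follow their standard definitions; a minimal sketch of how they can be computed from paired observed and predicted NDVI values is given below (generic code, not taken from the study; r2 is computed here as the squared Pearson correlation).

```python
# Standard definitions of the metrics reported in Figure 5 (generic sketch).
import numpy as np


def evaluation_metrics(observed, predicted):
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    error = pred - obs  # bias convention: predicted minus observed

    mae = np.mean(np.abs(error))                          # mean absolute error
    mbe = np.mean(error)                                  # mean bias error
    rmse = np.sqrt(np.mean(error ** 2))                   # root mean square error
    nse = 1.0 - np.sum(error ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe efficiency
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2                # squared Pearson correlation
    return {"MAE": mae, "MBE": mbe, "RMSE": rmse, "NSE": nse, "r2": r2}
```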
Figure 6. NDVI images estimated from the MODIS MOD09GQ product, with a spatial resolution of 250 m, on the following days: 29 July 2017 (A); 30 July 2017 (B); 31 July 2017 (C); 3 August 2017 (D); 4 August 2017 (E); 5 August 2017 (F); 6 August 2017 (G); 8 August 2017 (H); 10 August 2017 (I); 11 August 2017 (J); 12 August 2017 (K); 13 August 2017 (L).
Figure 7. Observed MODIS NDVI (x-axis) versus predicted MODIS NDVI (y-axis) for the following monitoring days: Day 1 (29 July 2017); Day 2 (30 July 2017); Day 3 (31 July 2017); Day 4 (3 August 2017); Day 5 (4 August 2017); Day 6 (5 August 2017); Day 7 (6 August 2017); Day 8 (8 August 2017); Day 9 (10 August 2017); Day 10 (11 August 2017); Day 11 (12 August 2017); and Day 12 (13 August 2017).
Figure 8. MODIS NDVI images estimated from products derived from Landsat-like images, with a spatial resolution of 30 m, on the following days: 29 July 2017 (A); 30 July 2017 (B); 31 July 2017 (C); 3 August 2017 (D); 4 August 2017 (E); 5 August 2017 (F); 6 August 2017 (G); 8 August 2017 (H); 10 August 2017 (I); 11 August 2017 (J); 12 August 2017 (K); 13 August 2017 (L).
Table 1. Description of bands common to Enhanced Thematic Mapper Plus (ETM+), Operational Land Imager (OLI) and MultiSpectral Instrument (MSI) sensors used to model the Moderate Resolution Imaging Spectroradiometer (MODIS)-Normalized Difference Vegetation Index (NDVI).
ETM+                                      OLI                                       MSI
B    Wavelength (µm)   Spatial Res. (m)   B    Wavelength (µm)   Spatial Res. (m)   B     Wavelength (µm)   Spatial Res. (m)
B1   0.45–0.52         30                 B2   0.45–0.51         30                 B02   0.45–0.52         10
B2   0.52–0.60         30                 B3   0.53–0.59         30                 B03   0.54–0.58         10
B3   0.63–0.69         30                 B4   0.64–0.67         30                 B04   0.65–0.68         10
B4   0.76–0.90         30                 B5   0.85–0.88         30                 B08   0.78–0.90         10
B: bands; Spatial Res.: spatial resolution.
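The NDVI itself follows the classical formulation of Rouse et al. [60], NDVI = (NIR − Red)/(NIR + Red). From the bands in Table 1, the red/NIR pairs are B3/B4 for ETM+, B4/B5 for OLI, and B04/B08 for MSI. A minimal example of the pixel-wise calculation (illustrative code, not from the study):

```python
import numpy as np


def ndvi(red, nir):
    """Pixel-wise NDVI = (NIR - Red) / (NIR + Red) from reflectance arrays."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

# Red/NIR band pairs from Table 1:
#   ETM+: red = B3,  NIR = B4
#   OLI:  red = B4,  NIR = B5
#   MSI:  red = B04, NIR = B08
```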
