Article

An Improved Image Fusion Approach Based on the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model

1 State Key Laboratory of Resources and Environmental Information System, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, 11A, Datun Road, Beijing 100101, China
2 Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100101, China
3 Department of Geography and Resource Management, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China
4 Department of Geography, The Ohio State University, Columbus, OH 43210, USA
5 College of Forestry, Oregon State University, 231 Peavy Hall, Corvallis, OR 97331, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2013, 5(12), 6346-6360; https://doi.org/10.3390/rs5126346
Submission received: 1 August 2013 / Revised: 8 November 2013 / Accepted: 11 November 2013 / Published: 26 November 2013

Abstract

High spatiotemporal resolution satellite imagery is useful for natural resource management and for monitoring land-use and land-cover change and ecosystem dynamics. However, acquisitions from a single satellite can be limited due to trade-offs between spatial and temporal resolution. The spatial and temporal adaptive reflectance fusion model (STARFM) and the enhanced STARFM (ESTARFM) were developed to produce new images with high spatial and high temporal resolution using images from multiple sources. Nonetheless, these models have some shortcomings, especially in the procedure for searching spectrally similar neighbor pixels. In order to improve the models' capacity and accuracy, we developed a modified version of ESTARFM (mESTARFM) and tested the performance of the two approaches (ESTARFM and mESTARFM) in three study areas located in Canada and China at different time intervals. The results show that mESTARFM improved the accuracy of the simulated reflectance at fine resolution to some extent.

1. Introduction

Considerable progress has been achieved in research on biophysical plant properties using optical sensors [1]. For instance, blending remotely sensed data from multiple sources may provide more useful information than data from any single sensor [2], thereby enhancing the capability of remote sensing for the monitoring of land cover and land use change, vegetation phenology and ecological disturbance [3–6]. Data fusion may also be applied to fill in "missing days" when dealing with satellite imagery with longer revisit periods [7]. Traditional image fusion methods can produce new multispectral high-resolution images with different spatial and spectral characteristics, such as principal component analysis (PCA) [8–10], intensity-hue-saturation (IHS) [11,12], the Brovey transform [13], the synthetic variable ratio (SVR) [13] and the wavelet transform [9,14]. To obtain reflectance data with high spatiotemporal resolution, a newly developed image fusion modeling approach, the spatial and temporal adaptive reflectance fusion model (STARFM), has been proven capable of blending Landsat Thematic Mapper (TM)/Enhanced TM Plus (ETM+) images (with high spatial resolution) and Moderate-resolution Imaging Spectroradiometer (MODIS) images (with high temporal resolution) to simulate daily surface reflectance at Landsat spatial resolution and MODIS temporal frequency [1,5,15]. The STARFM method has been applied to a conifer-dominated area in central British Columbia, Canada, to characterize landscape-level forest structure and dynamics [1]. Watts et al. [16] combined the STARFM method and random forest classification models to produce synthetic images, and their results showed that this method improves the accuracy of conservation tillage classification. A combination of bilateral filtering and STARFM was applied to generate high spatiotemporal resolution land surface temperature for urban heat island monitoring [17]. The STARFM method has also been applied to the generation and evaluation of gross primary productivity (GPP) by blending Landsat and MODIS image data [18,19] and to the monitoring of changes in vegetation phenology [4]. Considering sensor observation differences between cover types, STARFM has also been modified in the calculation of the weight function of the fusion model [20].
The quality of synthetic imagery produced by STARFM depends on the geographic region [6]. To address this limitation, the enhanced STARFM (ESTARFM) was developed based on the trend in remotely sensed reflectance between two dates and on spectral unmixing theory, and it performs better in heterogeneous and changing landscapes [6]. The performance of STARFM and ESTARFM was assessed in two landscapes with contrasting spatial and temporal dynamics [21].
Although ESTARFM improved upon STARFM in heterogeneous and changing regions, it does not account for spatial autocorrelation, as pixel simulations rely on information from the entire image, including the overall standard deviation of reflectance and the estimated number of land cover types. In ESTARFM, pixels are selected as similar pixels if they satisfy the following rule in all fine-resolution images:
\[ \left| F(x_i, y_j, T_m, B_n) - F(x_{w/2}, y_{w/2}, T_m, B_n) \right| \le \sigma(B_n) / m \tag{1} \]
where F is the fine-resolution reflectance, (x_i, y_j) indicates the pixel location, (x_{w/2}, y_{w/2}) is the central pixel of the local moving window of size w, T_m is the acquisition date of the image, σ(B_n) is the standard deviation of the reflectance of the n-th band (B_n) over the whole image and m is the estimated number of land cover types.
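To make the rule concrete, the following illustrative sketch (Python/NumPy; not code from the ESTARFM release, and the array names are hypothetical) evaluates Equation (1) for one band over one moving window:

    import numpy as np

    def estarfm_similar_pixels(window, sigma_band, n_classes):
        """Equation (1): mask of pixels spectrally similar to the central pixel.

        window     : 2-D fine-resolution reflectance for one band in the window
        sigma_band : standard deviation of that band over the whole image
        n_classes  : estimated number of land cover types in the image (m)
        """
        w = window.shape[0]
        center = window[w // 2, w // 2]
        # Similar if the reflectance difference does not exceed sigma(B_n) / m.
        return np.abs(window - center) <= sigma_band / n_classes

In ESTARFM the resulting masks are intersected across both input fine-resolution dates, so a neighbor must pass the test on every available Landsat image.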
However, according to the first law of geography, autocorrelation decreases with distance [22]; thus, the overall image information may not adequately represent the status of a simulated pixel within the local moving window. Furthermore, it is important to select similar neighbor pixels accurately, because doing so improves the spectral blending in the subsequent steps of the image fusion. This study hypothesizes that the candidates for spectrally similar neighbor pixels should be drawn from the pixels within the local moving window.
To address this shortcoming, we modified the procedure of similar pixel selection in the ESTARFM model (mESTARFM). The performance of the new mESTARFM model was tested across three study regions, where synthetic Landsat-like images (taking the near-infrared (NIR), red and green bands as examples) were simulated by mESTARFM. We first introduce the details of the modified procedure of similar pixel selection, then evaluate the performance of mESTARFM and compare it to the original ESTARFM in the three study areas at different time intervals, and finally discuss and conclude the findings of this study.

2. Methods

Landsat TM/ETM+ and MODIS reflectance data were used in the study to simulate synthetic images with high spatial and temporal resolution. We selected Landsat and MODIS because they have similar bandwidths, although the MODIS bandwidths are narrower than those of TM/ETM+ [5]. For our simulation, the default size and the increment step of the moving window are set to 1,500 m × 1,500 m [5] (Landsat TM/ETM+: 50 pixels × 50 pixels; MODIS: 3 pixels × 3 pixels) and 60 m, respectively. The effect of pixels far from the central pixel is assumed to be negligible, and the maximum moving window size is set to 3,000 m to save computing time. The number of classes within the moving window is determined by the land cover types found within this window. The acceptable levels of uncertainty for Landsat and MODIS surface reflectance were set to 0.002 for the visible bands and 0.005 for the NIR band [5].
mESTARFM data processing includes three major steps: (i) two pairs of fine-resolution reflectance and land cover data (optional) are used to select spectrally similar neighbor pixels for the central pixel within the local moving window; (ii) the weighting and conversion coefficients for the central pixel are calculated from these similar pixels; and (iii) the weighting and conversion coefficients are applied to the available coarse-resolution reflectance to produce the fine-resolution reflectance for the simulation date. These three steps are repeated with increasing local moving window size until the simulated fine-resolution reflectance most similar to the observed fine-resolution reflectance is obtained.
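For orientation, the window-size schedule described above can be written out as follows (a sketch only; the 30-m and 500-m pixel sizes are the nominal Landsat and MOD09A1 resolutions, and the loop body only marks where steps (i) to (iii) would run):

    # Window sizes: start at 1,500 m, grow in 60-m steps, stop at 3,000 m.
    FINE_PIXEL, COARSE_PIXEL = 30, 500   # Landsat / MODIS pixel size (m)

    for win_m in range(1500, 3000 + 60, 60):
        fine_px = round(win_m / FINE_PIXEL)               # 1,500 m -> 50 Landsat pixels
        coarse_px = max(1, round(win_m / COARSE_PIXEL))   # 1,500 m -> 3 MODIS pixels
        # Steps (i)-(iii) run here for each window size; the prediction whose
        # fine-resolution reflectance best matches the observation is kept.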

2.1. Study Areas

Three case study regions were selected (Figure 1), one in China and two in Canada. The first, forested, study region (12 km × 12 km, 54°N, 104°W, Figure 1a) is located near Saskatoon, Canada. This study site is characterized by rapid changes in phenology and a short growing season [5]. The dominant vegetation is coniferous forest. The second study region (30 km × 30 km, Figure 1b) is located in Jiangxi province, China, centered on the Qianyanzhou (QYZ) flux tower (26.74159°N, 115.05777°E), which belongs to the Chinese Ecosystem Research Network (CERN). The dominant vegetation type is coniferous forest, containing the species Pinus massoniana, Pinus elliottii, Cunninghamia lanceolata and Schima superba [23]. The climate in this region is warm and humid, with an annual mean temperature of 17.9 °C and annual rainfall of 1,485 mm [24]. The third study region (15 km × 15 km, Figure 1c) is located in Quebec, Canada, centered on the Eastern Old Black Spruce (EOBS) flux tower (49.69247°N, 74.34204°W), which belongs to the Fluxnet Canada Research Network/Canadian Carbon Program (CCP). The vegetation in the area is dominated by coniferous boreal forest, containing the species Picea mariana and Pinus banksiana [25]. The climate in this region is cold and humid, with an annual mean temperature of 0 °C and annual rainfall of 1,461 mm [26]. The time intervals for these three areas gradually increase, and the performance of the approach was tested at monthly, annual and multi-year time intervals.

2.2. Satellite Data and Preprocessing

Landsat TM/ETM+ data were acquired from the United States Geological Survey (USGS) EarthExplorer (http://edcsns17.cr.usgs.gov/NewEarthExplorer/). The 8-day MODIS surface reflectance products at 500-m resolution (MOD09A1) were acquired from the National Aeronautics and Space Administration (NASA) Reverb portal (http://reverb.echo.nasa.gov/reverb/). Only Collection 5 MODIS data were used for the study. The data used in the study are shown in Table 1.
Both the fine- and coarse-resolution images were preprocessed before the calculation. In this study, the top-of-atmosphere Landsat TM/ETM+ data were atmospherically corrected and converted to surface reflectance using the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) [27]. Cloud and snow masking was performed using the automated cloud-cover assessment (ACCA) algorithm [28,29] built into LEDAPS. The atmospheric correction method in LEDAPS for Landsat TM/ETM+ data is based on the 6S model, which is also used for MODIS data [5,6]. The MODIS surface reflectance product data (MOD09A1) were re-projected, clipped and resampled (to 30 m, using the bilinear approach) to the same extent as the Landsat TM/ETM+ data using the MODIS Reprojection Tools.
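MOD09A1 is distributed as HDF tiles in the sinusoidal projection, and the MODIS Reprojection Tools were used here. As an illustration only, a comparable re-projection, clipping and bilinear resampling onto the 30-m Landsat grid could be written with rasterio (file names are hypothetical, and the band of interest is assumed to have been exported to GeoTIFF first):

    import numpy as np
    import rasterio
    from rasterio.warp import reproject, Resampling

    # Warp a MOD09A1 band (500 m, sinusoidal) onto the grid, CRS and extent
    # of a reference Landsat surface-reflectance band.
    with rasterio.open("landsat_sr_band4.tif") as ref, \
         rasterio.open("mod09a1_band2.tif") as src:
        out = np.empty((ref.height, ref.width), dtype="float32")
        reproject(
            source=rasterio.band(src, 1),
            destination=out,
            dst_transform=ref.transform,     # aligns and clips to the Landsat extent
            dst_crs=ref.crs,
            resampling=Resampling.bilinear,  # bilinear for continuous reflectance
        )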

2.3. Land Cover Data

Two different land cover datasets were used in this study. For the study regions in Canada, we used the land cover products developed by the Canadian Forest Service and Canadian Space Agency joint project, Earth Observation for Sustainable Development of Forests (EOSD), which are based on Landsat 7 ETM+ data and represent circa-2000 conditions. The land cover map represents 23 unique land cover classes mapped at a spatial resolution of 25 m [30]. The accuracy of the land cover classification was found to be 77%, approaching the target accuracy of 80%, with a 90% confidence interval of 74–80% [31]. Land cover products for the study area were acquired from the EOSD data portal (http://www4.saforah.org/eosdlcp/nts_prov.html). For the study region in China, we used the land cover products (100 m) at a scale of 1:250,000 developed by the Earth System Scientific Data Sharing Network (ESSDSN). The qualitative accuracy of this land cover classification was found to be 80–90% [32]. The land cover classification product for 2005 was acquired from the ESSDSN portal (http://www.geodata.cn/Portal/metadata/viewMetadata.jsp?id=100101-11860). The land cover data were resampled to the Landsat spatial resolution (30 m) using the nearest neighbor method.
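The nearest-neighbor resampling of the categorical land cover maps onto the 30-m Landsat grid could be done in the same way as the MODIS resampling sketch in Section 2.2, switching only the resampling method (again a sketch with hypothetical file names, not the workflow actually used):

    import numpy as np
    import rasterio
    from rasterio.warp import reproject, Resampling

    # Nearest-neighbour resampling keeps the integer class codes intact;
    # bilinear interpolation would invent non-existent class values.
    with rasterio.open("landsat_sr_band4.tif") as ref, \
         rasterio.open("eosd_land_cover.tif") as lc:
        classes = np.empty((ref.height, ref.width), dtype=lc.dtypes[0])
        reproject(
            source=rasterio.band(lc, 1),
            destination=classes,
            dst_transform=ref.transform,
            dst_crs=ref.crs,
            resampling=Resampling.nearest,
        )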

2.4. Implementation of mESTARFM

We selected similar neighbor pixels according to the standard deviation and the number of land cover types within a local moving window. The number of classes was determined from the land cover data. If land cover data are not available, the selection algorithm is adjusted accordingly (the land cover condition below is dropped). The details are described as follows.
In this study, the land cover data were added as auxiliary information for searching for spectrally similar neighbor pixels, and the threshold method is modified as follows. If the pixels within the local moving window satisfy the following rule for the n-th band, they are selected as spectrally similar neighbor pixels for the central pixel:
\[
\begin{cases}
\left| F(x_i, y_j, T_m, B_n) - F(x_{w/2}, y_{w/2}, T_m, B_n) \right| \le \sigma(b_n) / s \\
L(x_i, y_j) - L(x_{w/2}, y_{w/2}) = 0
\end{cases} \tag{2}
\]
where σ(b_n) is the standard deviation of the n-th band (B_n) reflectance within the local moving window, s is the number of land cover types within the local moving window and L is the land cover type. Thus, the similar neighbor pixels are selected based on the threshold of the standard deviation and under the condition that the candidate pixel has the same land cover type as the central pixel of the local moving window. The input data include two pairs of fine- and coarse-resolution images, one coarse-resolution image at the simulation date and, optionally, land cover data; the land cover data are applied to the similar-pixel selection when available.
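For comparison with the sketch after Equation (1), a corresponding illustration of the modified rule in Equation (2) (again hypothetical Python/NumPy, not the authors' implementation) is:

    import numpy as np

    def mestarfm_similar_pixels(window, lc_window, n_classes_local):
        """Equation (2): similar-pixel mask using window-local statistics.

        window          : fine-resolution reflectance in the local moving window
        lc_window       : land cover codes for the same window, or None
        n_classes_local : number of land cover types within the window (s)
        """
        w = window.shape[0]
        center = window[w // 2, w // 2]
        # Local standard deviation replaces the whole-image value used by ESTARFM.
        mask = np.abs(window - center) <= window.std() / n_classes_local
        if lc_window is not None:
            # Keep only neighbours sharing the central pixel's land cover type.
            mask &= lc_window == lc_window[w // 2, w // 2]
        return mask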
The reflectance of the central pixel (x_{w/2}, y_{w/2}) at the simulation date, T_p, can be calculated as
\[ F(x_{w/2}, y_{w/2}, T_p, B_n) = F(x_{w/2}, y_{w/2}, T_0, B_n) + \sum_{k=1}^{K} W_{i,j,k} \times V_{i,j,k} \times \left( C(x_i, y_j, T_p, B_n) - C(x_i, y_j, T_0, B_n) \right) \tag{3} \]
where T_0 is the base date, C is the coarse-resolution reflectance and K is the total number of spectrally similar neighbor pixels, including the central pixel. The weighting factor, W, and the conversion coefficient, V, are calculated following Zhu et al. [6].
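As an illustration of how Equation (3) is applied once the similar pixels, weights and conversion coefficients are in hand (a sketch with hypothetical inputs, with W and V assumed to have been precomputed as in [6]):

    import numpy as np

    def predict_from_base(fine_base, coarse_base, coarse_pred, weights, coeffs,
                          similar_idx, center):
        """Equation (3): predict the central pixel's fine reflectance at T_p.

        fine_base   : fine-resolution reflectance at the base date T_0
        coarse_base : coarse-resolution reflectance (resampled to 30 m) at T_0
        coarse_pred : coarse-resolution reflectance at the prediction date T_p
        weights, coeffs : W and V for the K similar pixels (1-D arrays)
        similar_idx : (rows, cols) indices of the K similar pixels
        center      : (row, col) of the central pixel
        """
        rows, cols = similar_idx
        # Coarse-resolution change between T_0 and T_p at each similar pixel.
        delta = coarse_pred[rows, cols] - coarse_base[rows, cols]
        return fine_base[center] + np.sum(weights * coeffs * delta)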
According to Equation (3), either the fine-resolution reflectance at the beginning date T_b or that at the ending date T_e can be used to calculate the fine-resolution reflectance at the simulation date, T_p, denoted F_b(x_{w/2}, y_{w/2}, T_p, B_n) and F_e(x_{w/2}, y_{w/2}, T_p, B_n), respectively. By combining these two simulated results, the simulated reflectance of the central pixel is expected to be more accurate, with greater weight given to the result derived from the Landsat image acquired closer to the simulation date [6]. The temporal weighting factor is calculated as:
\[ \beta_t = \frac{1 \Big/ \left| \sum_{j=1}^{w} \sum_{i=1}^{w} C(x_i, y_j, T_t, B_n) - \sum_{j=1}^{w} \sum_{i=1}^{w} C(x_i, y_j, T_p, B_n) \right|}{\sum_{k=b,e} \left( 1 \Big/ \left| \sum_{j=1}^{w} \sum_{i=1}^{w} C(x_i, y_j, T_k, B_n) - \sum_{j=1}^{w} \sum_{i=1}^{w} C(x_i, y_j, T_p, B_n) \right| \right)}, \quad t = b, e \tag{4} \]
Thus, the final result for the simulated central pixel’s reflectance can be calculated as,
\[ F(x_{w/2}, y_{w/2}, T_p, B_n) = \beta_b \times F_b(x_{w/2}, y_{w/2}, T_p, B_n) + \beta_e \times F_e(x_{w/2}, y_{w/2}, T_p, B_n) \tag{5} \]
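A minimal sketch of Equations (4) and (5), assuming the two base-date predictions and the window-summed coarse reflectances are already available (hypothetical variable names):

    def combine_predictions(pred_b, pred_e, coarse_sum_b, coarse_sum_e, coarse_sum_p):
        """Equations (4)-(5): temporally weight and blend the two predictions.

        pred_b, pred_e : F_b and F_e for the central pixel from Equation (3)
        coarse_sum_*   : coarse reflectance summed over the window at T_b, T_e, T_p
        """
        # Less coarse-resolution change between a base date and T_p means a larger
        # weight for that base date (a small epsilon could guard a zero difference).
        inv_b = 1.0 / abs(coarse_sum_b - coarse_sum_p)
        inv_e = 1.0 / abs(coarse_sum_e - coarse_sum_p)
        beta_b = inv_b / (inv_b + inv_e)
        beta_e = inv_e / (inv_b + inv_e)
        return beta_b * pred_b + beta_e * pred_e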

2.5. Accuracy Assessment

We assessed the accuracy of our simulations by comparing the simulated values to a Landsat observation from the same date that was set aside and not used in the simulation. A linear regression model (simulated versus observed reflectance) was used to assess the ESTARFM and mESTARFM fusion models. The RMSE (root mean squared error) measures the differences between the reflectance values simulated by the image fusion models (ESTARFM, mESTARFM) and the observed reflectance. The MAE (mean absolute error) measures how close the simulated reflectance values are to the observed reflectance. R2 measures the goodness of fit of the linear regression between simulated and observed reflectance. A two-sided t-test (p-value) of the simulated and observed reflectance values is used to determine whether there is a statistically significant difference between them.
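These statistics can be computed directly from the flattened image pairs; the sketch below (Python with NumPy/SciPy) is one plausible reading, in which the t-test is treated as a paired two-sided test, an assumption the text does not spell out:

    import numpy as np
    from scipy import stats

    def accuracy_metrics(simulated, observed):
        """RMSE, MAE, R^2 of the linear fit and two-sided t-test p-value."""
        sim, obs = np.ravel(simulated), np.ravel(observed)
        rmse = np.sqrt(np.mean((sim - obs) ** 2))
        mae = np.mean(np.abs(sim - obs))
        slope, intercept, r, p_reg, stderr = stats.linregress(sim, obs)
        t_stat, p_ttest = stats.ttest_rel(sim, obs)   # paired, two-sided by default
        return {"RMSE": rmse, "MAE": mae, "R2": r ** 2, "p-value": p_ttest,
                "slope": slope, "intercept": intercept}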

3. Results

Figure 2 shows NIR-red-green composites (displayed as red-green-blue) for the three study areas. The sub-images in the upper and lower rows are the Landsat and MODIS images, respectively. The synthetic image at Landsat spatial resolution was simulated from two pairs of Landsat and MODIS images and one MODIS image at the simulation date. For example, a Landsat-like image for 11 July 2001 can be simulated from two pairs of Landsat and MODIS images acquired on 24 May and 12 August 2001 and one MODIS image acquired on 11 July 2001. The performance can be assessed by comparing the simulated image with the observed Landsat TM/ETM+ image.
Figure 3 shows the original Landsat ETM+ images and the images simulated by ESTARFM and mESTARFM, respectively. For each study area, the two simulated images produced by the two approaches were overall close to the observed image.
The fusion model performance was assessed by regression-based methods. For the monthly changes over a forested area, Figure 4 shows the per-pixel comparison between the simulated and observed reflectance values on 11 July 2001 for the green, red and NIR bands of Landsat ETM+. The simulated pixel values were close to the observed ones, as indicated by the 1:1 line, demonstrating that both ESTARFM and mESTARFM can capture the reflectance changes. Table 2 (monthly changes over a forested area) shows the statistics of the linear regression analysis between the simulated and observed reflectance values on a pixel basis. The images simulated for 11 July 2001 by both approaches are closer to the observed image than the input images from 24 May and 12 August, suggesting that the changes between the dates are captured from the MODIS images when estimating the Landsat-like image.
For the annual changes of the heterogeneous region around the Qianyanzhou flux site, Figure 5 shows the scatter plots of simulated and observed reflectance values on 13 April 2002 for the NIR, red and green bands of Landsat ETM+. The rows represent the NIR, red and green bands, respectively, and the linear regression parameters (R2, slope, intercept) are shown in Figure 5. Table 2 (annual changes of the heterogeneous region) shows the detailed statistical parameters of the linear regression between simulated and observed reflectance for the QYZ study area.
For the changes of the heterogeneous region over several years around the EOBS flux site, Figure 6 shows the scatter plots of the observed reflectance and the reflectance simulated using ESTARFM and mESTARFM. The rows represent the NIR, red and green bands of Landsat TM, respectively. Table 2 shows the detailed statistical parameters of the linear regression between the simulated and observed reflectance for the EOBS study area.

4. Discussion

This study introduces an updated version of the image fusion model (mESTARFM) that can blend multi-source remotely sensed data at fine spatial resolution. The simulation capacity in three study areas with different time intervals was tested using the ESTARFM and mESTARFM models. When compared with the original Landsat images, the synthetic Landsat-like images produced by the updated model have higher accuracy than those produced by ESTARFM.
Specifically, mESTARFM obtains improved synthetic images because it uses more information around the central pixel and additional ancillary data, i.e., land cover data. As shown in the lower row of Figure 3b, both versions of the model largely capture the annual changes, while the updated version (mESTARFM) preserves more spatial detail than ESTARFM. As shown in Figures 3c and 6, the performance of the simulated Landsat-like images is relatively poor for the green and red bands. This may be caused by the long time intervals and the lack of suitable land cover data near the simulation date, which would lead to lower accuracy if any major land disturbances occurred. The simulated reflectance of the green and red bands contains a number of pixels with zero values, while the corresponding observed reflectance values are non-zero, which may be because ice was present only at the simulation date. Considering the scatter plots for the three study areas (Figures 4–6), the points from mESTARFM lie closer to the 1:1 line, indicating that the simulated Landsat-like reflectance is more similar to the observed Landsat reflectance than that of ESTARFM.
The most important improvement of mESTARFM is the selection of spectrally similar neighbor pixels within a moving window of optimal size. ESTARFM uses information from the entire image to select the similar pixels within the local moving window; however, this image-wide information is not necessarily relevant locally. According to Tobler's first law of geography, pixels that are nearer are more related to the central pixel, and the effect of pixels outside the local moving window on the central pixel can be considered negligible. The size of the local moving window is increased until the simulated fine-resolution image most similar to the observed one is obtained. In addition, the land cover data were used as auxiliary information for the selection of spectrally similar neighbor pixels. The criteria for determining spectrally similar neighbor pixels include the threshold of the standard deviation, the number of land cover types and the difference between the central pixel and neighbor pixels within the local moving window.
Although mESTARFM offers these improvements, it still has limitations: (i) land cover data for a given date may not be sufficient for selecting spectrally similar neighbor pixels, since the land cover type may change from date 1 to date 2; and (ii) the land cover data may have little effect if their reference date is far from the simulation date. Therefore, further research may need to focus on the following issues: (i) combining the land cover classification data for date 1 (beginning) and date 2 (ending), so that it can be determined for each pixel whether the land cover type changes between the two dates [20]; more accurate remotely sensed data can be simulated if more accurate similar neighbor pixels are selected according to a better classification algorithm; (ii) establishing annual spectral reflectance curves for typical vegetation types, which may help derive more accurate conversion coefficients; and (iii) a combined Spatial Temporal Adaptive Algorithm for mapping Reflectance Change (STAARCH) [15]/mESTARFM approach, which could better indicate disturbance or changes in spectral reflectance and thereby improve the predictions.

5. Conclusions

In conclusion, the mESTARFM model enhances the capability of image fusion to blend multi-source remote sensing images and produce new images with high spatial and temporal resolution. Compared with ESTARFM, mESTARFM modifies the threshold for similar neighbor pixel selection, optimizes the moving window size and introduces land cover data into the calculation. These modifications help preserve more spatial detail in the simulated image. In this paper, the updated image fusion model (mESTARFM) was tested with Landsat TM/ETM+ and MODIS data; it may also be applicable to other similar instruments. The produced images are useful for research on natural resource management, land use/land cover change and ecological dynamics at high spatial and temporal resolution.

Acknowledgments

This research is supported by the Research Plan of the State Key Laboratory of Resources and Environmental Information System (LREIS), Chinese Academy of Sciences (CAS) (grant no. O88RA900PA), the research grant of the Key Project for the Strategic Science Plan in the Institute of Geographic Sciences and Natural Resources Research (IGSNRR), CAS (grant no. 2012ZD010), the research grants (41071059 and 41271116) funded by the National Natural Science Foundation of China, the Strategic Priority Research Program "Climate Change: Carbon Budget and Related Issues" of the Chinese Academy of Sciences (grant no. XDA05040403), the research grants (2010CB950704, 2010CB950902 and 2010CB950904) under the Global Change Program of the Chinese Ministry of Science and Technology and the "One Hundred Talents" program funded by the Chinese Academy of Sciences. We thank the USGS Earth Resources Observation Systems (EROS) data center for providing the Landsat data and the Land Processes Distributed Active Archive Center (LP-DAAC) and the MODIS science team for providing free MODIS products.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hilker, T.; Wulder, M.A.; Coops, N.C.; Seitz, N.; White, J.C.; Gao, F.; Masek, J.G.; Stenhouse, G. Generation of dense time series synthetic Landsat data through data blending with MODIS using a spatial and temporal adaptive reflectance fusion model. Remote Sens. Environ 2009, 113, 1988–1999. [Google Scholar]
  2. Pohl, C.; van Genderen, J.L. Multisensor image fusion in remote sensing: concepts, methods and applications. Int. J. Remote Sens 1998, 19, 823–854. [Google Scholar]
  3. Camps-Valls, G.; Gomez-Chova, L.; Munoz-Mari, J.; Rojo-Alvarez, J.L.; Martinez-Ramon, M. Kernel-based framework for multitemporal and multisource remote sensing data classification and change detection. IEEE Trans. Geosci. Remote Sens 2008, 46, 1822–1835. [Google Scholar]
  4. Bhandari, S.; Phinn, S.; Gill, T. Preparing Landsat Image Time Series (LITS) for monitoring changes in vegetation phenology in Queensland, Australia. Remote Sens 2012, 4, 1856–1886. [Google Scholar]
  5. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens 2006, 44, 2207–2218. [Google Scholar]
  6. Zhu, X.L.; Chen, J.; Gao, F.; Chen, X.H.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ 2010, 114, 2610–2623. [Google Scholar]
  7. Chen, J.; Zhu, X.; Vogelmann, J.E.; Gao, F.; Jin, S. A simple and effective method for filling gaps in Landsat ETM+ SLC-off images. Remote Sens. Environ 2011, 115, 1053–1064. [Google Scholar]
  8. Metwalli, M.R.; Nasr, A.H.; Allah, O.S.F.; El-Rabaie, S.; Abd El-Samie, F.E. Satellite image fusion based on principal component analysis and high-pass filtering. J. Opt. Soc. Am. A 2010, 27, 1385–1394. [Google Scholar]
  9. Naidu, V.P.S.; Raol, J.R. Pixel-level image fusion using wavelets and principal component analysis. Def. Sci. J 2008, 58, 338–352. [Google Scholar]
  10. Riasati, V.R.; Zhou, H. Reduced data projection slice image fusion using principal component analysis. Proc. SPIE 2005, 5813, 1–15. [Google Scholar]
  11. Choi, M. A new intensity-hue-saturation fusion approach to image fusion with a tradeoff parameter. IEEE Trans. Geosci. Remote Sens 2006, 44, 1672–1682. [Google Scholar]
  12. Tu, T.-M.; Su, S.-C.; Shyu, H.-C.; Huang, P.S. Efficient intensity-hue-saturation-based image fusion with saturation compensation. Opt. Eng 2001, 40, 720–728. [Google Scholar]
  13. Zhang, Y. Understanding image fusion. Photogramm. Eng. Remote Sens 2004, 70, 657–661. [Google Scholar]
  14. Nunez, J.; Otazu, X.; Fors, O.; Prades, A.; Pala, V.; Arbiol, R. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Trans. Geosci. Remote Sens 1999, 37, 1204–1211. [Google Scholar]
  15. Hilker, T.; Wulder, M.A.; Coops, N.C.; Linke, J.; McDermid, G.; Masek, J.G.; Gao, F.; White, J.C. A new data fusion model for high spatial- and temporal-resolution mapping of forest disturbance based on Landsat and MODIS. Remote Sens. Environ 2009, 113, 1613–1627. [Google Scholar]
  16. Watts, J.D.; Powell, S.L.; Lawrence, R.L.; Hilker, T. Improved classification of conservation tillage adoption using high temporal and synthetic satellite imagery. Remote Sens. Environ 2011, 115, 66–75. [Google Scholar]
  17. Huang, B.; Wang, J.; Song, H.; Fu, D.; Wong, K. Generating high spatiotemporal resolution land surface temperature for urban heat island monitoring. IEEE Geosci. Remote Sens. Lett 2013, 10, 1–5. [Google Scholar]
  18. Singh, D. Generation and evaluation of gross primary productivity using Landsat data through blending with MODIS data. Int. J. Appl. Earth Obs. Geoinf 2011, 13, 59–69. [Google Scholar]
  19. Chen, B.; Ge, Q.; Fu, D.; Yu, G.; Sun, X.; Wang, S.; Wang, H. A data-model fusion approach for upscaling gross ecosystem productivity to the landscape scale based on remote sensing and flux footprint modelling. Biogeosciences 2010, 7, 2943–2958. [Google Scholar]
  20. Shen, H.; Wu, P.; Liu, Y.; Ai, T.; Wang, Y.; Liu, X. A spatial and temporal reflectance fusion model considering sensor observation differences. Int. J. Remote Sens 2013, 34, 4367–4383. [Google Scholar]
  21. Emelyanova, I.V.; McVicar, T.R.; Van Niel, T.G.; Li, L.T.; van Dijk, A.I.J.M. Assessing the accuracy of blending Landsat–MODIS surface reflectances in two landscapes with contrasting spatial and temporal dynamics: A framework for algorithm selection. Remote Sens. Environ 2013, 133, 193–209. [Google Scholar]
  22. Tobler, W.R. A computer movie simulating urban growth in the Detroit region. Econ. Geogr 1970, 46, 234–240. [Google Scholar]
  23. Liu, Y.F.; Yu, G.R.; Wen, X.F.; Wang, Y.H.; Song, X.; Li, J.; Sun, X.M.; Yang, F.T.; Chen, Y.R.; Liu, Q.J. Seasonal dynamics of CO2 fluxes from subtropical plantation coniferous ecosystem. Sci. China Ser. D 2006, 49, 99–109. [Google Scholar]
  24. Huang, M.; Ji, J.J.; Li, K.R.; Liu, Y.F.; Yang, F.T.; Tao, B. The ecosystem carbon accumulation after conversion of grasslands to pine plantations in subtropical red soil of south China. Tellus B 2007, 59, 439–448. [Google Scholar]
  25. Richardson, A.D.; Hollinger, D.Y.; Burba, G.G.; Davis, K.J.; Flanagan, L.B.; Katul, G.G.; Munger, J.W.; Ricciuto, D.M.; Stoy, P.C.; Suyker, A.E.; Verma, S.B.; Wofsy, S.C. A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes. Agric. For. Meteorol 2006, 136, 1–18. [Google Scholar]
  26. Yuan, F.; Arain, M.A.; Barr, A.G.; Black, T.A.; Bourque, C.P.A.; Coursolle, C.; Margolis, H.A.; McCaughey, J.H.; Wofsy, S.C. Modeling analysis of primary controls on net ecosystem productivity of seven boreal and temperate coniferous forests across a continental transect. Glob. Change Biol 2008, 14, 1765–1784. [Google Scholar]
  27. Masek, J.G.; Vermote, E.F.; Saleous, N.E.; Wolfe, R.; Hall, F.G.; Huemmrich, K.F.; Gao, F.; Kutler, J.; Lim, T.K. A Landsat surface reflectance dataset for North America, 1990–2000. IEEE Geosci. Remote Sens. Lett 2006, 3, 68–72. [Google Scholar]
  28. Irish, R.R.; Barker, J.L.; Goward, S.N.; Arvidson, T. Characterization of the Landsat-7 ETM+ automated cloud-cover assessment (ACCA) algorithm. Photogramm. Eng. Remote Sens 2006, 72, 1179–1188. [Google Scholar]
  29. Irish, R.R. Landsat 7 automatic cloud cover assessment. Proc. SPIE 2000, 4049, 348–355. [Google Scholar]
  30. Wulder, M.A.; Dechka, J.A.; Gillis, M.A.; Luther, J.E.; Hall, R.J.; Beaudoin, A. Operational mapping of the land cover of the forested area of Canada with Landsat data: EOSD land cover program. For. Chron 2003, 79, 1075–1083. [Google Scholar]
  31. Wulder, M.A.; White, J.C.; Magnussen, S.; McDonald, S. Validation of a large area land cover product using purpose-acquired airborne video. Remote Sens. Environ 2007, 106, 480–491. [Google Scholar]
  32. Yunqiang, Z.; Runda, L.; Min, F.; Song, J. Research on Earth System Scientific Data Sharing Platform Based on SOA. In Proceedings of the WRI World Congress on Software Engineering (WCSE '09), Los Angeles, CA, USA, 19–21 May 2009; Volume 1, pp. 77–83.
Figure 1. Land cover map of three study areas.
Figure 2. Near-infrared (NIR)-red-green composite of Landsat Enhanced Thematic Mapper Plus (ETM+) (upper row) and MODIS (lower row) surface reflectance images. The labels (a–c) represent the monthly changes over a forested area (study area 1, Canada), the annual changes of a heterogeneous region (study area 2, China) and the changes of a heterogeneous region (study area 3, Canada) over several years, respectively.
Figure 3. Comparison of the observed image, the simulated image ((a) monthly changes over a forest area (study area 1, Canada); (b) annual changes of heterogeneous region (study area 2, China); (c) changes of heterogeneous region (study area 3, Canada) over several years) by ESTARFM and the modified enhanced spatial and temporal adaptive reflectance fusion model (mESTARFM) at three study areas.
Figure 4. Scatter plot of observed and simulated reflectance by mESTARFM and ESTARFM for the NIR, red and green band (a–f, monthly changes over a forested area).
Figure 5. Scatter plot of the observed and simulated reflectance by mESTARFM and ESTARFM for the NIR, red and green band (a–f, annual changes around the Qianyanzhou (QYZ) flux site).
Figure 6. Scatter plot of the observed and simulated reflectance by mESTARFM and ESTARFM for the NIR, red and green band (a–f, changes over several years around the EOBS flux site).
Table 1. The Landsat/ Moderate-resolution Imaging Spectroradiometer (MODIS) data used for the three study areas.
Study Area | Landsat Date | Landsat Path/Row | MODIS Date | MODIS Tile
Monthly changes over a forested area | 24 May 2001 | 37/22 | 17–24 May 2001 | h11v03
 | 11 July 2001 | | 4–11 July 2001 |
 | 12 August 2001 | | 5–12 August 2001 |
Annual changes of heterogeneous region | 19 October 2001 | 122/41 | 16–23 October 2001 | h28v06
 | 13 April 2002 | | 7–14 April 2002 |
 | 7 November 2002 | | 1–8 November 2002 |
Changes of heterogeneous region over several years | 13 May 2001 | 16/25 | 9–16 May 2001 | h13v04
 | 8 May 2005 | | 1–8 May 2005 |
 | 8 September 2009 | | 6–13 September 2009 |
Table 2. Statistical parameters of the linear regression analysis between simulated and observed reflectance over three study areas. RMSE, root mean squared error; MAE, mean absolute error.
Study Area | Band | ESTARFM R2 | ESTARFM RMSE | ESTARFM MAE | ESTARFM p-value | mESTARFM R2 | mESTARFM RMSE | mESTARFM MAE | mESTARFM p-value | Window Size (m)
Monthly changes over a forested area | Green | 0.7835 | 0.0046 | −0.0003 | <0.0001 | 0.8010 | 0.0045 | 0.0005 | <0.0001 | 1,500
 | Red | 0.8632 | 0.0050 | 0.0031 | <0.0001 | 0.8671 | 0.0050 | 0.0031 | <0.0001 | 3,000
 | NIR | 0.9185 | 0.0165 | −0.0004 | <0.0001 | 0.9478 | 0.0131 | 0.0042 | <0.0001 | 3,000
Annual changes of heterogeneous region | Green | 0.6649 | 0.0121 | 0.0025 | <0.0001 | 0.7175 | 0.0112 | 0.0029 | <0.0001 | 1,500
 | Red | 0.6360 | 0.0206 | 0.0082 | <0.0001 | 0.6785 | 0.0197 | 0.0089 | <0.0001 | 1,500
 | NIR | 0.4564 | 0.0291 | 0.0036 | <0.0001 | 0.4647 | 0.0284 | 0.0035 | <0.0001 | 1,500
Changes of heterogeneous region over several years | Green | 0.1763 | 0.0205 | −0.0060 | <0.0001 | 0.2347 | 0.0159 | −0.0050 | <0.0001 | 3,000
 | Red | 0.2540 | 0.1591 | 0.1513 | <0.0001 | 0.2968 | 0.1591 | 0.1513 | <0.0001 | 3,000
 | NIR | 0.7388 | 0.0363 | 0.0106 | <0.0001 | 0.8067 | 0.0307 | 0.0134 | <0.0001 | 3,000
