Multisource Remote Sensing Data Fusion and Applications in Vegetation Monitoring

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Forest Remote Sensing".

Deadline for manuscript submissions: closed (31 July 2018) | Viewed by 94106

Special Issue Editors

Dr. Feng Gao
Guest Editor
USDA-ARS Hydrology and Remote Sensing Laboratory, BARC-West, Beltsville, MD 20705-2350, USA
Interests: crop and vegetation condition monitoring; remote sensing data fusion

Dr. Martha Anderson
Guest Editor
USDA-ARS Hydrology and Remote Sensing Laboratory, BARC-West, Beltsville, MD 20705-2350, USA
Interests: water, energy and carbon flux mapping using a data fusion approach; drought monitoring and early detection

Special Issue Information

Dear Colleagues,

Fusing remote sensing data from multiple sensors has greatly benefited many applications that require more extensive temporal, spatial or spectral information than any individual sensor can provide. With the increasing availability of new remote sensing data sources, we have ever-expanding choices of datasets that can be included in fusion systems. This special issue will focus on the development of new data fusion approaches and their applications in vegetation monitoring. In particular, we emphasize fusion as an approach to improving temporal and spatial detail in remote sensing products used for operational monitoring and decision making, including issues relating to real-time processing. Approaches that fuse remote sensing data from sensors at different spatial and temporal resolutions are welcome. New fusion approaches that use data products from Landsat, Sentinel-2, MODIS, VIIRS and geostationary sensors are encouraged. The special issue will focus on applications that demonstrate the added value of data fusion approaches, especially in land cover and land use change detection, vegetation phenology mapping, biomass and yield estimation, forest disturbance detection, crop condition monitoring, crop water use and drought monitoring.

Dr. Feng Gao
Dr. Martha Anderson
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Data Fusion
  • Land Cover and Land Use Change
  • Vegetation Phenology
  • Crop Water Use
  • Crop Condition
  • Forest Disturbance

Published Papers (9 papers)


Research


27 pages, 6405 KiB  
Article
An Improved Spatial and Temporal Reflectance Unmixing Model to Synthesize Time Series of Landsat-Like Images
by Jianhang Ma, Wenjuan Zhang, Andrea Marinoni, Lianru Gao and Bing Zhang
Remote Sens. 2018, 10(9), 1388; https://doi.org/10.3390/rs10091388 - 31 Aug 2018
Cited by 31 | Viewed by 4970
Abstract
The trade-off between spatial and temporal resolution limits the acquisition of dense time series of Landsat images and restricts the ability to properly monitor land surface dynamics over time. Spatiotemporal image fusion methods provide a cost-efficient alternative for generating dense time series of Landsat-like images for applications that require both high spatial and high temporal resolution. The Spatial and Temporal Reflectance Unmixing Model (STRUM) is a spatial-unmixing-based spatiotemporal image fusion method, but the temporal change image it derives lacks spectral variability and spatial detail. This study proposed an improved STRUM (ISTRUM) architecture that tackles this problem by taking the spatial heterogeneity of the land surface into consideration and integrating spectral mixture analysis of Landsat images. Sensor differences and applicability with multiple Landsat and coarse-resolution image pairs (L-C pairs) are also considered in ISTRUM. Experimental results indicate that the image derived by ISTRUM contains more spectral variability and spatial detail than the one derived by STRUM, and that the accuracy of the fused Landsat-like image is improved. Endmember variability and sliding-window size are factors that influence the accuracy of ISTRUM; both were assessed by setting them to different values. Results indicate that ISTRUM is robust to endmember variability and that publicly published endmembers (Global SVD) for Landsat images can be applied; only the sliding-window size strongly influences the accuracy of ISTRUM. In addition, ISTRUM was compared with the Spatial Temporal Data Fusion Approach (STDFA), the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), the Hybrid Color Mapping (HCM) and the Flexible Spatiotemporal DAta Fusion (FSDAF) methods. ISTRUM is superior to STDFA, slightly superior to HCM when the temporal change is significant, comparable with ESTARFM and slightly inferior to FSDAF. However, the computational efficiency of ISTRUM is much higher than that of ESTARFM and FSDAF, so ISTRUM can be applied to synthesize Landsat-like images on a global scale.
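For readers unfamiliar with spatial-unmixing-based fusion, the short Python sketch below illustrates the core step that STRUM-type methods share: a least-squares unmixing that attributes the observed coarse-resolution temporal change to the land-cover classes inside each coarse pixel. It is a minimal illustration of the general idea under simplified assumptions, not the ISTRUM algorithm; the function name, variable names and toy data are hypothetical.

    import numpy as np

    def unmix_temporal_change(frac, coarse_change):
        """Estimate per-class temporal change from coarse-pixel observations.

        frac          : (n_coarse, n_classes) fractional cover of each class
                        within every coarse pixel (from a fine-resolution map).
        coarse_change : (n_coarse,) observed coarse-resolution reflectance change
                        between the base (pair) date and the prediction date.
        Returns the per-class change that an unmixing-based method would
        distribute back to the fine-resolution pixels.
        """
        # Ordinary least squares; real implementations add constraints and
        # solve the system inside a sliding window around each coarse pixel.
        class_change, *_ = np.linalg.lstsq(frac, coarse_change, rcond=None)
        return class_change

    # Toy example: 4 coarse pixels covering 2 classes.
    frac = np.array([[0.7, 0.3], [0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])
    true_change = np.array([0.05, -0.02])              # hypothetical per-class change
    coarse_change = frac @ true_change
    print(unmix_temporal_change(frac, coarse_change))  # ~[0.05, -0.02]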

25 pages, 5009 KiB  
Article
Improving Spatial-Temporal Data Fusion by Choosing Optimal Input Image Pairs
by Donghui Xie, Feng Gao, Liang Sun and Martha Anderson
Remote Sens. 2018, 10(7), 1142; https://doi.org/10.3390/rs10071142 - 19 Jul 2018
Cited by 40 | Viewed by 5265
Abstract
Spatial and temporal data fusion approaches have been developed to fuse reflectance imagery from Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS), which have complementary spatial and temporal sampling characteristics. The approach relies on using Landsat and MODIS image pairs that are acquired on the same day to estimate Landsat-scale reflectance on other MODIS dates. Previous studies have revealed that the accuracy of data fusion results partially depends on the input image pair used, but the selection of the optimal image pair to achieve better prediction of surface reflectance has not been fully evaluated. This paper assesses the impacts of Landsat-MODIS image pair selection on the accuracy of the predicted land surface reflectance using the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) over different landscapes. MODIS images from the Aqua and Terra platforms were paired with images from the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 Operational Land Imager (OLI) to form different image pair combinations. The accuracy of the predicted surface reflectance at 30 m resolution was evaluated against the observed Landsat data in terms of mean absolute difference, root mean square error and correlation coefficient. Results show that MODIS pair images with smaller view zenith angles produce better predictions. As expected, for short prediction periods, the image pair closer to the prediction date produces better results. For prediction dates distant from the pair date, the predictability depends on the temporal and spatial variability of land cover type and phenology. The prediction accuracy for forests is higher than for crops in our study areas. The Normalized Difference Vegetation Index (NDVI) for crops is overestimated during the non-growing season when an input image pair from the growing season is used, while NDVI is slightly underestimated during the growing season when an image pair from the non-growing season is used. Two automatic pair selection strategies were evaluated. Results show that selecting the MODIS pair date whose image correlates most strongly with the MODIS image on the prediction date produces more accurate predictions than the nearest-date strategy. This study demonstrates that data fusion results can be improved if appropriate image pairs are used.
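The correlation-based pair selection strategy evaluated in this paper can be summarized in a few lines: choose the candidate pair date whose MODIS image correlates most strongly with the MODIS image on the prediction date. The Python sketch below is a minimal illustration of that rule, not the authors' implementation; the function and variable names are assumptions.

    import numpy as np

    def select_pair_by_correlation(candidate_modis, target_modis):
        """Return the pair date whose MODIS image best matches the prediction date.

        candidate_modis : dict mapping a candidate pair date to a 2-D MODIS array
        target_modis    : 2-D MODIS array acquired on the prediction date
        """
        target = target_modis.ravel()
        scores = {
            date: np.corrcoef(img.ravel(), target)[0, 1]  # Pearson correlation
            for date, img in candidate_modis.items()
        }
        return max(scores, key=scores.get)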

23 pages, 4346 KiB  
Article
Daily Retrieval of NDVI and LAI at 3 m Resolution via the Fusion of CubeSat, Landsat, and MODIS Data
by Rasmus Houborg and Matthew F. McCabe
Remote Sens. 2018, 10(6), 890; https://doi.org/10.3390/rs10060890 - 07 Jun 2018
Cited by 105 | Viewed by 13241
Abstract
Constellations of CubeSats are emerging as a novel observational resource with the potential to overcome the spatiotemporal constraints of conventional single-sensor satellite missions. With a constellation of more than 170 active CubeSats, Planet has realized daily global imaging in the RGB and near-infrared (NIR) at ~3 m resolution. While superior in terms of spatiotemporal resolution, these data are not radiometrically equivalent to those of larger conventional satellites, and variations in orbital configuration and sensor-specific spectral response functions represent an additional limitation. Here, we exploit a CubeSat Enabled Spatio-Temporal Enhancement Method (CESTEM) to optimize the utility and quality of very high-resolution CubeSat imaging. CESTEM represents a multipurpose data-driven scheme for radiometric normalization, phenology reconstruction, and spatiotemporal enhancement of biophysical properties via synergistic use of CubeSat, Landsat 8, and MODIS observations. Phenological reconstruction based on original CubeSat Normalized Difference Vegetation Index (NDVI) data derived from top-of-atmosphere or surface reflectances is shown to be susceptible to large uncertainties. In comparison, a CESTEM-corrected NDVI time series is able to clearly resolve several consecutive multicut alfalfa growing seasons over a six-month period, in addition to providing precise timing of key phenological transitions. CESTEM adopts a random forest machine-learning approach for producing Landsat-consistent leaf area index (LAI) at the CubeSat scale with a relative mean absolute difference on the order of 4–6%. The CubeSat-based LAI estimates highlight the spatial resolution advantage and the capability to provide temporally consistent and time-critical insights into within-field vegetation dynamics, the rate of vegetation green-up, and the timing of harvesting events that are otherwise missed by 8- to 16-day Landsat imagery.
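As an illustration of the machine-learning step described above (not CESTEM itself), the sketch below trains a random forest to translate CubeSat reflectance into Landsat-consistent LAI and defines the relative mean absolute difference used as the accuracy metric. All data shown are synthetic placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical training samples: CubeSat reflectance in four bands
    # (blue, green, red, NIR) paired with Landsat-derived LAI at the same locations.
    rng = np.random.default_rng(0)
    X_train = rng.random((500, 4))
    y_train = 6 * X_train[:, 3] - 2 * X_train[:, 2] + rng.normal(0, 0.1, 500)

    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
    lai_cubesat = rf.predict(rng.random((100, 4)))   # LAI predicted at CubeSat pixels

    def relative_mad(pred, ref):
        """Relative mean absolute difference (%), the accuracy metric quoted above."""
        pred, ref = np.asarray(pred), np.asarray(ref)
        return 100 * np.mean(np.abs(pred - ref)) / np.mean(ref)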

20 pages, 3467 KiB  
Article
Spatial Rice Yield Estimation Based on MODIS and Sentinel-1 SAR Data and ORYZA Crop Growth Model
by Tri D. Setiyono, Emma D. Quicho, Luca Gatti, Manuel Campos-Taberner, Lorenzo Busetto, Francesco Collivignarelli, Francisco Javier García-Haro, Mirco Boschetti, Nasreen Islam Khan and Francesco Holecz
Remote Sens. 2018, 10(2), 293; https://doi.org/10.3390/rs10020293 - 14 Feb 2018
Cited by 55 | Viewed by 9979
Abstract
Crop insurance is a viable solution to reduce the vulnerability of smallholder farmers to risks from pest and disease outbreaks, extreme weather events, and market shocks that threaten their household food and income security. In developing and emerging countries, the implementation of area yield-based insurance, the form of crop insurance preferred by clients and industry, is constrained by the limited availability of detailed historical yield records. Remote-sensing technology can help to fill this gap by providing an unbiased and replicable source of the needed data. This study demonstrates and validates a methodology for rice yield estimation based on remote sensing and a crop growth model, with the aim of generating historical yield data for application in crop insurance. The developed system combines MODIS and SAR-based remote-sensing data to generate spatially explicit inputs for a rice crop growth model. MODIS reflectance data were used to generate multitemporal LAI maps using an inverted Radiative Transfer Model (RTM). SAR data were used to generate rice area maps with MAPScape-RICE, which were then used to mask the LAI map products for further processing, including smoothing with a logistic function and yield simulation with the ORYZA crop growth model, facilitated by the Rice Yield Estimation System (Rice-YES). Results from this study indicate that assimilating MODIS and SAR data into a crop growth model can generate well-adjusted yield estimates that adequately describe the spatial yield distribution in the study area while reliably replicating official yield data, with a root mean square error (RMSE) of 0.30 and 0.46 t ha⁻¹ (normalized root mean square error, NRMSE, of 5% and 8%) for the 2016 spring and summer seasons, respectively, in the Red River Delta of Vietnam, as evaluated at the district level of aggregation. The information from remote-sensing technology was also useful for identifying geographic locations with peculiarities in the timing of rice establishment, leaf growth, and yield level, thus contributing to the spatial targeting of further investigation and interventions needed to reduce yield gaps.
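The error statistics quoted above are straightforward to reproduce from paired simulated and official yields; a minimal Python sketch with hypothetical district-level numbers follows.

    import numpy as np

    def rmse(sim, obs):
        """Root mean square error between simulated and official yields (t/ha)."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return np.sqrt(np.mean((sim - obs) ** 2))

    def nrmse(sim, obs):
        """RMSE normalized by the mean observed yield, expressed in percent."""
        return 100 * rmse(sim, obs) / np.mean(np.asarray(obs, float))

    # Hypothetical district-level yields (t/ha)
    obs = [5.8, 6.1, 6.4, 5.9]
    sim = [6.0, 5.9, 6.6, 6.1]
    print(rmse(sim, obs), nrmse(sim, obs))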

4725 KiB  
Article
Forest Types Classification Based on Multi-Source Data Fusion
by Ming Lu, Bin Chen, Xiaohan Liao, Tianxiang Yue, Huanyin Yue, Shengming Ren, Xiaowen Li, Zhen Nie and Bing Xu
Remote Sens. 2017, 9(11), 1153; https://doi.org/10.3390/rs9111153 - 10 Nov 2017
Cited by 35 | Viewed by 7857
Abstract
Forests play an important role in global carbon, hydrological and atmospheric cycles and provide a wide range of valuable ecosystem services. Timely and accurate forest-type mapping is essential for forest resource inventories supporting forest management, conservation biology and ecological restoration. Despite the efforts and progress made in forest cover mapping using multi-source remotely sensed data, modeling at fine spatial, temporal and spectral resolution for distinguishing forest types is still limited. In this paper, we proposed a novel spatial-temporal-spectral fusion framework combining spatial-spectral fusion and spatial-temporal fusion. Addressing the shortcomings of commonly used spatial-spectral fusion models, we proposed a spatial-spectral fusion model called the Segmented Difference Value method (SEGDV) to generate fine spatial- and spectral-resolution images by blending the Chinese Environment 1A series satellite (HJ-1A) multispectral Charge-Coupled Device (CCD) and Hyperspectral Imager (HSI) data. A Hierarchical Spatiotemporal Adaptive Fusion Model (HSTAFM) was used to conduct spatial-temporal fusion and generate fine spatial- and temporal-resolution images by blending the HJ-1A CCD and Moderate Resolution Imaging Spectroradiometer (MODIS) data. The spatial-spectral-temporal information was then used simultaneously to distinguish various forest types. Experimental results of the classification comparison conducted in the Gan River source nature reserves show that the proposed method enhances spatial, temporal and spectral information effectively: the fused dataset yielded the highest classification accuracy of 83.6%, compared with classification results derived from single Landsat-8 (69.95%), single spatial-spectral fusion (70.95%) and single spatial-temporal fusion (78.94%) images, indicating that the proposed method is valid and applicable for forest type classification.

24078 KiB  
Article
SCaMF–RM: A Fused High-Resolution Land Cover Product of the Rocky Mountains
by Nicolás Rodríguez-Jeangros, Amanda S. Hering, Timothy Kaiser and John E. McCray
Remote Sens. 2017, 9(10), 1015; https://doi.org/10.3390/rs9101015 - 30 Sep 2017
Cited by 3 | Viewed by 4868
Abstract
Land cover (LC) products, derived primarily from satellite spectral imagery, are essential inputs for environmental studies because LC is a critical driver of processes involved in hydrology, ecology, and climatology, among others. However, existing LC products each have different temporal and spatial resolutions and different LC classes that rarely provide the detail required by these studies. Using multiple existing LC products, we implement our Spatiotemporal Categorical Map Fusion (SCaMF) methodology over a large region of the Rocky Mountains (RM), encompassing sections of six states, to create a new LC product, SCaMF–RM. To do this, we adapt SCaMF to address the prediction of LC in large space–time regions that present nonstationarities, and we add more flexibility in the LC classifications of the predicted product. SCaMF–RM is produced at two high spatial resolutions, 30 and 50 m, and at a yearly frequency for the 30-year period 1983–2012. When multiple products are available in time, we illustrate how SCaMF–RM captures relevant information from the different LC products and improves upon flaws observed in other products. Future work includes an exhaustive validation not only of SCaMF–RM but also of all input LC products.

5437 KiB  
Article
Evaluating Sentinel-2 and Landsat-8 Data to Map Successional Forest Stages in a Subtropical Forest in Southern Brazil
by Camile Sothe, Cláudia Maria de Almeida, Veraldo Liesenberg and Marcos Benedito Schimalski
Remote Sens. 2017, 9(8), 838; https://doi.org/10.3390/rs9080838 - 13 Aug 2017
Cited by 97 | Viewed by 14106
Abstract
Studies designed to discriminate different successional forest stages play a strategic role in forest management, forest policy and environmental conservation in tropical environments. The discrimination of different successional forest stages is still a challenge due to the spectral similarity among the classes concerned. Considering this, the objective of this paper was to investigate the performance of Sentinel-2 and Landsat-8 data for discriminating successional forest stages in a patch located in a subtropical portion of the Atlantic Rain Forest in Southern Brazil, with the aid of two machine learning algorithms and relying on spectral reflectance data selected over two seasons and attributes derived from them. Random Forest (RF) and Support Vector Machine (SVM) were used as classifiers with different subsets of predictor variables (multitemporal spectral reflectance, textural metrics and vegetation indices). All the experiments reached satisfactory results, with Kappa indices ranging from 0.9, obtained with Landsat-8 spectral reflectance alone and the SVM algorithm, to 0.98, obtained with Sentinel-2 spectral reflectance alone, also with the SVM algorithm. The Landsat-8 data showed a significant increase in accuracy when other predictor variables were included in the classification process besides the pure spectral reflectance bands. The SVM and RF classification methods performed similarly in general. For the RF method, the texture means of the red-edge and SWIR bands were ranked as the most important attributes for the classification of Sentinel-2 data, while attributes resulting from multitemporal bands, textural metrics of SWIR bands and vegetation indices were the most important ones in the Landsat-8 data classification.
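A minimal sketch of the accuracy assessment described above: train a Random Forest on a stack of predictor variables and compute the Kappa index on held-out samples. The data here are synthetic, and the twelve predictors only stand in for the multitemporal reflectance, texture and vegetation-index attributes used in the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(0)
    X = rng.random((300, 12))            # pixels x predictor variables (synthetic)
    y = rng.integers(0, 3, 300)          # 0/1/2 = successional stage classes

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:200], y[:200])
    kappa = cohen_kappa_score(y[200:], clf.predict(X[200:]))
    print(f"Kappa index: {kappa:.2f}")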

22422 KiB  
Article
Landsat 15-m Panchromatic-Assisted Downscaling (LPAD) of the 30-m Reflective Wavelength Bands to Sentinel-2 20-m Resolution
by Zhongbin Li, Hankui K. Zhang, David P. Roy, Lin Yan, Haiyan Huang and Jian Li
Remote Sens. 2017, 9(7), 755; https://doi.org/10.3390/rs9070755 - 22 Jul 2017
Cited by 29 | Viewed by 10525
Abstract
The Landsat 15-m Panchromatic-Assisted Downscaling (LPAD) method to downscale Landsat-8 Operational Land Imager (OLI) 30-m data to Sentinel-2 MultiSpectral Instrument (MSI) 20-m resolution is presented. The method first downscales the Landsat-8 30-m OLI bands to 15 m using the spatial detail provided by the Landsat-8 15-m panchromatic band and then reprojects and resamples the downscaled 15-m data into registration with Sentinel-2A 20-m data. The LPAD method is demonstrated using pairs of contemporaneous Landsat-8 OLI and Sentinel-2A MSI images sensed less than 19 min apart over diverse geographic environments. The LPAD method is shown to introduce less spectral and spatial distortion and to provide visually more coherent data than conventional bilinear and cubic convolution resampled 20-m Landsat OLI data. In addition, results for a pair of Landsat-8 and Sentinel-2A images sensed one day apart suggest that image fusion should be undertaken with caution when the images are acquired under different atmospheric conditions. The LPAD source code is available on GitHub for public use.
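To convey the general idea of pan-assisted downscaling (this is not the LPAD algorithm itself, which the authors distribute on GitHub), the sketch below applies a simple intensity-modulation scheme: each 30-m pixel is replicated onto the 15-m grid and scaled by the ratio of the pan band to its 30-m block average. The array names and the 2x2 block assumption are illustrative only.

    import numpy as np

    def pan_modulation_downscale(band30, pan15):
        """Toy downscaling of a 30-m band to 15 m using the 15-m pan band.

        band30 : (H, W) 30-m multispectral band
        pan15  : (2H, 2W) co-registered 15-m panchromatic band
        """
        h, w = band30.shape
        band15 = np.kron(band30, np.ones((2, 2)))              # replicate to 15 m
        pan30 = pan15.reshape(h, 2, w, 2).mean(axis=(1, 3))    # degrade pan to 30 m
        pan30_up = np.kron(pan30, np.ones((2, 2)))
        # Inject pan spatial detail through the local pan ratio.
        return band15 * pan15 / np.maximum(pan30_up, 1e-6)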

Review


23 pages, 59143 KiB  
Review
Spatiotemporal Fusion of Multisource Remote Sensing Data: Literature Survey, Taxonomy, Principles, Applications, and Future Directions
by Xiaolin Zhu, Fangyi Cai, Jiaqi Tian and Trecia Kay-Ann Williams
Remote Sens. 2018, 10(4), 527; https://doi.org/10.3390/rs10040527 - 29 Mar 2018
Cited by 324 | Viewed by 21522
Abstract
Satellite time series with high spatial resolution are critical for monitoring land surface dynamics in heterogeneous landscapes. Although remote sensing technologies have developed rapidly in recent years, data acquired from a single satellite sensor are often unable to satisfy these demands. As a result, the integrated use of data from different sensors has become increasingly popular in the past decade. Many spatiotemporal data fusion methods have been developed to produce synthesized images with both high spatial and temporal resolution from two types of satellite images: frequent coarse-resolution images and sparse fine-resolution images. These methods were designed based on different principles and strategies, and therefore show different strengths and limitations. This diversity makes it difficult for users to choose an appropriate method for their specific applications and data sets. To this end, this review paper surveys the literature on current spatiotemporal data fusion methods, categorizes existing methods, discusses the principles underlying them, summarizes their potential applications, and proposes possible directions for future studies in this field.
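Most of the surveyed methods refine the same baseline relation: the fine-resolution prediction equals the fine-resolution image from the pair date plus the coarse-resolution temporal change. The sketch below states that relation, assuming all images are co-registered and resampled to the fine grid; individual methods differ in how they weight, unmix, or learn this change term.

    import numpy as np

    def naive_spatiotemporal_fusion(fine_t1, coarse_t1, coarse_t2):
        """Baseline prediction of the fine image at t2 from a single image pair.

        fine_t1, coarse_t1 : fine- and coarse-resolution images on the pair date
        coarse_t2          : coarse-resolution image on the prediction date
        All arrays share the same (fine) grid after resampling.
        """
        return np.asarray(fine_t1) + (np.asarray(coarse_t2) - np.asarray(coarse_t1))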
