
Surface Water Mapping from Suomi NPP-VIIRS Imagery at 30 m Resolution via Blending with Landsat Data

1 College of Urban and Environmental Sciences, Northwest University, Xi’an 710127, China
2 Commonwealth Scientific and Industrial Research Organization (CSIRO) Land and Water, Canberra, ACT 2600, Australia
3 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430072, China
4 Key Laboratory of Geographic Information Science, Ministry of Education, East China Normal University, Shanghai 200062, China
5 Beijing Laboratory of Water Resource Security, Capital Normal University, Beijing 100048, China
* Author to whom correspondence should be addressed.
Academic Editors: Magaly Koch and Prasad S. Thenkabail
Remote Sens. 2016, 8(8), 631; https://doi.org/10.3390/rs8080631
Received: 21 May 2016 / Revised: 6 July 2016 / Accepted: 27 July 2016 / Published: 29 July 2016

Abstract

Monitoring the dynamics of surface water using remotely sensed data generally requires both high spatial and high temporal resolutions. One effective and popular approach for achieving this is image fusion. This study adopts a widely accepted fusion model, the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), for blending the newly available coarse-resolution Suomi NPP-VIIRS data with Landsat data in order to derive water maps at 30 m resolution. The Pan-sharpening technique was applied to preprocess the NPP-VIIRS data to achieve a higher resolution before blending. The modified Normalized Difference Water Index (mNDWI) was employed for mapping surface water area. Two fusion alternatives, blend-then-index (BI) and index-then-blend (IB), were comparatively analyzed against a Landsat-derived water map. A case study of mapping Poyang Lake in China, where the water distribution pattern is complex and the water body changes frequently and drastically, was conducted. The results reveal that the IB method derives more accurate results with less computation time than the BI method. The BI method generally underestimates the water distribution, especially when the water area expands radically. The study has demonstrated the feasibility of blending NPP-VIIRS with Landsat for achieving surface water mapping at both high spatial and high temporal resolutions. It suggests that IB is superior to BI for water mapping in terms of efficiency and accuracy. The findings also provide an important reference for other blending work, such as image blending for vegetation cover monitoring.
Keywords: spatial and temporal fusion; mNDWI; OTSU thresholding; Pan-sharpening; ESTARFM

1. Introduction

Surface water bodies, such as rivers, lakes and reservoirs, are irreplaceable water resources for human life and ecosystems. Changes in surface water may result in disasters, such as floods or droughts. Measuring and monitoring surface water using remote sensing technology is therefore an essential topic in many research areas, including flood-related studies and water resource management. A variety of remote sensors have been applied in detecting and monitoring surface water since the 1970s, such as Landsat Multispectral Scanner (MSS) and Thematic Mapper (TM)/Enhanced Thematic Mapper Plus (ETM+) [1,2,3,4,5,6], NOAA Advanced Very High Resolution Radiometer (AVHRR) [7,8], and Moderate Resolution Imaging Spectroradiometer (MODIS) [9,10,11].
Recent advances in satellite remote sensing promise more choices of data sources for detecting surface water area. For example, the Operational Land Imager (OLI) onboard Landsat 8, with a spatial resolution of 30 m, is considered a continuation of the Landsat series. Its value in surface water detection has been tested [12]. The Suomi National Polar-orbiting Partnership (Suomi NPP) is a new generation of satellites intended to replace the Earth Observing System satellites [13]. Its Visible Infrared Imaging Radiometer Suite (VIIRS) provides a range of visible and infrared bands with spatial resolutions ranging from 375 m to 750 m. It is considered an upgrade and replacement of AVHRR and MODIS as a wide-swath multispectral sensor [14]. Its potential in surface water monitoring has been preliminarily proven [15].
Among the aforementioned remote sensors that have been applied for surface water detection, there is generally a trade-off between the spatial and temporal resolutions [16,17,18]. Medium- to high-resolution images, such as Landsat, are typically available fortnightly or less often, so they cannot capture rapid water dynamics. Lower spatial resolution sensors like MODIS and NPP-VIIRS scan the earth’s surface once or several times a day, but their coarse resolution hampers the accurate mapping of surface water bodies. Besides, there is usually also a trade-off between the spatial and spectral resolutions of remote sensors. For example, Landsat provides multispectral bands at a spatial resolution of 30 m, while its panchromatic band (Pan band) has a broader bandwidth but a spatial resolution as high as 15 m.
Image blending, or image fusion, is a remote sensing technique that aims to exploit more information by integrating data from different sources [19]. In general, it can be divided into two categories [20]. The first category, spatial-spectral fusion, is known as Pan-sharpening, which blends a lower resolution multispectral image with a higher resolution Pan image [21,22]. The second category, spatial-temporal fusion, aims to blend high spatial resolution data with high temporal resolution data to achieve both high spatial and high temporal resolutions. It has been proven to be an effective solution for spatial and temporal trade-off issue. Wu et al. [23] presented a spatio-temporal integrated temperature fusion model to blend multi-sources of remotely sensed data to achieve both higher spatial and temporal resolutions of land surface temperature observation. Gao et al. [24] proposed a Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to generate daily synthetic Landsat-like image by blending Landsat and MODIS images. This model was later modified by Zhu et al. [25] as Enhanced STARFM (ESTARFM), which improves the prediction in complex heterogeneous regions by involving additional input images and taking into account spatial heterogeneity within mixed pixels. Both STARFM and ESTARFM have been widely used due to their ease of implementation and reasonable algorithm complexity [26,27,28,29,30,31,32].
Jarihani et al. [33] blended Landsat and MODIS images to generate multispectral indices and conducted a comprehensive comparison of the “Index-then-Blend” (IB) and “Blend-then-Index” (BI) approaches. The BI approach first blends all necessary reflectance bands and then calculates indices from the blended result, while the IB approach first calculates the index images from the input data and then blends them to synthesize higher-resolution index images. They found that the IB approach consistently outperformed the BI approach for all nine tested indices, including the modified Normalized Difference Water Index (mNDWI), a popular index for delineating water from land [34,35]. However, their findings were based on the difference and deviation of the blended indices compared with reference indices calculated from actual Landsat images. They did not examine the performance of these indices in extracting surface water. Therefore, their results cannot directly reveal which approach is better for surface water mapping.
This study therefore aims to propose a method that combines Pan-sharpening and ESTARFM to blend Landsat and NPP-VIIRS data for mapping surface water at 30 m resolution. The objectives include (1) generating 30 m resolution mNDWI images by blending Landsat and NPP-VIIRS through the IB and BI approaches; (2) comparing the resultant mNDWI images of both approaches; and (3) evaluating the performance of the blending results against the actual reference Landsat image.

2. Materials and Methods

2.1. Study Area and Materials

2.1.1. Study Area

Poyang Lake is located in the north of Jiangxi Province, on the south bank of the middle and lower reaches of the Yangtze River (Figure 1). It is the largest freshwater lake in China, with a drainage area of 162,225 km2, about 97% of the total area of Jiangxi Province [36]. With an overall decreasing trend, the water area of Poyang Lake fluctuates drastically between wet and dry seasons. During the wet season from April to September, the floodplains are inundated and form a big lake with a water area larger than 3000 km2 [37]. During the dry season from October to March, the water area can shrink to less than 1000 km2, forming a narrow meandering channel. Levees have been built around the lake to control flooding and facilitate management [38], which has led to numerous small lakes and tributaries, especially during the dry season when the lake is divided into many connected and disconnected segments. For this reason, it is difficult to define the exact boundary of Poyang Lake [39]. This study selected the major water body of Poyang Lake (shown as the red rectangle in Figure 1) as the study area, where the water distribution pattern is complex and the water area changes frequently and drastically.

2.1.2. Materials

This study attempts to blend two types of remotely sensed data, Suomi NPP-VIIRS and Landsat OLI, to generate a fine-resolution surface water map. The Suomi NPP-VIIRS sensor provides as many as 22 visible and infrared bands with wavelengths ranging from 0.4 to 12.5 μm. Sixteen are Moderate-resolution bands (M-bands) and one is the Day/Night Band, all at a spatial resolution of 750 m. The remaining five are Imagery bands (I-bands) at a spatial resolution of 375 m, including a Red band (I1), a Near Infrared (NIR) band (I2), a Short-wave Infrared (SWIR) band (I3), a Medium-wave Infrared (MIR) band (I4) and a Long-wave Infrared (LIR) band (I5). Suomi NPP-VIIRS images were downloaded from the Comprehensive Large Array-Data Stewardship System of the National Oceanic and Atmospheric Administration (NOAA) (http://www.nsof.class.noaa.gov/).
Landsat OLI is the latest available Landsat sensor that provides a series of visible and infrared bands. Eight of them have a spatial resolution of 30 m, with an additional Pan band at 15 m resolution. Landsat images were downloaded from the United States Geological Survey (USGS)/Land Processes Distributed Active Archive Center (LPDAAC) (http://lpdaac.usgs.gov) and the Geospatial Data Cloud, Computer Network Information Center, Chinese Academy of Sciences (http://www.gscloud.cn). The Landsat OLI bands and their corresponding Suomi NPP-VIIRS bands are listed in Table 1. The other NPP-VIIRS bands are not listed here for brevity.
Three dates were selected considering the availability and quality of remotely sensed imagery, as well as the water fluctuation status of the study area. For each date, a pair of Suomi NPP-VIIRS and Landsat OLI images (Figure 2) was collected. The path/row of the Landsat images is 121/40. Five images, including Landsat and NPP-VIIRS data on 5 October 2013 and 1 May 2014, and an NPP-VIIRS image on 8 October 2014, were used as the blending inputs to generate a Landsat-like image at the prediction time (i.e., 8 October 2014). The actual Landsat image on this date was employed as the reference image to evaluate the blending result. All of these images are of high quality and nearly cloud free. They were all calibrated to surface reflectance and carefully co-registered.

2.2. Methods

The methodology consists of three procedures: Pan-sharpening of NPP-VIIRS M-bands, blending NPP-VIIRS with Landsat OLI, and evaluating the blending results. Two approaches were implemented in the blending procedure: the IB approach first calculates mNDWI and then blends the mNDWI images, while the BI approach first blends the multispectral images and then calculates mNDWI. The flowchart of the methodology is shown in Figure 3.

2.2.1. Pan-Sharpening of NPP-VIIRS

Pan-sharpening is commonly used to create a high-resolution band by blending high-resolution panchromatic data with lower spatial resolution multispectral bands. It was employed in this study to improve the spatial resolution of the Green band of NPP-VIIRS (M4), an essential band for calculating mNDWI, since no Green band is included in the NPP-VIIRS I-bands. The I3 band of NPP-VIIRS was used as the Pan band to Pan-sharpen the M-bands, improving their spatial resolution from 750 m to 375 m. The Pan-sharpened M4 band could then be used, together with the SWIR band (I3), to derive a 375 m resolution mNDWI image.
There are several well-known Pan-sharpening algorithms, such as Principal Component Analysis (PCA), Hue-Saturation-Value (HSV) and Gram-Schmidt (GS). Among them, GS has the highest fidelity, maintaining the consistency of image spectral characteristics before and after Pan-sharpening. It was therefore adopted in this study to Pan-sharpen NPP-VIIRS.
The GS algorithm is a component-substitution-based Pan-sharpening method that has been widely used in other studies [40,41]. GS first simulates a Pan band from the lower spatial resolution multispectral (MS) bands, generally by averaging them. Then, a GS transformation is performed on the simulated Pan band and the MS bands, with the simulated Pan band as the first band. After that, the first band is replaced by the high spatial resolution Pan band. Finally, an inverse GS transformation is applied to create the Pan-sharpened MS bands.
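The four GS steps above can be sketched in Python with NumPy. This is a minimal illustration under simplifying assumptions, not a production implementation: it assumes the MS bands have already been resampled to the Pan grid, the real Pan band is matched to the simulated one by simple mean/standard-deviation adjustment, and the function name gs_pansharpen is ours.

```python
import numpy as np

def gs_pansharpen(ms, pan):
    """Simplified Gram-Schmidt Pan-sharpening sketch.

    ms  : (bands, rows, cols) MS bands, already resampled to the Pan grid
    pan : (rows, cols) high-resolution Pan band
    """
    bands, rows, cols = ms.shape
    X = ms.reshape(bands, -1).astype(float)

    # Step 1: simulate a Pan band by averaging the MS bands.
    sim_pan = X.mean(axis=0)

    # Step 2: Gram-Schmidt orthogonalisation with the simulated Pan band
    # as the first component; keep the projection coefficients.
    comps = [sim_pan - sim_pan.mean()]
    coeffs = []
    for b in range(bands):
        v = X[b] - X[b].mean()
        g = []
        for c in comps:
            w = np.dot(v, c) / np.dot(c, c)
            g.append(w)
            v = v - w * c
        coeffs.append(g)
        comps.append(v)

    # Step 3: match the real Pan band to the simulated one (mean/std)
    # and substitute it for the first component.
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / p.std() * sim_pan.std()
    comps[0] = p

    # Step 4: inverse GS transform to reconstruct sharpened MS bands.
    out = np.empty_like(X)
    for b in range(bands):
        v = comps[b + 1].copy()
        for w, c in zip(coeffs[b], comps[:b + 1]):
            v = v + w * c
        out[b] = v + X[b].mean()
    return out.reshape(bands, rows, cols)
```

A useful sanity check of the inverse transform: if the supplied Pan band equals the simulated one (the band average), the output reproduces the input MS bands exactly.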

2.2.2. Blending NPP-VIIRS with Landsat OLI

There are two approaches that have been implemented in this procedure. Both consist of the same two components, calculating mNDWI and spatial-temporal blending. They differ from each other in the order of processing. The IB approach calculates the mNDWI first and then blends mNDWI images to generate a high-resolution mNDWI image. The BI approach blends multispectral bands and then calculates mNDWI image from the blending result.
(1) Spatial-temporal blending
The ESTARFM algorithm proposed by Zhu et al. [25] was employed in this study to blend NPP-VIIRS and Landsat OLI data. It was originally designed to blend five images, including two pairs of fine-resolution and coarse-resolution data (i.e., Landsat and MODIS) at two base times (t1 and t2) and a coarse-resolution image at the prediction time (tp), to synthesize a Landsat-like fine-resolution image. Before blending, all images have to be preprocessed to co-registered surface reflectance. There are four major steps.
Firstly, two fine-resolution images are used to search for pixels similar to the central pixel in a local moving window. Similar pixels are defined as those that have the same land-cover type as the central pixel. They can be identified by simply setting a threshold for the similarity, which is determined by the difference between their reflectance value and that of the central pixel.
Secondly, the weights (Wijk) of all similar pixels (xi, yj) at base time k are calculated based on their similarity and distance to the central pixel, as in Equation (1).
W_{ijk} = \frac{1/(S_{ijk} \times T_{ijk} \times D_{ijk})}{\sum_{i=1}^{w} \sum_{j=1}^{w} \sum_{k=1}^{2} 1/(S_{ijk} \times T_{ijk} \times D_{ijk})} (1)
where w is the size of the moving window, S_{ijk} is the spectral distance between the Landsat and MODIS data at time t_k (Equation (2), where L and M denote Landsat and MODIS reflectance, respectively), T_{ijk} is the temporal distance between the base time t_k and prediction time t_p (Equation (3)), and D_{ijk} is the relative spatial distance between the central pixel (x_{w/2}, y_{w/2}) and the similar pixel (x_i, y_j) (Equation (4)).
S_{ijk} = |L(x_i, y_j, t_k) - M(x_i, y_j, t_k)| (2)
T_{ijk} = |M(x_i, y_j, t_k) - M(x_i, y_j, t_p)| (3)
D_{ijk} = 1 + \sqrt{(x_{w/2} - x_i)^2 + (y_{w/2} - y_j)^2} / (w/2) (4)
Thirdly, the conversion coefficient V_{ij} for a similar pixel (x_i, y_j) is determined by a linear regression constructed from the fine- and coarse-resolution reflectance at the two base times (t1 and t2). V_{ij} indicates the trend of observations between the two base times.
Finally, W_{ijk} at one of the base times (i.e., W_{ij1} or W_{ij2}) and V_{ij} are used to calculate the fine-resolution reflectance from the coarse-resolution image at the desired prediction time (t_p). The final predicted fine-resolution reflectance of pixel (x, y) at time t_p is calculated as Equation (5).
F(x, y, t_p) = L(x, y, t_1) + \sum_{i=1}^{w} \sum_{j=1}^{w} W_{ij1} \times V_{ij} \times (M(x_i, y_j, t_p) - M(x_i, y_j, t_1)) (5)
A detailed description of the ESTARFM algorithm can be found in Zhu et al. [25].
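To make the four steps concrete, the following sketch predicts the fine-resolution value for a single central pixel using Equations (1)-(5). It is deliberately simplified: only one base pair is used (k = 1) and the conversion coefficients are fixed at V_ij = 1, whereas the full ESTARFM fits V_ij by regression over both pairs and processes whole images; the function name and parameters are illustrative.

```python
import numpy as np

def predict_fine(L1, M1, Mp, cx, cy, win=5, sim_thresh=0.05):
    """Single-pixel ESTARFM-style prediction (Eqs 1-5), one base pair.

    L1, M1 : fine/coarse reflectance at base time t1 (2-D arrays)
    Mp     : coarse reflectance at prediction time tp
    (cx, cy): central pixel; win: half window size
    """
    eps = 1e-6  # avoid division by zero in the weight terms
    rows, cols = L1.shape
    invs, pix = [], []
    for i in range(max(0, cx - win), min(rows, cx + win + 1)):
        for j in range(max(0, cy - win), min(cols, cy + win + 1)):
            # Step 1: similar pixels = fine reflectance close to centre
            if abs(L1[i, j] - L1[cx, cy]) > sim_thresh:
                continue
            S = abs(L1[i, j] - M1[i, j]) + eps          # Eq (2)
            T = abs(M1[i, j] - Mp[i, j]) + eps          # Eq (3)
            D = 1.0 + np.hypot(cx - i, cy - j) / win    # Eq (4)
            invs.append(1.0 / (S * T * D))
            pix.append((i, j))
    # Step 2: normalised weights, Eq (1)
    W = np.array(invs) / np.sum(invs)
    # Step 4 (Eq 5, with V_ij = 1): add the weighted coarse-resolution
    # change between t1 and tp to the fine-resolution base value
    delta = sum(w * (Mp[i, j] - M1[i, j]) for w, (i, j) in zip(W, pix))
    return L1[cx, cy] + delta
```

Note that when the coarse image shows no change between t1 and tp, the prediction reduces to the fine-resolution base value, as Equation (5) requires.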
(2) Calculating mNDWI
Xu [34] noticed that water has stronger absorption in the SWIR band than in the NIR band. He thus replaced the NIR band in NDWI [42] with the SWIR band, proposing the modified NDWI (mNDWI). It is defined as the normalized difference between the Green band and the SWIR band (Equation (6)).
\mathrm{mNDWI} = \frac{Green - SWIR}{Green + SWIR} (6)
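Equation (6) is a simple per-pixel computation; a NumPy sketch follows (the zero-denominator guard is our addition for fill values, and the function name is ours). Water pixels, being relatively bright in the Green band and dark in SWIR, yield positive values.

```python
import numpy as np

def mndwi(green, swir):
    """mNDWI (Equation 6): (Green - SWIR) / (Green + SWIR)."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    denom = green + swir
    # guard against zero denominators (e.g. fill/no-data pixels)
    safe = np.where(denom == 0.0, 1.0, denom)
    return np.where(denom == 0.0, 0.0, (green - swir) / safe)
```

For example, a pixel with Green reflectance 0.3 and SWIR reflectance 0.1 gives mNDWI = 0.2/0.4 = 0.5.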

2.2.3. Evaluating the Accuracy of Blending Results

The mNDWI calculated from the Landsat OLI image at the prediction time was employed as the referencing-mNDWI to evaluate the mNDWI images of both approaches. The difference between the resultant mNDWI values and the referencing-mNDWI values was used as a direct indicator of accuracy.
In order to evaluate the surface water detection ability of both resultant mNDWI images, a proper threshold has to be determined to extract the water area from an mNDWI image. In this study, a widely used dynamic thresholding method, the OTSU algorithm [43], was employed to find an optimal threshold for each mNDWI image, including the resultant mNDWI images of both approaches and the referencing-mNDWI image. The OTSU method assumes that the image contains two classes of pixels following a bi-modal histogram, and then calculates the optimum threshold separating the two classes so that their inter-class variance is maximal. It is a well-accepted algorithm for segmenting index images for land-cover object detection.
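The OTSU search can be sketched as below: each candidate threshold splits the pixels into water and non-water classes, and the candidate that maximizes the between-class variance is kept. The candidate range and step mirror those examined in Section 3.1 (−0.4 to 0.4, step 0.01); the function name is ours.

```python
import numpy as np

def otsu_threshold(mndwi_img, lo=-0.4, hi=0.4, step=0.01):
    """OTSU thresholding over a candidate range of mNDWI values."""
    x = np.asarray(mndwi_img, dtype=float).ravel()
    best_t, best_var = lo, -1.0
    for t in np.arange(lo, hi + step / 2, step):
        water = x > t
        w1 = water.mean()          # class weight (water)
        w0 = 1.0 - w1              # class weight (non-water)
        if w0 == 0 or w1 == 0:
            continue               # threshold leaves one class empty
        m0 = x[~water].mean()
        m1 = x[water].mean()
        var_between = w0 * w1 * (m0 - m1) ** 2   # inter-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bi-modal mNDWI histogram, the selected threshold falls between the two modes, where the between-class variance peaks.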
Surface water bodies were delineated from land through the OTSU algorithm. Using the water areas derived from the referencing-mNDWI as the reference, the accuracy of the water areas detected by both approaches was assessed by building confusion matrices on a pixel-by-pixel basis. Overall accuracy, commission and omission errors, as well as the Kappa coefficient, were calculated.
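The pixel-by-pixel evaluation can be sketched as follows. The metric definitions are the standard confusion-matrix forms for a binary (water/non-water) map; the helper name is ours.

```python
import numpy as np

def accuracy_metrics(pred, ref):
    """Confusion-matrix indices for binary water maps: overall accuracy,
    Kappa, and commission/omission errors for the water class."""
    pred = np.asarray(pred, bool).ravel()
    ref = np.asarray(ref, bool).ravel()
    tp = np.sum(pred & ref)        # water in both maps
    fp = np.sum(pred & ~ref)       # commission: mapped water, ref land
    fn = np.sum(~pred & ref)       # omission: mapped land, ref water
    tn = np.sum(~pred & ~ref)
    n = tp + fp + fn + tn
    oa = (tp + tn) / n
    # expected chance agreement, then Kappa = agreement beyond chance
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return {
        "overall_accuracy": oa,
        "kappa": kappa,
        "commission_error": fp / (tp + fp) if tp + fp else 0.0,
        "omission_error": fn / (tp + fn) if tp + fn else 0.0,
    }
```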

3. Results and Discussion

3.1. Blending Results

For the IB approach, mNDWI was first calculated for all five input images, including three Pan-sharpened NPP-VIIRS and two Landsat images. The mNDWI images were then blended using the ESTARFM algorithm, producing a synthetic 30 m resolution mNDWI image (Figure 4a) at the prediction time (8 October 2014). For the BI approach, the Pan-sharpened NPP-VIIRS images were first blended with the Landsat images band by band using the ESTARFM algorithm. After that, the blended Green band and SWIR1 band were employed to generate an mNDWI image, which is shown in Figure 4b. The referencing-mNDWI at the prediction time (Figure 4c) was derived from the actual Landsat image using Equation (6).
It can be seen from Figure 4a,b that both approaches synthesized mNDWI images with a spatial resolution as high as that derived from the actual Landsat (Figure 4c). Many details of the mNDWI variation in the study area are reflected in these results. However, both approaches tend to underestimate the mNDWI when visually compared to the referencing-mNDWI image.
Thresholds for all three mNDWI images were selected using the OTSU algorithm. Candidate threshold values from −0.4 to 0.4 were examined with a step of 0.01. Water and non-water inter-class variances were calculated at each threshold interval for the three mNDWI images and plotted in Figure 5. The plot reveals that the inter-class variances between water and non-water differ among the three mNDWI images. The mNDWI image derived from the actual Landsat has the highest variance, which represents the best water distinguishing ability. By contrast, the inter-class variance of the BI resultant mNDWI image is the lowest. Finally, threshold values of 0.18, 0.21 and 0.23 (shown as diamond points in Figure 5) were determined for the IB blending result, the BI blending result and the actual Landsat, respectively.
Water maps (Figure 4d–f) were then produced by segmenting the mNDWI images with the selected threshold values. Comparing Figure 4d,e with Figure 4f, it is obvious that the blending results of both the IB and BI approaches can reasonably map water areas. Small isolated water bodies and even narrow rivers can be restored. However, the BI result (Figure 4e) has apparently underestimated water areas, especially the main lake water area. This underestimated area is a floodplain wetland where inundation changes drastically between wet and dry seasons. The BI method, which integrates surface reflectance from different periods before computing the index, may have introduced misleading information that caused this misjudgment of the water area. On the contrary, the IB result looks slightly better in the floodplain wetland area, although it still shows an overall underestimation. As suggested by Jarihani et al. [33], the reason for the difference between the two blending approaches is that the IB approach incurs only one instance of blending and therefore produces only one instance of blending error, while the BI approach has to blend multiple bands first, which introduces multiple blending errors.

3.2. Comparison and Evaluation

The actual Landsat image was employed to validate the results of both approaches. Validation mainly involves two aspects: comparing the mNDWI images directly, and evaluating the accuracy of water mapping.
The resultant mNDWI images of both approaches were overlaid with the actual Landsat mNDWI image in order to quantify their prediction accuracy. mNDWI difference maps were produced for both the IB (Figure 6a) and BI results (Figure 6b). The IB method generally produces smaller differences than the BI method. Significant differences (absolute value greater than 0.50) occur only sporadically in the IB result. On the contrary, the mNDWI difference between the BI result and Landsat is relatively higher. The BI method generally overestimates the mNDWI value in non-water areas and underestimates it in water areas.
Using the actual Landsat mNDWI image as the reference, the mean bias and Root Mean Square Deviation (RMSD) of both resultant mNDWI images were calculated. The IB-derived mNDWI image has a mean bias of 0.011 and an RMSD of 0.040, while the BI-derived mNDWI image has a mean bias of 0.033 and an RMSD of 0.069. Correlation analysis was also conducted by fitting linear regressions between the blended indices and the reference indices, producing a coefficient of determination (R2) of 0.81 for the IB result and an R2 of 0.78 for the BI result. It is obvious that the IB method outperforms the BI method. This finding is consistent with that of Jarihani et al. [33].
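The three comparison statistics used here (mean bias, RMSD, and R2 from a linear fit of blended against reference values) can be computed as in the following sketch; the function name is ours.

```python
import numpy as np

def compare_indices(blended, reference):
    """Mean bias, RMSD and R^2 between a blended mNDWI image and the
    reference mNDWI derived from the actual Landsat image."""
    b = np.asarray(blended, dtype=float).ravel()
    r = np.asarray(reference, dtype=float).ravel()
    bias = np.mean(b - r)                        # mean bias
    rmsd = np.sqrt(np.mean((b - r) ** 2))        # root mean square deviation
    # R^2 of a linear fit equals the squared correlation coefficient
    r2 = np.corrcoef(b, r)[0, 1] ** 2
    return bias, rmsd, r2
```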
Water maps of both blending methods were overlaid with the referencing Landsat water map on a pixel-by-pixel basis and evaluation maps were produced (Figure 7a,b). Misclassified water areas can be easily identified from these maps. Errors in the IB resultant water map are much fewer than those of the BI. Both commissions and omissions in the IB map are limited, while in the BI map there are large areas of omission and nearly no commission. This means that the BI method obviously underestimates water areas. The underestimation mainly happens in the west and south of the lake, where there are vast areas of floodplains and wetlands. They were not inundated at Time 1 and Time 2 (i.e., 5 October 2013 and 1 May 2014) but appeared to be inundated at Time 3 (i.e., 8 October 2014). The mNDWI values calculated from the blended bands in these areas are apparently lower than those calculated from the actual Landsat. However, when the mNDWI is calculated before blending, the blended mNDWI values in these areas are much closer to those of the actual Landsat imagery.
Commonly used accuracy evaluation indices, including commission and omission errors, overall accuracy and the Kappa coefficient, were calculated from the evaluation maps of Figure 7 and listed in Table 2. This table provides a more straightforward comparison of the results of both approaches. High values of Kappa and overall accuracy indicate general consistency between the predicted maps of both approaches and the reference map. Nevertheless, both the Kappa coefficient and the overall accuracy suggest that the index-then-blend (IB) method generally outperforms the blend-then-index (BI) method. The problem with the BI approach is that it underestimates the water area and thus produces significant omission errors (more than 5%).

4. Conclusions

Monitoring the dynamics of surface water generally requires both high spatial and high temporal resolutions, especially in the case of flood inundation monitoring. Unfortunately, for most of the remote sensors, there exists a trade-off between their spatial and temporal resolutions, which makes it difficult to monitor surface water intensively with high accuracy.
This study blended the newly available Suomi NPP-VIIRS data with Landsat data for the purpose of acquiring both high spatial and high temporal resolutions to improve surface water monitoring. We employed the widely accepted water index mNDWI and tested two approaches, namely index-then-blend (IB) and blend-then-index (BI). It has been found that Suomi NPP-VIIRS can replace MODIS imagery in blending with Landsat data to achieve daily monitoring of surface water at 30 m resolution. Both approaches can derive reasonable water detection results that generally agree with the actual reference Landsat image. The employed fusion model, the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), can not only be used for reflectance fusion but also performs well in blending a water index. It has also been noticed that the IB approach generated a slightly better water map than the BI approach. The BI method generally underestimates the water distribution, especially when the water area expands drastically at the prediction time. Moreover, it requires multiple bands to be blended in order to calculate the index later, and thus consumes more computation time. The IB approach calculates the index first and thus only needs to blend a single index image; it not only saves computation time but also derives better water mapping results. This provides an important reference for other blending work when deciding whether the IB or BI approach should be chosen. This study has also exemplified the application of blending approaches in improving detection results, not only in surface water monitoring, but also in related fields such as vegetation cover monitoring.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (41501460), National Major Scientific Research Program (2013CBA01806), the Special Trade Project for Commonweal of Water Resources (201401026), and Scientific Research Program Funded by Shaanxi Provincial Education Department (15JK1700). We would like to thank Dr Xiaolin Zhu for making the ESTARFM code available.

Author Contributions

Chang Huang, Yun Chen and Shiqiang Zhang contributed the main idea and designed the experiments; Linyi Li performed the experiments; Kaifang Shi and Rui Liu analyzed the remote sensing data; Chang Huang wrote the manuscript, which was then improved by the contribution of all the co-authors.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
(Suomi) NPP-VIIRS: Visible Infrared Imaging Radiometer Suite onboard the Suomi National Polar-orbiting Partnership
STARFM: Spatial and Temporal Adaptive Reflectance Fusion Model
ESTARFM: Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model
mNDWI: modified Normalized Difference Water Index
BI: Blend-then-Index approach
IB: Index-then-Blend approach
MSS: Multispectral Scanner
TM: Thematic Mapper
ETM+: Enhanced Thematic Mapper Plus
OLI: Operational Land Imager
AVHRR: Advanced Very High Resolution Radiometer
MODIS: Moderate Resolution Imaging Spectroradiometer
NIR: Near Infrared
SWIR: Short-wave Infrared
GS: Gram-Schmidt
MS: Multispectral

References

  1. Rango, A.; Salomonson, V.V. Regional flood mapping from space. Water Resour. Res. 1974, 10, 473–484.
  2. Chen, Y.; Wang, B.; Pollino, C.A.; Cuddy, S.M.; Merrin, L.E.; Huang, C. Estimate of flood inundation and retention on wetlands using remote sensing and GIS. Ecohydrology 2014, 7, 1412–1420.
  3. Feyisa, G.L.; Meilby, H.; Fensholt, R.; Proud, S.R. Automated Water Extraction Index: A new technique for surface water mapping using Landsat imagery. Remote Sens. Environ. 2014, 140, 23–35.
  4. Zhang, G.; Yao, T.; Xie, H.; Zhang, K.; Zhu, F. Lakes’ state and abundance across the Tibetan Plateau. Chin. Sci. Bull. 2014, 59, 3010–3021.
  5. Li, J.; Sheng, Y. An automated scheme for glacial lake dynamics mapping using Landsat imagery and digital elevation models: A case study in the Himalayas. Int. J. Remote Sens. 2012, 33, 5194–5213.
  6. Li, W.; Du, Z.; Ling, F.; Zhou, D.; Wang, H.; Gui, Y.; Sun, B.; Zhang, X. A comparison of land surface water mapping using the normalized difference water index from TM, ETM+ and ALI. Remote Sens. 2013, 5, 5530–5549.
  7. Barton, I.J.; Bathols, J.M. Monitoring floods with AVHRR. Remote Sens. Environ. 1989, 30, 89–94.
  8. Jain, S.K.; Saraf, A.K.; Goswami, A.; Ahmad, T. Flood inundation mapping using NOAA AVHRR data. Water Resour. Manag. 2006, 20, 949–959.
  9. Chen, Y.; Huang, C.; Ticehurst, C.; Merrin, L.; Thew, P. An evaluation of MODIS daily and 8-day composite products for floodplain and wetland inundation mapping. Wetlands 2013, 33, 823–835.
  10. Huang, C.; Chen, Y.; Wu, J. Mapping spatio-temporal flood inundation dynamics at large river basin scale using time-series flow data and MODIS imagery. Int. J. Appl. Earth Obs. Geoinform. 2014, 26, 350–362.
  11. Liu, R.; Chen, Y.; Wu, J.; Gao, L.; Barrett, D.; Xu, T.; Li, L.; Huang, C.; Yu, J. Assessing spatial likelihood of flooding hazard using naïve Bayes and GIS: A case study in Bowen Basin, Australia. Stoch. Environ. Res. Risk Assess. 2015, 30, 1575–1590.
  12. Du, Z.; Li, W.; Zhou, D.; Tian, L.; Ling, F.; Wang, H.; Gui, Y.; Sun, B. Analysis of Landsat-8 OLI imagery for land surface water mapping. Remote Sens. Lett. 2014, 5, 672–681.
  13. Shi, K.; Huang, C.; Yu, B.; Yin, B.; Huang, Y.; Wu, J. Evaluation of NPP-VIIRS night-time light composite data for extracting built-up urban areas. Remote Sens. Lett. 2014, 5, 358–366.
  14. Yu, Y.; Privette, J.L.; Pinheiro, A.C. Analysis of the NPOESS VIIRS land surface temperature algorithm using MODIS data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2340–2350.
  15. Huang, C.; Chen, Y.; Wu, J.; Li, L.; Liu, R. An evaluation of Suomi NPP-VIIRS data for surface water detection. Remote Sens. Lett. 2015, 6, 155–164.
  16. Huang, C.; Chen, Y.; Wu, J.P. DEM-based modification of pixel-swapping algorithm for enhancing floodplain inundation mapping. Int. J. Remote Sens. 2014, 35, 365–381.
  17. Li, L.; Chen, Y.; Xu, T.; Liu, R.; Shi, K.; Huang, C. Super-resolution mapping of wetland inundation from remote sensing imagery based on integration of back-propagation neural network and genetic algorithm. Remote Sens. Environ. 2015, 164, 142–154.
  18. Li, L.; Chen, Y.; Yu, X.; Liu, R.; Huang, C. Sub-pixel flood inundation mapping from multispectral remotely sensed images based on discrete particle swarm optimization. ISPRS J. Photogramm. Remote Sens. 2015, 101, 10–21.
  19. Pohl, C.; van Genderen, J.L. Multisensor image fusion in remote sensing: Concepts, methods and applications. Int. J. Remote Sens. 1998, 19, 823–854.
  20. Huang, B.; Zhang, H.; Song, H.; Wang, J.; Song, C. Unified fusion of remote-sensing imagery: Generating simultaneously high-resolution synthetic spatial-temporal-spectral earth observations. Remote Sens. Lett. 2013, 4, 561–569.
  21. Zhang, L.; Shen, H.; Gong, W.; Zhang, H. Adjustable model-based fusion method for multispectral and panchromatic images. IEEE Trans. Syst. Man Cybern. B Cybern. 2012, 42, 1693–1704.
  22. Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising employing a spectral-spatial adaptive total variation model. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3660–3677.
  23. Wu, P.; Shen, H.; Zhang, L.; Göttsche, F.M. Integrated fusion of multi-scale polar-orbiting and geostationary satellite observations for the mapping of high spatial and temporal resolution land surface temperature. Remote Sens. Environ. 2015, 156, 169–181.
  24. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218.
  25. Zhu, X.; Chen, J.; Gao, F.; Chen, X.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ. 2010, 114, 2610–2623. [Google Scholar] [CrossRef]
  26. Chen, B.; Ge, Q.; Fu, D.; Yu, G.; Sun, X.; Wang, S.; Wang, H. A data-model fusion approach for upscaling gross ecosystem productivity to the landscape scale based on remote sensing and flux footprint modelling. Biogeosciences 2010, 7, 2943–2958. [Google Scholar] [CrossRef]
  27. Gaulton, R.; Hilker, T.; Wulder, M.A.; Coops, N.C.; Stenhouse, G. Characterizing stand-replacing disturbance in western Alberta grizzly bear habitat, using a satellite-derived high temporal and spatial resolution change sequence. For. Ecol. Manag. 2011, 261, 865–877. [Google Scholar] [CrossRef]
  28. Liu, H.; Weng, Q. Enhancing temporal resolution of satellite imagery for public health studies: A case study of West Nile Virus outbreak in Los Angeles in 2007. Remote Sens. Environ. 2012, 117, 57–71. [Google Scholar] [CrossRef]
  29. Zhang, F.; Zhu, X.; Liu, D. Blending MODIS and Landsat images for urban flood mapping. Int. J. Remote Sens. 2014, 35, 3237–3253. [Google Scholar] [CrossRef]
  30. Chen, B.; Huang, B.; Xu, B. Fine Land Cover Classification Using Daily Synthetic Landsat-Like Images at 15-m Resolution. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2359–2363. [Google Scholar] [CrossRef]
  31. Hazaymeh, K.; Hassan, Q.K. Spatiotemporal image-fusion model for enhancing the temporal resolution of Landsat-8 surface reflectance images using MODIS images. J. Appl. Remote Sens. 2015, 9. [Google Scholar] [CrossRef]
  32. Weng, Q.; Gao, F.; Fu, P. Generating daily land surface temperature at Landsat resolution by fusing Landsat and MODIS data. Remote Sens. Environ. 2014, 145, 55–67. [Google Scholar] [CrossRef]
  33. Jarihani, A.A.; McVicar, T.R.; Van Niel, T.G.; Emelyanova, I.V.; Callow, J.N.; Johansen, K. Blending Landsat and MODIS Data to Generate Multispectral Indices: A Comparison of “Index-then-Blend” and “Blend-then-Index” Approaches. Remote Sens. 2014, 6, 9213–9238. [Google Scholar] [CrossRef][Green Version]
  34. Xu, H.Q. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [Google Scholar] [CrossRef]
  35. Du, Y.; Zhang, Y.; Ling, F.; Wang, Q.; Li, W.; Li, X. Water Bodies’ Mapping from Sentinel-2 Imagery with Modified Normalized Difference Water Index at 10-m Spatial Resolution Produced by Sharpening the SWIR Band. Remote Sens. 2016, 8, 354–370. [Google Scholar] [CrossRef][Green Version]
  36. Shankman, D.; Liang, Q. Landscape Changes and Increasing Flood Frequency in China’s Poyang Lake Region. Prof. Geogr. 2003, 55, 434–445. [Google Scholar] [CrossRef]
  37. Xu, G.; Qin, Z. Flood Estimation Methods for Poyang Lake Area. J. Lake Sci. 1998, 10, 31–36. [Google Scholar]
  38. Shankman, D.; Keim, B.D.; Song, J. Flood frequency in China’s Poyang Lake Region: Trends and teleconnections. Int. J. Climatol. 2006, 26, 1255–1266. [Google Scholar] [CrossRef]
  39. Feng, L.; Hu, C.M.; Chen, X.L.; Cai, X.B.; Tian, L.Q.; Gan, W.X. Assessment of inundation changes of Poyang Lake using MODIS observations between 2000 and 2010. Remote Sens. Environ. 2012, 121, 80–92. [Google Scholar] [CrossRef]
  40. Aiazzi, B.; Baronti, S.; Selva, M.; Alparone, L. Enhanced Gram-Schmidt Spectral Sharpening Based on Multivariate Regression of MS and Pan Data. In Proceedings of the IEEE International Conference on Geoscience and Remote Sensing, Denver, CO, USA, 31 July–4 August 2006.
  41. Kneusel, R.T.; Kneusel, P.N. Novel PET/CT Image Fusion via Gram-Schmidt Spectral Sharpening. In Proceedings of the SPIE—Medical Imaging 2013: Image Processing, Lake Buena Vista, FL, USA, 9 February 2013.
  42. McFeeters, S.K. The use of the normalized difference water index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  43. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar]
Figure 1. Study area displayed with NPP-VIIRS imagery (R3G2B1 band combination).
Figure 2. Three pairs of data sets from Landsat and NPP-VIIRS.
Figure 3. Flowchart of methodology.
Figure 4. (a) mNDWI image derived by the IB approach; (b) mNDWI image derived from actual Landsat data; (c) water map derived by the BI approach; (d) mNDWI image derived by the BI approach; (e) water map derived by the IB approach; (f) water map derived from actual Landsat data.
Figure 5. Intra-class variances of different threshold values on three mNDWI images, along with their optimal thresholds.
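The optimal thresholds in Figure 5 are selected with Otsu's method, which scans candidate thresholds and keeps the one minimizing the intra-class variance of the resulting water and non-water classes. A minimal sketch of that selection over a toy bimodal mNDWI sample (the sample values and candidate grid are illustrative, not the study's data):

```python
import numpy as np

def otsu_threshold(values: np.ndarray, candidates: np.ndarray) -> float:
    """Return the candidate threshold minimizing intra-class variance,
    as in Otsu's histogram-based method."""
    best_t, best_var = float(candidates[0]), np.inf
    for t in candidates:
        water = values[values >= t]   # high mNDWI: water class
        land = values[values < t]     # low mNDWI: non-water class
        if water.size == 0 or land.size == 0:
            continue
        # Intra-class variance: the two class variances weighted by class size.
        w_water = water.size / values.size
        w_land = land.size / values.size
        intra = w_water * water.var() + w_land * land.var()
        if intra < best_var:
            best_t, best_var = float(t), intra
    return best_t

# Toy bimodal mNDWI sample: a non-water cluster near -0.4
# and a water cluster near +0.5.
sample = np.concatenate([
    np.tile(np.array([-0.447, -0.400, -0.353]), 100),
    np.tile(np.array([0.453, 0.500, 0.547]), 100),
])
t = otsu_threshold(sample, np.linspace(-1.0, 1.0, 201))
print(round(t, 2))  # the selected threshold lands between the two clusters
```

Any cut inside either cluster inflates the variance of the mixed class, so the minimum falls in the gap between the two modes.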
Figure 6. (a) Difference between IB derived mNDWI and Landsat mNDWI; (b) difference between BI derived mNDWI and Landsat mNDWI.
Figure 7. (a) Evaluation map of IB result; (b) evaluation map of BI result.
Table 1. Corresponding band list of Landsat OLI and Suomi NPP-VIIRS.

| Spectral Region | Landsat OLI Band | Landsat OLI Wavelength (μm) | NPP-VIIRS Band | NPP-VIIRS Wavelength (μm) | Landsat OLI Pixel Size (m) | NPP-VIIRS Pixel Size (m) |
|---|---|---|---|---|---|---|
| Coastal | 1 | 0.433–0.453 | M2 | 0.436–0.454 | 30 | 750 |
| Blue | 2 | 0.450–0.515 | M3 | 0.478–0.488 | 30 | 750 |
| * Green | 3 | 0.525–0.600 | M4 | 0.545–0.565 | 30 | 750 |
| Red | 4 | 0.630–0.680 | M5/I1 | 0.662–0.682/0.600–0.680 | 30 | 750/375 |
| NIR | 5 | 0.845–0.885 | M7/I2 | 0.846–0.885/0.850–0.880 | 30 | 750/375 |
| * SWIR1 | 6 | 1.560–1.660 | M10/I3 | 1.580–1.640/1.580–1.640 | 30 | 750/375 |
| SWIR2 | 7 | 2.100–2.300 | M11 | 2.230–2.280 | 30 | 750 |
| Pan | 8 | 0.500–0.680 | -- | -- | 15 | -- |
| Cirrus | 9 | 1.360–1.390 | M9 | 1.371–1.386 | 30 | 750 |

* Bands used for calculating mNDWI.
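The two starred bands enter the mNDWI as (Green − SWIR1)/(Green + SWIR1), following Xu's modification of NDWI. A minimal sketch, assuming the bands are already available as reflectance arrays (the toy pixel values are illustrative):

```python
import numpy as np

def mndwi(green: np.ndarray, swir1: np.ndarray) -> np.ndarray:
    """mNDWI = (Green - SWIR1) / (Green + SWIR1).

    Water scores high (reflective in green, strongly absorbing in SWIR);
    soil and built-up surfaces score low.
    """
    green = green.astype(np.float64)
    swir1 = swir1.astype(np.float64)
    denom = green + swir1
    # Guard against division by zero where both bands are dark.
    return np.where(denom == 0.0, 0.0, (green - swir1) / denom)

# Toy pixels: one water-like (bright green, dark SWIR), one land-like.
green = np.array([0.30, 0.10])
swir1 = np.array([0.05, 0.25])
vals = mndwi(green, swir1)
print(vals)  # positive for the water pixel, negative for the land pixel
```

In the IB workflow this index is computed on each input image before blending; in BI it is computed once on the blended reflectance bands.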
Table 2. Accuracy evaluation results indicated by indices.

| Blending Approach | Omission Error (%) | Commission Error (%) | Overall Accuracy (%) | Kappa Coefficient |
|---|---|---|---|---|
| IB | 2.72 | 1.02 | 96.26 | 0.87 |
| BI | 5.04 | 0.39 | 94.57 | 0.80 |
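All four indices in Table 2 follow from a 2 × 2 confusion matrix between the blended water map and the Landsat-derived reference. A minimal sketch, with illustrative counts rather than the actual counts behind Table 2:

```python
def accuracy_indices(tp: int, fp: int, fn: int, tn: int):
    """Omission/commission error, overall accuracy, and Cohen's kappa
    for a binary (water vs. non-water) confusion matrix."""
    n = tp + fp + fn + tn
    omission = fn / (tp + fn)      # reference water missed by the map
    commission = fp / (tp + fp)    # mapped water absent from the reference
    overall = (tp + tn) / n
    # Chance agreement expected from the row/column marginals.
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (overall - p_e) / (1 - p_e)
    return omission, commission, overall, kappa

# Illustrative counts: 900 water and 8,000 non-water reference pixels.
om, co, oa, k = accuracy_indices(tp=870, fp=40, fn=30, tn=7960)
print(f"omission={om:.2%} commission={co:.2%} overall={oa:.2%} kappa={k:.3f}")
```

Note that overall accuracy is dominated by the abundant non-water pixels, which is why the kappa coefficient, corrected for chance agreement, separates IB and BI more sharply in Table 2.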