Article

Improved Urban Flooding Mapping from Remote Sensing Images Using Generalized Regression Neural Network-Based Super-Resolution Algorithm

1 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
2 Fenner School of Environment and Society, The Australian National University, Canberra 2601, Australia
3 Commonwealth Scientific and Industrial Research Organization (CSIRO) Land and Water Flagship, Canberra 2601, Australia
* Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(8), 625; https://doi.org/10.3390/rs8080625
Submission received: 2 June 2016 / Revised: 24 July 2016 / Accepted: 26 July 2016 / Published: 28 July 2016

Abstract

Urban flooding is a serious natural hazard for many cities all over the world, with dramatic impacts on the urban environment and human life. Mapping urban flooding has practical significance for the prevention and management of urban flood disasters. Remote sensing images with high temporal resolution are widely used for urban flooding mapping, but their spatial resolution is relatively low. In this study, a new method based on a generalized regression neural network (GRNN) is proposed to achieve improved accuracy in super-resolution mapping of urban flooding (SMUF) from remote sensing images. The GRNN-SMUF algorithm was developed and then assessed using Landsat 5 and Landsat 8 images of Brisbane city in Australia and Wuhan city in China. Compared to three traditional methods, GRNN-SMUF mapped urban flooding more accurately according to both visual and quantitative assessments. The results of this study will improve the accuracy of urban flooding mapping using readily-available remote sensing images with medium-low spatial resolution and will benefit the prevention and management of urban flood disasters.

Graphical Abstract

1. Introduction

Urban flooding is the inundation of land or property in densely-populated areas, usually caused by heavy rainfall. It is a serious natural hazard for many cities all over the world, with dramatic impacts on the urban environment and human life [1,2,3]. For example, Brisbane city in Australia and New York City in the United States have both experienced significant flood events in recent years. In January 2011, the Brisbane River flooded and inundated more than 20,000 houses [3]; in October 2012, Hurricane Sandy hit New York City and produced a major storm surge, which flooded much of the city [3]. It is therefore a crucial task to effectively monitor and manage urban flooding and to ensure the resilience of cities. Mapping urban flooding has practical significance for the prevention and management of urban flood disasters. Flood mapping requires remote sensing images with high temporal resolution [4,5,6], but such images usually have relatively low spatial resolution. The mixed pixel issue, in which one pixel covers multiple types of land surface, commonly occurs in such images and negatively affects the mapping accuracy of urban flooding. One way to deal with the mixed pixel issue is to combine high-temporal-resolution images with high-spatial-resolution images to produce maps with both high temporal and high spatial resolution [7,8,9]. Another way is super-resolution mapping.
Super-resolution mapping, also known as sub-pixel mapping, has been designed to tackle the mixed pixel issue. It aims to obtain more accurate sub-pixel information based on the spatial dependence assumption [10] and divides mixed pixels into multiple sub-pixels to achieve a higher mapping accuracy from remote sensing images with relatively low spatial resolutions. Each sub-pixel is assigned to one land cover type according to fraction images, which represent the area proportions of land cover types within the mixed pixels. Many super-resolution mapping methods have been proposed [10,11,12,13,14,15,16,17], such as the pixel swapping algorithm, spatial attraction models (SAM), conditional random fields, the noniterative interpolation-based method, particle swarm optimization, the Hopfield neural network and the back-propagation neural network (BPNN). However, because of the uncertainty of remote sensing images, super-resolution mapping of urban flooding (SMUF) from such images is complicated: a mixed pixel admits many possible sub-pixel distributions, and it remains difficult to obtain the optimal SMUF result. SMUF therefore needs further improvement.
Neural networks have attracted extensive attention in artificial intelligence and related research fields [15,16,17], and their popularity is still increasing. The general regression neural network (GRNN) is a memory-based network that converges to the underlying regression surface [18]. GRNN maintains its generalization ability even when applied to sparse data in a multidimensional measurement space [18], and it features fast learning that does not require an iterative procedure [18]. Because GRNN is based on a probability density function, the chance of falling into local optima during supervised training is very low [19]. Owing to its excellent performance in classification, prediction and control, GRNN has been applied in many fields in recent years, including environmental sciences [20,21,22,23,24], energy [25,26,27,28], food sciences [29,30], traffic [31,32], chemical sciences [33], pharmaceutical sciences [34] and remote sensing [35,36,37].
SMUF from remote sensing images using GRNN is rare in the literature. In this study, a new GRNN-based SMUF (GRNN-SMUF) method is proposed to improve the accuracy of mapping urban flooding at the sub-pixel scale from remote sensing images. The main objectives are: (1) to develop the GRNN-SMUF algorithm; (2) to compare the performance of GRNN-SMUF with SAM-SMUF, standard BPNN-SMUF (SBPNN-SMUF) and Bayesian regulation BPNN-SMUF (BRBPNN-SMUF) using Landsat 5 and Landsat 8 images of Brisbane city in Australia and Wuhan city in China; and (3) to discuss the super-resolution mapping accuracy of GRNN-SMUF in relation to the spread parameter and the percentage of training samples.

2. Methodology

2.1. Principle of SMUF

SMUF is designed to acquire the distribution of urban flooding at the sub-pixel scale. It maximizes spatial dependence while maintaining the original flooding proportions of the mixed pixels. The input to SMUF is the fraction image of urban flooding, where fraction values stand for the proportion of flooding in mixed pixels. Let S denote the scale factor between mixed pixels and their sub-pixels. SMUF divides each mixed pixel into S × S sub-pixels; for example, if S is 5, each mixed pixel is divided into 25 sub-pixels. An illustration of SMUF is shown in Figure 1. A fraction image is shown in Figure 1a. The fraction value of the central mixed pixel is 32%, so the mixed pixel can be regarded as a composition of 8 flooding sub-pixels and 17 non-flooding sub-pixels. The fraction value does not specify the spatial distribution of flooding, so many different compositions of sub-pixels are possible, and finding the optimal sub-pixel distribution, the one that gives the highest mapping accuracy, is complicated. Figure 1c shows the corresponding discrete encoding of the central mixed pixel, where flooding is represented by 1 and non-flooding by 0.
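For illustration, the mapping from a fraction value to sub-pixel counts can be sketched as follows. This is a minimal illustration, not the authors' code; the function name is ours.

```python
def subpixel_counts(fraction, S=5):
    """Split an S x S mixed pixel into flooding / non-flooding sub-pixel counts.

    The fraction value is the proportion of flooding within the mixed pixel,
    so the number of flooding sub-pixels is the nearest integer to
    fraction * S * S.
    """
    total = S * S
    flooded = round(fraction * total)
    return flooded, total - flooded

# The worked example above: a 32% fraction with S = 5 gives 8 flooding
# and 17 non-flooding sub-pixels.
print(subpixel_counts(0.32))  # (8, 17)
```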

2.2. Traditional Algorithms

SAM-SMUF uses SAM to acquire the distribution of urban flooding at the sub-pixel scale. SAM is based on the fraction values in the neighbourhood, which attract the sub-pixels inside a central pixel [11]. In SAM, at most eight neighbouring pixels around the central pixel are taken into account for attraction.
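The attraction idea can be sketched as follows. This is a simplified reading of the sub-pixel/pixel spatial attraction model in [11] (the exact distance formulation in the paper may differ): each sub-pixel's attraction towards a neighbouring pixel is that neighbour's flooding fraction divided by the distance between the sub-pixel centre and the neighbouring pixel centre.

```python
import numpy as np

def sam_attractions(neighbour_fracs, S=5):
    """Per-sub-pixel attraction within the central pixel of a 3 x 3 block.

    neighbour_fracs: 3 x 3 array of flooding fractions; the centre entry
    (the mixed pixel itself) is ignored. Coordinates are in sub-pixel units,
    so the central pixel occupies rows/cols [S, 2S).
    """
    attract = np.zeros((S, S))
    for i in range(S):
        for j in range(S):
            y, x = S + i + 0.5, S + j + 0.5        # sub-pixel centre
            for ni in range(3):
                for nj in range(3):
                    if ni == 1 and nj == 1:
                        continue                   # skip the central pixel
                    cy, cx = (ni + 0.5) * S, (nj + 0.5) * S  # neighbour centre
                    d = np.hypot(y - cy, x - cx)
                    attract[i, j] += neighbour_fracs[ni, nj] / d
    return attract
```

The flooding label would then be assigned to the sub-pixels with the highest attraction, up to the count implied by the central pixel's fraction value.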
SBPNN-SMUF constructs a local SMUF model based on SBPNN to describe the relationship between the sub-pixel distribution within a mixed pixel and the fractions of its eight neighbouring pixels. SBPNN consists of one input layer, one or more hidden layers and one output layer, and is trained through multiple feed-forward and back-propagation phases. BRBPNN-SMUF has the same architecture as SBPNN-SMUF but uses Bayesian regulation to train the back-propagation neural network.

2.3. GRNN-SMUF Algorithm

GRNN is a powerful mathematical tool for solving complex nonlinear problems [18]. GRNN-SMUF constructs a local SMUF model based on GRNN to describe the relationship between the sub-pixel distribution within a mixed pixel and the fractions of its eight neighbouring pixels. GRNN-SMUF consists of an input layer, a pattern layer, a summation layer and an output layer. The pattern layer and summation layer are also called hidden layers because they are internal to the network and have no direct contact with the external environment. The GRNN-SMUF architecture is shown in Figure 2. GRNN-SMUF does not require an iterative training procedure. The input layer has eight neurons, corresponding to the eight adjacent neighbours of the mixed pixel, and is connected to the pattern layer. Each neuron in the pattern layer represents a training pattern; the pattern layer performs a nonlinear transformation on the input data, and its output measures the distance of the input from the training patterns. The summation layer has two types of neurons: one D summation neuron (in pink) and multiple S summation neurons (in green). All neurons in the pattern layer are connected to the D summation neuron and to the S summation neurons. The D summation neuron computes the sum of the unweighted outputs of the pattern layer, while the S summation neurons compute the sums of its weighted outputs. Each neuron in the output layer calculates the quotient of the corresponding S summation output and the D summation output to yield the predicted result. When the scale factor S is 5, there are 25 neurons in the output layer, corresponding to the 25 sub-pixels within the mixed pixel.
Let the independent variable $X = [x_1, x_2, \ldots, x_8]^T$ be the input vector, corresponding to the eight adjacent neighbours of a mixed pixel, and let the dependent variable $Y = [y_1, y_2, \ldots, y_{S \times S}]^T$ be the output vector, corresponding to the distribution of the $S \times S$ sub-pixels within the mixed pixel. GRNN estimates the value of $Y$ for a new $X$ through the network. The estimate $\hat{Y}(X)$ is calculated as follows [18]:

$$\hat{Y}(X) = \frac{\sum_{i=1}^{n} Y_i \exp\left(-D_i^2 / 2\sigma^2\right)}{\sum_{i=1}^{n} \exp\left(-D_i^2 / 2\sigma^2\right)}$$

$$D_i^2 = (X - X_i)^T (X - X_i)$$

where $\hat{Y}(X)$ is in essence a weighted average of all training samples, the weight for $Y_i$ being the exponential of the negative squared Euclidean distance between $X$ and $X_i$ scaled by $2\sigma^2$. $(X_i, Y_i)$ is a training sample of $(X, Y)$, $n$ is the number of training samples, and $\sigma$ is the spread parameter, i.e., the kernel width of the Gaussian function. The value of the spread parameter affects the performance of GRNN-SMUF.
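The GRNN estimate described above can be sketched directly in code. This is a minimal illustration, not the authors' implementation; the array shapes and function name are ours.

```python
import numpy as np

def grnn_predict(x, X_train, Y_train, sigma=0.2):
    """GRNN estimate of Y for a new input x.

    x:       (8,) flooding fractions of the eight neighbours of a mixed pixel.
    X_train: (n, 8) neighbour fractions of the training mixed pixels.
    Y_train: (n, S*S) sub-pixel flooding labels of the training mixed pixels.
    """
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared Euclidean distances D_i^2
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
    return w @ Y_train / w.sum()              # weighted average = Y_hat(x)
```

The predicted vector is continuous; to produce a SMUF result, the sub-pixels with the highest predicted values would be labelled as flooding, up to the count implied by the mixed pixel's fraction value.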

3. Case Study

3.1. Study Materials

Heavy floods hit Brisbane city and Wuhan city in January 2011 and June 2013, respectively. Two study areas were chosen from these two cities for comparison. The Landsat 5 image of Brisbane city was acquired on 16 January 2011, and the Landsat 8 image of Wuhan city on 13 June 2013. Both images are 500 × 500 pixels with a spatial resolution of 30 m. The two study areas are shown in Figure 3. The flooding reference images in Figure 3 were derived from the corresponding Landsat images using the modified normalized difference water index [38,39,40]. The flooding fraction images in Figure 3 were obtained by aggregating the corresponding flooding reference images; each aggregated pixel value equals the proportion of flooding pixels inside the corresponding S × S window. In this study, the scale factor S is set to five, so the spatial resolution of the flooding fraction images is 150 m.
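The aggregation step can be sketched as follows. This is a minimal illustration assuming a binary reference map whose dimensions are divisible by S; the function name is ours.

```python
import numpy as np

def aggregate_fractions(reference, S=5):
    """Aggregate a binary flooding reference map into a fraction image.

    Each output pixel is the proportion of flooding pixels inside the
    corresponding non-overlapping S x S window.
    """
    H, W = reference.shape
    return reference.reshape(H // S, S, W // S, S).mean(axis=(1, 3))
```

With S = 5, a 500 × 500 reference image at 30 m resolution aggregates to a 100 × 100 fraction image at 150 m resolution, as in the study.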

3.2. Experimental Results

The four SMUF methods compared in this study are SAM-SMUF, SBPNN-SMUF, BRBPNN-SMUF and GRNN-SMUF. The inputs of all SMUF methods were the flooding fraction images, and the same neighbourhood type was used throughout. Thirty percent of the mixed pixels were randomly selected as training samples for SBPNN-SMUF, BRBPNN-SMUF and GRNN-SMUF. SBPNN-SMUF and BRBPNN-SMUF each used one hidden layer. The spread parameter of GRNN-SMUF was set to 0.2.
The results of the two study areas for the four SMUF methods are shown in Figure 4 for Brisbane and Figure 5 for Wuhan. The same small regions from the reference images and result images are zoomed in to show details in Figure 4f and Figure 5f. As shown in Figure 4, especially in Figure 4f, GRNN-SMUF produced the most satisfactory visual SMUF result among the four SMUF methods for Brisbane city, being the most similar to the reference image. GRNN-SMUF mapped the Brisbane River and its tributaries more smoothly and continuously than the other SMUF methods. From Figure 5, especially Figure 5f, GRNN-SMUF also produced the most satisfactory visual SMUF result for Wuhan city.
For quantitative assessment of the different SMUF methods, we compared the SMUF results using overall accuracy (OA), the Kappa coefficient (KC), average producer’s accuracy (APA) and average user’s accuracy (AUA) [41,42,43] (Table 1). All non-mixed pixels in the flooding fraction images were excluded when calculating the mapping accuracy. Table 1 shows that GRNN-SMUF outperformed the other methods, with the highest OA, KC, APA and AUA. SBPNN-SMUF was the worst performer on these measures, while BRBPNN-SMUF generally outperformed SAM-SMUF. For example, the OA values of SAM-SMUF, SBPNN-SMUF, BRBPNN-SMUF and GRNN-SMUF are 83.0%, 78.7%, 84.6% and 85.8% in Study Area 1, and 83.9%, 79.5%, 84.9% and 86.1% in Study Area 2, respectively.
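For the binary flooding/non-flooding case, OA and KC can be computed as follows. This is a minimal sketch (APA and AUA would be derived from the same confusion matrix); the function name is ours.

```python
import numpy as np

def oa_kappa(pred, ref):
    """Overall accuracy and Kappa coefficient for binary sub-pixel maps."""
    pred, ref = np.ravel(pred), np.ravel(ref)
    oa = np.mean(pred == ref)                  # observed agreement
    # expected chance agreement from the class marginals
    pe = sum(np.mean(pred == c) * np.mean(ref == c) for c in (0, 1))
    return oa, (oa - pe) / (1.0 - pe)
```

As in the study, the arrays passed in would cover only the sub-pixels of mixed pixels, with non-mixed pixels excluded.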
SMUF is a complex multidimensional issue. There are many possible sub-pixel distributions in mixed pixels in SMUF. The generalization ability of GRNN is maintained, even when applied to sparse data in a multidimensional measurement space [18]. Therefore, GRNN-SMUF can obtain satisfactory results and outperform the traditional methods in this complex situation.

4. Discussion

4.1. Discussion of the Spread Parameter

The spread parameter (SP) is the kernel width of the Gaussian function and a key parameter of GRNN-SMUF that affects its mapping accuracy. The super-resolution mapping accuracy of GRNN-SMUF in relation to SP was analysed using the Landsat 5 image of Brisbane city with different SP values; other parameters were the same as in the case study. The super-resolution mapping accuracy of GRNN-SMUF for each SP value is shown in Figure 6 and Table 2. As the SP value increases, the OA value first increases to its maximum of 85.8% at SP = 0.20 and then decreases. Although KC, APA and AUA reach their maxima at different SP values (0.10, 0.10 and 0.40, respectively), they generally follow the same rise-then-fall pattern. This is because a larger SP value yields a smoother function approximation, but if the SP value is too large, the approximation no longer fits the training samples closely.

4.2. Discussion of Training Sample Numbers

GRNN-SMUF is a supervised algorithm, so the number of training samples (TS) affects its mapping accuracy. The super-resolution mapping accuracy of GRNN-SMUF in relation to TS was analysed using the Landsat 5 image of Brisbane city with different TS percentages; other parameters were the same as in the case study. The super-resolution mapping accuracy of GRNN-SMUF for each TS value is shown in Figure 7 and Table 3. The higher the percentage of TS, the higher the OA value: OA increases from 83.2% to 88.8% as the percentage of TS rises from 10% to 100%. KC, APA and AUA show a similar increasing trend. This is because a larger percentage of TS allows the function approximation to fit the samples more closely, which increases the accuracy of SMUF.

5. Conclusions

Urban flooding is a serious natural hazard for many cities all over the world. In this study, a new method called GRNN-SMUF was proposed to achieve improved accuracy in super-resolution mapping of urban flooding from remote sensing images. The GRNN-SMUF algorithm was developed and then assessed using Landsat 5 and Landsat 8 images of Brisbane city in Australia and Wuhan city in China. Compared with three other SMUF methods, GRNN-SMUF mapped urban flooding more smoothly and continuously in both cities. Besides its superior visual performance, GRNN-SMUF consistently achieved more accurate results than the other SMUF methods according to the quantitative measures OA, KC, APA and AUA: the OA values of SAM-SMUF, SBPNN-SMUF, BRBPNN-SMUF and GRNN-SMUF are 83.0%, 78.7%, 84.6% and 85.8% for Brisbane city, and 83.9%, 79.5%, 84.9% and 86.1% for Wuhan city, respectively. The super-resolution mapping accuracy of GRNN-SMUF in relation to the spread parameter and to the percentage of training samples was also discussed.
The results of this study will improve the accuracy of urban flooding mapping from remote sensing images with medium-low spatial resolutions and will benefit the prevention and management of urban flood disasters. Further research will focus on integrating GRNN-SMUF with other intelligent algorithms to further improve the accuracy of SMUF from remote sensing images.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant No. 41371343). The authors are grateful to Susan Cuddy at Commonwealth Scientific and Industrial Research Organization (CSIRO) for her helpful suggestions.

Author Contributions

Linyi Li proposed the method, analysed the results and wrote the paper. Tingbao Xu and Yun Chen supervised the research work, analysed the results and contributed to the construction of the paper structure.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Osti, R.; Nakasu, T. Lessons learned from southern and eastern Asian urban floods: From a local perspective. J. Flood Risk Manag. 2016, 9, 22–35.
  2. Bathrellos, G.D.; Karymbalis, E.; Skilodimou, H.D.; Gaki-Papanastassiou, K.; Baltas, E.A. Urban flood hazard assessment in the basin of Athens Metropolitan city, Greece. Environ. Earth Sci. 2016, 75, 1–14.
  3. Burke, M.I.; Sipe, N. Urban ferries and catastrophic floods experiences and lessons learned in Brisbane, Australia, and New York City. Transp. Res. Rec. 2014, 2459, 127–132.
  4. Ticehurst, C.; Dutta, D.; Karim, F.; Petheram, C.; Guerschman, J.P. Improving the accuracy of daily MODIS OWL flood inundation mapping using hydrodynamic modeling. Nat. Hazards 2015, 78, 803–820.
  5. Dao, P.D.; Liou, Y.A. Object-based flood mapping and affected rice field estimation with Landsat 8 OLI and MODIS data. Remote Sens. 2015, 7, 5077–5097.
  6. Teluguntla, P.; Ryu, D.; George, B.; Walker, J.P.; Malano, H.M. Mapping flooded rice paddies using time series of MODIS imagery in the Krishna River Basin, India. Remote Sens. 2015, 7, 8858–8882.
  7. Zhang, F.; Zhu, X.; Liu, D. Blending MODIS and Landsat images for urban flood mapping. Int. J. Remote Sens. 2014, 35, 3237–3253.
  8. Zhang, H.K.; Huang, B.; Zhang, M.; Cao, K.; Yu, L. A generalization of spatial and temporal fusion methods for remotely sensed surface parameters. Int. J. Remote Sens. 2015, 36, 4411–4445.
  9. Farina, A.; Morabito, F.C.; Serpico, S.; Simone, G. Fusion of radar images: State of art and perspective. In Proceedings of the 2001 CIE International Conference on Radar, Beijing, China, 15–18 October 2001; pp. 9–15.
  10. Atkinson, P.M. Sub-pixel target mapping from soft-classified, remotely sensed imagery. Photogramm. Eng. Remote Sens. 2005, 71, 839–846.
  11. Mertens, K.C.; de Baets, B.; Verbeke, L.P.C.; de Wulf, R.R. A sub-pixel mapping algorithm based on sub-pixel/pixel spatial attraction models. Int. J. Remote Sens. 2006, 27, 3293–3310.
  12. Su, Y. Spatial continuity and self-similarity in super-resolution mapping: Self-similar pixel swapping. Remote Sens. Lett. 2016, 7, 338–347.
  13. Zhao, J.; Zhong, Y.; Wu, Y.; Zhang, L.; Shu, H. Sub-pixel mapping based on conditional random fields for hyperspectral remote sensing imagery. IEEE J. Sel. Top. Signal Process. 2015, 9, 1049–1060.
  14. Sanchez-Beato, A.; Pajares, G. Noniterative interpolation-based super-resolution minimizing aliasing in the reconstructed image. IEEE Trans. Image Process. 2008, 17, 1817–1826.
  15. Li, L.; Chen, Y.; Xu, T.; Huang, C.; Liu, R.; Shi, K. Integration of Bayesian regulation back-propagation neural network and particle swarm optimization for enhancing sub-pixel mapping of flood inundation in river basins. Remote Sens. Lett. 2016, 7, 631–640.
  16. Shi, W.; Zhao, Y.; Wang, Q. Sub-pixel mapping based on BP neural network with multiple shifted remote sensing images. J. Infrared Millim. Waves 2014, 33, 527–532.
  17. Li, X.; Ling, F.; Du, Y.; Feng, Q.; Zhang, Y. A spatial-temporal Hopfield neural network approach for super-resolution land cover mapping with multi-temporal different resolution remotely sensed images. ISPRS J. Photogramm. Remote Sens. 2014, 93, 76–87.
  18. Specht, D.F. A generalized regression neural network. IEEE Trans. Neural Netw. 1991, 2, 568–576.
  19. Niu, D.; Wang, H.; Gu, Z. Short-term load forecasting using general regression neural network. In Proceedings of the International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August 2005; pp. 4076–4082.
  20. Li, X.; Zecchin, A.C.; Maier, H.R. Selection of smoothing parameter estimators for general regression neural networks—Applications to hydrological and water resources modeling. Environ. Model. Softw. 2014, 59, 162–186.
  21. Zhou, Q.; Jiang, H.; Wang, J.; Zhou, J. A hybrid model for PM2.5 forecasting based on ensemble empirical mode decomposition and a general regression neural network. Sci. Total Environ. 2014, 496, 264–274.
  22. Wang, Z.; Shao, D.; Yang, H.; Yang, S. Prediction of water quality in South to North Water Transfer Project of China based on GA-optimized general regression neural network. Water Sci. Technol. Water Supply 2015, 15, 150–157.
  23. Zhang, X.; Ling, Z.; Zhong, T.; Wang, K. Simulation of the availability index of soil copper content using general regression neural network. Environ. Earth Sci. 2011, 64, 1697–1702.
  24. Yin, S.; Tang, D.; Jin, X.; Chen, W.; Pu, N. A combined rotated general regression neural network method for river flow forecasting. Hydrol. Sci. J. 2016, 61, 669–682.
  25. Antanasijevic, D.; Pocajt, V.; Ristic, M.; Peric-Grujic, A. Modeling of energy consumption and related GHG (greenhouse gas) intensity and emissions in Europe using general regression neural networks. Energy 2015, 84, 816–824.
  26. Boufounas, E.M.; Boumhidi, J.; Ouriagli, M.; Boumhidi, I. A robust power control of the DFIG wind turbine based on general regression neural network and APSO algorithm. Int. J. Power Energy Syst. 2015, 35, 64–73.
  27. Hong, C.; Cheng, F.; Chen, C. Optimal control for variable-speed wind generation systems using General Regression Neural Network. Int. J. Electr. Power Energy Syst. 2014, 60, 14–23.
  28. Nose-Filho, K.; Plasencia Lotufo, A.D.; Minussi, C.R. Short-term multinodal load forecasting using a modified general regression neural network. IEEE Trans. Power Deliv. 2011, 26, 2862–2869.
  29. Oscar, T.P. General regression neural network model for behavior of Salmonella on chicken meat during cold storage. J. Food Sci. 2014, 79, 978–987.
  30. Oscar, T.P. General regression neural network and Monte Carlo simulation model for survival and growth of Salmonella on raw chicken skin as a function of serotype, temperature, and time for use in risk assessment. J. Food Prot. 2009, 72, 2078–2087.
  31. Kuang, X.; Xu, L.; Huang, Y.; Liu, F. Real-time forecasting for short-term traffic flow based on general regression neural network. In Proceedings of the 8th World Congress on Intelligent Control and Automation, Jinan, China, 7–9 July 2010; pp. 2776–2780.
  32. Celikoglu, H.B.; Dell’Orco, M. General regression neural network method for delay modeling in dynamic network loading. In Proceedings of the 6th International Conference on Traffic and Transportation Studies, Nanjing, China, 5–7 August 2008; pp. 352–362.
  33. Niwa, T. Using general regression and probabilistic neural networks to predict human intestinal absorption with topological descriptors derived from two-dimensional chemical structures. J. Chem. Inf. Comput. Sci. 2003, 43, 113–119.
  34. Yap, C.W.; Chen, Y.Z. Quantitative structure-pharmacokinetic relationships for drug distribution properties by using general regression neural network. J. Pharm. Sci. 2005, 94, 153–168.
  35. Jia, K.; Liang, S.; Liu, S.; Li, Y.; Xiao, Z.; Yao, Y.; Jiang, B.; Zhao, X.; Wang, X.; Xu, S.; et al. Global land surface fractional vegetation cover estimation using general regression neural networks from MODIS surface reflectance. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4787–4796.
  36. Xiao, Z.; Liang, S.; Wang, J.; Chen, P.; Yin, X.; Zhang, L.; Song, J. Use of general regression neural networks for generating the GLASS leaf area index product from time-series MODIS surface reflectance. IEEE Trans. Geosci. Remote Sens. 2014, 52, 209–223.
  37. Pandey, A.; Thapa, K.B.; Prasad, R.; Singh, K.P. General regression neural network and radial basis neural network for the estimation of crop variables of lady finger. J. Indian Soc. Remote Sens. 2012, 40, 709–715.
  38. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033.
  39. Du, Y.; Zhang, Y.; Ling, F.; Wang, Q.; Li, W.; Li, X. Water bodies’ mapping from Sentinel-2 imagery with modified normalized difference water index at 10-m spatial resolution produced by sharpening the SWIR band. Remote Sens. 2016, 8, 354.
  40. Fisher, A.; Flood, N.; Danaher, T. Comparing Landsat water index methods for automated water classification in eastern Australia. Remote Sens. Environ. 2016, 175, 167–182.
  41. Liu, C.; Frazier, P.; Kumar, L. Comparative assessment of the measures of thematic classification accuracy. Remote Sens. Environ. 2007, 107, 606–616.
  42. Blakey, T.; Melesse, A.; Hall, M.O. Supervised classification of benthic reflectance in shallow subtropical waters using a generalized pixel-based classifier across a time series. Remote Sens. 2015, 7, 5098–5116.
  43. Moller, M.; Birger, J.; Gidudu, A.; Glasser, C. A framework for the geometric accuracy assessment of classified objects. Int. J. Remote Sens. 2013, 34, 8685–8698.
Figure 1. An illustration of SMUF (scale = 5). (a) Fraction image of urban flooding; (b) A possible SMUF result; (c) Corresponding discrete encoding of the central mixed pixel.
Figure 2. General regression neural network (GRNN)-SMUF architecture (scale = 5).
Figure 3. Experimental images of the two study areas. (a) Brisbane city, Australia; (b) Wuhan city, China.
Figure 4. Comparisons of SMUF results for Brisbane city (scale = 5). (a) Reference image (500 × 500 pixels); (b) spatial attraction model (SAM)-SMUF; (c) standard back-propagation neural network (SBPNN)-SMUF; (d) Bayesian regulation BPNN (BRBPNN)-SMUF; (e) GRNN-SMUF; (f) zoomed images (50 × 50 pixels).
Figure 5. Comparisons of SMUF results for Wuhan city (scale = 5). (a) Reference image (500 × 500 pixels); (b) SAM-SMUF; (c) SBPNN-SMUF; (d) BRBPNN-SMUF; (e) GRNN-SMUF; (f) zoomed images (50 × 50 pixels).
Figure 6. Super-resolution mapping accuracy of GRNN-SMUF in relation to the spread parameter (SP).
Figure 7. Super-resolution mapping accuracy of GRNN-SMUF in relation to the percentage of training samples (TS).
Table 1. Quantitative assessments of different SMUF methods. KC, Kappa coefficient; APA, average producer’s accuracy; AUA, average user’s accuracy.

                Study Area 1                       Study Area 2
Methods         OA (%)  KC     APA (%)  AUA (%)   OA (%)  KC     APA (%)  AUA (%)
SAM-SMUF        83.0    0.500  71.5     83.4      83.9    0.558  75.5     81.9
SBPNN-SMUF      78.7    0.406  68.4     73.7      79.5    0.435  69.8     75.2
BRBPNN-SMUF     84.6    0.573  76.1     83.0      84.9    0.598  78.2     82.3
GRNN-SMUF       85.8    0.603  77.3     85.3      86.1    0.628  79.5     84.2
Table 2. Super-resolution mapping accuracy of GRNN-SMUF in relation to the spread parameter (SP).

SP      OA (%)  KC     APA (%)  AUA (%)
0.05    84.3    0.588  78.3     80.9
0.10    85.5    0.610  78.7     83.1
0.15    85.5    0.602  77.7     84.0
0.20    85.8    0.603  77.3     85.3
0.25    85.5    0.587  76.1     85.7
0.30    85.3    0.578  75.5     85.9
0.35    85.1    0.570  74.9     86.1
0.40    84.8    0.556  74.1     86.5
0.45    84.4    0.544  73.4     86.1
0.50    84.0    0.530  72.8     85.3
Table 3. Super-resolution mapping accuracy of GRNN-SMUF in relation to the percentage of training samples (TS).

TS (%)  OA (%)  KC     APA (%)  AUA (%)
10      83.2    0.540  74.9     80.3
20      84.6    0.568  75.7     83.3
30      85.8    0.603  77.3     85.3
40      86.6    0.629  78.7     86.1
50      87.0    0.640  79.2     86.9
60      87.7    0.659  80.0     87.8
70      88.0    0.670  80.6     88.2
80      88.3    0.676  80.9     88.6
90      88.6    0.684  81.2     89.2
100     88.8    0.688  81.2     89.9
