Article

A Synergic Use of Sentinel-1 and Sentinel-2 Imagery for Complex Wetland Classification Using Generative Adversarial Network (GAN) Scheme

1 Civil Engineering Department, Faculty of Engineering, University of Karabük, Karabük 78050, Turkey
2 Department of Electrical and Computer Engineering, Memorial University of Newfoundland, St. John’s, NL A1B 3X5, Canada
3 C-CORE, 1 Morrissey Road, St. John’s, NL A1B 3X5, Canada
4 The Canada Centre for Mapping and Earth Observation, Ottawa, ON K1S 5K2, Canada
5 Department of Environmental Resources Engineering, College of Environmental Science and Forestry (SUNY ESF), State University of New York, Syracuse, NY 13210, USA
* Author to whom correspondence should be addressed.
Water 2021, 13(24), 3601; https://doi.org/10.3390/w13243601
Submission received: 1 November 2021 / Revised: 30 November 2021 / Accepted: 13 December 2021 / Published: 15 December 2021
(This article belongs to the Special Issue Mapping and Monitoring of Wetlands)

Abstract: Due to anthropogenic activities and climate change, many natural ecosystems, especially wetlands, are being lost or are changing at a rapid pace. Over the last decade, increasing attention has been paid to developing new tools and methods for mapping and classifying wetlands using remote sensing. At the same time, advances in artificial intelligence and machine learning, particularly deep learning models, have provided opportunities to advance wetland classification methods. However, deep and very deep algorithms require large numbers of training samples, which are costly, logistically demanding, and time-consuming to collect. As such, in this study, we propose a Deep Convolutional Neural Network (DCNN) that combines a modified architecture of the well-known AlexNet DCNN with a Generative Adversarial Network (GAN) for the generation and classification of Sentinel-1 and Sentinel-2 data. Applied to an area of approximately 370 sq. km in the Avalon Peninsula, Newfoundland, the proposed model achieved an average accuracy of 92.30% and F-1 scores of 0.82, 0.85, 0.87, 0.89, and 0.95 for the recognition of swamp, fen, marsh, bog, and shallow water, respectively. Moreover, the proposed DCNN model improved the F-1 scores of the bog, marsh, fen, and swamp wetland classes by 4%, 8%, 11%, and 26%, respectively, compared to the original AlexNet CNN. These results reveal that the proposed model is highly capable of generating and classifying Sentinel-1 and Sentinel-2 wetland samples and can be used for large-extent classification problems.

1. Introduction

Wetlands have been identified as among the most valuable ecosystems on Earth for both fauna and flora in recent decades. Their functions are expected to provide critical support for at least seven of the United Nations’ 17 core Sustainable Development Goals [1]. Wetlands are regions that are permanently or intermittently inundated with fresh, brackish, or salt water, including marine water less than six meters deep at low tide, whether artificially or naturally [1]. Water storage and purification, coastline protection, carbon and other nutrient processing, food security, and the support of a huge biodiversity of plants and animals are some of the significant functions of wetlands, depending on the wetland type [1,2]. Despite their importance, wetlands are declining at a rate greater than any other environment, owing primarily to global climate change as well as anthropogenic activities (e.g., urbanization and industrialization) [2]. Although wetlands are widely recognized for providing a wide range of ecological services, they are subject to extensive land-use change, pollution, and agricultural drainage, among other pressures that threaten their extent and viability. Complete wetland inventories and subsequent monitoring capabilities to determine status and trends are essential, as they provide the foundation for directing effective evaluation, monitoring, and management of wetlands [3]. As such, large-scale monitoring and classification of distinct wetland types is critical for preventing further loss and for implementing and evaluating preservation policies [4,5,6,7,8,9,10].
Different wetland classification methodologies have been proposed in response to various information requirements. The Canadian National Wetlands Working Group, for instance, distinguishes five wetland categories: swamps, bogs, fens, marshes, and shallow water/ponds [8,11]. Although the need to inventory and evaluate wetlands is generally understood, the methodology and data resources utilized differ depending on the area of interest, the financial and human resources, and the quality of information requested [9]. Wetland monitoring with field-based tools (e.g., land surveying) is informative; however, it is time-consuming, logistically demanding, and expensive to implement over wide or remote areas. Because of their reduced costs and wider spatial coverage, wetland inventory and monitoring therefore usually rely on remote sensing data and techniques [12,13,14]. However, because of their intrinsic dynamism and natural range of change, wetlands are difficult to classify accurately using remotely sensed data [13]. Water levels vary substantially from wetland to wetland, year to year, and even season to season [15], requiring remotely sensed data with a higher temporal resolution than is normally necessary to map other, less dynamic land cover types (i.e., traditional land use/land cover mapping). Considering their responsiveness to different properties of wetland vegetation, prior studies reported successful wetland mapping by incorporating multi-source remote sensing data obtained from optical and synthetic aperture radar (SAR) sensors [6,16,17,18,19]. Although hyperspectral data provide the rich spectral information necessary for identifying spectrally comparable wetlands (e.g., bog and fen), this approach is often impracticable due to the high cost of the data and their difficult and limited availability [16]. Wetland mapping with multi-spectral remote sensing data is more feasible, owing to the greater accessibility and availability of such data [4,20,21,22]. The synergistic combination of Sentinel-1 and Sentinel-2 has proven its superiority over single-source imaging techniques for wetland mapping [23]. However, due to the intrinsic complexity of wetlands (e.g., the similar spectral reflectance of different wetland classes in optical and SAR images), satellite data alone are insufficient. As such, it is necessary to use and develop advanced machine learning (ML) methods for complex wetland monitoring and classification.
Conventional ML classification comprises two stages: feature extraction and classification. In the feature extraction stage, spatial, spectral, and temporal satellite data are translated into feature vectors. These derived attributes are then used to train and execute the ML model in the classification stage [12,24,25,26]. This manual feature engineering makes the success of ML algorithms strongly dependent on the quality of the feature selection process. Deep learning (DL) algorithms, by contrast, learn through representation rather than from experimental feature design. Internal feature representations are learned automatically; hence, these methods are regarded as highly efficient for image classification [27,28,29,30,31]. The major reason for such efficiency is that, compared to shallow ML models, DL models can typically capture more generalized trends, and they achieve higher performance because feature extraction is included in the optimization procedure [5]. It should be highlighted that while DL models accomplish impressive accuracy, they demand more training data and more advanced computational resources than shallow ML methods. This conflicts with the realities of wetland classification, where data acquisition is costly and time-consuming. The issue can be addressed by two solutions: transfer learning [32,33,34] and Generative Adversarial Networks (GANs) [35,36,37,38,39,40]. These two solutions are described in some detail in the next section of this paper. As such, this research addresses the increased demand for training data required by DL methods by developing a wetland classification model that integrates the well-known AlexNet CNN with a GAN. This paper proposes a novel technique for remote sensing image classification to increase the classification accuracy of complex wetlands. To the best of our knowledge, GANs have not previously been used for wetland classification. Once developed, the model is applied to wetland classification using a synergic integration of Sentinel-1 and Sentinel-2 satellite observations of the Avalon Peninsula in Newfoundland, Canada.

2. Methods

2.1. Study Area and Satellite Data

The research region is the Avalon area of Newfoundland, Canada, located in the easternmost part of the province (Figure 1). The Avalon Peninsula is about 9220 square kilometers in size, with pleasant to warm summers and mild winters. It is home to the city of St. John’s, which has a population of approximately 226,000 people. Wetland ecosystems and other natural environments can be observed in the area. The Avalon study area contains all of the Canadian Wetland Classification System (CWCS) wetland classes, including bog, fen, marsh, swamp, and shallow water, with peatlands (i.e., bog and fen) being the most prominent. The ground truth samples were gathered during the summers of 2015 to 2017 by a group of wetland scientists knowledgeable about the research area. Potential wetland locations were identified using Google Earth and RapidEye imagery prior to training data collection. During the field data collection, global positioning system (GPS) coordinates, notes, and photographs were used as references for the better delineation of wetland polygons. It should be noted that we selected the Avalon site as our study area because we are familiar with its spatial distribution of wetlands and have precise ground truth data for this region.
Table 1 shows the total numbers of training and testing pixels. A stratified random sampling technique, implemented in the Python programming language, was used to partition the ground truth (reference) data into 70 percent training and 30 percent testing subsets.
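As an illustration, such a stratified split can be produced with scikit-learn. The following is a minimal sketch, not the authors’ actual pipeline; the file names and array shapes are hypothetical:

```python
# Minimal sketch of a stratified 70/30 split with scikit-learn.
# `patches` and `labels` are hypothetical arrays of image patches and
# their class labels; the file names are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split

patches = np.load("avalon_patches.npy")  # shape: (n_samples, h, w, n_features)
labels = np.load("avalon_labels.npy")    # shape: (n_samples,)

X_train, X_test, y_train, y_test = train_test_split(
    patches, labels,
    test_size=0.30,      # 30% held out for testing
    stratify=labels,     # preserve per-class proportions
    random_state=42,
)
```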
The optical imagery used in this study is a Sentinel-2A level-1C image captured on 5 June 2020. In addition to the optical image bands, various spectral indices were added to improve classification accuracy, as recommended by prior wetland studies [16,41]. For the SAR imagery, a dual-polarized (VV/VH) level-1 ground range detected (GRD) Sentinel-1 image with an ascending orbit, captured on 6 June 2020, was utilized. Moreover, two dual-polarized (HH/HV) images with descending orbits, captured on 4 June 2020, were employed. Various polarimetric features were derived in addition to the normalized backscattering coefficients retrieved from the SAR images. See Table 2 for details of the image features used in the analyses for both the optical and SAR data; a sketch of the spectral index computations follows.
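For illustration, the Table 2 spectral indices can be computed from the Sentinel-2 reflectance bands as sketched below. The band-to-index assignments (e.g., B4 as red, B5 as the red-edge band, B8 as NIR, B11 as SWIR) are our assumptions, as the paper lists the formulas but not the exact band choices:

```python
# Sketch of the Table 2 spectral indices from Sentinel-2 reflectance bands.
# Band assignments (B4 = red, B5 = red edge, B8 = NIR, B11 = SWIR) are
# assumptions for illustration.
import numpy as np

def wetland_indices(b4_red, b5_red_edge, b8_nir, b11_swir):
    eps = 1e-6  # guard against division by zero
    ndvi = (b8_nir - b4_red) / (b8_nir + b4_red + eps)
    evi = 2.5 * (b8_nir - b4_red) / (b8_nir + 2.4 * b4_red + 1.0)
    dvi = b8_nir - b4_red
    rendvi = (b8_nir - b5_red_edge) / (b8_nir + b5_red_edge + eps)
    ndwi = (b8_nir - b11_swir) / (b8_nir + b11_swir + eps)
    return np.stack([ndvi, evi, dvi, rendvi, ndwi], axis=-1)
```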
Using the sen2cor tool of the SNAP software [43], the Sentinel-2 optical image was atmospherically and radiometrically corrected. SNAP was also used to retrieve geocoded backscatter intensity images from the three Sentinel-1 images. The orbital metadata were then updated, followed by radiometric calibration of the Sentinel-1 imagery. Next, the unitless backscattering intensity images were converted into normalized backscatter coefficients (σ⁰) in dB. A Lee Sigma filter with a 7 × 7 window was then applied to reduce the inherent speckle noise in the SAR imagery. Finally, the imagery was geometrically rectified using the range-Doppler terrain correction approach.
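Two of these steps can be illustrated in plain Python. The sketch below shows the linear-to-dB conversion and a basic Lee speckle filter as a stand-in for SNAP’s Lee Sigma operator; it is an approximation, and orbit correction, calibration, and terrain correction are omitted:

```python
# Illustrative numpy/scipy stand-in for two of the SNAP preprocessing steps:
# linear backscatter to dB, and a simple 7x7 Lee filter (the paper used
# SNAP's Lee Sigma operator; this basic Lee filter is an approximation).
import numpy as np
from scipy.ndimage import uniform_filter

def to_db(sigma0_linear):
    """Convert sigma-nought from linear power to dB."""
    return 10.0 * np.log10(np.clip(sigma0_linear, 1e-10, None))

def lee_filter(img, size=7):
    """Basic Lee filter: local mean plus variance-weighted residual."""
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img * img, size)
    local_var = local_sq_mean - local_mean * local_mean
    weight = local_var / (local_var + img.var())
    return local_mean + weight * (img - local_mean)
```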

2.2. Methods

2.2.1. Generative Adversarial Network

As stated in the introduction, producing new ground truth data in remote sensing, specifically for wetland mapping, is time-consuming, labor-intensive, and expensive. On the other hand, deep learning approaches have been successfully implemented and deployed in several domains of remote sensing, such as object detection [44,45] and classification [46,47]; however, they require large quantities of training data, which conflicts with the current situation in complex wetland mapping. This problem can be addressed by utilizing the innovative GAN architecture, which was proposed by Goodfellow et al. and revolutionized the deep learning field [48]. The GAN design consists of two networks, a generator and a discriminator, as shown in Figure 2.
As shown in Figure 2, the generator network produces new synthetic samples from a random noise vector, while the discriminator aims to differentiate between fake and real data. The GAN system is thus trained as a game: the generator builds increasingly realistic fake data, and the discriminator network attempts to discern real from false (i.e., generated) data.
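To make this adversarial game concrete, the following is a minimal TensorFlow/Keras sketch of one GAN training step. The network shapes are illustrative placeholders, not the paper’s configuration (the conditional design actually used is described in Section 2.2.3), although the 0.0002 learning rate matches the value reported there:

```python
# Minimal GAN training step: the generator maps noise to fake patches and the
# discriminator scores real vs. fake; shapes here are placeholders.
import tensorflow as tf

noise_dim = 100
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(7 * 7 * 64, activation="relu", input_shape=(noise_dim,)),
    tf.keras.layers.Reshape((7, 7, 64)),
    tf.keras.layers.Conv2DTranspose(1, 5, padding="same", activation="tanh"),
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(7, 7, 1)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # real/fake logit
])
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(2e-4)
d_opt = tf.keras.optimizers.Adam(2e-4)

@tf.function
def train_step(real_patches):
    noise = tf.random.normal([tf.shape(real_patches)[0], noise_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(noise, training=True)
        real_logits = discriminator(real_patches, training=True)
        fake_logits = discriminator(fake, training=True)
        # Discriminator: push real -> 1 and fake -> 0.
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator: fool the discriminator (fake -> 1).
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
```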

2.2.2. AlexNet

AlexNet was proposed by Krizhevsky et al. [49] for the ImageNet recognition competition as a classical, high-performing CNN structure. AlexNet included several contributions and innovations, such as introducing the ReLU activation function and the then-novel dropout approach, which helped avoid over-fitting. The aim was to increase AlexNet’s validation accuracy while also increasing its generalization capability. AlexNet opened a new window for emerging artificial intelligence technologies and provided a large space for engineering and scientific research. The architecture of AlexNet is shown in Figure 3.
It should be noted that, to reduce the complexity of the original AlexNet architecture, our developed CNN model replaces the kernels of the intermediate convolutional layers with 1 × 1 kernels; the kernel sizes of the first (11 × 11) and last (3 × 3) convolutional layers were left unchanged. A sketch of this modification is given below.
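A Keras sketch of the described lightening follows. The filter counts follow the original AlexNet; the input shape, the single fully connected layer, and the class count are placeholders rather than the paper’s exact settings:

```python
# Lightened AlexNet sketch: first (11x11) and last (3x3) convolutions kept,
# intermediate kernels shrunk to 1x1 as described in the text. Input shape
# and class count are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers

def light_alexnet(input_shape=(64, 64, 22), n_classes=8):
    return tf.keras.Sequential([
        layers.Conv2D(96, 11, strides=4, padding="same", activation="relu",
                      input_shape=input_shape),
        layers.MaxPooling2D(3, strides=2, padding="same"),
        layers.Conv2D(256, 1, activation="relu"),   # was 5x5 in AlexNet
        layers.MaxPooling2D(3, strides=2, padding="same"),
        layers.Conv2D(384, 1, activation="relu"),   # was 3x3
        layers.Conv2D(384, 1, activation="relu"),   # was 3x3
        layers.Conv2D(256, 3, padding="same", activation="relu"),  # kept 3x3
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
```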

2.2.3. Proposed Generative Adversarial Network Model

Sample generation for non-wetlands (e.g., urban areas) does not require specialist knowledge, whereas sample creation for wetland classes demands biologists’ expertise and field data collection. This is because, unlike non-wetlands, wetlands do not have strict boundaries, and several of them may contain similar vegetation species and patterns. As a result, we suggest a method that focuses primarily on creating new samples for the wetlands. The primary purpose is to generate samples for the classes that have lower numbers of ground truth data. Based on the GAMO [50] and 3D-HyperGAMO [51] models, we propose a model for the generation of Sentinel-1 and Sentinel-2 image samples for the classification of wetlands and non-wetlands in the Avalon pilot site (Figure 4).
It should also be emphasized that, as in both GAMO and 3D-HyperGAMO, we employed a conditional map unit to produce samples from a random noise vector for fake wetland data generation, but only for the classes with fewer training data. The conditional map unit’s output was then flattened to create a conditional feature vector with a length of 196. According to the labeled data, the product of the conditional map unit was then attached to the patch generator [50,51] (Table 3; a sketch of this unit is given at the end of this subsection). We used LeakyReLU (see Equation (1)) as the activation function and 2-dimensional transposed convolutional layers in the structure of the conditional map unit. In our classification model, as seen in Figure 4, real Sentinel-1 and Sentinel-2 sample patches are extracted from the features of the optical and SAR images. The generator network, in turn, generates fake/synthetic Sentinel-1 and Sentinel-2 samples from a random noise vector, which are sent to the discriminator network; the conditional map unit specifies the number of synthetic samples required per class. As there are seven minor classes (classes with fewer training data than the major class with the highest number of training samples), the generator network only creates synthetic samples for those seven minor classes. The discriminator network, meanwhile, tries to distinguish the real samples (i.e., features extracted from Sentinel-1 and Sentinel-2 data) from the fake ones created by the generator. This procedure was repeated until the generator created synthetic patch samples realistic enough that the discriminator network could hardly recognize any difference between the real and fake sample patches. Then, to train the classifier, the synthetic and real data were sent to the AlexNet network (i.e., the AlexNet classifier).
LeakyReLU(x) = { x, if x ≥ 0; negative_slope · x, otherwise }    (1)
As mentioned before, we used a light version of the AlexNet architecture (with reduced kernel sizes) for the discriminator network and the classifier. For training the proposed deep neural network, we used a noise dimension of 100, a batch size of 64, and a learning rate of 0.0002.
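Following Table 3, the conditional map unit might be implemented as in the Keras sketch below. The strides are our inference (a stride of 2 in the first transposed convolution upsamples 7 × 7 to 14 × 14, so that the flattened output has length 196, as stated above); the paper gives the layer types but not the strides:

```python
# Keras sketch of the conditional map unit in Table 3. Strides are inferred
# so that the flattened output has length 196 (14 x 14 x 1); Table 3 lists
# channels-first shapes, while Keras uses channels-last.
import tensorflow as tf
from tensorflow.keras import layers

noise_dim = 100

conditional_map_unit = tf.keras.Sequential([
    layers.Dense(256 * 7 * 7, input_shape=(noise_dim,)),
    layers.BatchNormalization(),
    layers.LeakyReLU(),
    layers.Reshape((7, 7, 256)),
    layers.Conv2DTranspose(128, 5, strides=2, padding="same"),  # 7x7 -> 14x14
    layers.BatchNormalization(),
    layers.LeakyReLU(),
    layers.Conv2DTranspose(1, 5, strides=1, padding="same"),    # 14x14x1
    layers.BatchNormalization(),
    layers.LeakyReLU(),
    layers.Flatten(),  # conditional feature vector of length 196
])
```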

2.2.4. Accuracy Assessment

Wetland classification results were evaluated based on the average accuracy, precision, recall, and F1-score statistical metrics (Equations (2)–(5)).
Precision = True positive / (True positive + False positive)    (2)
Recall = True positive / (True positive + False negative)    (3)
F1-score = 2 · (Precision · Recall) / (Precision + Recall)    (4)
Average Accuracy = (1/n) Σ_{i=1}^{n} Recall_i    (5)
where n is the number of classes.
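Computed from a confusion matrix, Equations (2)–(5) reduce to a few lines of numpy; the sketch below assumes rows are reference classes and columns are predicted classes:

```python
# Per-class precision, recall, F1, and average accuracy (Equations (2)-(5)).
# cm[i, j] = number of pixels of true class i predicted as class j.
import numpy as np

def per_class_metrics(cm):
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)        # TP / (TP + FP), per class
    recall = tp / cm.sum(axis=1)           # TP / (TP + FN), per class
    f1 = 2 * precision * recall / (precision + recall)
    average_accuracy = recall.mean()       # mean of per-class recalls
    return precision, recall, f1, average_accuracy
```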

3. Results

3.1. Statistical Comparison of Developed Models

Based on the achieved results, the proposed model, by generating several synthetic samples for the minor classes, reached a relatively high average accuracy (i.e., 92.30%). In addition, the developed DCNN model achieved high F-1 scores of 0.82, 0.85, 0.87, 0.89, and 0.95 for the recognition of swamp, fen, marsh, bog, and shallow water, respectively, using the extracted features of the Sentinel-1 and Sentinel-2 images. As seen in Table 4, the inclusion of samples from the Sentinel-1 image increased the F-1 scores by 2%, 3%, 5%, 7%, and 10% for the classification of shallow water, bog, marsh, swamp, and fen wetlands, respectively. Moreover, using the features of the Sentinel-1 image improved the average accuracy of the proposed DCNN method by approximately 4.4%. The highest F-1, recall, and precision values were obtained for the recognition of bog and shallow water compared to the other wetland classes of fen, marsh, and swamp. This can partly be explained by their larger numbers of real training samples (Table 4). On the other hand, the proposed DCNN model outperformed the original AlexNet DCNN by 7.26% in terms of average accuracy. In addition, the F-1 scores of the bog, marsh, fen, and swamp wetland classes were improved by 4%, 8%, 11%, and 26%, respectively, compared to the original AlexNet classifier.
Based on the confusion matrices shown in Table 5, the proposed model was highly capable of differentiating the complex wetlands. The highest level of confusion was for the fen class: several patches of fen were incorrectly classified as bog, marsh, and swamp. This is because wetland classes share considerable resemblance in vegetation types, particularly bog and fen, so their spectral reflectance in optical imagery is comparable. For instance, in terms of tree dominance, marsh and upland forest are quite similar. This type of confusion is common in wetland classifications. From Table 5, it is clear that the confusion between wetland classes noticeably decreased when the extracted features of the Sentinel-1 image were added to the proposed DCNN model. For instance, the high level of confusion between the upland and swamp classes was substantially decreased by the model that used both Sentinel-1 and Sentinel-2 features compared to the model that only utilized Sentinel-2 training samples. In addition, the generation of synthetic samples by the GAN network of the proposed DCNN algorithm substantially decreased the confusion between wetland classes, specifically marsh and fen, compared to the original AlexNet classifier (see Table 5).

3.2. Spatial Distribution of Wetlands in the Avalon

Wetland maps showing the spatial distribution of the bog, fen, marsh, swamp, and shallow water wetlands, as well as their areal extents, are presented in this section (see Figure 5, Figure 6 and Figure 7). Among the achieved classification maps, the best visual result was obtained by the proposed DCNN model using the extracted Sentinel-1 and Sentinel-2 data features. For instance, the map produced by the AlexNet network over-classified uplands and under-classified swamp wetlands.
Based on the obtained results of the proposed DCNN model (using the extracted features of the Sentinel-1 and Sentinel-2 images), the marsh, swamp, bog, fen, and shallow water wetland classes covered approximately 65.78, 50.81, 23.30, 16, and 14.44 km², respectively, in the study area (see Figure 7).

4. Discussion

To better understand the contribution of each extracted feature of the Sentinel-1 and Sentinel-2 images, the variable importance of several samples was measured. We ran the random forest classifier [52] 10 times for this analysis and recorded the minimum, maximum, and average importance values, as presented in Figure 8. As expected, the optical bands and indices were more effective for classifying wetland and non-wetland classes than the SAR features. Based on the Gini index for the prediction of the test data, the Normalized Difference Vegetation Index (NDVI) was the most influential variable, while σ⁰VV was the least effective. As reported by previous studies, NDVI is highly efficient for the classification of vegetated lands, specifically wetlands [16]. NDVI is one of the most used and well-known vegetation indices for characterizing vegetation phenology, and it suppresses several sources of noise, including cloud shadows, differences in sun illumination, atmospheric attenuation, and topographic variation. Moreover, NDVI has been reported as an ideal index for discriminating wetland and non-wetland classes, and the obtained results agree with previous research [6]. Notably, the medians of the normalized backscattering coefficients were more effective than the raw coefficients for wetland and non-wetland classification, likely because the median filtering reduces noise. Based on this analysis, the median 3 × 3 (σ⁰VH) was the most effective feature extracted from the SAR imagery. This is because σ⁰VH observations are highly efficient for recognizing vegetated land, owing to their cross-polarized nature, which is sensitive to vegetation canopies. The obtained results are in line with the study by [23], where σ⁰VH observations had high importance for the classification of low-, medium-, and high-vegetation areas.
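A minimal sketch of this importance analysis with scikit-learn is given below; the feature matrix and labels are placeholders, and the forest settings are assumptions, as the paper does not report them:

```python
# Ten random forest runs; min/max/mean Gini importance per feature.
# X (n_samples, n_features) and y are placeholders for the Table 2 features
# and class labels; n_estimators is an assumed setting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gini_importance_stats(X, y, n_runs=10):
    runs = []
    for seed in range(n_runs):
        rf = RandomForestClassifier(n_estimators=100, random_state=seed)
        rf.fit(X, y)
        runs.append(rf.feature_importances_)  # Gini (impurity) importances
    runs = np.array(runs)
    return runs.min(axis=0), runs.max(axis=0), runs.mean(axis=0)
```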
As reported by previous research [22,23], the synergic use of Sentinel-1 and Sentinel-2 data is superior to single-source optical imagery for complex vegetation classification, and our results confirm the effectiveness and contribution of the Sentinel-1 features for complex wetland classification. In addition, the normalized spectral and backscattering intensity values of the bog, fen, marsh, swamp, and shallow water classes were extracted from several samples (see Figure 9). It is clear that the optical features are more distinguishable for wetland classification; nevertheless, the inclusion of the Sentinel-1 features noticeably improves the per-class accuracies of wetlands, as reported in previous studies [22,23] and confirmed by the results achieved in this research.
In terms of computational cost, it took approximately 60 min to train the proposed DCNN model to reach a high average accuracy (i.e., 92.30%). It is worth highlighting that the original AlexNet CNN has around 60 million parameters; as a result, it requires a large amount of training data. In our proposed CNN model, we therefore reduced the intermediate kernel sizes to 1 × 1 while adopting the original AlexNet architecture, to reduce the computational cost. As shown in Table 1, the class with the highest number of training pixels is deep water, with 6928 samples. Thus, the generator network of the GAN model produces new fake samples for the other wetland and non-wetland classes to bring each up to 6928 training samples. Considering that we still had a low number of real training samples and the complexities of the wetlands, the proposed model achieved accurate results for the generation and classification of the wetlands, specifically for bog and shallow water, with F-1 scores of 0.89 and 0.95, respectively. Based on the achieved results, and considering the intrinsic complexity of wetlands, such as the similar spectral reflectance of different wetland classes in optical images, the generation of synthetic data with the GAN network did not increase the rate of errors. Rather, a deep CNN network such as AlexNet with a limited number of real training samples could achieve a high average accuracy (i.e., 92.30%) while correctly differentiating the complex wetland classes. As such, the GAN model was efficient at producing high-quality training samples for the optical and SAR images of Sentinel-1 and -2. The results achieved in this research therefore open new opportunities for creating high-quality wetland ground truth data using advanced computer science algorithms, such as GAN networks. Combining this capability with big data and cloud computing will allow the production of better wetland maps to facilitate monitoring applications. It should also be noted that our experiments used an Intel i7-10750H central processing unit (CPU) at 2.60 GHz, an NVIDIA GeForce RTX 2070 graphical processing unit (GPU), and 16 GB of random access memory (RAM) on 64-bit Windows 11. We used the Python TensorFlow library for the implementation of our methods.

5. Conclusions

Advances in machine learning algorithms, specifically the development of deep learning algorithms, have opened new windows for the remote sensing research community. One problem with current DL methods, in the context of remote sensing image classification, is that they require a high number of samples in the training phase, and producing the ground truth data used for training and testing the algorithms is costly, logistically demanding, and time-consuming. To overcome this problem, we developed a model that generates synthetic Sentinel-1 and Sentinel-2 training samples using a GAN and a modified architecture of the well-known AlexNet DCNN. In our model, we used a conditional map unit that only allows the generation of samples for the classes with a low number of training data. As such, we tackle the issue of imbalanced data in wetland classification, where far more ground truth data exist for non-wetlands than for wetlands. Based on the results, the developed model obtained high F-1 scores of 0.82, 0.85, 0.87, 0.89, and 0.95 for the classification of swamp, fen, marsh, bog, and shallow water, respectively. Moreover, the proposed DCNN model improved the F-1 scores of the bog, marsh, fen, and swamp wetland classes by 4%, 8%, 11%, and 26%, respectively, compared to the original AlexNet DCNN classifier. This model has high potential for large-area classification applications, where the availability of ground sample data is an even more serious problem.

Author Contributions

Conceptualization, A.J. and M.M.; methodology, A.J. and M.M.; formal analysis, A.J.; writing—original draft preparation, A.J. and M.M.; writing—review and editing, M.M., F.M., B.B. and B.S.; supervision, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Convention Ramsar. The 4th Strategic Plan 2016–2024; Ramsar Convention Secretariat: Gland, Switzerland, 2016. [Google Scholar]
  2. Board, M.A. Millennium Ecosystem Assessment; World Resources Institute: Washington, DC, USA, 2005. [Google Scholar]
  3. Davidson, N.C. The Ramsar Convention on Wetlands. In The Wetland Book I: Structure and Function, Management and Methods; Springer Publishers: Dordrecht, The Netherlands, 2016. [Google Scholar]
  4. Jamali, A.; Mahdianpari, M.; Brisco, B.; Granger, J.; Mohammadimanesh, F.; Salehi, B. Wetland Mapping Using Multi-Spectral Satellite Imagery and Deep Convolutional Neural Networks: A Case Study in Newfoundland and Labrador, Canada. Can. J. Remote Sens. 2021, 47, 243–260. [Google Scholar] [CrossRef]
  5. Jamali, A.; Mahdianpari, M.; Brisco, B.; Granger, J.; Mohammadimanesh, F.; Salehi, B. Comparing Solo versus Ensemble Convolutional Neural Networks for Wetland Classification Using Multi-Spectral Satellite Imagery. Remote Sens. 2021, 13, 2046. [Google Scholar] [CrossRef]
  6. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Homayouni, S.; Gill, E. The First Wetland Inventory Map of Newfoundland at a Spatial Resolution of 10 m Using Sentinel-1 and Sentinel-2 Data on the Google Earth Engine Cloud Computing Platform. Remote Sens. 2019, 11, 43. [Google Scholar] [CrossRef] [Green Version]
  7. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Motagh, M. Random Forest Wetland Classification Using ALOS-2 L-Band, RADARSAT-2 C-Band, and TerraSAR-X Imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 13–31. [Google Scholar] [CrossRef]
  8. Li, Z.; Chen, H.; White, J.C.; Wulder, M.A.; Hermosilla, T. Discriminating Treed and Non-Treed Wetlands in Boreal Ecosystems Using Time Series Sentinel-1 Data. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 102007. [Google Scholar] [CrossRef]
  9. Fournier, R.A.; Grenier, M.; Lavoie, A.; Hélie, R. Towards a Strategy to Implement the Canadian Wetland Inventory Using Satellite Remote Sensing. Can. J. Remote Sens. 2007, 33, S1–S16. [Google Scholar] [CrossRef]
  10. Marton, J.M.; Creed, I.F.; Lewis, D.B.; Lane, C.R.; Basu, N.B.; Cohen, M.J.; Craft, C.B. Geographically Isolated Wetlands Are Important Biogeochemical Reactors on the Landscape. BioScience 2015, 65, 408–418. [Google Scholar] [CrossRef] [Green Version]
  11. National Wetlands Working Group. The Canadian Wetland Classification System; National Wetlands Working Group: Waterloo, ON, Canada, 1997. [Google Scholar]
  12. Rezaee, M.; Mahdianpari, M.; Zhang, Y.; Salehi, B. Deep Convolutional Neural Network for Complex Wetland Classification Using Optical Remote Sensing Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3030–3039. [Google Scholar] [CrossRef]
  13. Tiner, R.W. Wetlands: An Overview. In Remote Sensing of Wetlands: Applications and Advances; Tiner, R.W., Lang, M.W., Klemas, V.V., Eds.; CRC Press: Boca Raton, FL, USA, 2015; pp. 20–35. [Google Scholar]
  14. DeLancey, E.R.; Simms, J.F.; Mahdianpari, M.; Brisco, B.; Mahoney, C.; Kariyeva, J. Comparing Deep Learning and Shallow Learning for Large-Scale Wetland Classification in Alberta, Canada. Remote Sens. 2020, 12, 2. [Google Scholar] [CrossRef] [Green Version]
  15. Mitsch, W.J.; Gosselink, J.G. Wetlands; Wiley & Sons, Inc.: Hoboken, NJ, USA, 2007. [Google Scholar]
  16. Mahdianpari, M.; Granger, J.E.; Mohammadimanesh, F.; Salehi, B.; Brisco, B.; Homayouni, S.; Gill, E.; Huberty, B.; Lang, M. Meta-Analysis of Wetland Classification Using Remote Sensing: A Systematic Review of a 40-Year Trend in North America. Remote Sens. 2020, 12, 1882. [Google Scholar] [CrossRef]
  17. Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very Deep Convolutional Neural Networks for Complex Land Cover Mapping Using Multispectral Remote Sensing Imagery. Remote Sens. 2018, 10, 1119. [Google Scholar] [CrossRef] [Green Version]
  18. Cai, Y.; Li, X.; Zhang, M.; Lin, H. Mapping Wetland Using the Object-Based Stacked Generalization Method Based on Multi-Temporal Optical and SAR Data. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102164. [Google Scholar] [CrossRef]
  19. Fu, B.; Xie, S.; He, H.; Zuo, P.; Sun, J.; Liu, L.; Huang, L.; Fan, D.; Gao, E. Synergy of Multi-Temporal Polarimetric SAR and Optical Image Satellite for Mapping of Marsh Vegetation Using Object-Based Random Forest Algorithm. Ecol. Indic. 2021, 131, 108173. [Google Scholar] [CrossRef]
  20. Berhane, T.M.; Lane, C.R.; Wu, Q.; Autrey, B.C.; Anenkhonov, O.A.; Chepinoga, V.V.; Liu, H. Decision-Tree, Rule-Based, and Random Forest Classification of High-Resolution Multispectral Imagery for Wetland Mapping and Inventory. Remote Sens. 2018, 10, 580. [Google Scholar] [CrossRef] [Green Version]
  21. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of Machine-Learning Classification in Remote Sensing: An Applied Review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef] [Green Version]
  22. Jamali, A.; Mahdianpari, M.; Brisco, B.; Granger, J.; Mohammadimanesh, F.; Salehi, B. Deep Forest Classifier for Wetland Mapping Using the Combination of Sentinel-1 and Sentinel-2 Data. GIScience Remote Sens. 2021, 58, 1072–1089. [Google Scholar] [CrossRef]
  23. Slagter, B.; Tsendbazar, N.E.; Vollrath, A.; Reiche, J. Mapping Wetland Characteristics Using Temporally Dense Sentinel-1 and Sentinel-2 Data: A Case Study in the St. Lucia Wetlands, South Africa. Int. J. Appl. Earth Obs. Geoinf. 2020, 86, 102009. [Google Scholar] [CrossRef]
  24. Jamali, A. Improving Land Use Land Cover Mapping of a Neural Network with Three Optimizers of Multi-Verse Optimizer, Genetic Algorithm, and Derivative-Free Function. Egypt. J. Remote Sens. Space Sci. 2020, 24, 373–390. [Google Scholar] [CrossRef]
  25. Jamali, A. Land Use Land Cover Mapping Using Advanced Machine Learning Classifiers: A Case Study of Shiraz City, Iran. Earth Sci. Inform. 2020, 13, 1015–1030. [Google Scholar] [CrossRef]
  26. Moayedi, H.; Jamali, A.; Gibril, M.B.A.; Kok Foong, L.; Bahiraei, M. Evaluation of Tree-Base Data Mining Algorithms in Land Used/Land Cover Mapping in a Semi-Arid Environment through Landsat 8 OLI Image; Shiraz, Iran. Geomat. Nat. Hazards Risk 2020, 11, 724–741. [Google Scholar] [CrossRef]
  27. Korot, E.; Guan, Z.; Ferraz, D.; Wagner, S.K.; Zhang, G.; Liu, X.; Faes, L.; Pontikos, N.; Finlayson, S.G.; Khalid, H.; et al. Code-Free Deep Learning for Multi-Modality Medical Image Classification. Nat. Mach. Intell. 2021, 3, 288–298. [Google Scholar] [CrossRef]
  28. Algan, G.; Ulusoy, I. Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey. Knowl.-Based Syst. 2021, 215, 106771. [Google Scholar] [CrossRef]
  29. Jia, S.; Jiang, S.; Lin, Z.; Li, N.; Xu, M.; Yu, S. A Survey: Deep Learning for Hyperspectral Image Classification with Few Labeled Samples. Neurocomputing 2021, 448, 179–204. [Google Scholar] [CrossRef]
  30. Yuan, Y.; Wang, C.; Jiang, Z. Proxy-Based Deep Learning Framework for Spectral-Spatial Hyperspectral Image Classification: Efficient and Robust. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15. [Google Scholar] [CrossRef]
  31. Ren, Y.; Li, X.; Yang, X.; Xu, H. Development of a Dual-Attention U-Net Model for Sea Ice and Open Water Classification on SAR Images. IEEE Geosci. Remote Sens. Lett. 2021, 1–5. [Google Scholar] [CrossRef]
  32. Khan, M.A.; Akram, T.; Zhang, Y.-D.; Sharif, M. Attributes Based Skin Lesion Detection and Recognition: A Mask RCNN and Transfer Learning-Based Deep Learning Framework. Pattern Recognit. Lett. 2021, 143, 58–66. [Google Scholar] [CrossRef]
  33. Jiao, W.; Wang, Q.; Cheng, Y.; Zhang, Y. End-to-End Prediction of Weld Penetration: A Deep Learning and Transfer Learning Based Method. J. Manuf. Process. 2021, 63, 191–197. [Google Scholar] [CrossRef]
  34. Mishra, P.; Passos, D. Realizing Transfer Learning for Updating Deep Learning Models of Spectral Data to Be Used in New Scenarios. Chemom. Intell. Lab. Syst. 2021, 212, 104283. [Google Scholar] [CrossRef]
  35. Lin, J.; Li, Y.; Yang, G. FPGAN: Face de-Identification Method with Generative Adversarial Networks for Social Robots. Neural Netw. 2021, 133, 132–147. [Google Scholar] [CrossRef]
  36. Suh, S.; Lee, H.; Lukowicz, P.; Lee, Y.O. CEGAN: Classification Enhancement Generative Adversarial Networks for Unraveling Data Imbalance Problems. Neural Netw. 2021, 133, 69–86. [Google Scholar] [CrossRef]
  37. Zhang, H.; Song, Y.; Han, C.; Zhang, L. Remote Sensing Image Spatiotemporal Fusion Using a Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4273–4286. [Google Scholar] [CrossRef]
  38. Audebert, N.; Le Saux, B.; Lefevre, S. Generative Adversarial Networks for Realistic Synthesis of Hyperspectral Samples. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22 July 2018; pp. 4359–4362. [Google Scholar]
  39. Ji, S.; Wang, D.; Luo, M. Generative Adversarial Network-Based Full-Space Domain Adaptation for Land Cover Classification From Multiple-Source Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3816–3828. [Google Scholar] [CrossRef]
  40. Zhao, S.; Yang, S.; Gu, J.; Liu, Z.; Feng, Z. Symmetrical Lattice Generative Adversarial Network for Remote Sensing Images Compression. ISPRS J. Photogramm. Remote Sens. 2021, 176, 169–181. [Google Scholar] [CrossRef]
  41. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Brisco, B.; Gill, E. Full and Simulated Compact Polarimetry Sar Responses to Canadian Wetlands: Separability Analysis and Classification. Remote Sens. 2019, 11, 516. [Google Scholar] [CrossRef] [Green Version]
  42. Tucker, C.J. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef] [Green Version]
  43. Louis, J.; Debaecker, V.; Pflug, B.; Main-Knorn, M.; Bieniarz, J.; Mueller-Wilm, U.; Cadau, E.; Gascon, F. Sentinel-2 Sen2Cor: L2A Processor for Users; Spacebooks Online: Barcelona, Spain, 2016; pp. 1–8. [Google Scholar]
  44. Cheng, G.; Han, J. A Survey on Object Detection in Optical Remote Sensing Images. ISPRS J. Photogramm. Remote Sens. 2016, 117, 11–28. [Google Scholar] [CrossRef] [Green Version]
  45. Li, K.; Cheng, G.; Bu, S.; You, X. Rotation-Insensitive and Context-Augmented Object Detection in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2337–2348. [Google Scholar] [CrossRef]
  46. Ben Hamida, A.; Benoit, A.; Lambert, P.; Ben Amar, C. 3-D Deep Learning Approach for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434. [Google Scholar] [CrossRef] [Green Version]
  47. Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Fully Convolutional Neural Networks for Remote Sensing Image Classification. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10 July 2016; pp. 5071–5074. [Google Scholar]
  48. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680. [Google Scholar]
  49. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  50. Mullick, S.S.; Datta, S.; Das, S. Generative Adversarial Minority Oversampling. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27–28 October 2019; pp. 1695–1704. [Google Scholar]
  51. Roy, S.K.; Haut, J.M.; Paoletti, M.E.; Dubey, S.R.; Plaza, A. Generative Adversarial Minority Oversampling for Spectral-Spatial Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15. [Google Scholar] [CrossRef]
  52. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The study area of the Avalon located in Newfoundland, Canada: (a) location of Newfoundland on the world map (bottom left), (b) location of the Avalon study area within Newfoundland (top left), and (c) Sentinel-2 true-color map of the study area (right).
Figure 2. The discriminator and generator networks of a Generative Adversarial Network.
Figure 3. The architecture of the AlexNet deep CNN (FC = fully connected layer, Conv = convolutional layer, Max pool = max pooling layer).
Figure 4. The architecture of the proposed GAN AlexNet.
Figure 5. (a) Sentinel-2 true-color image of the study area, and wetland maps classified by (b) the proposed model using Sentinel-1 and Sentinel-2 features, (c) the proposed model using only Sentinel-2 data, and (d) the original AlexNet CNN network.
Figure 6. Spatial distribution of bog, fen, marsh, swamp, and shallow water wetland overlaid on the Sentinel-2 true-color image of the pilot site of the Avalon.
Figure 7. Spatial extents of wetland classes of bog, fen, marsh, swamp, and shallow water of the study area of the Avalon (in km2).
Figure 8. The variable importance of the different bands, SAR backscattering coefficients, spectral indices, and polarimetric features on the final classification accuracy, measured with the random forest classifier based on the Gini importance index.
Figure 9. Overview of normalized band intensities of several samples of bog, fen, marsh, swamp, and shallow water wetlands for Sentinel-1 and Sentinel-2 data in the pilot site of the Avalon.
Table 1. Training and testing pixels of wetland samples in the study area of Avalon, Canada.

Class | Number of Training Pixels | Number of Testing Pixels
Bog | 3500 | 1500
Fen | 2055 | 881
Marsh | 1445 | 619
Swamp | 1236 | 530
Shallow Water | 2080 | 891
Urban | 5235 | 2244
Deep Water | 6928 | 2969
Upland | 5139 | 2203
Table 2. Spectral bands, indices, normalized backscattering coefficients, and polarimetric features extracted from the optical and SAR imagery utilized in this research (NDVI: Normalized Difference Vegetation Index, EVI: Enhanced Vegetation Index, DVI: Difference Vegetation Index [42], RENDVI: Red Edge Normalized Difference Vegetation Index, NDWI: Normalized Difference Water Index).

Data | Normalized Backscattering Coefficients / Spectral Bands | Polarimetric Features / Spectral Indices
Sentinel-1 | σ⁰VV, σ⁰VH, σ⁰HH, σ⁰HV | σ⁰HH + σ⁰HV; σ⁰VH + σ⁰VV; median 3 × 3 (σ⁰HH); median 3 × 3 (σ⁰HV); median 3 × 3 (σ⁰VH); median 3 × 3 (σ⁰VV); median 3 × 3 (σ⁰VH + σ⁰VV)
Sentinel-2 | B2, B3, B4, B5, B6, B7, B8, B8A, B11, B12 | NDVI = (NIR − R)/(NIR + R); EVI = 2.5(NIR − R)/(NIR + 2.4R + 1); DVI = NIR − R; RENDVI = (NIR − RE)/(NIR + RE); NDWI = (NIR − SWIR)/(NIR + SWIR)
Table 3. The architecture of the conditional map unit (Conv2DTranspose = 2-dimensional transposed convolutional layer).

Layer | Filters/Kernel Size | Batch Normalization | Activation Function
Dense | 256 × 7 × 7 | Yes | LeakyReLU
Reshape | 256, 7, 7 | - | -
Conv2DTranspose | 128, 5, 5 | Yes | LeakyReLU
Conv2DTranspose | 1, 5, 5 | Yes | LeakyReLU
Table 4. Results of the developed models in terms of precision, recall, and F1-score (S1 = Sentinel-1, S2 = Sentinel-2, AA = average accuracy).

Model / Metric | Bog | Fen | Marsh | Swamp | Shallow Water | Urban | Deep Water | Upland | AA (%)
GAN-AlexNet-S1S2 | | | | | | | | | 92.30
Precision | 0.91 | 0.81 | 0.81 | 0.86 | 0.95 | 0.98 | 1 | 0.98 |
Recall | 0.88 | 0.82 | 0.90 | 0.89 | 0.95 | 1 | 1 | 0.95 |
F-1 score | 0.89 | 0.82 | 0.85 | 0.87 | 0.95 | 0.99 | 1 | 0.97 |
GAN-AlexNet-S2 | | | | | | | | | 87.92
Precision | 0.83 | 0.79 | 0.79 | 0.90 | 0.89 | 0.99 | 1 | 0.96 |
Recall | 0.89 | 0.67 | 0.81 | 0.72 | 0.97 | 1 | 1 | 0.98 |
F-1 score | 0.86 | 0.72 | 0.80 | 0.80 | 0.93 | 1 | 1 | 0.97 |
AlexNet | | | | | | | | | 85.04
Precision | 0.93 | 0.58 | 0.97 | 0.63 | 0.91 | 0.98 | 1 | 0.96 |
Recall | 0.78 | 0.90 | 0.63 | 0.59 | 0.99 | 1 | 1 | 0.91 |
F-1 score | 0.85 | 0.71 | 0.77 | 0.61 | 0.95 | 0.99 | 1 | 0.93 |
Table 5. The confusion matrices of the proposed DCNN model and the AlexNet classifier (rows: reference classes; columns: classified classes; S1 = Sentinel-1, S2 = Sentinel-2).

GAN-AlexNet-S1S2
Class | Bog | Fen | Marsh | Swamp | Shallow Water | Urban | Deep Water | Upland
Bog | 1317 | 153 | 7 | 19 | 1 | 0 | 0 | 3
Fen | 113 | 724 | 20 | 23 | 0 | 0 | 0 | 1
Marsh | 3 | 10 | 557 | 7 | 35 | 1 | 0 | 6
Swamp | 15 | 3 | 18 | 470 | 0 | 0 | 0 | 24
Shallow water | 1 | 0 | 46 | 0 | 844 | 0 | 0 | 0
Urban | 0 | 0 | 0 | 0 | 0 | 2243 | 0 | 1
Deep Water | 0 | 0 | 0 | 1 | 4 | 0 | 2964 | 0
Upland | 3 | 0 | 43 | 25 | 0 | 34 | 0 | 2098

GAN-AlexNet-S2
Class | Bog | Fen | Marsh | Swamp | Shallow Water | Urban | Deep Water | Upland
Bog | 1342 | 130 | 19 | 5 | 3 | 0 | 0 | 1
Fen | 230 | 589 | 45 | 10 | 1 | 0 | 0 | 6
Marsh | 3 | 10 | 503 | 6 | 80 | 1 | 0 | 16
Swamp | 35 | 12 | 35 | 384 | 2 | 0 | 0 | 62
Shallow water | 4 | 2 | 0 | 0 | 864 | 0 | 1 | 2
Urban | 0 | 3 | 3 | 2 | 0 | 2234 | 0 | 2
Deep Water | 0 | 0 | 0 | 0 | 23 | 0 | 2946 | 0
Upland | 6 | 3 | 14 | 19 | 0 | 11 | 1 | 2149

AlexNet
Class | Bog | Fen | Marsh | Swamp | Shallow Water | Urban | Deep Water | Upland
Bog | 1166 | 323 | 3 | 3 | 5 | 0 | 0 | 0
Fen | 74 | 794 | 1 | 3 | 0 | 0 | 0 | 9
Marsh | 12 | 74 | 393 | 34 | 78 | 2 | 0 | 26
Swamp | 0 | 161 | 4 | 312 | 0 | 0 | 0 | 53
Shallow water | 3 | 0 | 4 | 0 | 882 | 0 | 2 | 0
Urban | 0 | 0 | 0 | 0 | 0 | 2243 | 0 | 1
Deep Water | 0 | 0 | 0 | 0 | 4 | 0 | 2965 | 0
Upland | 0 | 10 | 0 | 146 | 0 | 36 | 0 | 2011
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
