Coastal Wetland Classification with Deep U-Net Convolutional Networks and Sentinel-2 Imagery: A Case Study at the Tien Yen Estuary of Vietnam
Abstract
1. Introduction
- What are the advantages of integrating deep learning and multi-temporal remote sensing images for monitoring coastal wetland classification?
- How do the ResU-Net34 models for coastal wetland classification improve on the benchmark methods?
- How are wetland types distributed in the northeastern part of Vietnam?
2. Materials and Methods
2.1. Study Area
2.2. Selection of the Wetland Types for This Research
2.3. Data and Sample Collection
2.3.1. Input Dataset Preparation
2.3.2. Wetland Classification in Sentinel-2 Imagery
2.4. ResU-Net Architecture for Coastal Wetland Classification
- Encoder and ResBlock architecture
- An INPUT layer is added at the beginning of the ResU-Net to feed the raw pixel values of all input images into the training model. In this study, the four bands (red, green, blue, and near-infrared) of the raw Sentinel-2 images described in Section 2.3.1 were merged with the DEM data. The input data were then divided into 1820 sub-images, each 128 pixels wide and 128 pixels high with five spectral bands.
- A BATCH NORMALIZATION layer is used to standardize the outcomes of the preceding CONV layer before the next computation. This layer optimizes the distribution of the activation values during model development, avoiding internal covariate shift problems [64]. Each batch of input data is standardized using its mean ($\mu$) and variance ($\sigma^2$, or standard deviation $\sigma$), which relate the input and output batch data in the following formula: $\hat{x} = (x - \mu) / \sqrt{\sigma^2 + \epsilon}$, where $\epsilon$ is a small constant added for numerical stability.
- A PADDING layer is a simple process that adds zero-valued borders to the input images, so that information at the image corners and edges is preserved for the computation as well as information in the image center.
- A POOLING layer is a down-sampling discretization process that reduces the data over 2 × 2 spatial windows [58]. In the ResU-Net models, a max-pooling layer is used only once, in the eighth layer, before the ResBlocks (Appendix A). Instead of further pooling layers, down-sampling is performed by increasing the convolution stride from one to two.
- CONV layers calculate the neural outputs using a collection of filters whose width and height are smaller than the input; in this study, the chosen filter dimension is 3 × 3. Each filter slides across the images, linking the input images with local regions. New pixel values are calculated from the input using a ReLU activation function for the filters (detailed in Section 2.5). The ReLU function applies max(0, x), a threshold at zero, which preserves the images' considerable size (128 × 128 × 5) and speeds up convergence of the ResU-Net models [62]. In this study, the authors selected 34 CONV layers for the ResU-Net construction, with 64, 128, 256, and 512 filters chosen along the contracting path to reduce the training and validation losses.
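The encoder operations above can be made concrete with a minimal pure-Python sketch (not the authors' Keras implementation); the function names and toy inputs are illustrative assumptions.

```python
import math

def batch_norm(xs, eps=1e-8):
    """Standardize a batch of values: (x - mean) / sqrt(var + eps)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

def relu(xs):
    """ReLU activation: max(0, x) applied element-wise."""
    return [max(0.0, x) for x in xs]

def conv2d_stride2(image, kernel):
    """3x3 'valid' convolution with stride 2: the down-sampling used in
    place of extra pooling layers on the contracting path."""
    k = len(kernel)
    out = []
    for i in range(0, len(image) - k + 1, 2):
        row = []
        for j in range(0, len(image[0]) - k + 1, 2):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(k) for b in range(k))
            row.append(s)
        out.append(row)
    return out
```

With stride 2, each convolution roughly halves the spatial size, which matches the shape progression (128 to 64 to 32, and so on) in Appendix A.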
- Decoder architecture
- CONCATENATE layers are used to link information from the encoder path to the decoder path. The data standardized by the batch normalization and activation functions on the encoder path are combined with the up-sampled data, making the prediction more accurate.
- UP-SAMPLING layers are simple, weight-free layers that double the input dimensions and can be used in a generative model after a traditional convolution layer [66]. Up-sampling with a factor of 2 is applied on the decoding path to recover the size of the segmentation map.
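The decoder operations above can likewise be sketched in minimal pure Python (again, not the authors' implementation; the nested-list "feature maps" and function names are illustrative assumptions).

```python
def upsample2x(fmap):
    """Weight-free nearest-neighbour up-sampling: each pixel of the
    input map becomes a 2x2 block, doubling both dimensions."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

def concatenate(decoder_maps, encoder_maps):
    """Skip connection: stack encoder feature maps onto the up-sampled
    decoder maps along the channel axis (spatial sizes must match)."""
    assert len(decoder_maps[0]) == len(encoder_maps[0])
    return decoder_maps + encoder_maps
```

This channel stacking is why the concatenate rows in Appendix A show channel counts such as 512 + 256 = 768.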
2.5. Alternative Options to Develop ResU-Net Models
2.5.1. Loss Functions
2.5.2. Optimizer Methods
2.6. Model Comparison
2.6.1. Random Forest (RF)
2.6.2. Support Vector Machine (SVM)
2.7. Application of Trained ResU-Net Models for New Coastal Wetland Classification
3. Results
3.1. ResU-Net Model Performance
3.2. Accuracy Comparison among the Trained Models
3.3. Wetland Cover Changes in Tien Yen Estuary
4. Discussion
4.1. Comparison with Formal Networks/Frameworks
4.2. Improvement of Land Cover Classification
5. Conclusions
- What are the advantages of integrating deep learning and multi-temporal remote sensing images for monitoring wetland classification? The completed deep learning models can be used to interpret new satellite images of any coastal area at any time, especially in hard-to-access areas among reefs and rocky marine shores. Deep learning models can thus help coastal managers monitor dynamic wetland ecosystems annually, a task that ecologists have commonly carried out only every five years.
- How do the ResU-Net34 models for coastal wetland classification improve on the benchmark methods? The geomorphological and land cover characteristics of nine wetland ecosystem types were learned during the training of the ResU-Net models, reaching an accuracy of 83% and a loss value of 1.4 with the Adam optimizer. The best-trained ResU-Net model successfully classified the wetland types in the Tien Yen estuary for four years. It can potentially be used to classify all Vietnamese coastal wetlands in the future.
- How are wetland types distributed in the northeastern part of Vietnam? Nine wetland types are distributed mainly in three regions: Cai Lan bay, the Tien Yen estuary, and the coastal area of Mong Cai city. Owing to the influence of rivers, the estuarine and shallow marine waters fluctuate significantly. The areas of aquaculture ponds and mangroves have narrowed, while the marine subtidal aquatic beds have expanded.
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Appendix A
No. Layer | Type | Output Shape | Parameters | No. Layer | Type | Output Shape | Parameters |
---|---|---|---|---|---|---|---|
1 | Input Layer | 128;128;4 | 0 | 101 | Add | 8;8;256 | 0 |
2 | Batch Normalization | 128;128;4 | 12 | 102 | Batch Normalization | 8;8;256 | 1024 |
3 | ZeroPadding2D | 134;134;4 | 0 | 103 | Activation | 8;8;256 | 0 |
4 | Conv2D | 64;64;64 | 12,544 | 104 | ZeroPadding2 | 10;10;256 | 0 |
5 | Batch Normalization | 64;64;64 | 256 | 105 | Conv2D | 8;8;256 | 589,824 |
6 | Activation | 64;64;64 | 0 | 106 | Batch Normalization | 8;8;256 | 1024 |
7 | ZeroPadding2D | 66;66;64 | 0 | 107 | Activation | 8;8;256 | 0 |
8 | MaxPooling2D | 32;32;64 | 0 | 108 | ZeroPadding2 | 10;10;256 | 0 |
9 | Batch Normalization | 32;32;64 | 256 | 109 | Conv2D | 8;8;256 | 589,824 |
10 | Activation | 32;32;64 | 0 | 110 | Add | 8;8;256 | 0 |
11 | ZeroPadding2D | 34;34;64 | 0 | 111 | Batch Normalization | 8;8;256 | 1024 |
12 | Conv2D | 32;32;64 | 36,864 | 112 | Activation | 8;8;256 | 0 |
13 | Batch Normalization | 32;32;64 | 256 | 113 | ZeroPadding2 | 10;10;256 | 0 |
14 | Activation | 32;32;64 | 0 | 114 | Conv2D | 8;8;256 | 589,824 |
15 | ZeroPadding2D | 34;34;64 | 0 | 115 | Batch Normalization | 8;8;256 | 1024 |
16 | Conv2D | 32;32;64 | 36,864 | 116 | Activation | 8;8;256 | 0 |
17 | Conv2D | 32;32;64 | 4,096 | 117 | ZeroPadding2 | 10;10;256 | 0 |
18 | Add 1 | 32;32;64 | 0 | 118 | Conv2D | 8;8;256 | 589,824 |
19 | Batch Normalization | 32;32;64 | 256 | 119 | Add | 8;8;256 | 0 |
20 | Activation | 32;32;64 | 0 | 120 | Batch Normalization | 8;8;256 | 1024 |
21 | ZeroPadding2D | 34;34;64 | 0 | 121 | Activation | 8;8;256 | 0 |
22 | Conv2D | 32;32;64 | 36,864 | 122 | ZeroPadding2 | 10;10;256 | 0 |
23 | Batch Normalization | 32;32;64 | 256 | 123 | Conv2D | 8;8;256 | 589,824 |
24 | Activation | 32;32;64 | 0 | 124 | Batch Normalization | 8;8;256 | 1024 |
25 | ZeroPadding2D | 34;34;64 | 0 | 125 | Activation | 8;8;256 | 0 |
26 | Conv2D | 32;32;64 | 36,864 | 126 | ZeroPadding2 | 10;10;256 | 0 |
27 | Add 2 | 32;32;64 | 0 | 127 | Conv2D | 8;8;256 | 589,824 |
28 | Batch Normalization | 32;32;64 | 256 | 128 | Add | 8;8;256 | 0 |
29 | Activation | 32;32;64 | 0 | 129 | Batch Normalization | 8;8;256 | 1024 |
30 | ZeroPadding2D | 34;34;64 | 0 | 130 | Activation | 8;8;256 | 0 |
31 | Conv2D | 32;32;64 | 36,864 | 131 | ZeroPadding2 | 10;10;256 | 0 |
32 | Batch Normalization | 32;32;64 | 256 | 132 | Conv2D | 4;4;512 | 1,179,648 |
33 | Activation | 32;32;64 | 0 | 133 | Batch Normalization | 4;4;512 | 2048 |
34 | ZeroPadding2D | 34;34;64 | 0 | 134 | Activation | 4;4;512 | 0 |
35 | Conv2D | 32;32;64 | 36,864 | 135 | ZeroPadding2 | 6;6;512 | 0 |
36 | Add 3 | 32;32;64 | 0 | 136 | Conv2D | 4;4;512 | 2,359,296 |
37 | Batch Normalization | 32;32;64 | 256 | 137 | Conv2D | 4;4;512 | 131,072 |
38 | Activation | 32;32;64 | 0 | 138 | Add | 4;4;512 | 0 |
39 | ZeroPadding2D | 34;34;64 | 0 | 139 | Batch Normalization | 4;4;512 | 2048 |
40 | Conv2D | 16;16;128 | 73,728 | 140 | Activation | 4;4;512 | 0 |
41 | Batch Normalization | 16;16;128 | 512 | 141 | ZeroPadding2 | 6;6;512 | 0 |
42 | Activation | 16;16;128 | 0 | 142 | Conv2D | 4;4;512 | 2,359,296 |
43 | ZeroPadding2D | 18;18;128 | 0 | 143 | Batch Normalization | 4;4;512 | 2048 |
44 | Conv2D | 16;16;128 | 147,456 | 144 | Activation | 4;4;512 | 0 |
45 | Conv2D | 16;16;128 | 8192 | 145 | ZeroPadding2 | 6;6;512 | 0 |
46 | Add 4 | 16;16;128 | 0 | 146 | Conv2D | 4;4;512 | 2,359,296 |
47 | Batch Normalization | 16;16;128 | 512 | 147 | Add | 4;4;512 | 0 |
48 | Activation | 16;16;128 | 0 | 148 | Batch Normalization | 4;4;512 | 2048 |
49 | ZeroPadding2 | 18;18;128 | 0 | 149 | Activation | 4;4;512 | 0 |
50 | Conv2D | 16;16;128 | 147,456 | 150 | ZeroPadding2 | 6;6;512 | 0 |
51 | Batch Normalization | 16;16;128 | 512 | 151 | Conv2D | 4;4;512 | 2,359,296 |
52 | Activation | 16;16;128 | 0 | 152 | Batch Normalization | 4;4;512 | 2048 |
53 | ZeroPadding2 | 18;18;128 | 0 | 153 | Activation | 4;4;512 | 0 |
54 | Conv2D | 16;16;128 | 147,456 | 154 | ZeroPadding2 | 6;6;512 | 0 |
55 | Add 5 | 16;16;128 | 0 | 155 | Conv2D | 4;4;512 | 2,359,296 |
56 | Batch Normalization | 16;16;128 | 512 | 156 | Add | 4;4;512 | 0 |
57 | Activation | 16;16;128 | 0 | 157 | Batch Normalization | 4;4;512 | 2048 |
58 | ZeroPadding2 | 18;18;128 | 0 | 158 | Activation | 4;4;512 | 0 |
59 | Conv2D | 16;16;128 | 147,456 | 159 | Up-Sampling | 8;8;512 | 0 |
60 | Batch Normalization | 16;16;128 | 512 | 160 | Concatenate | 8;8;768 | 0 |
61 | Activation | 16;16;128 | 0 | 161 | Conv2D | 8;8;256 | 1,769,472 |
62 | ZeroPadding2 | 18;18;128 | 0 | 162 | Batch Normalization | 8;8;256 | 1024 |
63 | Conv2D | 16;16;128 | 147,456 | 163 | Activation | 8;8;256 | 0 |
64 | Add | 16;16;128 | 0 | 164 | Conv2D | 8;8;256 | 589,824 |
65 | Batch Normalization | 16;16;128 | 512 | 165 | Batch Normalization | 8;8;256 | 1024 |
66 | Activation | 16;16;128 | 0 | 166 | Activation | 8;8;256 | 0 |
67 | ZeroPadding2 | 18;18;128 | 0 | 167 | Up-Sampling | 16;16;256 | 0 |
68 | Conv2D | 16;16;128 | 147,456 | 168 | Concatenate | 16;16;384 | 0 |
69 | Batch Normalization | 16;16;128 | 512 | 169 | Conv2D | 16;16;128 | 442,368 |
70 | Activation | 16;16;128 | 0 | 170 | Batch Normalization | 16;16;128 | 512 |
71 | ZeroPadding2 | 18;18;128 | 0 | 171 | Activation | 16;16;128 | 0 |
72 | Conv2D | 16;16;128 | 147,456 | 172 | Conv2D | 16;16;128 | 147,456 |
73 | Add | 16;16;128 | 0 | 173 | Batch Normalization | 16;16;128 | 512 |
74 | Batch Normalization | 16;16;128 | 512 | 174 | Activation | 16;16;128 | 0 |
75 | Activation | 16;16;128 | 0 | 175 | Up-Sampling | 32;32;128 | 0 |
76 | ZeroPadding2 | 18;18;128 | 0 | 176 | Concatenate | 32;32;192 | 0 |
77 | Conv2D | 8;8;256 | 294,912 | 177 | Conv2D | 32;32;64 | 110,592 |
78 | Batch Normalization | 8;8;256 | 1024 | 178 | Batch Normalization | 32;32;64 | 256 |
79 | Activation | 8;8;256 | 0 | 179 | Activation | 32;32;64 | 0 |
80 | ZeroPadding2 | 10;10;256 | 0 | 180 | Conv2D | 32;32;64 | 36,864 |
81 | Conv2D | 8;8;256 | 589,824 | 181 | Batch Normalization | 32;32;64 | 256 |
82 | Conv2D | 8;8;256 | 32,768 | 182 | Activation | 32;32;64 | 0 |
83 | Add | 8;8;256 | 0 | 183 | Up-Sampling | 64;64;64 | 0 |
84 | Batch Normalization | 8;8;256 | 1024 | 184 | Concatenate | 64;64;128 | 0 |
85 | Activation | 8;8;256 | 0 | 185 | Conv2D | 64;64;32 | 36,864 |
86 | ZeroPadding2 | 10;10;256 | 0 | 186 | Batch Normalization | 64;64;32 | 128 |
87 | Conv2D | 8;8;256 | 589,824 | 187 | Activation | 64;64;32 | 0 |
88 | Batch Normalization | 8;8;256 | 1024 | 188 | Conv2D | 64;64;32 | 9216 |
89 | Activation | 8;8;256 | 0 | 189 | Batch Normalization | 64;64;32 | 128 |
90 | ZeroPadding2 | 10;10;256 | 0 | 190 | Activation | 64;64;32 | 0 |
91 | Conv2D | 8;8;256 | 589,824 | 191 | Up-Sampling | 128;128;32 | 0 |
92 | Add | 8;8;256 | 0 | 192 | Conv2D | 128;128;16 | 4608 |
93 | Batch Normalization | 8;8;256 | 1024 | 193 | Batch Normalization | 128;128;16 | 64 |
94 | Activation | 8;8;256 | 0 | 194 | Activation | 128;128;16 | 0 |
95 | ZeroPadding2 | 10;10;256 | 0 | 195 | Conv2D | 128;128;16 | 2304 |
96 | Conv2D | 8;8;256 | 589,824 | 196 | Batch Normalization | 128;128;16 | 64 |
97 | Batch Normalization | 8;8;256 | 1024 | 197 | Activation | 128;128;16 | 0 |
98 | Activation | 8;8;256 | 0 | 198 | Conv2D | 128;128;9 | 1305 |
99 | ZeroPadding2 | 10;10;256 | 0 | 199 | Activation | 128;128;9 | 0 |
100 | Conv2D | 8;8;256 | 589,824 |
References
- Dugan, P.J. Wetland Conservation: A Review of Current Issues and Action; IUCN: Gland, Switzerland, 1990. [Google Scholar]
- Paalvast, P.; van der Velde, G. Long term anthropogenic changes and ecosystem service consequences in the northern part of the complex Rhine-Meuse estuarine system. Ocean Coast. Manag. 2014, 92, 50–64. [Google Scholar] [CrossRef] [Green Version]
- Mahoney, P.C.; Bishop, M.J. Assessing risk of estuarine ecosystem collapse. Ocean Coast. Manag. 2017, 140, 46–58. [Google Scholar] [CrossRef]
- Li, T.; Gao, X. Ecosystem services valuation of Lakeside Wetland park beside Chaohu Lake in China. Water (Switzerland) 2016, 8, 301. [Google Scholar] [CrossRef]
- Russi, D.; ten Brink, P.; Farmer, A.; Bandura, T.; Coates, D.; Dorster, J.; Kumar, R.; Davidson, N. The Economics of Ecosystems and Biodiversity for Water and Wetlands; IEEP London and Brussels: London, UK, 2012. [Google Scholar]
- RAMSAR. Wetlands: A global disappearing act. Available online: https://www.ramsar.org/document/ramsar-fact-sheet-3-wetlands-a-global-disappearing-act (accessed on 8 October 2020).
- Davidson, N.C. How much wetland has the world lost? Long-term and recent trends in global wetland area. Mar. Freshw. Res. 2014, 65, 934–941. [Google Scholar] [CrossRef]
- CBD. Wetlands and Ecosystem Services; United Nations, 2015. [Google Scholar]
- Duc, L.D. Wetland Reserves in Vietnam (In Vietnamese); Centre for.; Agricultural Publishing House: Hanoi, Vietnam, 1993. [Google Scholar]
- Buckton, S.T.; Cu, N.; Quynh, H.Q.; Tu, N.D. The Conservation of Key Wetland Sites in the Mekong Delta; BirdLife International Vietnam Programme: Hanoi, Vietnam, 1989. [Google Scholar]
- Hawkins, S.; To, P.X.; Phuong, P.X.; Thuy, P.T.; Tu, N.D.; Cuong, C.V.; Brown, S.; Dart, P.; Robertson, S.; Vu, N.; et al. Roots in the Water: Legal Frameworks for Mangrove PES in Vietnam; Katoomba Group’s Legal Initiative Country Study Series: Washington, DC, USA, 2010. [Google Scholar]
- McDonough, S.; Gallardo, W.; Berg, H.; Trai, N.V.; Yen, N.Q. Wetland ecosystem service values and shrimp aquaculture relationships in Can Gio, Vietnam. Ecol. Indic. 2014, 46, 201–213. [Google Scholar] [CrossRef]
- Pedersen, A.; Nguyen, H.T. The Conservation of Key Coastal Wetland Sites in the Red River Delta; Hanoi BirdLife International Programme; Eames, J.C., Ed.; BirdLife International: Hanoi, Vietnam, 1996. [Google Scholar]
- Naganuma, K. Environmental planning of Quang Ninh province to 2020 vision to 2030. Quang Ninh Prov. People’s Comm. 2014. [Google Scholar]
- Lary, D.J.; Alavi, A.H.; Gandomi, A.H.; Walker, A.L. Machine learning in geosciences and remote sensing. Geosci. Front. 2016, 7, 3–10. [Google Scholar] [CrossRef] [Green Version]
- Balakrishnan, N.; Muthukumarasamy, G. Crop Production - Ensemble Machine Learning Model for Prediction. Int. J. Comput. Sci. Softw. Eng. 2016, 5, 148–153. [Google Scholar]
- Ma, X.; Deng, X.; Qi, L.; Jiang, Y.; Li, H.; Wang, Y.; Xing, X. Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields. PLoS ONE 2019, 14, 1–13. [Google Scholar] [CrossRef]
- Dang, K.B.; Burkhard, B.; Windhorst, W.; Müller, F. Application of a hybrid neural-fuzzy inference system for mapping crop suitability areas and predicting rice yields. Environ. Model. Softw. 2019, 114, 166–180. [Google Scholar] [CrossRef]
- Shi, Q.; Li, W.; Tao, R.; Sun, X.; Gao, L. Ship Classification Based on Multifeature Ensemble with Convolutional Neural Network. Remote Sens. 2019, 11, 419. [Google Scholar] [CrossRef] [Green Version]
- Gray, P.C.; Fleishman, A.B.; Klein, D.J.; McKown, M.W.; Bézy, V.S.; Lohmann, K.J.; Johnston, D.W. A convolutional neural network for detecting sea turtles in drone imagery. Methods Ecol. Evol. 2019, 10, 345–355. [Google Scholar] [CrossRef]
- Guo, Q.; Jin, S.; Li, M.; Yang, Q.; Xu, K.; Ju, Y.; Zhang, J.; Xuan, J.; Liu, J.; Su, Y.; et al. Application of deep learning in ecological resource research: Theories, methods, and challenges. Sci. China Earth Sci. 2020, 2172. [Google Scholar] [CrossRef]
- Dang, K.B.; Dang, V.B.; Bui, Q.T.; Nguyen, V.V.; Pham, T.P.N.; Ngo, V.L. A Convolutional Neural Network for Coastal Classification Based on ALOS and NOAA Satellite Data. IEEE Access 2020, 8, 11824–11839. [Google Scholar] [CrossRef]
- Gebrehiwot, A.; Hashemi-Beni, L.; Thompson, G.; Kordjamshidi, P.; Langan, T.E. Deep convolutional neural network for flood extent mapping using unmanned aerial vehicles data. Sensors (Switzerland) 2019, 19, 1486. [Google Scholar] [CrossRef] [Green Version]
- Feng, P.; Wang, B.; Liu, D.L.; Yu, Q. Machine learning-based integration of remotely-sensed drought factors can improve the estimation of agricultural drought in South-Eastern Australia. Agric. Syst. 2019, 173, 303–316. [Google Scholar] [CrossRef]
- Dang, K.B.; Windhorst, W.; Burkhard, B.; Müller, F. A Bayesian Belief Network – Based approach to link ecosystem functions with rice provisioning ecosystem services. Ecol. Indic. 2018. [Google Scholar] [CrossRef]
- Guo, M.; Li, J.; Sheng, C.; Xu, J.; Wu, L. A review of wetland remote sensing. Sensors (Switzerland) 2017, 17, 777. [Google Scholar] [CrossRef] [Green Version]
- Mahdianpari, M.; Granger, J.E.; Mohammadimanesh, F.; Salehi, B.; Brisco, B.; Homayouni, S.; Gill, E.; Huberty, B.; Lang, M. Meta-analysis of wetland classification using remote sensing: A systematic review of a 40-year trend in North America. Remote Sens. 2020, 12, 1882. [Google Scholar] [CrossRef]
- Ozesmi, S.L.; Bauer, M.E. Satellite remote sensing of wetlands. Wetl. Ecol. Manag. 2002, 10, 381–402. [Google Scholar] [CrossRef]
- Davis, T.J. (Ed.) The Ramsar Convention Manual: A Guide for the Convention on Wetlands of International Importance Especially as waterfowl Habitat; Ramsar Convention Bureau: Gland, Switzerland, 1994. [Google Scholar]
- Tian, S.; Zhang, X.; Tian, J.; Sun, Q. Random forest classification of wetland landcovers from multi-sensor data in the arid region of Xinjiang, China. Remote Sens. 2016, 8, 954. [Google Scholar] [CrossRef] [Green Version]
- Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Motagh, M. Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 13–31. [Google Scholar] [CrossRef]
- Chen, X.; Wang, T.; Liu, S.; Peng, F.; Tsunekawa, A.; Kang, W.; Guo, Z.; Feng, K. A New Application of Random Forest Algorithm to Estimate Coverage of Moss-Dominated Biological. Remote Sens. 2019, 11, 18. [Google Scholar]
- Liu, T.; Abd-Elrahman, A. Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification. ISPRS J. Photogramm. Remote Sens. 2018, 139, 154–170. [Google Scholar] [CrossRef]
- Alizadeh, M.R.; Nikoo, M.R. A fusion-based methodology for meteorological drought estimation using remote sensing data. Remote Sens. Environ. 2018, 211, 229–247. [Google Scholar] [CrossRef]
- Garg, L.; Shukla, P.; Singh, S.K.; Bajpai, V.; Yadav, U. Land use land cover classification from satellite imagery using mUnet: A modified UNET architecture. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019), Prague, Czech Republic, 25–27 February 2019; Volume 4, pp. 359–365. [Google Scholar]
- Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Stoian, A.; Poulain, V.; Inglada, J.; Poughon, V.; Derksen, D. Land cover maps production with high resolution satellite image time series and convolutional neural networks: Adaptations and limits for operational systems. Remote Sens. 2019, 11, 1986. [Google Scholar] [CrossRef] [Green Version]
- Liu, B.; Li, Y.; Li, G.; Liu, A. A spectral feature based convolutional neural network for classification of sea surface oil spill. ISPRS Int. J. Geo-Information 2019, 8, 160. [Google Scholar] [CrossRef] [Green Version]
- Pouliot, D.; Latifovic, R.; Pasher, J.; Duffe, J. Assessment of convolution neural networks for wetland mapping with landsat in the central Canadian boreal forest region. Remote Sens. 2019, 11, 772. [Google Scholar] [CrossRef] [Green Version]
- DeLancey, E.R.; Simms, J.F.; Mahdianpari, M.; Brisco, B.; Mahoney, C.; Kariyeva, J. Comparing deep learning and shallow learning for large-scalewetland classification in Alberta, Canada. Remote Sens. 2020, 12, 2. [Google Scholar] [CrossRef] [Green Version]
- Gordana, K.; Avdan, U. Evaluating Sentinel-2 Red-Edge Bands for Wetland Classification. Proceedings 2019, 18, 12. [Google Scholar] [CrossRef] [Green Version]
- Slagter, B.; Tsendbazar, N.-E.; Vollrath, A.; Reiche, J. Mapping wetland characteristics using temporally dense Sentinel-1 and Sentinel-2 data: A case study in the St. Lucia wetlands, South Africa. Int. J. Appl. Earth Obs. Geoinf. 2020, 86, 102009. [Google Scholar] [CrossRef]
- Wang, X.; Gao, X.; Zhang, Y.; Fei, X.; Chen, Z.; Wang, J.; Zhang, Y.; Lu, X.; Zhao, H. Land-cover classification of coastal wetlands using the RF algorithm for Worldview-2 and Landsat 8 images. Remote Sens. 2019, 11, 1927. [Google Scholar] [CrossRef] [Green Version]
- Abubakar, F.A.; Boukari, S. A Convolutional Neural Network with K-Neareast Neighbor for Image Classification. Int. J. Adv. Res. Comput. Commun. Eng. (IJARCCE) 2018, 7, 1–7. [Google Scholar] [CrossRef]
- Bacour, C.; Baret, F.; Béal, D.; Weiss, M.; Pavageau, K. Neural network estimation of LAI, fAPAR, fCover and LAI×Cab, from top of canopy MERIS reflectance data: Principles and validation. Remote Sens. Environ. 2006, 105, 313–325. [Google Scholar] [CrossRef]
- Zambrano, F.; Vrieling, A.; Nelson, A.; Meroni, M.; Tadesse, T. Prediction of drought-induced reduction of agricultural productivity in Chile from MODIS, rainfall estimates, and climate oscillation indices. Remote Sens. Environ. 2018, 219, 15–30. [Google Scholar] [CrossRef]
- Feng, Q.; Yang, J.; Zhu, D.; Liu, J.; Guo, H.; Bayartungalag, B.; Li, B. Integrating multitemporal Sentinel-1/2 data for coastal land cover classification using a multibranch convolutional neural network: A case of the Yellow River Delta. Remote Sens. 2019, 11, 1006. [Google Scholar] [CrossRef] [Green Version]
- Amaral, G.; Bushee, J.; Cordani, U.G.; KAWASHITA, K.; Reynolds, J.H.; ALMEIDA, F.F.M.D.E.; de Almeida, F.F.M.; Hasui, Y.; de Brito Neves, B.B.; Fuck, R.A.; et al. Overview of Wetlands Status in Viet Nam Following 15 Years of Ramsar Convention Implementation Table. J. Petrol. 2013, 369, 1689–1699. [Google Scholar] [CrossRef]
- Tran, H.D.; Ta, T.T.; Tran, T.T. Importance of Tien Yen Estuary (Northern Vietnam) for early-stage Nuchequula nuchalis (Temminck & Schlegel, 1845). Chiang Mai Univ. J. Nat. Sci. 2016, 15, 67–76. [Google Scholar] [CrossRef]
- Nguyen, T.N.; Duong, T.T.; Nguyen, A.D.; Nguyen, T.L.; Pham, T.D. Primary assessment of water quality and phytoplankton diversity in Dong Rui Wetland, Tien Yen District, Quang Ninh Province. VNU J. Sci. 2017, 33, 6. [Google Scholar]
- Ha, N.T.T.; Koike, K.; Nhuan, M.T. Improved accuracy of chlorophyll-a concentration estimates from MODIS Imagery using a two-band ratio algorithm and geostatistics: As applied to the monitoring of eutrophication processes over Tien Yen Bay (Northern Vietnam). Remote Sens. 2013, 6, 421–442. [Google Scholar] [CrossRef] [Green Version]
- De Groot, D.; Brander, L.; Finlayson, M. Wetland Ecosystem Services. Wetl. B. 2016, 1–11. [Google Scholar] [CrossRef]
- He, Z.; He, D.; Mei, X.; Hu, S. Wetland classification based on a new efficient generative adversarial network and Jilin-1 satellite image. Remote Sens. 2019, 11, 2455. [Google Scholar] [CrossRef] [Green Version]
- Hoang, V.T.; Le, D.D. Wetland Classification System in Vietnam; CRES, Viet.; Vietnam Environment Administration: Hanoi, Vietnam, 2006. [Google Scholar]
- Stage, A.R.; Salas, C. Composition and Productivity. Soc. Am. For. 2007, 53, 486–492. [Google Scholar]
- Ghuffar, S. DEM generation from multi satellite Planetscope imagery. Remote Sens. 2018, 10, 1462. [Google Scholar] [CrossRef] [Green Version]
- Mussardo, G. Digital Elevation Models of the Northern Gulf Coast: Procedures, Data sources and analysis. Stat. F. Theor 2019, 53, 1689–1699. [Google Scholar] [CrossRef]
- Diakogiannis, F.I.; Waldner, F.; Caccetta, P.; Wu, C. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS J. Photogramm. Remote Sens. 2020, 162, 94–114. [Google Scholar] [CrossRef] [Green Version]
- Perez, H.; Tah, J.H.M.; Mosavi, A. Deep learning for detecting building defects using convolutional neural networks. Sensors (Switzerland) 2019, 19, 3556. [Google Scholar] [CrossRef] [Green Version]
- Scott, G.J.; Marcum, R.A.; Davis, C.H.; Nivin, T.W. Fusion of Deep Convolutional Neural Networks for Land Cover Classification of High-Resolution Imagery. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1638–1642. [Google Scholar] [CrossRef]
- Zhang, P.; Ke, Y.; Zhang, Z.; Wang, M.; Li, P.; Zhang, S. Urban land use and land cover classification using novel deep learning models based on high spatial resolution satellite imagery. Sensors (Switzerland) 2018, 18, 3717. [Google Scholar] [CrossRef] [Green Version]
- Liu, Z.; Feng, R.; Wang, L.; Zhong, Y.; Cao, L. D-Resunet: Resunet and Dilated Convolution for High Resolution Satellite Imagery Road Extraction. Int. Geosci. Remote Sens. Symp. 2019, 3927–3930. [Google Scholar] [CrossRef]
- Jakovljevic, G.; Govedarica, M.; Alvarez-Taboada, F. A deep learning model for automatic plastic mapping using unmanned aerial vehicle (UAV) data. Remote Sens. 2020, 12, 1515. [Google Scholar] [CrossRef]
- Garcia-Pedrero, A.; Lillo-Saavedra, M.; Rodriguez-Esparragon, D.; Gonzalo-Martin, C. Deep Learning for Automatic Outlining Agricultural Parcels: Exploiting the Land Parcel Identification System. IEEE Access 2019, 7, 158223–158236. [Google Scholar] [CrossRef]
- Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sens. Environ. 2018, 216, 57–70. [Google Scholar] [CrossRef] [Green Version]
- Iglovikov, V.; Mushinskiy, S.; Osin, V. Satellite Imagery Feature Detection using Deep Convolutional Neural Network: A Kaggle Competition. Available online: https://arxiv.org/abs/1706.06169 (accessed on 8 October 2020).
- Gulli, A.; Pal, S. Deep Learning with Keras—Implement Neural Networks with Keras on Theano and TensorFlow; Packt Publishing Ltd.: Birmingham, UK, 2017; ISBN 9781787128422. [Google Scholar]
- Lapin, M.; Hein, M.; Schiele, B. Analysis and Optimization of Loss Functions for Multiclass, Top-k, and Multilabel Classification. Pattern Anal. Mach. Intell. 2017, 8828, 1–20. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Li, B.; Liu, Y.; Wang, X. Gradient Harmonized Single-Stage Detector. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 8577–8584. [Google Scholar] [CrossRef]
- Ahuja, K. Estimating Kullback-Leibler Divergence Using Kernel Machines. In Proceedings of the 2019 53rd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 3–6 November 2019; pp. 690–696. [Google Scholar] [CrossRef] [Green Version]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 318–327. [Google Scholar] [CrossRef] [Green Version]
- Pasupa, K.; Vatathanavaro, S.; Tungjitnob, S. Convolutional neural networks based focal loss for class imbalance problem: A case study of canine red blood cells morphology classification. J. Ambient Intell. Humaniz. Comput. 2020. [Google Scholar] [CrossRef] [Green Version]
- Sørensen, T. A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons. K. Danske Vidensk. Selsk. 1948, 5, 1–34. [Google Scholar]
- Wang, L.; Yang, Y.; Min, R.; Chakradhar, S. Accelerating deep neural network training with inconsistent stochastic gradient descent. Neural Networks 2017, 93, 219–229. [Google Scholar] [CrossRef] [Green Version]
- Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Hasan, M.; Van Essen, B.C.; Awwal, A.A.S.; Asari, V.K. A state-of-the-art survey on deep learning theory and architectures. Electronics 2019, 8, 292. [Google Scholar] [CrossRef] [Green Version]
- Falbel, D.; Allaire, J.; François; Tang, Y.; Van Der Bijl, W.; Keydana, S. R Interface to “Keras”. Available online: https://keras.rstudio.com (accessed on 8 October 2020).
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
- Piragnolo, M.; Masiero, A.; Pirotti, F. Comparison of Random Forest and Support Vector Machine classifiers using UAV remote sensing imagery. Geophys. Res. Abstr. EGU Gen. Assem. 2017, 19, 15692. [Google Scholar]
- Tien Bui, D.; Bui, Q.T.; Nguyen, Q.P.; Pradhan, B.; Nampak, H.; Trinh, P.T. A hybrid artificial intelligence approach using GIS-based neural-fuzzy inference system and particle swarm optimization for forest fire susceptibility modeling at a tropical area. Agric. For. Meteorol. 2017, 233, 32–44. [Google Scholar] [CrossRef]
- Karatzoglou, A.; Meyer, D.; Hornik, K. Support Vector Algorithm in R. J. Stat. Softw. 2006, 15, 1–28. [Google Scholar] [CrossRef] [Green Version]
- Sannigrahi, S.; Chakraborti, S.; Joshi, P.K.; Keesstra, S.; Sen, S.; Paul, S.K.; Kreuter, U.; Sutton, P.C.; Jha, S.; Dang, K.B. Ecosystem service value assessment of a natural reserve region for strengthening protection and conservation. J. Environ. Manag. 2019, 244, 208–227. [Google Scholar] [CrossRef] [PubMed]
- Ge, W.; Cheng, Q.; Tang, Y.; Jing, L.; Gao, C. Lithological classification using Sentinel-2A data in the Shibanjing ophiolite complex in Inner Mongolia, China. Remote Sens. 2018, 10, 638. [Google Scholar] [CrossRef] [Green Version]
- Su, Y.X.; Xu, H.; Yan, L.J. Support vector machine-based open crop model (SBOCM): Case of rice production in China. Saudi J. Biol. Sci. 2017, 24, 537–547. [Google Scholar] [CrossRef] [PubMed]
- Tien Bui, D.; Tuan, T.A.; Hoang, N.D.; Thanh, N.Q.; Nguyen, D.B.; Van Liem, N.; Pradhan, B. Spatial prediction of rainfall-induced landslides for the Lao Cai area (Vietnam) using a hybrid intelligent approach of least squares support vector machines inference model and artificial bee colony optimization. Landslides 2017, 14, 447–458. [Google Scholar] [CrossRef]
No. | Ecosystem Group | Wetland Types | RAMSAR | MONRE | Research Area |
---|---|---|---|---|---|
1 | Natural coastal wetland | Permanent shallow marine waters | x | x | x |
2 | | Marine subtidal aquatic beds | x | x | x |
3 | | Coral reefs | x | x | |
4 | | Rocky marine shores | x | x | x |
5 | | Sand, shingle or pebble shores | x | x | x |
6 | | Estuarine waters | x | x | x |
7 | | Intertidal mud, sand or salt flats | x | x | |
8 | | Intertidal marshes | x | x | |
9 | | Intertidal forested wetlands | x | x | x |
10 | | Coastal brackish/saline lagoons | x | x | |
11 | | Coastal freshwater lagoons | x | x | |
12 | | Karst and other subterranean hydrological systems | x | | |
13 | Man-made wetland | Aquaculture ponds | x | x | x |
14 | | Farm ponds | x | x | x |
15 | | Irrigated land | x | x | x |
16 | | Seasonally flooded agricultural land | x | x | |
17 | | Salt exploitation sites | x | x | |
18 | | Canals and drainage channels, ditches | x | x | |
19 | | Karst and other subterranean hydrological systems | x | | |
Formula | Optimizer Method | Algorithms
---|---|---
(11) | Adam | $\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\,\hat{m}_t$
(12) | Adamax | $\theta_{t+1} = \theta_t - \frac{\eta}{u_t}\,\hat{m}_t$
(13) | Adagrad | $\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t + \epsilon}}\,g_t$
(14) | Nadam | $\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\left(\beta_1 \hat{m}_t + \frac{(1-\beta_1)\,g_t}{1-\beta_1^t}\right)$
(15) | RMSprop | $E[g^2]_t = \gamma E[g^2]_{t-1} + (1-\gamma)\,g_t^2$ and $\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}}\,g_t$
(16) | SGD | $\theta_{t+1} = \theta_t - \alpha\, g_t$

where $\theta$ is the parameter value; $\eta$ is the learning rate; $t$ is the time step; $\epsilon = 10^{-8}$; $g$ is the gradient; $E[g^2]$ is the moving average of squared gradients; $m$ and $v$ are estimates of the first and second moments ($\hat{m}$, $\hat{v}$ after bias correction); $u_t = \max(\beta_2 v_{t-1}, |g_t|)$ is the max operation; $\gamma$ is the moving-average parameter (good default value 0.9); $\alpha$ is the step size; $G_t$ is the sum of squared gradients up to step $t$.
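As a concrete illustration of how such an optimizer applies a gradient, the Adam update rule can be sketched in a few lines of NumPy. This is a minimal sketch with the standard default hyperparameters ($\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$); the function name and signature are illustrative, not the authors' Keras/R implementation:

```python
import numpy as np

def adam_step(theta, g, m, v, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update step for parameter theta given gradient g at time step t."""
    m = beta1 * m + (1 - beta1) * g        # moving average of gradients (first moment)
    v = beta2 * v + (1 - beta2) * g**2     # moving average of squared gradients (second moment)
    m_hat = m / (1 - beta1**t)             # bias correction for the first moment
    v_hat = v / (1 - beta2**t)             # bias correction for the second moment
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Calling `adam_step(1.0, 2.0, 0.0, 0.0, 1)` (the first step on the gradient of $\theta^2$ at $\theta = 1$) moves the parameter by approximately one learning-rate unit toward the minimum, which is the characteristic step-size-bounded behavior of Adam.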
No. | Optimizer | ACC Training (%) | ACC Validation (%) | IoU Training (%) | IoU Validation (%) | Loss Training | Loss Validation
---|---|---|---|---|---|---|---
1 | Adagrad | 9.1 | 9.3 | 8.2 | 8.7 | 0.991 | 1.309
2 | Adam | 96.9 | 90.0 | 94.1 | 82.5 | 0.868 | 1.365
3 | Adamax | 92.9 | 69.4 | 87.1 | 57.5 | 0.959 | 1.361
4 | Nadam | 96.2 | 82.8 | 92.7 | 72.8 | 0.921 | 1.343
5 | RMSprop | 97.0 | 85.7 | 94.2 | 76.3 | 0.866 | 1.280
6 | SGD | 7.9 | 8.5 | 6.2 | 7.3 | 0.973 | 1.358
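The IoU score reported in the table above is the intersection-over-union between predicted and reference label maps, averaged over classes. A minimal NumPy sketch of that metric, assuming integer class labels per pixel (this is an illustrative re-implementation, not the Keras metric the authors used):

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes):
    """Mean intersection-over-union over classes present in either map."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

For example, with reference labels `[0, 0, 1, 1]` and predictions `[0, 1, 1, 1]`, class 0 scores 1/2, class 1 scores 2/3, and the mean IoU is 7/12.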
Aggregated class accuracy of the models (%):

No. | Class | No. Samples | ResU-Net (Adagrad) | ResU-Net (SGD) | ResU-Net (Nadam) | ResU-Net (RMSprop) | ResU-Net (Adam) | ResU-Net (Adamax) | SVM | RF
---|---|---|---|---|---|---|---|---|---|---
1 | Inland areas | 156 | 1.3 | 97.4 | 90.9 | 94.2 | 95.5 | 13.6 | 79.2 | 80.5
2 | Shallow marine waters | 57 | 3.5 | 89.5 | 87.7 | 98.2 | 94.7 | 21.1 | 0.0 | 43.9
3 | Marine subtidal aquatic beds | 139 | 2.9 | 81.6 | 90.4 | 93.4 | 94.9 | 0.7 | 6.6 | 24.3
4 | Rocky marine shores | 271 | 49.8 | 94.4 | 95.1 | 97.7 | 97.0 | 16.5 | 63.5 | 49.6
5 | Sand, shingle or pebble shores | 77 | 3.9 | 92.0 | 94.7 | 97.3 | 94.7 | 9.3 | 20.0 | 24.0
6 | Estuarine waters | 25 | 0.0 | 72.2 | 77.8 | 88.9 | 88.9 | 11.1 | 0.0 | 77.8
7 | Intertidal forested wetlands | 62 | 48.4 | 85.0 | 78.3 | 90.0 | 95.0 | 1.7 | 28.3 | 48.3
8 | Aquaculture ponds | 196 | 10.7 | 86.9 | 88.0 | 92.1 | 94.8 | 26.2 | 84.3 | 36.6
9 | Farm ponds | 119 | 2.5 | 91.6 | 91.6 | 93.3 | 95.0 | 5.9 | 72.3 | 73.1
10 | Seasonally flooded agricultural land | 70 | 22.9 | 61.4 | 67.1 | 62.9 | 58.6 | 57.1 | 78.6 | 68.6
 | Total OA (%) | | 18.4 | 84.7 | 85.1 | 88.8 | 89.5 | 12.7 | 50.5 | 46.4
 | Cohen's kappa (%) | | 8.5 | 84.4 | 85.2 | 89.1 | 89.6 | 6.9 | 46.7 | 43.2
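Both summary rows of the table above can be derived from a single confusion matrix: overall accuracy is the observed agreement, and Cohen's kappa corrects it for chance agreement. A minimal sketch under those standard definitions (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def oa_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement = overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # expected chance agreement
    kappa = (po - pe) / (1 - pe)
    return po, kappa
```

For a two-class matrix `[[45, 5], [10, 40]]`, for instance, the observed agreement is 0.85, the chance agreement is 0.50, and kappa is 0.70.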
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Dang, K.B.; Nguyen, M.H.; Nguyen, D.A.; Phan, T.T.H.; Giang, T.L.; Pham, H.H.; Nguyen, T.N.; Tran, T.T.V.; Bui, D.T. Coastal Wetland Classification with Deep U-Net Convolutional Networks and Sentinel-2 Imagery: A Case Study at the Tien Yen Estuary of Vietnam. Remote Sens. 2020, 12, 3270. https://doi.org/10.3390/rs12193270