DGFNet: Dual Gate Fusion Network for Land Cover Classification in Very High-Resolution Images
Abstract
1. Introduction
- We propose a simple but efficient encoder–decoder segmentation network that effectively captures global context and fuses multi-level features, improving land-cover classification performance in VHR images.
- We propose a novel feature enhancement module (FEM) that combines local information with global context, enriching the representations of features at different levels.
- We propose a dual-gate fusion module (DGFM) built on the gate mechanism, which effectively promotes the fusion of low-level spatial features and high-level semantic features.
- We conduct extensive experiments to demonstrate the effectiveness of the proposed network, achieving state-of-the-art performance of 88.87% MIoU on the LandCover dataset and 72.25% MIoU on the ISPRS Potsdam dataset.
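The gate mechanism underlying the contributions above can be illustrated with a minimal, single-gate sketch: a gate in (0, 1) is predicted from both feature maps, and the output is a convex combination of low-level detail and high-level semantics. This is an assumption-laden toy version of the idea, not the paper's exact DGFM; the per-pixel linear projection stands in for a 1 × 1 convolution, and all names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(low_feat, high_feat, w_low, w_high):
    """Toy single-gate fusion of a low-level and a high-level feature map.

    Both inputs are channels-last arrays of identical shape (H, W, C);
    w_low and w_high are (C, C) projections standing in for 1x1 convolutions.
    The gate decides, per position and channel, how much spatial detail
    to keep versus how much semantic context to trust.
    """
    gate = sigmoid(low_feat @ w_low + high_feat @ w_high)  # values in (0, 1)
    return gate * low_feat + (1.0 - gate) * high_feat      # convex combination
```

With zero projection weights the gate is sigmoid(0) = 0.5 everywhere, so the output is the plain average of the two feature maps; trained weights would shift this balance adaptively.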
2. Related Work
2.1. DCNNs in Land-Cover Classification
2.2. Gate Mechanism in Neural Networks
3. Methods
3.1. Overall Framework
3.2. Feature Enhancement Module
3.3. Dual Gate Fusion Module
4. Experiments
4.1. Dataset Description
4.2. Implementation Settings
4.2.1. Parameter Setting
4.2.2. Evaluation Metrics
4.3. Model Analysis
4.3.1. Influence of Different Modules on Classification
4.3.2. Influence of Different Training Sizes on Classification
4.4. Comparisons with Other Networks
4.4.1. Comparison on the LandCover Dataset
4.4.2. Comparison on the ISPRS Potsdam Dataset
4.4.3. Model Size and Efficiency Analysis
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Full Name |
---|---|
VHR | Very High-Resolution |
DCNN | Deep Convolutional Neural Network |
DGFNet | Dual Gate Fusion Network |
FEM | Feature Enhancement Module |
DGFM | Dual Gate Fusion Module |
DSM | Digital Surface Model |
MIoU | Mean Intersection over Union |
FWIoU | Frequency Weighted Intersection over Union |
MPA | Mean Pixel Accuracy |
PA | Pixel Accuracy |
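The four evaluation metrics abbreviated above (PA, MPA, MIoU, FWIoU) can all be derived from a single confusion matrix. A minimal NumPy sketch using their standard definitions (this is a generic implementation, not code from the paper):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Confusion matrix with rows = ground truth, columns = prediction."""
    idx = num_classes * y_true.reshape(-1) + y_pred.reshape(-1)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def segmentation_metrics(cm):
    """Return PA, MPA, MIoU and FWIoU as fractions in [0, 1]."""
    tp = np.diag(cm).astype(float)
    gt = cm.sum(axis=1).astype(float)    # pixels per ground-truth class
    pred = cm.sum(axis=0).astype(float)  # pixels per predicted class
    pa = tp.sum() / cm.sum()                       # overall pixel accuracy
    mpa = np.mean(tp / np.maximum(gt, 1))          # mean per-class accuracy
    iou = tp / np.maximum(gt + pred - tp, 1)       # per-class IoU
    miou = iou.mean()                              # unweighted class mean
    fwiou = ((gt / cm.sum()) * iou).sum()          # frequency-weighted mean
    return pa, mpa, miou, fwiou
```

For example, with ground truth `[0, 0, 1, 1]` and prediction `[0, 1, 1, 1]`, PA is 3/4 and MIoU is (0.5 + 2/3)/2 = 7/12.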
References
- López, J.A.; Verdiguier, E.I.; Chova, L.G.; Marí, J.M.; Barreiro, J.R.; Valls, G.C.; Maravilla, J.C. Land cover classification of VHR airborne images for citrus grove identification. ISPRS J. Photogramm. Remote Sens. 2011, 66, 115–123. [Google Scholar] [CrossRef]
- Hegazy, I.R.; Kaloop, M.R. Monitoring urban growth and land use change detection with GIS and remote sensing techniques in Daqahlia governorate Egypt. Int. J. Sustain. Built Environ. 2015, 4, 117–124. [Google Scholar] [CrossRef] [Green Version]
- Zhang, X.; Du, S.; Wang, Q. Hierarchical semantic cognition for urban functional zones with VHR satellite images and POI data. ISPRS J. Photogramm. Remote Sens. 2017, 132, 170–184. [Google Scholar] [CrossRef]
- Stefanov, W.L.; Ramsey, M.S.; Christensen, P.R. Monitoring urban land cover change: An expert system approach to land cover classification of semiarid to arid urban centers. Remote Sens. Environ. 2001, 77, 173–185. [Google Scholar] [CrossRef]
- Bayarsaikhan, U.; Boldgiv, B.; Kim, K.R.; Park, K.A.; Lee, D. Change detection and classification of land cover at Hustai National Park in Mongolia. Int. J. Appl. Earth Obs. Geoinf. 2009, 11, 273–280. [Google Scholar] [CrossRef]
- Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
- Tuia, D.; Ratle, F.; Pacifici, F.; Kanevski, M.F.; Emery, W.J. Active learning methods for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2218–2232. [Google Scholar] [CrossRef]
- Zhao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Bay, H.; Tuytelaars, T.; Van Gool, L. Surf: Speeded up robust features. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 404–417. [Google Scholar]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar]
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99. [Google Scholar] [CrossRef] [Green Version]
- Cheng, G.; Han, J. A survey on object detection in optical remote sensing images. ISPRS J. Photogramm. Remote Sens. 2016, 117, 11–28. [Google Scholar] [CrossRef] [Green Version]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Hong, S.; Oh, J.; Lee, H.; Han, B. Learning transferrable knowledge for semantic segmentation with deep convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3204–3212. [Google Scholar]
- Kampffmeyer, M.; Salberg, A.B.; Jenssen, R. Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1–9. [Google Scholar]
- Babenko, A.; Lempitsky, V. Aggregating local deep features for image retrieval. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1269–1277. [Google Scholar]
- Gordo, A.; Almazán, J.; Revaud, J.; Larlus, D. Deep image retrieval: Learning global representations for image search. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 241–257. [Google Scholar]
- Zhou, W.; Newsam, S.; Li, C.; Shao, Z. PatternNet: A benchmark dataset for performance evaluation of remote sensing image retrieval. ISPRS J. Photogramm. Remote Sens. 2018, 145, 197–209. [Google Scholar] [CrossRef] [Green Version]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 4–8 October 2015; pp. 234–241. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 3146–3154. [Google Scholar]
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890. [Google Scholar]
- Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5693–5703. [Google Scholar]
- Nogueira, K.; Dalla Mura, M.; Chanussot, J.; Schwartz, W.R.; dos Santos, J.A. Dynamic multicontext segmentation of remote sensing images based on convolutional networks. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7503–7520. [Google Scholar] [CrossRef] [Green Version]
- Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. arXiv 2015, arXiv:1511.07122. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the 15th European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
- Li, A.; Jiao, L.; Zhu, H.; Li, L.; Liu, F. Multitask Semantic Boundary Awareness Network for Remote Sensing Image Segmentation. IEEE Trans. Geosci. Remote Sens. 2021. [Google Scholar] [CrossRef]
- Marmanis, D.; Schindler, K.; Wegner, J.D.; Galliani, S.; Datcu, M.; Stilla, U. Classification with an edge: Improving semantic image segmentation with boundary detection. ISPRS J. Photogramm. Remote Sens. 2018, 135, 158–172. [Google Scholar] [CrossRef] [Green Version]
- Wang, H.; Wang, Y.; Zhang, Q.; Xiang, S.; Pan, C. Gated convolutional neural network for semantic segmentation in high-resolution images. Remote Sens. 2017, 9, 446. [Google Scholar] [CrossRef] [Green Version]
- Mou, L.; Zhu, X.X. RiFCN: Recurrent network in fully convolutional network for semantic segmentation of high resolution remote sensing images. arXiv 2018, arXiv:1805.02091. [Google Scholar]
- Mou, L.; Hua, Y.; Zhu, X.X. Relation matters: Relational context-aware fully convolutional network for semantic segmentation of high-resolution aerial images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7557–7569. [Google Scholar] [CrossRef]
- Liu, C.; Zeng, D.; Wu, H.; Wang, Y.; Jia, S.; Xin, L. Urban land cover classification of high-resolution aerial imagery using a relation-enhanced multiscale convolutional network. Remote Sens. 2020, 12, 311. [Google Scholar] [CrossRef] [Green Version]
- Chen, X.; Wang, G.; Guo, H.; Zhang, C.; Wang, H.; Zhang, L. Mfa-net: Motion feature augmented network for dynamic hand gesture recognition from skeletal data. Sensors 2019, 19, 239. [Google Scholar] [CrossRef] [Green Version]
- Sun, W.; Wang, R. Fully convolutional networks for semantic segmentation of very high resolution remotely sensed images combined with DSM. IEEE Geosci. Remote Sens. Lett. 2018, 15, 474–478. [Google Scholar] [CrossRef]
- Diakogiannis, F.I.; Waldner, F.; Caccetta, P.; Wu, C. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS J. Photogramm. Remote Sens. 2020, 162, 94–114. [Google Scholar] [CrossRef] [Green Version]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- Dauphin, Y.N.; Fan, A.; Auli, M.; Grangier, D. Language modeling with gated convolutional networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 933–941. [Google Scholar]
- Gehring, J.; Auli, M.; Grangier, D.; Yarats, D.; Dauphin, Y.N. Convolutional sequence to sequence learning. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1243–1252. [Google Scholar]
- Li, L.; Kameoka, H. Deep clustering with gated convolutional networks. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 16–20. [Google Scholar]
- Yang, C.; An, Z.; Zhu, H.; Hu, X.; Zhang, K.; Xu, K.; Li, C.; Xu, Y. Gated convolutional networks with hybrid connectivity for image classification. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12581–12588. [Google Scholar]
- Yu, J.; Lin, Z.; Yang, J.; Shen, X.; Lu, X.; Huang, T.S. Free-form image inpainting with gated convolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 4471–4480. [Google Scholar]
- Chang, Y.L.; Liu, Z.Y.; Lee, K.Y.; Hsu, W. Free-form video inpainting with 3d gated convolution and temporal patchgan. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 9066–9075. [Google Scholar]
- Rayatdoost, S.; Rudrauf, D.; Soleymani, M. Multimodal gated information fusion for emotion recognition from EEG signals and facial behaviors. In Proceedings of the 2020 International Conference on Multimodal Interaction, Utrecht, The Netherlands, 11–15 October 2020; pp. 655–659. [Google Scholar]
- Cao, C.; Lan, C.; Zhang, Y.; Zeng, W.; Lu, H.; Zhang, Y. Skeleton-based action recognition with gated convolutional neural networks. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 3247–3257. [Google Scholar] [CrossRef]
- Xue, W.; Li, T. Aspect based sentiment analysis with gated convolutional networks. arXiv 2018, arXiv:1805.07043. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Chao, P.; Zhang, X.; Gang, Y.; Luo, G.; Jian, S. Large Kernel Matters—Improve Semantic Segmentation by Global Convolutional Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4353–4361. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Boguszewski, A.; Batorski, D.; Ziemba-Jankowska, N.; Zambrzycka, A.; Dziedzic, T. LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands and Water from Aerial Imagery. arXiv 2020, arXiv:2005.02264. [Google Scholar]
- Rottensteiner, F.; Sohn, G.; Jung, J.; Gerke, M.; Baillard, C.; Benitez, S.; Breitkopf, U. The ISPRS benchmark on urban object classification and 3D building reconstruction. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 25 August–1 September 2012; pp. 293–298. [Google Scholar]
- Yang, M.; Yu, K.; Zhang, C.; Li, Z.; Yang, K. Denseaspp for semantic segmentation in street scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3684–3692. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Huang, J.; Weng, L.; Chen, B.; Xia, M. DFFAN: Dual Function Feature Aggregation Network for Semantic Segmentation of Land Cover. ISPRS Int. J. Geo-Inf. 2021, 10, 125. [Google Scholar] [CrossRef]
- Chen, B.; Xia, M.; Huang, J. Mfanet: A multi-level feature aggregation network for semantic segmentation of land cover. Remote Sens. 2021, 13, 731. [Google Scholar] [CrossRef]
Method | FEM | DGFM | MIoU (%) | FWIoU (%) | MPA (%) | PA (%) |
---|---|---|---|---|---|---|
Baseline | × | × | 85.72 | 91.89 | 91.41 | 95.75 |
+ FEM | ✓ | × | 87.69 | 92.54 | 92.89 | 96.10 |
+ DGFM | × | ✓ | 88.27 | 92.89 | 92.63 | 96.30 |
+ FEM + DGFM | ✓ | ✓ | 88.87 | 94.02 | 93.28 | 96.41 |
Method | FEM | DGFM | Build. (%) | Wood. (%) | Water (%) | Back. (%) | MIoU (%) |
---|---|---|---|---|---|---|---|
Baseline | × | × | 67.18 | 90.33 | 92.19 | 93.17 | 85.72 |
+ FEM | ✓ | × | 71.69 | 90.96 | 93.22 | 93.71 | 87.69 |
+ DGFM | × | ✓ | 73.94 | 91.35 | 93.82 | 94.00 | 88.27 |
+ FEM + DGFM | ✓ | ✓ | 75.72 | 91.56 | 94.03 | 94.16 | 88.87 |
Method | Size | MIoU (%) | FWIoU (%) | MPA (%) | PA (%) |
---|---|---|---|---|---|
DGFNet | 3 × 288 × 288 | 87.89 | 92.86 | 92.51 | 96.27 |
DGFNet | 3 × 320 × 320 | 88.55 | 93.00 | 93.04 | 96.36 |
DGFNet | 3 × 352 × 352 | 88.40 | 92.99 | 93.16 | 96.35 |
DGFNet | 3 × 384 × 384 | 88.87 | 94.02 | 93.28 | 96.41 |
Method | MIoU (%) | FWIoU (%) | MPA (%) | PA (%) |
---|---|---|---|---|
DAnet [26] | 75.78 | 88.61 | 83.10 | 93.87 |
PSPNet [27] | 80.69 | 90.54 | 86.79 | 95.00 |
FCN-8s [18] | 83.64 | 91.35 | 89.29 | 95.46 |
HRNet [28] | 84.08 | 91.20 | 89.43 | 95.38 |
Deeplabv3+ [32] | 84.99 | 91.89 | 90.66 | 95.75 |
DenseASPP [57] | 85.02 | 91.56 | 91.10 | 95.56 |
U-Net [24] | 85.65 | 91.70 | 90.83 | 95.66 |
SegNet [25] | 85.69 | 92.02 | 90.96 | 95.82 |
DFFAN [59] (report) | 84.81 | 89.21 | 90.64 | - |
MFANet [60] (report) | 86.45 | 89.89 | 92.09 | 95.50 |
DGFNet (ours) | 88.87 | 94.02 | 93.28 | 96.41 |
Method | Build. (%) | Wood. (%) | Water (%) | Back. (%) | MIoU (%) |
---|---|---|---|---|---|
DAnet [26] | 37.74 | 87.12 | 87.90 | 90.37 | 75.78 |
PSPnet [27] | 51.11 | 89.10 | 90.54 | 90.03 | 80.69 |
FCN-8s [18] | 60.36 | 89.80 | 91.67 | 92.73 | 83.64 |
HRNet [28] | 62.85 | 89.61 | 91.27 | 92.58 | 84.08 |
Deeplabv3+ [32] | 64.12 | 90.39 | 92.30 | 93.16 | 84.99 |
DenseASPP [57] | 64.90 | 90.02 | 92.32 | 92.82 | 85.02 |
U-Net [24] | 67.34 | 90.04 | 92.20 | 93.02 | 85.65 |
SegNet [25] | 66.43 | 90.50 | 92.57 | 93.26 | 85.69 |
DFFAN [59] (report) | - | - | - | - | 84.81 |
MFANet [60] (report) | 75.09 | 87.78 | 91.66 | 91.25 | 86.45 |
DGFNet (ours) | 75.72 | 91.56 | 94.03 | 94.16 | 88.87 |
Method | P (%) | R (%) | F1-Score (%) |
---|---|---|---|
DAnet [26] | 85.22 | 83.10 | 84.15 |
PSPnet [27] | 89.77 | 86.62 | 88.17 |
FCN-8s [18] | 91.74 | 89.29 | 90.50 |
HRNet [28] | 92.42 | 89.43 | 90.90 |
Deeplabv3+ [32] | 92.16 | 90.66 | 91.40 |
DenseASPP [57] | 91.79 | 91.10 | 91.45 |
U-Net [24] | 93.04 | 90.83 | 91.92 |
SegNet [25] | 92.86 | 90.97 | 91.90 |
DGFNet (ours) | 94.60 | 93.28 | 93.93 |
Method | MIoU (%) | FWIoU (%) | MPA (%) | PA (%) |
---|---|---|---|---|
DAnet [26] | 60.21 | 70.23 | 71.31 | 82.25 |
PSPnet [27] | 67.73 | 74.71 | 77.77 | 85.31 |
FCN-8s [18] | 68.30 | 74.97 | 78.31 | 85.46 |
HRNet [28] | 67.52 | 73.82 | 77.48 | 84.71 |
Deeplabv3+ [32] | 70.34 | 76.24 | 79.84 | 86.28 |
SegNet [25] | 70.36 | 76.17 | 79.55 | 86.23 |
DenseASPP [57] | 71.01 | 77.33 | 79.58 | 87.05 |
DGFNet (ours) | 72.25 | 77.96 | 81.37 | 87.41 |
Method | P (%) | R (%) | F1-Score (%) |
---|---|---|---|
DAnet [26] | 77.10 | 71.31 | 74.09 |
PSPnet [27] | 82.20 | 77.76 | 79.93 |
FCN-8s [18] | 82.10 | 78.31 | 80.16 |
HRNet [28] | 82.17 | 77.48 | 79.75 |
Deeplabv3+ [32] | 83.77 | 79.84 | 81.76 |
SegNet [25] | 83.97 | 79.55 | 81.71 |
DenseASPP [57] | 85.58 | 79.58 | 82.48 |
DGFNet (ours) | 85.12 | 81.37 | 83.20 |
Method | Backbone | Parameters (M) | Time (ms) | MIoU (%) |
---|---|---|---|---|
DAnet [26] | ResNet 50 | 45.24 | 10.46 | 75.78 |
PSPnet [27] | ResNet 50 | 50.85 | 12.58 | 80.69 |
FCN-8s [18] | ResNet 50 | 22.81 | 11.83 | 83.64 |
HRNet [28] | - | 1.46 | 12.68 | 84.08 |
Deeplabv3+ [32] | ResNet 50 | 38.48 | 9.83 | 84.99 |
U-Net [24] | - | 32.93 | 6.05 | 85.65 |
SegNet [25] | ResNet 50 | 28.08 | 7.35 | 85.69 |
DenseASPP [57] | DenseNet 201 | 20.47 | 22.91 | 85.02 |
DGFNet (ours) | ResNet 50 | 44.53 | 15.94 | 88.87 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Guo, Y.; Wang, F.; Xiang, Y.; You, H. DGFNet: Dual Gate Fusion Network for Land Cover Classification in Very High-Resolution Images. Remote Sens. 2021, 13, 3755. https://doi.org/10.3390/rs13183755