Urban Building Extraction and Modeling Using GF-7 DLC and MUX Images
Abstract
1. Introduction
1.1. Building Extraction
1.2. Building Modeling
1.3. Stereo Mapping Satellite
2. Methods
2.1. Multilevel Features Fusion Network
2.1.1. Dense Block
2.1.2. Atrous Spatial Pyramid Pooling
2.1.3. Feature Fusion Module
2.1.4. Sample Processing
2.2. DSM Generation with Stereo Images
2.3. Building Modeling and Urban Scene Generation
2.3.1. Building Modeling
2.3.2. Urban Scene Generation
3. Experimental Results
3.1. Experimental Setup
3.1.1. Data
3.1.2. Setup
3.2. Network Training
3.3. Results of Building Extraction
3.3.1. Extraction Results
3.3.2. Accuracy Assessment
3.4. Results of Building Modeling
3.4.1. DSM Generation Results
3.4.2. LOD1 Building Model
3.4.3. Urban Scene Visualization with Unreal Engine 4
4. Discussion
4.1. Building Extraction Using MFFN
4.2. Building Modeling and Urban Scene Rendering
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Batty, M. Digital twins. Environ. Plan. B Urban Anal. City Sci. 2018, 45, 817–820.
- Dembski, F.; Wössner, U.; Letzgus, M.; Ruddat, M.; Yamu, C. Urban Digital Twins for Smart Cities and Citizens: The Case Study of Herrenberg, Germany. Sustainability 2020, 12, 2307.
- Dowman, I. Automatic feature extraction for urban landscape models: Adding value to remotely sensed data. In Proceedings of the 26th Annual Conference of the Remote Sensing Society, Leicester, UK, 12–14 September 2000.
- Sohn, G.; Dowman, I. Data fusion of high-resolution satellite imagery and LiDAR data for automatic building extraction. ISPRS J. Photogramm. Remote Sens. 2007, 62, 43–63.
- Tang, X.; Xie, J.; Mo, F.; Dou, X.; Li, X.; Li, S.; Li, S.; Hu, G.; Fu, X.; Li, R.; et al. GF-7 dual-beam laser altimeter on-orbit geometric calibration and test verification. Acta Geod. Cartogr. Sin. 2021, 50, 384–395.
- Shahrabi, B. Automatic Recognition and 3D Reconstruction of Buildings through Computer Vision and Digital Photogrammetry. Ph.D. Thesis, University of Stuttgart, Stuttgart, Germany, 2002.
- Haala, N.; Kada, M. An update on automatic 3D building reconstruction. ISPRS J. Photogramm. Remote Sens. 2010, 65, 570–580.
- Grigillo, D.; Fras, M.K.; Petrovic, D. Automated building extraction from IKONOS images in suburban areas. Int. J. Remote Sens. 2012, 33, 5149–5170.
- Lee, D.S.; Shan, J.; Bethel, J.S. Class-Guided Building Extraction from IKONOS Imagery. Photogramm. Eng. Remote Sens. 2003, 69, 143–150.
- Bruzzone, L.; Prieto, D. Automatic analysis of the difference image for unsupervised change detection. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1171–1182.
- Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259.
- Friedl, M.; Brodley, C. Decision tree classification of land cover from remotely sensed data. Remote Sens. Environ. 1997, 61, 399–409.
- Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222.
- Lek, S.; Guégan, J.-F. Artificial neural networks as a tool in ecological modelling, an introduction. Ecol. Model. 1999, 120, 65–73.
- Zhang, L.P.; Zhang, L.F.; Du, B. Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40.
- Mnih, V. Machine Learning for Aerial Image Labeling; University of Toronto: Toronto, ON, Canada, 2013.
- Alshehhi, R.; Marpu, P.R.; Woon, W.L.; Dalla Mura, M. Simultaneous extraction of roads and buildings in remote sensing imagery with convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2017, 130, 139–149.
- Fu, G.; Liu, C.; Zhou, R.; Sun, T.; Zhang, Q. Classification for High Resolution Remote Sensing Imagery Using a Fully Convolutional Network. Remote Sens. 2017, 9, 498.
- Liu, T.; Abd-Elrahman, A.; Morton, J.; Wilhelm, V.L. Comparing fully convolutional networks, random forest, support vector machine, and patch-based deep convolutional neural networks for object-based wetland mapping using images from small unmanned aircraft system. GIScience Remote Sens. 2018, 55, 243–264.
- Yi, Y.; Zhang, Z.; Zhang, W.; Zhang, C.; Li, W.; Zhao, T. Semantic Segmentation of Urban Buildings from VHR Remote Sensing Imagery Using a Deep Convolutional Neural Network. Remote Sens. 2019, 11, 1774.
- Huang, Z.; Cheng, G.; Wang, H.; Li, H.; Shi, L.; Pan, C. Building extraction from multi-source remote sensing images via deep deconvolution neural networks. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 1835–1838.
- Feng, W.; Sui, H.; Hua, L.; Xu, C. Improved Deep Fully Convolutional Network with Superpixel-Based Conditional Random Fields for Building Extraction. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 52–55.
- Liu, P.; Liu, X.; Liu, M.; Shi, Q.; Yang, J.; Xu, X.; Zhang, Y. Building Footprint Extraction from High-Resolution Images via Spatial Residual Inception Convolutional Neural Network. Remote Sens. 2019, 11, 830.
- Xue, S. Building segmentation in remote sensing images based on multiscale-feature fusion dilated convolution ResNet. Opt. Precis. Eng. 2020, 28, 1588–1599.
- Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5686–5696.
- Huang, J.; Zheng, Z.; Huang, G. Multi-Stage HRNet: Multiple Stage High-Resolution Network for Human Pose Estimation. arXiv 2019, arXiv:1910.05901.
- Yang, H.; Wu, P.; Yao, X.; Wu, Y.; Wang, B.; Xu, Y. Building Extraction in Very High Resolution Imagery by Dense-Attention Networks. Remote Sens. 2018, 10, 1768.
- Thomson, C.; Boehm, J. Automatic Geometry Generation from Point Clouds for BIM. Remote Sens. 2015, 7, 11753–11775.
- Sohn, G.; Dowman, I. Terrain surface reconstruction by the use of tetrahedron model with the MDL criterion. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 336–344.
- Haala, N.; Peter, M.; Kremer, J.; Hunter, G. Mobile LIDAR Mapping for 3D Point Cloud Collection in Urban Areas—A Performance Test. In Proceedings of the XXI ISPRS Congress, Beijing, China, 3–11 July 2008.
- Xiong, X.; Adan, A.; Akinci, B.; Huber, D. Automatic creation of semantically rich 3D building models from laser scanner data. Autom. Constr. 2013, 31, 325–337.
- Shiramizu, K.; Doi, K.; Aoyama, Y. Generation of a high-accuracy regional DEM based on ALOS/PRISM imagery of East Antarctica. Polar Sci. 2017, 14, 30–38.
- Li, L.; Luo, H.; Zhu, H. Estimation of the Image Interpretability of ZY-3 Sensor Corrected Panchromatic Nadir Data. Remote Sens. 2014, 6, 4409–4429.
- Tang, X.M.; Zhang, G.; Zhu, X.Y.; Pan, H.B.; Jiang, Y.H.; Zhou, P.; Wang, X.; Guo, L. Triple Linear-array Image Geometry Model of ZiYuan-3 Surveying Satellite and Its Validation. Acta Geod. Cartogr. Sin. 2012, 41, 191–198.
- Maxar. Optical Imagery. Available online: https://resources.maxar.com/optical-imagery (accessed on 18 June 2021).
- Yang, J.K.; Wang, C.J.; Sun, L.; Zhu, Y.H.; Huang, Y. Design Critical Technology of Two-line Array Camera for GF-7 Satellite. Spacecr. Eng. 2020, 29, 61–67.
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
- Wang, C.; Zhu, Y.; Yu, S.; Yin, Y. Design and Implementation of the Dual Line Array Camera for GF-7 Satellite. Spacecr. Recovery Remote Sens. 2020, 41, 29–38.
- Hirschmüller, H. Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 328–341.
| Specification | Value |
|---|---|
| Orbit height | 506 km |
| Orbit inclination | 97.421° |
| Descending node time | 10:30 a.m. |
| Revisit cycle | 60 days |
| Mission duration | 8 years |
| Swath width | 20 km at nadir |
| Radiometric resolution | 12 bit |
| Panchromatic GSD | 0.65 m at backward (+26°); 0.80 m at forward (−5°) |
| Panchromatic band | 450–900 nm |
| Multispectral GSD | 2.6 m at backward |
| Multispectral bands | Band 1 (Blue): 450–520 nm; Band 2 (Green): 520–590 nm; Band 3 (Red): 630–690 nm; Band 4 (Near-infrared): 770–890 nm |
| Image | Network | OA | F1 | IoU |
|---|---|---|---|---|
| GF-7 | BiSeNet | 55.26% | 35.63% | 21.68% |
| GF-7 | DeepLabv3+ | 16.05% | 25.43% | 14.57% |
| GF-7 | DenseNet | 93.37% | 75.09% | 60.11% |
| GF-7 | Our Method | 95.29% | 83.92% | 72.30% |
| GF-2 | BiSeNet | 60.57% | 45.59% | 29.53% |
| GF-2 | DeepLabv3+ | 89.18% | 64.01% | 32.12% |
| GF-2 | DenseNet | 87.33% | 50.78% | 34.03% |
| GF-2 | Our Method | 96.73% | 86.17% | 75.71% |
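The metrics in the table above follow their standard definitions over the pixel-wise confusion matrix of predicted versus reference building masks: overall accuracy (OA) is the fraction of correctly labeled pixels, F1 is the harmonic mean of precision and recall, and IoU is intersection over union of the predicted and reference building pixels. A minimal sketch of how these can be computed (this helper is illustrative, not code from the paper; it assumes flat binary label lists):

```python
def segmentation_metrics(pred, truth):
    """OA, F1, and IoU for two flat binary (0/1) label sequences."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)        # building, correct
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)  # background, correct
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)    # false alarm
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)    # missed building
    oa = (tp + tn) / len(pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return oa, f1, iou

# toy 4-pixel example: one true positive, one false alarm, two true negatives
oa, f1, iou = segmentation_metrics([1, 0, 1, 0], [1, 0, 0, 0])
```

Note that OA can be high even when F1 and IoU are low (background pixels dominate urban scenes), which is why the table reports all three.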
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Luo, H.; He, B.; Guo, R.; Wang, W.; Kuai, X.; Xia, B.; Wan, Y.; Ma, D.; Xie, L. Urban Building Extraction and Modeling Using GF-7 DLC and MUX Images. Remote Sens. 2021, 13, 3414. https://doi.org/10.3390/rs13173414