Convolutional Neural Network-Based Method for Agriculture Plot Segmentation in Remote Sensing Images
Abstract
1. Introduction
2. Related Works
3. Materials and Methods
3.1. Software and Hardware Configuration
3.2. Evaluation Indicators
3.3. Experimental Data
3.3.1. Data Labeling
- (a) Dam field: farmland formed by building a dam across a ravine to trap soil washed down from the mountains.
- (b) Stacked field: in the low-lying areas of southern China, where river networks are dense, farmers excavate soil from rivers and canals and pile it into raised fields to cope with seasonal floods and rising water levels. The elevated terrain and good drainage make irrigation easy and suit a variety of dry-land crops, especially fruits and vegetables.
- (c) Dry land: a field that relies on natural precipitation for irrigation and holds no standing water on its surface. Most crops grown are drought-tolerant, such as wheat, cotton, and corn.
- (d) Paddy field: flooded farmland, concentrated mainly in plains and hilly basin areas with flat terrain and deep soil layers, suitable for semi-aquatic crops such as rice and taro.
- (e) Platform field (Taitian): farmland raised as a platform above the surrounding ground and bordered by ditches, built to eliminate waterlogging and improve alkaline soil. It suits crops such as cotton, vegetables, and grain, and makes effective use of arable land in barren areas.
- (f) Terrace: stepped cultivated land built along a slope. Terracing makes effective use of slope and hill land and eases the cultivation of crops such as rice and wheat.
- (g) Striped field: a field bounded by agricultural ditches and windbreak forests. Its high land-utilization rate makes mechanized farming and irrigation easy.
- (h) Dike field: waterfront farmland reclaimed by building embankments along rivers, coasts, or lakes. It is an effective way to transform lowlands and fully utilize land resources.
3.3.2. Data Augmentation
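The augmentation pipeline itself is not reproduced in this excerpt. As a hedged illustration only, label-preserving geometric augmentation for remote-sensing tiles (random 90° rotations and flips applied identically to image and mask) might be sketched as follows; the function name and parameters are hypothetical:

```python
import numpy as np

def augment(image, mask, rng=None):
    """Apply the same random rotation/flip to an image tile and its label mask."""
    if rng is None:
        rng = np.random.default_rng()
    k = int(rng.integers(0, 4))            # rotate by k * 90 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                 # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image, mask
```

Because the transform is purely geometric, pixel values and class labels are preserved, which is why it is safe for segmentation masks.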
4. Improved TransUNet Network Remote Sensing Plot Segmentation Experiment
4.1. Residual Module Improvements
4.2. Skip Connection Improvements
4.3. High-Resolution Remote Sensing Image Land Segmentation Software
4.3.1. Development Environment
4.3.2. Interface Design
5. Results
5.1. Mainstream Neural Network Remote Sensing Plot Segmentation Experiments Results
5.2. Improved TransUNet Network Remote Sensing Plot Segmentation Experiment Results
5.3. Ablation Experiments
6. Discussion
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Sensor | Band Number | Wavelength Range | Spatial Resolution | Width
---|---|---|---|---
Panchromatic | 1 | 0.45–0.90 μm | 0.8 m | 45 km
Multispectral | 2 | 0.45–0.52 μm | 3.2 m | 45 km
Multispectral | 3 | 0.52–0.59 μm | 3.2 m | 45 km
Multispectral | 4 | 0.63–0.69 μm | 3.2 m | 45 km
Multispectral | 5 | 0.77–0.89 μm | 3.2 m | 45 km
Band No. | Band | Wavelength (μm) | Highlighted Features
---|---|---|---
Band1 | Blue | 0.45–0.52 | Water bodies, soils, and vegetation
Band2 | Green | 0.52–0.59 | Vegetation and trees
Band3 | Red | 0.63–0.69 | Bare ground and plant density
Band4 | Near-infrared | 0.77–0.89 | Estimating biomass and distinguishing paths
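The table above notes that the red band highlights plant density and the near-infrared band supports biomass estimation. Although NDVI is not described in this excerpt, these two bands are typically combined via the Normalized Difference Vegetation Index; a minimal sketch (the function name and sample values are illustrative assumptions):

```python
import numpy as np

def ndvi(red, nir, eps=1e-8):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel from reflectance arrays."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Illustrative reflectances: first pixel vegetated, second pixel bare ground.
red = np.array([[0.10, 0.30]])
nir = np.array([[0.50, 0.32]])
print(ndvi(red, nir))
```

Higher NDVI values indicate denser, healthier vegetation; values near zero suggest bare soil or built surfaces.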
Source | Training Set | Validation Set | Test Set | Total
---|---|---|---|---
Zhejiang Province | 2550 | 737 | 368 | 3655
Anhui Province | 2586 | 750 | 375 | 3711
Total | 5136 | 1487 | 743 | 7366
Methods | PA (%) | Recall (%) | F1-S (%) | IoU (%) |
---|---|---|---|---|
SegNet | 77.07 | 80.56 | 65.29 | 48.47 |
TransUNet | 86.07 | 83.56 | 81.94 | 69.41 |
UNet | 83.22 | 76.42 | 79.50 | 65.98 |
DeeplabV3+ | 80.65 | 84.04 | 71.79 | 56.00 |
Methods | PA (%) | Recall (%) | F1-S (%) | IoU (%) |
---|---|---|---|---|
TransUNet | 86.07 | 83.56 | 81.94 | 69.41 |
Improvements to TransUNet | 92.39 | 91.41 | 92.63 | 86.28 |
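The four indicators reported in the tables above can be derived from the per-pixel confusion matrix of a binary segmentation. A minimal sketch, assuming PA denotes pixel accuracy and F1-S the F1 score (the function name and toy masks are illustrative, not the paper's code):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Return (PA, Recall, F1, IoU) for binary masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()      # true positives
    fp = np.logical_and(pred, ~target).sum()     # false positives
    fn = np.logical_and(~pred, target).sum()     # false negatives
    tn = np.logical_and(~pred, ~target).sum()    # true negatives
    pa = (tp + tn) / (tp + tn + fp + fn)                 # pixel accuracy
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0  # intersection over union
    return pa, recall, f1, iou

# Toy 2x2 masks purely for illustration.
pred = np.array([[1, 0], [1, 1]])
target = np.array([[1, 1], [0, 1]])
print(segmentation_metrics(pred, target))
```

Note that IoU is always the strictest of the four, which is consistent with the tables: the improved TransUNet's jump from 69.41% to 86.28% IoU is larger than its gains on the other indicators.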
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Qi, L.; Zuo, D.; Wang, Y.; Tao, Y.; Tang, R.; Shi, J.; Gong, J.; Li, B. Convolutional Neural Network-Based Method for Agriculture Plot Segmentation in Remote Sensing Images. Remote Sens. 2024, 16, 346. https://doi.org/10.3390/rs16020346