PODD: A Dual-Task Detection for Greenhouse Extraction Based on Deep Learning
Abstract
1. Introduction
- A dual-task deep learning method, Pixel-based and Object-based Dual-task Detection (PODD), is proposed that combines object detection and semantic segmentation to simultaneously estimate the number and extract the area of agricultural greenhouses from remote sensing images. Improvements are applied separately to the two independent branches of the dual-task detection (the pixel-based branch and the object-based branch), each evaluated with its own metrics, which enhances greenhouse detection and extraction in RGB images. All metrics of the dual-task algorithm reach 95% or higher, surpassing state-of-the-art solutions in both the spatial distribution and the count of greenhouses. The PODD algorithm offers greater efficiency and overall quality of information acquisition, making it useful for automatic greenhouse extraction (both quantity and area) over large areas to inform agricultural and environmental protection policymaking.
- To accurately estimate the number of greenhouses, the paper uses the You Only Look Once X (YOLOX) [40] object detection network embedded with two existing modules: the Convolutional Block Attention Module (CBAM) [41] and Adaptive Spatial Feature Fusion (ASFF) [42]. CBAM and ASFF retain more important feature information and make full use of features at different scales, which the experiments show improves greenhouse detection. The mAP and F1-score of the improved YOLOX network reach 97.65% and 97.50%, which are 1.00 and 0.58 percentage points higher than the original YOLOX solution (and 2.59 and 1.50 points higher than YOLOv5).
- To precisely extract the area of greenhouses, the paper uses the DeeplabV3+ [43,44,45] semantic segmentation network with ResNet-101 [46] as the feature extraction backbone. Adopting ResNet-101 is shown to effectively reduce holes and patches in the extracted areas and achieves the highest mIoU and accuracy among the tested backbones. The accuracy and mIoU of the DeeplabV3+ network reach 99.2% and 95.8%, which are 0.5 and 2.5 percentage points higher than the UNet solution.
- Image fusion technology integrates the pixel-based and object-based results for visualization. Comprehensive information on both greenhouse quantity and area provides data support for agricultural measures from two perspectives. Spatial distribution errors arising from a single result can be compensated by visually integrating the object-based and pixel-based results, and the two results complement and corroborate each other both visually and numerically.
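To illustrate the channel-plus-spatial attention idea behind the CBAM module mentioned above, here is a minimal PyTorch sketch. The reduction ratio (16) and the 7×7 spatial kernel are common defaults from the CBAM paper, assumed here rather than taken from the PODD implementation:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM sketch: channel attention followed by spatial attention.
    Reduction ratio and spatial kernel size are common defaults, not the
    paper's exact settings."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: shared MLP (1x1 convs) over avg- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: 7x7 conv over channel-wise avg and max maps
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention weights from global average- and max-pooling
        avg = torch.mean(x, dim=(2, 3), keepdim=True)
        mx = torch.amax(x, dim=(2, 3), keepdim=True)
        x = x * torch.sigmoid(self.mlp(avg) + self.mlp(mx))
        # Spatial attention weights from channel-wise average and max
        s_avg = torch.mean(x, dim=1, keepdim=True)
        s_max = torch.amax(x, dim=1, keepdim=True)
        return x * torch.sigmoid(self.spatial(torch.cat([s_avg, s_max], dim=1)))
```

In a YOLOX-style detector such a block would typically be inserted after backbone or neck feature maps, refining which channels and spatial locations the detection head attends to.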
2. Study Area and Dataset
2.1. Study Area
2.2. Data Source
2.3. Greenhouse Dataset
3. Dual-Task Algorithm
3.1. Dual-Task Learning Module
3.1.1. Mosaic Data Enhancement
3.1.2. Transfer Learning
3.2. Target Detection Network for Greenhouse Quantity Estimation
3.2.1. Feature Attention Mechanism
3.2.2. Adaptive Spatial Feature Fusion
3.3. Semantic Segmentation Network for Greenhouse Area Extraction
3.4. Integration for Greenhouse Area Extraction and Quantity Estimation
4. Experimental Results
4.1. Evaluation Metrics
4.2. Estimation of Greenhouse Quantity
4.3. Extraction for Greenhouse Area
4.3.1. Evaluation of DeeplabV3+ Models with Different Backbone Networks
4.3.2. Comparative Test of Different Network Structures
4.4. Integration for Greenhouse Area Extraction and Quantity Estimation
5. Discussion
5.1. Advantages of PODD Algorithms
5.2. Limitations and Further Perspectives
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- He, W.Q.; Yan, C.R.; Liu, S.; Chang, R.; Wang, X.; Cao, S.; Liu, Q. The use of plastic mulch film in typical cotton planting regions and the associated environmental pollution. J. Agro-Environ. Sci. 2009, 28, 1618–1622. [Google Scholar]
- Sun, S.; Li, J.M.; Ma, Y.B.; Zhao, H.W. Accumulation of heavy metals in soil and vegetables of greenhouses in Hebei Province, China. J. Agric. Resour. Environ. 2019, 36, 236–244. [Google Scholar]
- Ren, C.; Sun, H.W.; Zhang, P.; Zhang, K. Pollution characteristics of soil phthalate esters in Beijing-Tianjin-Hebei Region. In Proceedings of the 19th Conference of Soil Environment Professional Committee of Chinese Soil Society and the 2nd Symposium of Soil Pollution Prevention and Control and Remediation Technology in Shandong Province, Jinan, China, 18 August 2017. [Google Scholar]
- Li, J.; Zhao, G.X.; Li, T.; Yue, Y.D. Information on greenhouse vegetable fields in TM images Technology research. J. Soil Water Conserv. 2004, 18, 126–129. [Google Scholar]
- Aguera, F.; Liu, J.G. Automatic greenhouse delineation from QuickBird and IKONOS satellite images. Comput. Electron. Agric. 2009, 66, 191–200. [Google Scholar] [CrossRef]
- Aguera, F.; Aguilar, M.A.; Aguilar, F.J. Detecting greenhouse changes from QuickBird imagery on the Mediterranean coast. Int. J. Remote Sens. 2006, 27, 4751–4767. [Google Scholar] [CrossRef] [Green Version]
- Aguera, F.; Aguilar, M.A.; Aguilar, F.J. Using texture analysis to improve per-pixel classification of very high-resolution images for mapping plastic greenhouses. ISPRS J. Photogramm. Remote Sens. 2008, 63, 635–646. [Google Scholar] [CrossRef]
- Yang, D.; Chen, J.; Zhou, Y.; Chen, X.; Chen, X.; Cao, X. Mapping plastic greenhouse with medium spatial resolution satellite data: Development of a new spectral index. ISPRS J. Photogramm. Remote Sens. 2017, 128, 47–60. [Google Scholar] [CrossRef]
- Chen, J.; Shen, R.P.; Li, B.L.; Ti, C.; Yan, X.; Zhou, M.; Wang, S. The development of plastic greenhouse index based on Logistic regression analysis. Remote Sens. Land Resour. 2019, 31, 43–50. [Google Scholar]
- Liu, T.Y.; Zhao, Z.; Shi, T.G. An Extraction Method of Plastic Greenhouse Based on Sentinel-2. Agric. Eng. 2021, 11, 91–98. [Google Scholar]
- Wang, Z.; Zhang, Q.; Qian, J.; Xiao, X. Research on remote sensing detection of greenhouses based on enhanced water body index—Taking Jiangmen area of Guangdong as an example. Integr. Technol. 2017, 6, 11–21. [Google Scholar]
- Balcik, F.B.; Senel, G.; Goksel, C. Greenhouse mapping using object-based classification and Sentinel-2 satellite imagery. In Proceedings of the 2019 8th International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Istanbul, Turkey, 16–19 July 2019; pp. 1–5. [Google Scholar]
- Novelli, A.; Tarantino, E. Combining ad hoc spectral indices based on LANDSAT-8 OLI/TIRS sensor data for the detection of plastic cover vineyard. Remote Sens. Lett. 2015, 6, 933–941. [Google Scholar] [CrossRef]
- Wu, J.Y.; Liu, X.L.; Bai, Y.C.; Shi, Z.T.; Fu, Z. Recognition of plastic greenhouses based on GF-2 data combined with multi-texture features. J. Agric. Eng. 2019, 35, 173–183. [Google Scholar]
- Gao, M.J.; Jiang, Q.N.; Zhao, Y.Y.; Yang, W.; Shi, M. Comparison of plastic greenhouse extraction methods based on GF-2 remote sensing images. J. China Agric. Univ. 2018, 23, 125–134. [Google Scholar]
- Zhu, D.H.; Liu, Y.M.; Feng, Q.L.; Ou, C.; Guo, H.; Liu, J. Spatial-temporal Dynamic Changes of Agricultural Greenhouses in Shandong Province in Recent 30 Years Based on Google Earth Engine. J. Agric. Mach. 2020, 51, 8. [Google Scholar]
- Ma, H.R.; Luo, Z.Q.; Chen, P.T.; Guan, B. Extraction of agricultural greenhouse based on high-resolution remote sensing images and machine learning. Hubei Agric. Sci. 2020, 59, 199–202. [Google Scholar]
- Balcik, F.B.; Senel, G.; Goksel, C. Object-Based Classification of Greenhouses Using Sentinel-2 MSI and SPOT-7 Images: A Case Study from Anamur (Mersin), Turkey. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2769–2777. [Google Scholar] [CrossRef]
- Zhao, L.; Ren, H.Y.; Yang, L.S. Retrieval of Agriculture Greenhouse based on GF-2 Remote Sensing Images. Remote Sens. Technol. Appl. 2019, 34, 677–684. [Google Scholar]
- Li, Q.X. Extraction and analysis of agricultural greenhouse area based on high-resolution remote sensing data-taking Daxing District, Beijing as an example. Beijing Water 2016, 6, 14–17. [Google Scholar]
- Zhou, J.; Fan, X.W.; Liu, Y.H. Research on the method of UAV remote sensing in plastic greenhouse recognition. China Agric. Inf. 2019, 31, 95–111. [Google Scholar]
- Wang, J.M.; Li, Y. Research on data clustering and image segmentation based on K-means algorithm. J. Pingdingshan Univ. 2014, 29, 43–45. [Google Scholar]
- Yang, W.; Fang, T.; Xu, G. Semi-supervised learning remote sensing image classification based on Naive Bayesian. Comput. Eng. 2010, 36, 167–169. [Google Scholar]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Krähenbühl, P.; Koltun, V. Efficient inference in fully connected CRFs with Gaussian edge potentials. In Proceedings of the Advances in Neural Information Processing Systems, Granada, Spain, 12–15 December 2011; pp. 109–117. [Google Scholar]
- Wu, G.M.; Chen, Q.; Ryosuke, S.; Guo, Z.; Shao, X.; Xu, Y. High precision building detection from aerial imagery using a U-Net like convolutional architecture. Acta Geod. Cartogr. Sin. 2018, 47, 864–872. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
- Kavita, B.; Vijaya, M. Evaluation of deep learning CNN model for land use land cover classification and crop identification using Hyperspectral remote sensing images. J. Indian Soc. Remote Sens. 2019, 47, 1949–1958. [Google Scholar]
- Shi, W.X.; Lei, Y.T.; Wang, Y.T.; Yuan, Y.; Chen, J.B. Research on Remote Sensing Extraction Method of Agricultural Greenhouse Based on Deep Learning. Radio Eng. 2021, 51, 1477–1484. [Google Scholar]
- Song, T.Q.; Zhang, X.; Li, J.; Fan, H.S.; Sun, Y.Y.; Zong, D.; Liu, T.X. Research on application of deep learning in multi-temporal greenhouse extraction. Comput. Eng. Appl. 2020, 56, 242–248. [Google Scholar]
- Zheng, L.; He, Z.M.; Ding, H.Y. Research on the Sparse Plastic Shed Extraction from High Resolution Images Using ENVINet5 Deep Learning Method. Remote Sens. Technol. Appl. 2021, 36, 908–915. [Google Scholar]
- Li, M.; Zhang, Z.; Lei, L.; Wang, X.; Guo, X. Agricultural Greenhouses Detection in High-Resolution Satellite Images Based on Convolutional Neural Networks: Comparison of Faster R-CNN, YOLO v3 and SSD. Sensors 2020, 20, 4938. [Google Scholar] [CrossRef]
- Lin, N.; Feng, L.R.; Zhang, X.Q. Aircraft detection in remote sensing image based on optimized Faster-RCNN. Remote Sens. Technol. Appl. 2021, 36, 275–284. [Google Scholar]
- Qian, J.R. Research on Dynamic Human Ear Recognition Method Based on Deep Learning. Ph.D. Thesis, Changchun University, Changchun, China, 2021. [Google Scholar]
- Li, Q.; Chen, J.J.; Li, Q.T.; Li, B.P.; Lu, K.X.; Zan, L.Y.; Chen, Z.C. Detection of tailings pond in Beijing-Tianjin-Hebei region based on SSD model. Remote Sens. Technol. Appl. 2021, 36, 293–303. [Google Scholar]
- Cheng, G.; Zhou, P.; Han, J. Learning Rotation-Invariant Convolutional Neural Networks for Object Detection in VHR Optical Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7405–7415. [Google Scholar] [CrossRef]
- Ma, A.L.; Chen, D.Y.; Zhong, Y.F.; Zheng, Z.; Zhang, L. National-scale greenhouse mapping for high spatial resolution remote sensing imagery using a dense object dual-task deep learning framework: A case study of China. ISPRS J. Photogramm. Remote Sens. 2021, 181, 279–294. [Google Scholar] [CrossRef]
- Chen, D.Y.; Zhong, Y.F.; Ma, A.L.; Cao, L. Dense greenhouse extraction in high spatial resolution remote sensing imagery. In Proceedings of the 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa Village, HI, USA, 16–26 July 2020; pp. 4092–4095. [Google Scholar]
- Liu, Y.; Chen, D.; Ma, A.; Zhong, Y.; Fang, F.; Xu, K. Multiscale u-shaped CNN building instance extraction framework with edge constraint for high-spatial resolution remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2021, 29, 6106–6120. [Google Scholar] [CrossRef]
- Zheng, G.; Liu, S.T.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the ECCV2018, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Liu, S.T.; Huang, D.; Wang, Y.H. Learning Spatial Fusion for Single-Shot Object Detection. arXiv 2019, arXiv:1911.09516. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef] [Green Version]
- Liu, J.; Wang, Z.; Cheng, K. An improved algorithm for semantic segmentation of remote sensing images based on DeepLabv3+. In Proceedings of the 5th International Conference on Communication and Information Processing, Chongqing, China, 15–17 November 2019; pp. 124–128. [Google Scholar]
- Li, Z.; Wang, R.; Zhang, W.; Hu, F.; Meng, L. Multiscale features supported DeepLabV3+ optimization scheme for accurate water semantic segmentation. IEEE Access 2019, 7, 155787–155804. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Qiu, X.L. China successfully launched Gaofen-2 satellite. China Aerosp. 2014, 9, 8. [Google Scholar]
- Pan, T. Technical Characteristics of Gaofen-2 Satellite. China Aerosp. 2015, 1, 3–9. [Google Scholar]
- Defries, R.S. NDVI-derived land cover classifications at a global scale. Int. J. Remote Sens. 1994, 15, 3567–3586. [Google Scholar] [CrossRef]
- Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef]
- Yun, S.; Han, D.; Chun, S.; Oh, S.J.; Yoo, Y.; Choe, J. CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 6022–6031. [Google Scholar]
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 3320–3328. [Google Scholar]
- Vicente, S.; Carreira, J.; Agapito, L.; Batista, J. Reconstructing PASCAL VOC. In Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
- Subramanian, V. Deep Learning with PyTorch; Packt Publishing: Birmingham, UK, 2018. [Google Scholar]
- Elfwing, S.; Uchibe, E.; Doya, K. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Netw. 2018, 107, 3–11. [Google Scholar]
- Wang, X.J.; Ouyang, W. Multi-scale Recurrent Attention Network for Image Motion Deblurring. Infrared Laser Eng. 2022, 51, 20210605-1. [Google Scholar]
- Zhu, X.Z.; Cheng, D.Z.; Zhang, Z.; Lin, S.; Dai, J. An Empirical Study of Spatial Attention Mechanisms in Deep Networks. In Proceedings of the ICCV2019, Seoul, Korea, 27 October–3 November 2019; pp. 6687–6696. [Google Scholar]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the ECCV2018, Munich, Germany, 8–14 September 2018; pp. 833–851. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the CVPR2018, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520. [Google Scholar]
- Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the CVPR2017, Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Feng, S.T.; Sheng, Z.Y.; Hou, X.H.; Tian, Y.; Bi, F.K. YOLOV5 Remote Sensing Image Vehicle Target Detection Based on Spinning Box Regression. In Proceedings of the 15th National Conference on Signal and Intelligent Information Processing and Application, Chongqing, China, 19 August 2022. [Google Scholar]
- Guo, X.; Li, P. Mapping plastic materials in an urban area: Development of the normalized difference plastic index using WorldView-3 superspectral data. ISPRS J. Photogramm. Remote Sens. 2020, 169, 214–226. [Google Scholar] [CrossRef]
- Shi, L.F.; Huang, X.J.; Zhong, T.Y.; Taubenbock, H. Mapping Plastic Greenhouses Using Spectral Metrics Derived from GaoFen-2 Satellite Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 49–59. [Google Scholar] [CrossRef]
- Chen, W.; Xu, Y.M.; Zhang, Z.; Yang, L.; Pan, X.B.; Jia, Z. Mapping agricultural plastic greenhouses using Google Earth images and deep learning. Comput. Electron. Agric. 2021, 191, 106552. [Google Scholar] [CrossRef]
- Wu, C.F.; Deng, J.S.; Wang, K.; Ma, L.G.; Tahmassebi, A.R.S. Object-based classification approach for greenhouse mapping using Landsat-8 imagery. Int. J. Agric. Biol. Eng. 2016, 9, 79–88. [Google Scholar]
- Aguilar, M.A.; Novelli, A.; Nemmaoui, A.; Aguilar, F.J.; González-Yebra, Ó. Optimizing Multiresolution Segmentation for Extracting Plastic Greenhouses from WorldView-3 Imagery; Springer: Cham, Switzerland, 2017. [Google Scholar]
- Chen, Z.; Zhang, T.; Ouyang, C. End-to-End Airplane Detection Using Transfer Learning in Remote Sensing Images. Remote Sens. 2018, 10, 139. [Google Scholar]
Imaging Equipment | Ground Resolution/m | Bands | Spectral Range/μm |
---|---|---|---|
Panchromatic camera | 0.81 | Panchromatic band | 0.45~0.90 |
Multispectral camera | 3.24 | Band 1: Blue | 0.45~0.52 |
Multispectral camera | 3.24 | Band 2: Green | 0.52~0.59 |
Multispectral camera | 3.24 | Band 3: Red | 0.63~0.69 |
Multispectral camera | 3.24 | Band 4: Near-infrared | 0.77~0.89 |
The Network Type | Recall/% | Precision/% | F1-Score/% | mAP/% |
---|---|---|---|---|
YOLOV5 | 96.57 | 94.86 | 96.00 | 95.06 |
YOLOX | 97.71 | 96.14 | 96.92 | 96.65 |
YOLOX+CBAM | 98.61 | 96.25 | 97.42 | 97.62 |
YOLOX+ASFF | 98.12 | 96.15 | 97.13 | 96.98 |
YOLOX+CBAM+ASFF (proposed) | 98.61 | 96.40 | 97.50 | 97.65 |
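The F1-scores in the table above follow directly from precision and recall as their harmonic mean; a minimal check using the proposed row's values:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (both given in percent)."""
    return 2 * precision * recall / (precision + recall)

# Proposed YOLOX+CBAM+ASFF row: precision 96.40%, recall 98.61%
print(round(f1_score(96.40, 98.61), 2))  # → 97.49, matching the table's 97.50 to rounding
```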
Number of Neural Network Groups | Backbone | Batch-Size | Epoch | Training Time/h | Validation Set mIoU | Validation Set Acc |
---|---|---|---|---|---|---|
Group1:MobileNet_4_200 | MobileNetV2 | 4 | 200 | 3.27 | 0.956 | 0.992 |
Group2:MobileNet_8_200 | MobileNetV2 | 8 | 200 | 2.98 | 0.956 | 0.992 |
Group3:ResNet_4_200 | ResNet-101 | 4 | 200 | 6.55 | 0.957 | 0.992 |
Group4:ResNet_8_200 | ResNet-101 | 8 | 200 | 5.78 | 0.958 | 0.992 |
Group5:Xception_4_200 | Aligned Xception | 4 | 200 | 7.98 | 0.943 | 0.990 |
Network Type | Accuracy/% | mIoU/% |
---|---|---|
ENVINet5 | 94.2 | 88.5 |
UNet | 98.7 | 93.3 |
DeeplabV3+ | 99.2 | 95.8 |
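The accuracy and mIoU reported above are standard segmentation metrics computed from a per-class confusion matrix. A minimal sketch, using a small hypothetical two-class (background/greenhouse) matrix rather than the paper's data:

```python
def segmentation_metrics(cm):
    """Overall pixel accuracy and mean IoU from a confusion matrix,
    where cm[i][j] counts pixels of true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    accuracy = correct / total
    ious = []
    for i in range(n):
        tp = cm[i][i]
        fp = sum(cm[j][i] for j in range(n)) - tp  # predicted i, true other
        fn = sum(cm[i]) - tp                       # true i, predicted other
        ious.append(tp / (tp + fp + fn))
    return accuracy, sum(ious) / n

# Hypothetical counts: rows = true (background, greenhouse)
cm = [[900, 10],
      [20, 70]]
acc, miou = segmentation_metrics(cm)
print(round(acc, 3), round(miou, 3))  # → 0.97 0.834
```

Note that mIoU penalizes false positives and false negatives per class, which is why it is consistently lower than overall accuracy in the table.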
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Feng, J.; Wang, D.; Yang, F.; Huang, J.; Wang, M.; Tao, M.; Chen, W. PODD: A Dual-Task Detection for Greenhouse Extraction Based on Deep Learning. Remote Sens. 2022, 14, 5064. https://doi.org/10.3390/rs14195064