Multi-SUAV Collaboration and Low-Altitude Remote Sensing Technology-Based Image Registration and Change Detection Network of Garbage Scattered Areas in Nature Reserves
Abstract
1. Introduction
- (1) We propose RegCD-Net, an end-to-end CNN architecture that integrates registration and change detection in a single network to achieve registration and change detection in bi-temporal SUAV low-altitude remote sensing images.
- (2) We assemble global and local correlations to generate an optical flow pyramid and realize image registration through layer-by-layer optical flow fields (see the correlation sketch after this list).
- (3) We utilize nested connections to combine effective information from different layers and perform deep supervision through a combined attention module to achieve change detection.
- (4) We propose a method for generating change detection datasets with viewing-angle changes using optical flow fields, and generate a bi-temporal SUAV low-altitude remote sensing dataset for change detection in the garbage-scattered areas of nature reserves.
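For concreteness, the sketch below illustrates the kind of global and local feature correlations referred to in contribution (2). It is a minimal PyTorch example written under our own assumptions (L2-normalized feature maps from a shared Siamese encoder, a simple shift-and-multiply local cost volume), not the released RegCD-Net code.

```python
import torch
import torch.nn.functional as F

def global_correlation(f1, f2):
    """Correlate every position of f1 with every position of f2.

    Returns a (B, H*W, H, W) cost volume, suitable for estimating large
    displacements at the coarsest pyramid level.
    """
    b, c, h, w = f1.shape
    f1 = f1.view(b, c, h * w)                      # (B, C, HW)
    f2 = f2.view(b, c, h * w)                      # (B, C, HW)
    corr = torch.bmm(f1.transpose(1, 2), f2)       # (B, HW, HW)
    return corr.view(b, h * w, h, w)               # channels index f1 positions

def local_correlation(f1, f2, radius=4):
    """Correlate each position of f1 with a (2r+1)x(2r+1) neighbourhood of f2.

    Returns a (B, (2r+1)^2, H, W) cost volume for refining small residual
    displacements at the finer pyramid levels.
    """
    b, c, h, w = f1.shape
    f2_pad = F.pad(f2, (radius, radius, radius, radius))
    costs = []
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            f2_shift = f2_pad[:, :, dy:dy + h, dx:dx + w]
            costs.append((f1 * f2_shift).sum(dim=1, keepdim=True))  # (B, 1, H, W)
    return torch.cat(costs, dim=1)
```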
2. Related Work
3. Materials and Methods
3.1. Network Architecture
Algorithm 1 Inference of RegCD-Net for change detection
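As a reading aid, the sketch below outlines the inference flow that Algorithm 1 formalizes: a shared encoder extracts bi-temporal features, a coarse-to-fine optical flow pyramid registers them, and the nested-connection decoder produces the change map. The module names (encoder, flow_pyramid, cd_decoder) and the bilinear warping helper are illustrative assumptions, not the authors' exact listing.

```python
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """Backward-warp a feature map with a dense flow field via bilinear sampling."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feat.device)   # (2, H, W), x/y order
    coords = base.unsqueeze(0) + flow                             # (B, 2, H, W)
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0                 # normalise to [-1, 1]
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                          # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)

def regcd_inference(img_t1, img_t2, encoder, flow_pyramid, cd_decoder):
    feats_t1 = encoder(img_t1)                    # multi-scale features, time 1
    feats_t2 = encoder(img_t2)                    # multi-scale features, time 2
    flows = flow_pyramid(feats_t1, feats_t2)      # coarse-to-fine optical flow fields
    warped_t1 = [warp(f, fl) for f, fl in zip(feats_t1, flows)]
    return cd_decoder(warped_t1, feats_t2)        # nested connections + attention -> change map
```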
3.2. Optical Flow Registration
3.2.1. Local Correlation
3.2.2. Global Correlation
3.2.3. Local and Global Correlation Assemble
3.2.4. Flow Decoder
3.2.5. Optical Flow Pyramid
3.3. Nested Connection Change Detection
3.3.1. Nested Connection Up-Sampling
3.3.2. Channel Attention
3.4. Multi-SUAV Collaboration Platform
3.4.1. Multi-SUAV Collaboration
3.4.2. Path Planning
3.4.3. Location
3.5. Training
3.5.1. Loss Function
3.5.2. Dataset Generation
4. Experiments
4.1. Training Datasets
4.2. Evaluation Metrics
4.3. Implementation Details
5. Results
- FC-Siam-Conc [58]: The baseline model for change detection, which consists entirely of convolutional layers. It is a simple combination of UNet and a Siamese network and fuses the bi-temporal information by feature concatenation (see the fusion sketch after this list).
- FC-Siam-Diff [58]: The baseline model for change detection, whose architecture is similar to FC-Siam-Conc but which fuses the bi-temporal information through multi-scale feature differences.
- UNet++_MSOF [59]: Feature fusion method, which feeds concatenated bi-temporal images into UNet++ and uses multiple side-output fusion for deep supervision.
- IFN [37]: Multi-scale feature concatenation method, which fuses the multi-level deep features of the bi-temporal images with difference features through attention modules and uses a deep supervision strategy for optimization.
- DASNet [12]: Attention-based method, which extracts features with a Siamese backbone and uses a dual attention mechanism to build connections between local features, yielding more discriminative feature representations.
- BIT [60]: Transformer-based method, which models context within the spatial-temporal domain through multiple attention heads and projects it back to the pixel space to refine the representation of the original features.
- SNUNet-CD [40]: Multi-scale feature concatenation method, which combines UNet++ with a Siamese network and uses an ensemble channel attention module to integrate multi-level outputs for deep supervision.
- RDP-Net [61]: Feature fusion method, which uses a region-detail-preserving network to improve detection performance on boundaries and small regions.
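For orientation, the snippet below sketches the two bi-temporal fusion schemes the FC-Siam baselines above rely on, assuming PyTorch feature maps from a shared Siamese encoder; it illustrates the fusion idea only and is not code from those papers.

```python
import torch

def fuse_concat(feat_t1, feat_t2):
    """FC-Siam-Conc style: channel-wise concatenation of bi-temporal features."""
    return torch.cat((feat_t1, feat_t2), dim=1)

def fuse_diff(feat_t1, feat_t2):
    """FC-Siam-Diff style: absolute difference of bi-temporal features."""
    return torch.abs(feat_t1 - feat_t2)
```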
6. Ablation Studies
6.1. Ablation on Loss Function
6.2. Effect of Assemble Correlation and Nested Connection
6.3. Comparison on Attention Modules
7. Discussion
- RegCD-Net can register bi-temporal SUAV low-altitude remote sensing images with viewpoint changes without pre-registration, with better integrity, fast speed, and relatively few parameters, and it improves change detection accuracy through end-to-end optimization.
- The assembled global and local correlation captures large displacements between feature maps in the deep layers of the network and provides more detailed local associations in the shallow layers. It achieves fine registration of remote sensing images with large viewpoint changes by constructing a coarse-to-fine optical flow pyramid.
- Nested connection up-sampling combines the rich semantic information in the deep layers with the precise location information in the shallow layers through different skip connections and up-sampling pathways, yielding finer edge detail in the change detection maps.
- The channel group attention module (CGAM) effectively focuses on the intra-group common features and the inter-group distinct features of the four up-sampling outputs from different up-sampling pathways, and selects more appropriate feature representation channels, making the boundary positions in the change detection results more precise (a minimal attention sketch follows this list).
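The sketch below shows a squeeze-and-excitation style channel attention applied to the four concatenated up-sampling outputs, in the spirit of the CGAM summarized above; the layer sizes and exact grouping strategy are our assumptions, not the authors' module.

```python
import torch
import torch.nn as nn

class ChannelGroupAttention(nn.Module):
    """Re-weight the channels of four concatenated up-sampling pathway outputs."""

    def __init__(self, channels_per_branch, branches=4, reduction=4):
        super().__init__()
        total = channels_per_branch * branches
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(total, total // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(total // reduction, total),
            nn.Sigmoid(),
        )

    def forward(self, branch_outputs):
        # branch_outputs: list of four (B, C, H, W) tensors from the up-sampling pathways
        x = torch.cat(branch_outputs, dim=1)                 # (B, 4C, H, W)
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                                   # attention-weighted channels
```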
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Xiao, P.; Zhang, X.; Wang, D.; Yuan, M.; Feng, X.; Kelly, M. Change detection of built-up land: A framework of combining pixel-based detection and object-based recognition. ISPRS J. Photogramm. Remote Sens. 2016, 119, 402–414. [Google Scholar] [CrossRef]
- Gao, S.; Li, W.; Sun, K.; Wei, J.; Chen, Y.; Wang, X. Built-Up Area Change Detection Using Multi-Task Network with Object-Level Refinement. Remote Sens. 2022, 14, 957. [Google Scholar] [CrossRef]
- Xing, J.; Sieber, R.; Caelli, T. A scale-invariant change detection method for land use/cover change research. ISPRS J. Photogramm. Remote Sens. 2018, 141, 252–264. [Google Scholar] [CrossRef]
- Lv, Z.; Liu, T.; Zhang, P.; Atli Benediktsson, J.; Chen, Y. Land cover change detection based on adaptive contextual information using bi-temporal remote sensing images. Remote Sens. 2018, 10, 901. [Google Scholar] [CrossRef]
- Lu, M.; Pebesma, E.; Sanchez, A.; Verbesselt, J. Spatio-temporal change detection from multidimensional arrays: Detecting deforestation from MODIS time series. ISPRS J. Photogramm. Remote Sens. 2016, 117, 227–236. [Google Scholar] [CrossRef]
- Vega, P.J.S.; da Costa, G.A.O.P.; Feitosa, R.Q.; Adarme, M.X.O.; de Almeida, C.A.; Heipke, C.; Rottensteiner, F. An unsupervised domain adaptation approach for change detection and its application to deforestation mapping in tropical biomes. ISPRS J. Photogramm. Remote Sens. 2021, 181, 113–128. [Google Scholar] [CrossRef]
- Jiang, J.; Xing, Y.; Wei, W.; Yan, E.; Xiang, J.; Mo, D. DSNUNet: An Improved Forest Change Detection Network by Combining Sentinel-1 and Sentinel-2 Images. Remote Sens. 2022, 14, 5046. [Google Scholar] [CrossRef]
- Niu, C.; Zhang, H.; Liu, W.; Li, R.; Hu, T. Using a fully polarimetric SAR to detect landslide in complex surroundings: Case study of 2015 Shenzhen landslide. ISPRS J. Photogramm. Remote Sens. 2021, 174, 56–67. [Google Scholar] [CrossRef]
- Wang, X.; Fan, X.; Xu, Q.; Du, P. Change detection-based co-seismic landslide mapping through extended morphological profiles and ensemble strategy. ISPRS J. Photogramm. Remote Sens. 2022, 187, 225–239. [Google Scholar] [CrossRef]
- Tang, L.; Deng, Y.; Ma, Y.; Huang, J.; Ma, J. SuperFusion: A Versatile Image Registration and Fusion Network with Semantic Awareness. IEEE/CAA J. Autom. Sin. 2022, 9, 2121–2137. [Google Scholar] [CrossRef]
- Chen, H.; Shi, Z. A spatial-temporal attention-based method and a new dataset for remote sensing image change detection. Remote Sens. 2020, 12, 1662. [Google Scholar] [CrossRef]
- Chen, J.; Yuan, Z.; Peng, J.; Chen, L.; Huang, H.; Zhu, J.; Liu, Y.; Li, H. DASNet: Dual attentive fully convolutional Siamese networks for change detection in high-resolution satellite images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 1194–1206. [Google Scholar] [CrossRef]
- Chen, P.; Guo, L.; Zhang, X.; Qin, K.; Ma, W.; Jiao, L. Attention-Guided Siamese Fusion Network for Change Detection of Remote Sensing Images. Remote Sens. 2021, 13, 4597. [Google Scholar] [CrossRef]
- Zhang, L.; Hu, X.; Zhang, M.; Shu, Z.; Zhou, H. Object-level change detection with a dual correlation attention-guided detector. ISPRS J. Photogramm. Remote Sens. 2021, 177, 147–160. [Google Scholar] [CrossRef]
- Cheng, H.; Wu, H.; Zheng, J.; Qi, K.; Liu, W. A hierarchical self-attention augmented Laplacian pyramid expanding network for change detection in high-resolution remote sensing images. ISPRS J. Photogramm. Remote Sens. 2021, 182, 52–66. [Google Scholar] [CrossRef]
- Shen, Q.; Huang, J.; Wang, M.; Tao, S.; Yang, R.; Zhang, X. Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2022, 189, 78–94. [Google Scholar] [CrossRef]
- Chen, P.; Zhang, B.; Hong, D.; Chen, Z.; Yang, X.; Li, B. FCCDN: Feature constraint network for VHR image change detection. ISPRS J. Photogramm. Remote Sens. 2022, 187, 101–119. [Google Scholar] [CrossRef]
- Zhu, Q.; Guo, X.; Deng, W.; Guan, Q.; Zhong, Y.; Zhang, L.; Li, D. Land-use/land-cover change detection based on a Siamese global learning framework for high spatial resolution remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2022, 184, 63–78. [Google Scholar] [CrossRef]
- Zhang, X.; He, L.; Qin, K.; Dang, Q.; Si, H.; Tang, X.; Jiao, L. SMD-Net: Siamese Multi-Scale Difference-Enhancement Network for Change Detection in Remote Sensing. Remote Sens. 2022, 14, 1580. [Google Scholar] [CrossRef]
- Zheng, J.; Tian, Y.; Yuan, C.; Yin, K.; Zhang, F.; Chen, F.; Chen, Q. MDESNet: Multitask Difference-Enhanced Siamese Network for Building Change Detection in High-Resolution Remote Sensing Images. Remote Sens. 2022, 14, 3775. [Google Scholar] [CrossRef]
- Li, J.; Zhu, S.; Gao, Y.; Zhang, G.; Xu, Y. Change Detection for High-Resolution Remote Sensing Images Based on a Multi-Scale Attention Siamese Network. Remote Sens. 2022, 14, 3464. [Google Scholar] [CrossRef]
- Ma, J.; Tang, L.; Fan, F.; Huang, J.; Mei, X.; Ma, Y. SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer. IEEE/CAA J. Autom. Sin. 2022, 9, 1200–1217. [Google Scholar] [CrossRef]
- Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. Adv. Neural Inf. Process. Syst. 2015, 28, 2017–2025. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; Brox, T. Flownet: Learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2758–2766. [Google Scholar]
- Ilg, E.; Mayer, N.; Saikia, T.; Keuper, M.; Dosovitskiy, A.; Brox, T. Flownet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2462–2470. [Google Scholar]
- Ranjan, A.; Black, M.J. Optical flow estimation using a spatial pyramid network. In Proceedings of the IEEE conference on computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4161–4170. [Google Scholar]
- Sun, D.; Yang, X.; Liu, M.Y.; Kautz, J. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8934–8943. [Google Scholar]
- Melekhov, I.; Tiulpin, A.; Sattler, T.; Pollefeys, M.; Rahtu, E.; Kannala, J. Dgc-net: Dense geometric correspondence network. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 7–11 January 2019; pp. 1034–1042. [Google Scholar]
- Rocco, I.; Arandjelovic, R.; Sivic, J. Convolutional neural network architecture for geometric matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6148–6157. [Google Scholar]
- Xiao, X.; Lian, S.; Luo, Z.; Li, S. Weighted res-unet for high-quality retina vessel segmentation. In Proceedings of the 2018 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 19–21 October 2018; pp. 327–331. [Google Scholar]
- Guan, S.; Khan, A.A.; Sikdar, S.; Chitnis, P.V. Fully dense UNet for 2-D sparse photoacoustic tomography artifact removal. IEEE J. Biomed. Health Inform. 2019, 24, 568–576. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Zheng, Z.; Wan, Y.; Zhang, Y.; Xiang, S.; Peng, D.; Zhang, B. CLNet: Cross-layer convolutional neural network for change detection in optical remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2021, 175, 247–267. [Google Scholar] [CrossRef]
- Bi, H.B.; Lu, D.; Zhu, H.H.; Yang, L.N.; Guan, H.P. STA-Net: Spatial-temporal attention network for video salient object detection. Appl. Intell. 2021, 51, 3450–3459. [Google Scholar] [CrossRef]
- Zhang, C.; Yue, P.; Tapete, D.; Jiang, L.; Shangguan, B.; Huang, L.; Liu, G. A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images. ISPRS J. Photogramm. Remote Sens. 2020, 166, 183–200. [Google Scholar] [CrossRef]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–11. [Google Scholar]
- Fang, S.; Li, K.; Shao, J.; Li, Z. SNUNet-CD: A densely connected Siamese network for change detection of VHR images. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
- Hui, T.W.; Tang, X.; Loy, C.C. A lightweight optical flow CNN—Revisiting data fidelity and regularization. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2555–2569. [Google Scholar] [CrossRef]
- Hui, T.W.; Tang, X.; Loy, C.C. Liteflownet: A lightweight convolutional neural network for optical flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8981–8989. [Google Scholar]
- Kim, S.; Min, D.; Jeong, S.; Kim, S.; Jeon, S.; Sohn, K. Semantic attribute matching networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12339–12348. [Google Scholar]
- Rocco, I.; Cimpoi, M.; Arandjelović, R.; Torii, A.; Pajdla, T.; Sivic, J. Neighbourhood consensus networks. Adv. Neural Inf. Process. Syst. 2018, 31. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Xing, L.; Fan, X.; Dong, Y.; Xiong, Z.; Xing, L.; Yang, Y.; Bai, H.; Zhou, C. Multi-UAV cooperative system for search and rescue based on YOLOv5. Int. J. Disaster Risk Reduct. 2022, 76, 102972. [Google Scholar] [CrossRef]
- Stanković, M.; Mirza, M.M.; Karabiyik, U. UAV forensics: DJI mini 2 case study. Drones 2021, 5, 49. [Google Scholar] [CrossRef]
- Salamh, F.E.; Mirza, M.M.; Karabiyik, U. UAV forensic analysis and software tools assessment: DJI Phantom 4 and Matrice 210 as case studies. Electronics 2021, 10, 733. [Google Scholar] [CrossRef]
- Kwak, J.; Sung, Y. Autonomous UAV flight control for GPS-based navigation. IEEE Access 2018, 6, 37947–37955. [Google Scholar] [CrossRef]
- Grayson, B.; Penna, N.T.; Mills, J.P.; Grant, D.S. GPS precise point positioning for UAV photogrammetry. Photogramm. Rec. 2018, 33, 427–447. [Google Scholar] [CrossRef]
- Annis, A.; Nardi, F.; Petroselli, A.; Apollonio, C.; Arcangeletti, E.; Tauro, F.; Belli, C.; Bianconi, R.; Grimaldi, S. UAV-DEMs for small-scale flood hazard mapping. Water 2020, 12, 1717. [Google Scholar] [CrossRef]
- Ajayi, O.G.; Salubi, A.A.; Angbas, A.F.; Odigure, M.G. Generation of accurate digital elevation models from UAV acquired low percentage overlapping images. Int. J. Remote Sens. 2017, 38, 3113–3134. [Google Scholar] [CrossRef]
- Uysal, M.; Toprak, A.S.; Polat, N. DEM generation with UAV Photogrammetry and accuracy analysis in Sahitler hill. Measurement 2015, 73, 539–543. [Google Scholar] [CrossRef]
- Xi, X.; Cao, X.; Yang, P.; Chen, J.; Quek, T.; Wu, D. Joint user association and UAV location optimization for UAV-aided communications. IEEE Wirel. Commun. Lett. 2019, 8, 1688–1691. [Google Scholar] [CrossRef]
- Duffy, J.P.; Cunliffe, A.M.; DeBell, L.; Sandbrook, C.; Wich, S.A.; Shutler, J.D.; Myers-Smith, I.H.; Varela, M.R.; Anderson, K. Location, location, location: Considerations when using lightweight drones in challenging environments. Remote Sens. Ecol. Conserv. 2018, 4, 7–19. [Google Scholar] [CrossRef]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 24–27 October 2017; pp. 2980–2988. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Daudt, R.C.; Le Saux, B.; Boulch, A. Fully convolutional siamese networks for change detection. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 4063–4067. [Google Scholar]
- Peng, D.; Zhang, Y.; Guan, H. End-to-end change detection for high resolution satellite images using improved UNet++. Remote Sens. 2019, 11, 1382. [Google Scholar] [CrossRef]
- Chen, H.; Qi, Z.; Shi, Z. Remote sensing image change detection with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14. [Google Scholar] [CrossRef]
- Chen, H.; Pu, F.; Yang, R.; Tang, R.; Xu, X. RDP-Net: Region Detail Preserving Network for Change Detection. arXiv 2022, arXiv:2202.09745. [Google Scholar] [CrossRef]
- Truong, P.; Danelljan, M.; Timofte, R. GLU-Net: Global-local universal network for dense flow and correspondences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6258–6268. [Google Scholar]
Parameter Name | Value |
---|---|
rotation range | ±30° |
translation range | ±0.2 |
scale range | 0.8–1.2 |
pixel intensity range | 0.5–1.5 per channel |
contrast range | 0.5–2 per channel |
Gaussian distribution | μ = 0, σ = 0.05 × 255 |
Gaussian blur probability | 0.5 |
Gaussian noise probability | 0.5 per channel |
Gaussian kernel size | 3 |
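As a rough illustration of how the parameters in the table above could be applied when synthesizing a bi-temporal image pair, the following sketch uses OpenCV and NumPy; the function layout and library choices are assumptions for illustration, not the authors' generation pipeline.

```python
import random
import numpy as np
import cv2

def augment(image):
    """Apply the geometric and photometric perturbations listed in the table."""
    h, w = image.shape[:2]

    # geometric warp: rotation +/-30 deg, translation +/-0.2 of size, scale 0.8-1.2
    angle = random.uniform(-30, 30)
    scale = random.uniform(0.8, 1.2)
    tx, ty = random.uniform(-0.2, 0.2) * w, random.uniform(-0.2, 0.2) * h
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[:, 2] += (tx, ty)
    out = cv2.warpAffine(image, M, (w, h)).astype(np.float32)

    # photometric jitter: per-channel intensity 0.5-1.5 and contrast 0.5-2
    for c in range(out.shape[2]):
        gain = random.uniform(0.5, 1.5)
        contrast = random.uniform(0.5, 2.0)
        mean = out[..., c].mean()
        out[..., c] = (out[..., c] - mean) * contrast + mean   # contrast around the mean
        out[..., c] *= gain                                    # intensity gain

    # Gaussian blur (p = 0.5, 3x3 kernel) and per-channel Gaussian noise (p = 0.5, sigma = 0.05 * 255)
    if random.random() < 0.5:
        out = cv2.GaussianBlur(out, (3, 3), 0)
    for c in range(out.shape[2]):
        if random.random() < 0.5:
            out[..., c] += np.random.normal(0, 0.05 * 255, size=(h, w))

    return np.clip(out, 0, 255).astype(np.uint8)
```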
Method | Pre-Registration | Backbone | P (%) | R (%) | F1 (%) | Params (M) | FPS |
---|---|---|---|---|---|---|---|
FC-Siam-conc | × | UNet | 91.97 | 57.06 | 70.24 | 1.55 | × |
FC-Siam-diff | × | UNet | 92.20 | 56.69 | 70.02 | 1.35 | × |
UNet++_MSOF | × | UNet++ | 85.90 | 79.76 | 82.53 | 8.83 | × |
IFN | × | VGG16 | 94.88 | 57.75 | 71.80 | 35.99 | ×
DASNet | × | ResNet50 | 45.79 | 79.37 | 64.29 | 50.27 | × |
BIT | × | ResNet18 | 88.18 | 76.63 | 81.43 | 12.40 | × |
SNUNet-CD | × | UNet++ | 87.59 | 67.07 | 75.28 | 12.03 | × |
RDP-Net | × | RSNet | 87.92 | 85.62 | 86.73 | 1.70 | × |
FC-Siam-conc | ✓ | UNet | 95.94 | 94.31 | 94.67 | 15.14 (1.55 + 13.59) | 26.3 |
FC-Siam-diff | ✓ | UNet | 94.78 | 92.71 | 93.73 | 14.94 (1.35 + 13.59) | 26.0 |
UNet++_MSOF | ✓ | UNet++ | 95.28 | 94.16 | 94.72 | 22.42 (8.83 + 13.59) | 25.1 |
IFN | ✓ | VGG16 | 95.36 | 94.81 | 95.08 | 49.58 (35.99 + 13.59) | 21.2 |
DASNet | ✓ | ResNet50 | 85.42 | 96.42 | 90.59 | 63.86 (50.27 + 13.59) | 18.9 |
BIT | ✓ | ResNet18 | 93.35 | 95.29 | 94.31 | 25.99 (12.40 + 13.59) | 20.1 |
SNUNet-CD | ✓ | UNet++ | 96.40 | 95.12 | 95.74 | 25.62 (12.03 + 13.59) | 20.7 |
RDP-Net | ✓ | RSNet | 95.23 | 94.80 | 95.01 | 15.29 (1.70 + 13.59) | 22.0 |
RegCD-Net (ours) | × | ResNet18 | 96.74 | 95.92 | 96.32 | 20.66 | 25.6
AEPE | P (%) | R (%) | F1 (%)
---|---|---|---
4.20 | 95.42 | 95.35 | 95.38
5.22 | 96.74 | 95.92 | 96.32
6.15 | 95.28 | 94.07 | 94.67
7.34 | 94.57 | 92.35 | 93.45
8.48 | 94.15 | 90.66 | 92.37
9.55 | 93.68 | 88.87 | 91.14
LC | GC | UC | NC | P (%) | R (%) | F1 (%) |
---|---|---|---|---|---|---|
✓ | × | ✓ | × | 91.33 | 86.55 | 88.82 |
✓ | × | × | ✓ | 93.24 | 88.98 | 91.00 |
✓ | ✓ | ✓ | × | 93.81 | 91.50 | 92.63 |
✓ | ✓ | × | ✓ | 94.88 | 94.02 | 94.45 |
CAM | SAM | CGAM (Our) | P (%) | R (%) | F1 (%) |
---|---|---|---|---|---|
× | × | × | 94.88 | 94.02 | 94.45 |
✓ | × | × | 95.26 | 94.74 | 94.99 |
× | ✓ | × | 94.96 | 94.93 | 94.94 |
✓ | ✓ | × | 95.82 | 94.96 | 95.38 |
× | × | ✓ | 96.74 | 95.92 | 96.32 |