Multi-Scale Similarity Guidance Few-Shot Network for Ship Segmentation in SAR Images
Abstract
1. Introduction
- A multi-scale similarity guidance few-shot learning framework with a dual-branch structure is proposed to segment ships in heterogeneous SAR images from only a few annotated samples;
- A residual block combined with FRN (Res-FRN) is designed to improve generalization capability in the target domain; it forms the encoder of both the support and query branches for domain-independent feature extraction;
- A similarity guidance module is proposed and inserted between the two branches at multiple scales, guiding the segmentation of the query branch step by step through pixel-wise similarity measurement;
- A ship segmentation dataset named SARShip-4i is built, and experimental results on this dataset demonstrate that the proposed MSG-FN achieves superior ship segmentation performance.
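The pixel-wise similarity guidance named in the contributions can be sketched as follows. This is a minimal illustration, not the authors' implementation: following the spirit of SG-One (cited below), it assumes the support feature map is masked-average-pooled into a class prototype and that cosine similarity between this prototype and every query feature vector yields the guidance map. The function name and tensor shapes are assumptions.

```python
import numpy as np

def similarity_guidance(support_feat, support_mask, query_feat, eps=1e-8):
    """Pixel-wise cosine-similarity guidance (illustrative sketch).

    support_feat: (C, H, W) support feature map
    support_mask: (H, W) binary ship mask at feature resolution
    query_feat:   (C, H, W) query feature map
    Returns an (H, W) similarity map with values in [-1, 1].
    """
    C, H, W = support_feat.shape
    # Masked average pooling: prototype of the ship class on the support image.
    mask = support_mask.reshape(1, H, W)
    prototype = (support_feat * mask).sum(axis=(1, 2)) / (mask.sum() + eps)  # (C,)
    # Cosine similarity between the prototype and each query pixel's feature.
    q = query_feat.reshape(C, -1)                 # (C, H*W)
    q_norm = np.linalg.norm(q, axis=0) + eps
    p_norm = np.linalg.norm(prototype) + eps
    sim = (prototype @ q) / (p_norm * q_norm)     # (H*W,)
    return sim.reshape(H, W)
```

In the paper's multi-scale setting this measurement would be repeated between the two branches at several scales; a single scale is shown here.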
2. Related Work
2.1. Semantic Segmentation
2.2. Few-Shot Learning
2.3. Few-Shot Semantic Segmentation
3. Method
3.1. Problem Setup
3.2. MSG-FN Architecture
3.3. Residual Block Combined with FRN
3.4. Multi-Scale Similarity Guidance Module
3.5. Training and Inference
Algorithm 1. The Training and Testing Procedures of the Proposed MSG-FN
Input: Meta-training set and meta-testing set.
Output: Network parameters θ.
Initialization: Initialize MSG-FN with Kaiming uniform initialization.
for each episode sampled from the meta-training set do
  …
end for
for each episode sampled from the meta-testing set do
  …
end for
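The Res-FRN encoder of Section 3.3 replaces batch normalization with filter response normalization (FRN) followed by a thresholded linear unit (TLU), so that no statistics are shared across the batch. A minimal NumPy sketch of FRN + TLU, following Singh and Krishnan's formulation (the function name and parameter shapes are assumptions; γ, β, τ would be learnable per-channel parameters in practice):

```python
import numpy as np

def frn_tlu(x, gamma, beta, tau, eps=1e-6):
    """Filter response normalization followed by a thresholded linear unit.

    x: (B, C, H, W) input tensor; gamma, beta, tau: (C,) learnable parameters.
    Each channel is normalized by the mean squared activation over its own
    spatial dimensions, so the result is independent of the batch size B.
    """
    nu2 = np.mean(x ** 2, axis=(2, 3), keepdims=True)   # (B, C, 1, 1)
    x_hat = x / np.sqrt(nu2 + eps)                      # FRN normalization
    y = gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)
    return np.maximum(y, tau.reshape(1, -1, 1, 1))      # TLU activation
```

Because the normalization never averages over the batch axis, the same forward pass applies unchanged at train and test time, which is what motivates its use for cross-domain few-shot transfer here.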
4. Experiment
4.1. SARShip-4i Dataset
4.2. Implementation Details
4.3. Evaluation Metrics
4.4. Comparison with the State-of-the-Art
4.5. Analysis of the Learning Strategy
4.6. Ablation Study
4.7. Robustness Analysis
4.8. Running Time
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
SAR | Synthetic aperture radar |
MSG-FN | Multi-scale similarity guidance few-shot network |
CFAR | Constant false alarm rate |
CNN | Convolutional neural network |
FSL | Few-shot learning |
FRN | Filter response normalization |
FCN | Fully convolutional network |
PPM | Pyramid pooling module |
ASPP | Atrous spatial pyramid pooling |
Res-FRN | Residual block combined with FRN |
SGM | Similarity guidance module |
ResNet | Residual network |
BN | Batch normalization |
TLU | Thresholded linear unit |
ReLU | Rectified linear unit |
SGD | Stochastic gradient descent |
IoU | Intersection over union |
Symbol | Definition |
---|---|
 | The number of categories in the training set |
 | The number of samples of each category in the training set |
 | The number of training samples |
 | The meta-training set and the meta-testing set |
 | The support set and the query set |
 | The support image and its segmentation label |
 | The query image and its segmentation label |
 | The input tensor of the FRN layer |
 | The batch size during training |
 | The number of channels |
 | A small constant |
 | Learnable parameters |
 | The similarity value |
 | The support feature vector |
 | The query feature vector |
 | The target feature vector |
 | The number of true positives |
 | The number of false positives |
 | The number of false negatives |
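The evaluation metrics of Section 4.3 follow directly from the pixel counts above. A short sketch using the standard definitions of precision, recall, F1-score, and IoU over binary masks (assumed to match the paper's usage):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Precision, recall, F1-score and IoU for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou
```

For example, a prediction that covers half of the ground-truth ship pixels and an equal number of background pixels gives precision = recall = F1 = 0.5 and IoU = 1/3.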
References
- Lin, Z.; Ji, K.; Leng, X.; Kuang, G. Squeeze and excitation rank faster R-CNN for ship detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2019, 16, 751–755.
- Zhang, X.; Wang, H.; Xu, C.; Lv, Y.; Fu, C.; Xiao, H.; He, Y. A lightweight feature optimizing network for ship detection in SAR image. IEEE Access 2019, 7, 141662–141678.
- Cui, Z.; Li, Q.; Cao, Z.; Liu, N. Dense attention pyramid networks for multi-scale ship detection in SAR images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8983–8997.
- Song, Y.; Peng, G.; Sun, D.; Xie, X. Active contours driven by Gaussian function and adaptive-scale local correntropy-based K-means clustering for fast image segmentation. Signal Process. 2020, 174, 107625.
- Xia, X.; Lin, T.; Chen, Z.; Xu, H. Salient object segmentation based on active contouring. PLoS ONE 2018, 12, e0188118.
- Zong, J.; Qiu, T.; Li, W.; Guo, D. Automatic ultrasound image segmentation based on local entropy and active contour model. Comput. Math. Appl. 2019, 78, 929–943.
- Xu, L.; Xiao, J.; Yi, B.; Lou, L. An Improved C-V Image Segmentation Method Based on Level Set Model. In Proceedings of the International Conference on Intelligent Networks and Intelligent Systems, Wuhan, China, 1–3 November 2008; pp. 507–510.
- Visalakshi, N.K.; Suguna, J. K-means clustering using Max-min distance measure. In Proceedings of the NAFIPS 2009 Annual Meeting of the North American Fuzzy Information Processing Society, Cincinnati, OH, USA, 14–17 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1–6.
- Dubey, Y.K.; Mushrif, M.M. FCM Clustering Algorithms for Segmentation of Brain MR Images. Adv. Fuzzy Syst. 2016, 2016, 3406406.
- Pham, D.L.; Prince, J.L. An adaptive fuzzy C-means algorithm for image segmentation in the presence of intensity inhomogeneities. Pattern Recognit. Lett. 1999, 20, 57–68.
- Salem, W.S.; Ali, H.F.; Seddik, A.F. Spatial Fuzzy C-Means Algorithm for Bias Correction and Segmentation of Brain MRI Data. Int. Conf. Biomed. Eng. Sci. 2015, 57–65. Available online: https://www.researchgate.net/profile/Wedad-Sallam/publication/277710874_Spatial_Fuzzy_C-Means_Algorithm_for_Bias_Correction_and_Segmentation_of_Brain_MRI_Data/links/59302586a6fdcc89e7842e8a/Spatial-Fuzzy-C-Means-Algorithm-for-Bias-Correction-and-Segmentation-of-Brain-MRI-Data.pdf (accessed on 23 May 2023).
- Cui, Y.; Yang, J.; Zhang, X. New CFAR target detector for SAR images based on kernel density estimation and mean square error distance. J. Syst. Eng. Electron. 2012, 23, 40–46.
- Huang, S.; Huang, W.; Zhang, T. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions. Sci. Rep. 2016, 6, 38596.
- Hou, B.; Chen, X.; Jiao, L. Multilayer CFAR Detection of Ship Targets in Very High Resolution SAR Images. IEEE Geosci. Remote Sens. Lett. 2014, 12, 811–815.
- Crisp, D.J. The State-of-the-Art in Ship Detection in Synthetic Aperture Radar Imagery; Defence Science and Technology Organisation, Information Sciences Laboratory: Salisbury, Australia, 2004.
- Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 99, 2999–3007.
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation; Springer: Cham, Switzerland, 2018.
- Ding, H.; Jiang, X.; Shuai, B.; Liu, A.; Wang, G. Semantic Correlation Promoted Shape-Variant Context for Segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; IEEE: Piscataway, NJ, USA, 2019.
- Henry, C.; Azimi, S.M.; Merkle, N. Road Segmentation in SAR Satellite Images with Deep Fully Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1867–1871.
- Bianchi, F.M.; Grahn, J.; Eckerstorfer, M.; Malnes, E.; Vickers, H. Snow Avalanche Segmentation in SAR Images with Fully Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 75–82.
- Vanschoren, J. Meta-learning: A survey. arXiv 2018, arXiv:1810.03548.
- Ciresan, D.C.; Giusti, A.; Gambardella, L.M.; Schmidhuber, J. Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images. Adv. Neural Inf. Process. Syst. 2012, 25, 2852–2860.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3432–3440.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Yu, F.; Koltun, V. Multi-Scale Context Aggregation by Dilated Convolutions. arXiv 2015, arXiv:1511.07122.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
- Lin, G.; Milan, A.; Shen, C.; Reid, I. RefineNet: Multi-path refinement networks for high-resolution semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
- Rezaee, M.R.; van der Zwet, P.M.J.; Lelieveldt, B.P.E.; van der Geest, R.J.; Reiber, J.H.C. A multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering. IEEE Trans. Image Process. 2000, 9, 1238–1248.
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6230–6239.
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587.
- Yang, M.; Yu, K.; Zhang, C.; Li, Z.; Yang, K. DenseASPP for Semantic Segmentation in Street Scenes. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3684–3692.
- Finn, C.; Abbeel, P.; Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv 2017, arXiv:1703.03400.
- Nichol, A.; Schulman, J. Reptile: A scalable metalearning algorithm. arXiv 2018, arXiv:1803.02999.
- Snell, J.; Swersky, K.; Zemel, R. Prototypical networks for few-shot learning. Adv. Neural Inf. Process. Syst. 2017, 30, 4077–4087.
- Liu, Y.; Zhang, X.; Zhang, S.; He, X. Part-aware prototype network for few-shot semantic segmentation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 142–158.
- Zhang, X.; Wei, Y.; Yang, Y.; Huang, T. SG-One: Similarity guidance network for one-shot semantic segmentation. IEEE Trans. Cybern. 2020, 50, 3855–3865.
- Zhang, C.; Lin, G.; Liu, F.; Yao, R.; Shen, C. CANet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 5217–5226.
- Perez-Rua, J.-M.; Zhu, X.; Hospedales, T.; Xiang, T. Incremental Few-Shot Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020.
- Kang, B.; Liu, Z.; Wang, X.; Yu, F.; Feng, J.; Darrell, T. Few-shot Object Detection via Feature Reweighting. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019.
- Wang, K.; Liew, J.H.; Zou, Y.; Zhou, D.; Feng, J. PANet: Few-shot image semantic segmentation with prototype alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9197–9206.
- Tian, P.; Wu, Z.; Qi, L.; Wang, L.; Shi, Y.; Gao, Y. Differentiable meta-learning model for few-shot semantic segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12087–12094.
- Li, Y.; Wang, N.; Shi, J.; Liu, J.; Hou, X. Revisiting Batch Normalization for Practical Domain Adaptation. arXiv 2016, arXiv:1603.04779.
- Singh, S.; Krishnan, S. Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; IEEE: Piscataway, NJ, USA, 2020.
- Shaban, A.; Bansal, S.; Liu, Z.; Essa, I.; Boots, B. One-Shot Learning for Semantic Segmentation. In Proceedings of the British Machine Vision Conference 2017, London, UK, 4–7 September 2017.
- Nguyen, K.; Todorovic, S. Feature weighting and boosting for few-shot segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 622–631.
- Wei, S.; Zeng, X.; Qu, Q.; Wang, M.; Su, H.; Shi, J. HRSID: A High-Resolution SAR Images Dataset for Ship Detection and Instance Segmentation. IEEE Access 2020, 8, 120234–120254.
- Yang, B.; Liu, C.; Li, B.; Jiao, J.; Ye, Q. Prototype Mixture Models for Few-Shot Semantic Segmentation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 763–778.
Region | Imaging Satellite | Resolution (m) | Num. of Images | Imaging Mode | Polarization | Incident Angle (°) | Min. Size of Ships (Pixel) |
---|---|---|---|---|---|---|---|
Qingdao | TanDEM-X | 0.3 | 1 | ST | HH | - | 68 |
Shanghai | TanDEM-X | 0.3 | 1 | ST | HH | - | 66 |
Hong Kong | TerraSAR-X | 1.0 | 1 | HS | HH | - | 761 |
Istanbul | TerraSAR-X | 0.3 | 1 | ST | VV | - | 54 |
Houston | Sentinel-1B | 3 | 40 | S3-SM | HH | 27.6~34.8 | 11 |
Sao Paulo | Sentinel-1B | 3 | 21 | S3-SM | HH | 27.6~34.8 | 24 |
Sao Paulo | Sentinel-1B | 3 | 20 | S3-SM | HV | 27.6~34.8 | 15 |
Barcelona | TerraSAR-X | 3 | 23 | SM | VV | 20~45 | 26 |
Chittagong | Sentinel-1B | 3 | 18 | S3-SM | VV | 27.6~34.8 | 23 |
Aswan Dam | TerraSAR-X | 0.5 | 2 | ST | HH | 20~60 | 3478 |
Shanghai | TerraSAR-X | 0.5 | 2 | ST | HH | 20~60 | 167 |
Panama Canal | TanDEM | 1 | 1 | HS | HH | 20~55 | 86 |
Visakhapatnam | TerraSAR-X | 1 | 1 | HS | VV | 20~55 | 182 |
Singapore | TerraSAR-X | 3 | 4 | SM | HH | 20~45 | 47 |
Strait Gibraltar | TerraSAR-X | 3 | 2 | SM | HH | 20~45 | 179 |
Bay Plenty | TerraSAR-X | 3 | 1 | SM | VV | 20~45 | 43 |
Fold | Test Regions |
---|---|
SARShip-40 | Visakhapatnam, Hong Kong, Barcelona, Chittagong |
SARShip-41 | Shanghai-HH, Singapore, Shanghai, Sao Paulo-HV |
SARShip-42 | Panama Canal, Bay Plenty, Istanbul, Sao Paulo-HH |
SARShip-43 | Aswan Dam, Strait Gibraltar, Qingdao, Houston |
Metric | Method | SARShip-40 | SARShip-41 | SARShip-42 | SARShip-43 | Mean |
---|---|---|---|---|---|---|
Precision | SG-One [37] | 0.4075 | 0.5777 | 0.5632 | 0.5507 | 0.5248 |
 | PMMs [48] | 0.6018 | 0.8717 | 0.6827 | 0.7973 | 0.7384 |
 | RPMMs [48] | 0.6023 | 0.7252 | 0.7477 | 0.7805 | 0.7139 |
 | MSG-FN (ours) | 0.6822 | 0.6890 | 0.8453 | 0.8221 | 0.7597 |
Recall | SG-One [37] | 0.5208 | 0.6013 | 0.7093 | 0.6673 | 0.6247 |
 | PMMs [48] | 0.7512 | 0.6469 | 0.8667 | 0.8526 | 0.7794 |
 | RPMMs [48] | 0.6940 | 0.7692 | 0.8246 | 0.8295 | 0.7793 |
 | MSG-FN (ours) | 0.6699 | 0.7939 | 0.7889 | 0.8471 | 0.7750 |
F1 | SG-One [37] | 0.4204 | 0.5266 | 0.5865 | 0.5710 | 0.5261 |
 | PMMs [48] | 0.6264 | 0.7265 | 0.7232 | 0.8132 | 0.7223 |
 | RPMMs [48] | 0.6129 | 0.7297 | 0.7538 | 0.7930 | 0.7224 |
 | MSG-FN (ours) | 0.6422 | 0.7026 | 0.8011 | 0.8282 | 0.7435 |
IoU | SG-One [37] | 0.3038 | 0.3790 | 0.4359 | 0.4287 | 0.3869 |
 | PMMs [48] | 0.5081 | 0.5853 | 0.6035 | 0.7068 | 0.6009 |
 | RPMMs [48] | 0.4897 | 0.6027 | 0.6320 | 0.6784 | 0.6007 |
 | MSG-FN (ours) | 0.5314 | 0.5962 | 0.6927 | 0.7236 | 0.6360 |
Metric | Method | SARShip-40 | SARShip-41 | SARShip-42 | SARShip-43 | Mean |
---|---|---|---|---|---|---|
Precision | SG-One [37] | 0.4135 | 0.6830 | 0.6175 | 0.5722 | 0.5716 |
 | PMMs [48] | 0.6066 | 0.8731 | 0.6840 | 0.7967 | 0.7401 |
 | RPMMs [48] | 0.6264 | 0.7544 | 0.7528 | 0.7980 | 0.7329 |
 | MSG-FN (ours) | 0.6821 | 0.6891 | 0.8451 | 0.8225 | 0.7597 |
Recall | SG-One [37] | 0.5191 | 0.5741 | 0.6926 | 0.6594 | 0.6113 |
 | PMMs [48] | 0.7494 | 0.6456 | 0.8674 | 0.8526 | 0.7788 |
 | RPMMs [48] | 0.5938 | 0.6664 | 0.7246 | 0.7023 | 0.6718 |
 | MSG-FN (ours) | 0.6705 | 0.7938 | 0.7892 | 0.8469 | 0.7751 |
F1 | SG-One [37] | 0.4234 | 0.5748 | 0.6186 | 0.5822 | 0.5498 |
 | PMMs [48] | 0.6292 | 0.7263 | 0.7234 | 0.8131 | 0.7230 |
 | RPMMs [48] | 0.5783 | 0.6922 | 0.7028 | 0.7360 | 0.6773 |
 | MSG-FN (ours) | 0.6425 | 0.7027 | 0.8012 | 0.8282 | 0.7437 |
IoU | SG-One [37] | 0.3065 | 0.4214 | 0.4661 | 0.4390 | 0.4083 |
 | PMMs [48] | 0.5106 | 0.5849 | 0.6037 | 0.7067 | 0.6015 |
 | RPMMs [48] | 0.4418 | 0.5497 | 0.5590 | 0.5983 | 0.5372 |
 | MSG-FN (ours) | 0.5319 | 0.5963 | 0.6929 | 0.7237 | 0.6362 |
Method | IoU | Precision | Recall | F1-Score |
---|---|---|---|---|
U-Net [27] | 0.5085 | 0.6461 | 0.7527 | 0.6411 |
PSPNet [30] | 0.4481 | 0.5086 | 0.8009 | 0.5659 |
U-Net (TL) | 0.5562 | 0.7129 | 0.7516 | 0.6886 |
PSPNet (TL) | 0.6071 | 0.7704 | 0.7380 | 0.7301 |
MSG-FN (1-shot) | 0.7236 | 0.8221 | 0.8471 | 0.8282 |
MSG-FN (5-shot) | 0.7237 | 0.8225 | 0.8469 | 0.8282 |
Method | Precision | Recall | F1-Score | IoU |
---|---|---|---|---|
W/o Res-FRN | 0.6925 | 0.8357 | 0.7269 | 0.6127 |
W/o Multi-scale SGM | 0.8201 | 0.8654 | 0.8319 | 0.7337 |
MSG-FN (ours) | 0.8400 | 0.8513 | 0.8353 | 0.7398 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Li, R.; Li, J.; Gou, S.; Lu, H.; Mao, S.; Guo, Z. Multi-Scale Similarity Guidance Few-Shot Network for Ship Segmentation in SAR Images. Remote Sens. 2023, 15, 3304. https://doi.org/10.3390/rs15133304