Contrastive Learning Network Based on Causal Attention for Fine-Grained Ship Classification in Remote Sensing Scenarios
Abstract
1. Introduction
- To improve local feature quality, a causal attention model based on feature decoupling (FD-CAM) was developed. The FD-CAM uses a decoupling function to guide feature separation and eliminate adhesion between local features, and it uses a counterfactual causal inference architecture to learn the true associations between features and classification results, reducing the influence of spurious associations in data-driven deep learning.
- A feature aggregation module (FAM) is proposed for the weighted fusion and re-association of local features. The FAM weights the features extracted by the backbone network with the local attention weights learned by the FD-CAM to obtain locally decoupled fusion features. These fusion features are then fed into the feature recoupling module (FRM) to re-associate the local features, with an aggregation function guiding the clustering of the feature vectors. (A minimal sketch of the FD-CAM and FAM computations follows this list.)
- Extensive experiments were conducted on two publicly available ship datasets to evaluate the proposed approach. The results show that our method outperforms competing methods on both datasets, demonstrating strong fine-grained classification ability.
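To make the first two contributions concrete, the following is a minimal PyTorch sketch of the counterfactual causal attention computation and the attention-weighted feature fusion. The 1×1 convolution attention head, the random attention maps used as the counterfactual intervention, and all tensor shapes are illustrative assumptions in the spirit of counterfactual attention learning (Rao et al.), not the authors' exact architecture; the FRM re-association step is omitted.

```python
import torch
import torch.nn as nn


class CounterfactualAttention(nn.Module):
    """Hedged sketch of an FD-CAM-style head with FAM-style weighted fusion."""

    def __init__(self, in_channels: int, num_parts: int, num_classes: int):
        super().__init__()
        # M attention maps, one per local part (hypothetical head design).
        self.attn = nn.Conv2d(in_channels, num_parts, kernel_size=1)
        self.fc = nn.Linear(in_channels * num_parts, num_classes)

    def pool(self, feats: torch.Tensor, attn: torch.Tensor) -> torch.Tensor:
        # Weight the backbone feature map with each attention channel and
        # spatially pool, giving one part descriptor per attention channel.
        a = attn.flatten(2).softmax(dim=-1)           # (B, M, HW)
        f = feats.flatten(2)                          # (B, C, HW)
        parts = torch.einsum('bmn,bcn->bmc', a, f)    # (B, M, C)
        return parts.flatten(1)                       # (B, M*C)

    def forward(self, feats: torch.Tensor):
        attn = torch.relu(self.attn(feats))           # factual attention maps
        y_fact = self.fc(self.pool(feats, attn))
        # Counterfactual intervention: swap the learned attention for random
        # maps and keep only the part of the prediction the real attention
        # is causally responsible for.
        y_cf = self.fc(self.pool(feats, torch.rand_like(attn)))
        y_effect = y_fact - y_cf
        return y_fact, y_effect
```

During training, a classification loss applied to both `y_fact` and `y_effect` rewards attention maps only for the prediction improvement they actually cause; `pool()` plays the role of the FAM-style weighted fusion of backbone features with the learned local attention weights. For example, `CounterfactualAttention(2048, 16, 23)` would sit on top of ResNet50 features for the 23 FGSC-23 classes with M = 16 attention channels.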
2. Proposed Method
2.1. Overview of the Method
2.2. Causal Attention Model Based on Feature Decoupling
Algorithm 1. Regional feature decoupling and aggregation

Input: the two homologous-image region feature sets; the number of attention channels M.
Output: the regional feature decoupling loss and the homologous-map local feature aggregation loss.
Initialization: both losses are set to 0.
/* The cyclic shift operation is performed on the region feature set R so that the region feature vectors are paired exactly once during decoupling. Since decoupling takes place within the regional feature set, the two homologous sets are no longer represented separately. */
for s in range(M − 1) do
    /* Cyclic shift by s steps; the vectors are decoupled in pairs. */
    for m in range(M) do
        accumulate the decoupling loss between the m-th region vector and its shifted counterpart
    end
end
/* Homologous-map feature aggregation. */
for m in range(M) do
    accumulate the aggregation loss between the m-th region features of the two homologous maps
end
Return the decoupling loss and the aggregation loss
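As a concrete reading of Algorithm 1, here is a hedged PyTorch sketch. Using squared cosine similarity as the decoupling function (pushing paired region vectors toward orthogonality) and 1 − cosine similarity as the aggregation function are assumptions for illustration; the paper defines its own decoupling and aggregation functions.

```python
import torch
import torch.nn.functional as F


def regional_decoupling_loss(R: torch.Tensor) -> torch.Tensor:
    # R: (M, C) region feature set of one image (M attention channels).
    # Cyclic shifts of 1..M-1 steps pair each region vector with every
    # other one and never with itself.
    M = R.size(0)
    loss = R.new_zeros(())
    for s in range(1, M):
        R_shifted = torch.roll(R, shifts=s, dims=0)
        cos = F.cosine_similarity(R, R_shifted, dim=1)  # (M,)
        loss = loss + (cos ** 2).mean()                 # assumed decoupling fn
    return loss / (M - 1)


def homologous_aggregation_loss(Ra: torch.Tensor, Rb: torch.Tensor) -> torch.Tensor:
    # Ra, Rb: (M, C) region features of two homologous (augmented) views.
    # Pull the m-th region features of the two views together.
    cos = F.cosine_similarity(Ra, Rb, dim=1)            # (M,)
    return (1.0 - cos).mean()                           # assumed aggregation fn
```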
Algorithm 2. Local feature decoupling of non-homologous images

Input: the local features of the two branches' input images; the batch size B of the input images.
Output: the regional feature decoupling loss Ll.
Initialization: Ll = 0.
/* Iterate over all local feature pairs. */
for i in range(B) do
    for j in range(B) do
        /* Decouple only pairs that come from non-homologous images; skip homologous pairs. */
        if i == j then
            continue
        else
            accumulate the decoupling loss between the local features of images i and j in Ll
        end
    end
end
Return Ll
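A corresponding sketch of Algorithm 2, again with squared cosine similarity as an assumed decoupling function; only feature pairs drawn from different (non-homologous) images contribute to the loss.

```python
import torch
import torch.nn.functional as F


def non_homologous_decoupling_loss(Fa: torch.Tensor, Fb: torch.Tensor) -> torch.Tensor:
    # Fa, Fb: (B, D) local features from the two branches for a batch of
    # B images; index i in both tensors refers to the same source image,
    # so the (i, i) pairs are homologous and are skipped.
    B = Fa.size(0)
    loss = Fa.new_zeros(())
    pairs = 0
    for i in range(B):
        for j in range(B):
            if i == j:          # homologous pair: skip
                continue
            cos = F.cosine_similarity(Fa[i], Fb[j], dim=0)
            loss = loss + cos ** 2
            pairs += 1
    return loss / max(pairs, 1)
```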
2.3. Feature Aggregation Module
2.4. Loss Function
3. Experiments
3.1. Datasets
3.1.1. FGSC-23
3.1.2. FGSCR-42
3.2. Evaluation Metrics
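The OA and AA values reported in the tables below follow the standard definitions: overall accuracy (OA) is the fraction of correctly classified test samples, and average accuracy (AA) is the mean of the per-class accuracies. A minimal NumPy sketch of the two metrics:

```python
import numpy as np


def overall_and_average_accuracy(y_true: np.ndarray, y_pred: np.ndarray,
                                 num_classes: int):
    # OA: correct predictions over all samples.
    oa = float((y_true == y_pred).mean())
    # AA: accuracy computed per class, averaged over the classes
    # that actually appear in the test set.
    per_class = [float((y_pred[y_true == c] == c).mean())
                 for c in range(num_classes) if np.any(y_true == c)]
    aa = float(np.mean(per_class))
    return oa, aa
```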
3.3. Implementation Details
3.4. Ablation Studies
3.4.1. Effectiveness of the FD-CAM
3.4.2. Effectiveness of the FAM
3.4.3. Effectiveness of Attention Channels
3.4.4. Effectiveness of the Proxy Vector Number
3.4.5. Effectiveness of the Decoupling Function and Aggregation Function
3.5. Comparisons with Other Methods
4. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Wang, N.; Li, B.; Wei, X.; Wang, Y.; Yan, H. Ship detection in spaceborne infrared image based on lightweight CNN and multisource feature cascade decision. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4324–4339.
- You, Y.; Ran, B.; Meng, G.; Li, Z.; Liu, F.; Li, Z. OPD-Net: Prow detection based on feature enhancement and improved regression model in optical remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6121–6137.
- Liu, Z.; Yuan, L.; Weng, L.; Yang, Y. A high resolution optical satellite image dataset for ship recognition and some new baselines. In Proceedings of the International Conference on Pattern Recognition Applications and Methods, Porto, Portugal, 24–26 February 2017; SciTePress: Vienna, Austria, 2017; Volume 2, pp. 324–331.
- Oliveau, Q.; Sahbi, H. Learning attribute representations for remote sensing ship category classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2830–2840.
- Shi, Q.; Li, W.; Tao, R. 2D-DFrFT based deep network for ship classification in remote sensing imagery. In Proceedings of the 2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS), Beijing, China, 19–20 August 2018; IEEE: New York, NY, USA, 2018; pp. 1–5.
- Shi, Q.; Li, W.; Tao, R.; Sun, X.; Gao, L. Ship classification based on multifeature ensemble with convolutional neural network. Remote Sens. 2019, 11, 419.
- Shi, J.; Jiang, Z.; Zhang, H. Few-shot ship classification in optical remote sensing images using nearest neighbor prototype representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3581–3590.
- Xiao, Q.; Liu, B.; Li, Z.; Ni, W.; Yang, Z.; Li, L. Progressive data augmentation method for remote sensing ship image classification based on imaging simulation system and neural style transfer. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 9176–9186.
- Göring, C.; Rodner, E.; Freytag, A.; Denzler, J. Nonparametric part transfer for fine-grained recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2489–2496.
- Branson, S.; Van Horn, G.; Belongie, S.; Perona, P. Bird species categorization using pose normalized deep convolutional nets. arXiv 2014, arXiv:1406.2952.
- Zhang, X.; Lv, Y.; Yao, L.; Xiong, W.; Fu, C. A new benchmark and an attribute-guided multi-level feature representation network for fine-grained ship classification in optical remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1271–1285.
- Chen, Y.; Zhang, Z.; Chen, Z.; Zhang, Y.; Wang, J. Fine-grained classification of optical remote sensing ship images based on deep convolution neural network. Remote Sens. 2022, 14, 4566.
- Lin, T.Y.; Maji, S. Improved bilinear pooling with CNNs. arXiv 2017, arXiv:1707.06772.
- Huang, L.; Wang, F.; Zhang, Y.; Xu, Q. Fine-grained ship classification by combining CNN and Swin Transformer. Remote Sens. 2022, 14, 3087.
- Meng, H.; Tian, Y.; Ling, Y.; Li, T. Fine-grained ship recognition for complex background based on global to local and progressive learning. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
- Pearl, J.; Mackenzie, D. The Book of Why: The New Science of Cause and Effect; Basic Books: New York, NY, USA, 2018.
- Rao, Y.; Chen, G.; Lu, J.; Zhou, J. Counterfactual attention learning for fine-grained visual categorization and re-identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 1025–1034.
- Xiong, W.; Xiong, Z.; Cui, Y. An explainable attention network for fine-grained ship classification using remote-sensing images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
- Chen, J.; Chen, K.; Chen, H.; Li, W.; Zou, Z.; Shi, Z. Contrastive learning for fine-grained ship classification in remote sensing images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16.
- Zhan, X.; Xie, J.; Liu, Z.; Ong, Y.S.; Loy, C.C. Online deep clustering for unsupervised representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6688–6697.
- Hou, Q.; Zhang, L.; Cheng, M.-M.; Feng, J. Strip pooling: Rethinking spatial pooling for scene parsing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 4003–4012.
- Di, Y.; Jiang, Z.; Zhang, H. A public dataset for fine-grained ship classification in optical remote sensing images. Remote Sens. 2021, 13, 747.
- Xia, G.S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Zhang, L. DOTA: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3974–3983.
- Liu, Z.; Wang, H.; Weng, L.; Yang, Y. Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1074–1078.
- Bordes, A.; Bottou, L.; Gallinari, P. SGD-QN: Careful quasi-Newton stochastic gradient descent. J. Mach. Learn. Res. 2009, 10, 1737–1754.
- Loshchilov, I.; Hutter, F. SGDR: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983.
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: New York, NY, USA, 2009; pp. 248–255.
- Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; pp. 249–256.
- Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
- Zhao, W.; Tong, T.; Wang, H.; Zhao, F.; He, Y.; Lu, H. Diversity consistency learning for remote-sensing object recognition with limited labels. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–10.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Lin, T.Y.; RoyChowdhury, A.; Maji, S. Bilinear CNN models for fine-grained visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1449–1457.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
- Du, R.; Chang, D.; Bhunia, A.K.; Xie, J.; Ma, Z.; Song, Y.-Z.; Guo, J. Fine-grained visual classification via progressive multi-granularity training of jigsaw patches. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer International Publishing: Cham, Switzerland, 2020; pp. 153–168.
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258.
- Zhao, W.; Liu, J.; Liu, Y.; Zhao, F.; He, Y.; Lu, H. Teaching teachers first and then student: Hierarchical distillation to improve long-tailed object recognition in aerial images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12.
- Zhao, W.; Tong, T.; Yao, L.; Liu, Y.; Xu, C.; He, Y.; Lu, H. Feature balance for fine-grained object classification in aerial images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13.
- Lv, Y.; Zhang, X.; Xiong, W.; Cui, Y.; Cai, M. An end-to-end local-global-fusion feature extraction network for remote sensing image scene classification. Remote Sens. 2019, 11, 3006.
- Nauta, M.; Van Bree, R.; Seifert, C. Neural prototype trees for interpretable fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 14933–14943.
- Chen, Y.; Bai, Y.; Zhang, W.; Mei, T. Destruction and construction learning for fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5157–5166.
- Yu, C.; Zhao, X.; Zheng, Q.; Zhang, P.; You, X. Hierarchical bilinear pooling for fine-grained visual recognition. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 574–589.
- Zhuang, P.; Wang, Y.; Qiao, Y. Learning attentive pairwise interaction for fine-grained classification. Proc. AAAI Conf. Artif. Intell. 2020, 34, 13130–13137.
- Zheng, H.; Fu, J.; Zha, Z.J.; Luo, J. Looking for the devil in the details: Learning trilinear attention sampling network for fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5012–5021.
- Wang, Y.; Lv, K.; Huang, R.; Song, S.; Yang, L.; Huang, G. Glance and focus: A dynamic approach to reducing spatial redundancy in image classification. Adv. Neural Inf. Process. Syst. 2020, 33, 2432–2444.
Backbone | FD-CAM: Counterfactual Causal Attention | FD-CAM: Feature Decoupling | FAM: FRM | FAM: Feature Polymerization | OA (%) | AA (%)
---|---|---|---|---|---|---
ResNet50 | | | | | 88.83 | 88.66
ResNet50 | √ | | | | 90.78 | 90.47
ResNet50 | | √ | | | 90.66 | 90.37
ResNet50 | √ | √ | | | 90.90 | 90.63
ResNet50 | √ | √ | √ | | 90.90 | 91.02
ResNet50 | √ | √ | | √ | 91.75 | 91.32
ResNet50 | | | √ | | 91.02 | 90.60
ResNet50 | | | | √ | 90.53 | 90.24
ResNet50 | | | √ | √ | 91.26 | 90.96
ResNet50 | √ | | √ | √ | 91.02 | 91.39
ResNet50 | | √ | √ | √ | 91.14 | 91.13
ResNet50 | √ | √ | √ | √ | 92.48 | 91.43
Backbone | FD-CAM: Counterfactual Causal Attention | FD-CAM: Feature Decoupling | FAM: FRM | FAM: Feature Polymerization | AA (%)
---|---|---|---|---|---
ResNet50 | | | | | 92.47
ResNet50 | √ | | | | 93.76
ResNet50 | | √ | | | 93.07
ResNet50 | √ | √ | | | 94.19
ResNet50 | √ | √ | √ | | 94.48
ResNet50 | √ | √ | | √ | 94.76
ResNet50 | | | √ | | 94.13
ResNet50 | | | | √ | 93.98
ResNet50 | | | √ | √ | 94.22
ResNet50 | √ | | √ | √ | 95.01
ResNet50 | | √ | √ | √ | 94.94
ResNet50 | √ | √ | √ | √ | 95.42
M | OA (%) | AA (%)
---|---|---|
4 | 90.66 | 90.28 |
8 | 90.90 | 90.75 |
16 | 92.48 | 91.43 |
32 | 91.02 | 90.71 |
n | OA (%) | AA (%)
---|---|---|
1 | 90.53 | 90.08 |
2 | 92.48 | 91.43 |
3 | 90.78 | 90.53 |
4 | 90.53 | 90.26 |
Method | Decoupling Function | Aggregation Function | OA (%) | AA (%)
---|---|---|---|---
C2Net | | | 90.53 | 89.71
C2Net | √ | | 91.38 | 91.32
C2Net | | √ | 90.78 | 90.65
C2Net | √ | √ | 92.48 | 91.43
Method | FDN [5] | DCL [31] | Inception v3 [32] | DenseNet121 [33] | B-CNN [34] | MobileNet [35] | ME-CNN [6] | PMG [36] | Xception [37] | T2FTS [38] | FBNet [39] | LGFFE [40] | C2Net (Ours) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AR0 | 85.57 | 83.50 | 87.63 | 86.60 | 84.54 | 84.54 | 93.81 | 87.60 | 89.69 | 87.63 | 92.80 | 92.78 | 94.85 |
AR1 | 88.24 | 85.30 | 91.18 | 82.35 | 91.18 | 88.24 | 91.18 | 83.30 | 88.24 | 91.67 | 94.10 | 94.12 | 94.12 |
AR2 | 85.19 | 84.30 | 89.81 | 89.81 | 86.11 | 88.89 | 87.04 | 91.70 | 91.67 | 94.92 | 91.70 | 95.37 | 93.75 |
AR3 | 81.82 | 77.30 | 77.27 | 72.73 | 90.91 | 86.36 | 63.64 | 100.00 | 81.82 | 89.83 | 95.50 | 81.82 | 100.00 |
AR4 | 89.83 | 93.20 | 91.53 | 84.75 | 89.83 | 88.16 | 86.44 | 96.60 | 96.61 | 75.36 | 94.90 | 96.61 | 89.66 |
AR5 | 77.78 | 77.80 | 83.33 | 77.78 | 83.33 | 77.78 | 77.78 | 68.90 | 88.89 | 91.18 | 77.80 | 83.33 | 82.22 |
AR6 | 81.36 | 86.40 | 76.27 | 72.88 | 76.27 | 86.44 | 76.27 | 84.20 | 86.44 | 96.77 | 89.80 | 83.05 | 84.21 |
AR7 | 72.22 | 100.00 | 94.44 | 77.78 | 83.33 | 83.33 | 66.67 | 100.00 | 83.33 | 100.00 | 94.40 | 72.22 | 100.00 |
AR8 | 77.42 | 96.80 | 100.00 | 100.00 | 96.77 | 100.00 | 83.87 | 100.00 | 90.32 | 89.66 | 100.00 | 96.77 | 100.00 |
AR9 | 66.67 | 88.90 | 55.56 | 77.78 | 77.78 | 77.78 | 83.33 | 72.50 | 94.44 | 82.22 | 94.40 | 83.33 | 88.41 |
AR10 | 87.50 | 97.90 | 89.58 | 93.75 | 91.67 | 91.67 | 100.00 | 84.80 | 93.75 | 90.91 | 93.80 | 97.92 | 87.88 |
AR11 | 90.00 | 60.00 | 100.00 | 100.00 | 100.00 | 90.00 | 100.00 | 91.20 | 100.00 | 81.82 | 100.00 | 100.00 | 65.00 |
AR12 | 93.10 | 82.80 | 93.10 | 93.10 | 93.10 | 93.10 | 93.10 | 60.00 | 82.76 | 77.78 | 100.00 | 96.55 | 98.15 |
AR13 | 77.78 | 60.00 | 68.89 | 75.56 | 73.33 | 75.56 | 82.22 | 83.30 | 77.78 | 100.00 | 68.90 | 77.78 | 72.22 |
AR14 | 75.00 | 75.00 | 80.00 | 85.00 | 80.00 | 80.00 | 85.00 | 95.00 | 85.00 | 94.44 | 85.00 | 80.00 | 100.00 |
AR15 | 85.71 | 85.70 | 92.86 | 100.00 | 100.00 | 92.86 | 100.00 | 90.90 | 92.86 | 100.00 | 100.00 | 92.86 | 90.91 |
AR16 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 94.40 | 100.00 | 85.00 | 100.00 | 100.00 | 95.45 |
AR17 | 73.91 | 87.00 | 76.81 | 75.36 | 72.46 | 75.36 | 78.26 | 100.00 | 76.81 | 92.86 | 78.30 | 85.51 | 98.31 |
AR18 | 72.73 | 72.70 | 66.67 | 78.79 | 75.76 | 69.70 | 81.82 | 94.90 | 81.82 | 100.00 | 84.80 | 81.82 | 88.89 |
AR19 | 45.00 | 35.00 | 40.00 | 55.00 | 55.00 | 45.00 | 60.00 | 77.80 | 70.00 | 50.00 | 60.00 | 65.00 | 93.22 |
AR20 | 77.78 | 66.70 | 72.22 | 66.67 | 61.11 | 55.56 | 66.67 | 86.40 | 77.78 | 77.78 | 66.70 | 77.78 | 94.44 |
AR21 | 100.00 | 95.00 | 90.91 | 100.00 | 100.00 | 100.00 | 100.00 | 88.90 | 100.00 | 95.00 | 100.00 | 100.00 | 96.77 |
AR22 | 90.91 | 100.00 | 87.63 | 90.91 | 90.91 | 90.91 | 100.00 | 93.50 | 90.91 | 100.00 | 100.00 | 90.91 | 94.44 |
OA | 82.30 | 83.60 | 83.88 | 84.00 | 84.00 | 84.24 | 85.58 | 87.20 | 87.76 | 88.73 | 89.30 | 89.45 | 92.48 |
AA | 81.54 | 82.23 | 82.86 | 84.20 | 84.93 | 83.53 | 85.09 | 88.08 | 87.87 | 88.91 | 89.69 | 88.07 | 91.43 |
Time (ms) | 7.92 | - | 6.89 | 7.05 | 11.17 | 5.92 | - | - | 9.17 | - | - | 6.85 | 6.17