DSAD: Multi-Directional Contrast Spatial Attention-Driven Feature Distillation for Infrared Small Target Detection
Abstract
Highlights
- Infrared small targets exhibit weak responses and are easily submerged in strong background noise. Moreover, existing distillation methods have insufficient ability to capture the spatial information of small targets, leading student networks to attend to unrelated background while neglecting the transfer of small-target representation knowledge.
- There are significant differences and representational gaps between the hierarchical features of teacher and student networks. Forcing direct feature matching can weaken the student networks' ability to learn small targets.
- We propose a Multi-Directional Contrast Spatial Attention (DSA) mechanism for IRSTD. The DSA module captures small-target spatial features across eight discrete directions in a parameter-free manner, thereby enhancing detection performance without increasing computational cost.
- We design a Gaussian transformation of features to leverage feature discrepancies between student and teacher networks, and integrate spatial weights derived from the DSA module to construct the Perception Weight Mean Square Error (PWMSE) distillation loss, enhancing the efficient transfer of small-target feature representations. Our DSAD method achieves promising results and even exhibits detection performance (e.g., IoU, Pd) comparable to that of the teacher networks.
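The DSA mechanism described in the highlights above can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: the function name `dsa_attention`, the neighbour distance `d`, and the exact contrast formula are hypothetical, not the authors' implementation; only the overall pipeline (spatial aggregation, eight-direction contrast, minimum-contrast selection, Sec. 3.3.1–3.3.4) follows the highlight.

```python
import numpy as np

def dsa_attention(feat, d=1):
    """Hypothetical sketch of Directional-contrast Spatial Attention (DSA).

    feat: (C, H, W) feature map; returns an (H, W) spatial weight map.
    """
    # Spatial aggregation: collapse channels into one saliency map.
    s = feat.mean(axis=0)  # (H, W)
    H, W = s.shape
    # Directional contrast: compare each pixel with its neighbour at
    # distance d in eight discrete directions (fixed shifts, no parameters).
    offsets = [(-d, -d), (-d, 0), (-d, d), (0, -d),
               (0, d), (d, -d), (d, 0), (d, d)]
    pad = np.pad(s, d, mode="edge")
    contrasts = np.stack([s - pad[d + dy:d + dy + H, d + dx:d + dx + W]
                          for dy, dx in offsets])  # (8, H, W)
    # Minimum contrast: a true small target stands out in *every* direction,
    # so taking the minimum suppresses edges that are bright in only some.
    m = contrasts.min(axis=0)
    # Squash to (0, 1) spatial weights.
    return 1.0 / (1.0 + np.exp(-m))
```

Because the directional kernels are fixed shifts, the module introduces no learnable parameters, consistent with the parameter-free claim in the highlight.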
1. Introduction
1. This paper proposes a Multi-Directional Contrast Spatial Attention-driven Feature Distillation (DSAD) method for IRSTD. Our method leverages feature discrepancies between student and teacher networks, and integrates spatial weights derived from the DSA module to design the PWMSE distillation loss, thereby achieving efficient transfer of small-target feature representations.
2. We introduce a Multi-Directional Contrast Spatial Attention (DSA) mechanism for IRSTD. The DSA module can capture small-target spatial features across eight discrete directions in a parameter-free manner, thereby enhancing detection performance without increasing computational cost.
3. The experimental results demonstrate that our DSAD achieves promising results and even attains detection performance (e.g., IoU, Pd) comparable to the teacher network. The inference latency of the student networks achieves more than 2× acceleration on the NVIDIA AGX and HUAWEI Ascend-310B.
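A hedged sketch of how a perception-weighted distillation loss of this kind could be assembled. The function name `pwmse_loss`, the exact Gaussian transform, its `sigma`, and the reduction are illustrative assumptions, not the paper's precise PWMSE definition; only the ingredients named above (Gaussian transformation of the teacher-student discrepancy, DSA-derived spatial weights, MSE backbone) come from the source.

```python
import numpy as np

def pwmse_loss(f_t, f_s, w, sigma=1.0):
    """Hypothetical Perception-Weight MSE (PWMSE) distillation loss sketch.

    f_t, f_s: teacher/student feature maps, shape (C, H, W), assumed aligned.
    w: (H, W) spatial weights, e.g. from the DSA module.
    """
    # Per-pixel squared discrepancy between teacher and student features.
    diff = ((f_t - f_s) ** 2).mean(axis=0)  # (H, W)
    # Gaussian transformation of the discrepancy: soft gate in [0, 1) that
    # avoids forcing exact matching where the gap is already tiny.
    g = 1.0 - np.exp(-diff / (2.0 * sigma ** 2))
    # DSA-weighted mean: target regions (large w) dominate the loss.
    return float((w * g * diff).mean())
```

In this sketch the gate `g` vanishes when student and teacher agree, so the student is not penalised for harmless hierarchical differences, matching the motivation in highlight two.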
2. Related Work
2.1. Infrared Small Target Detection
2.2. Knowledge Distillation
3. Methodology
3.1. Motivation
3.2. Features Selection
3.3. Directional Contrast Spatial Attention
3.3.1. Spatial Aggregation
3.3.2. Directional Convolution Kernel
3.3.3. Directional Feature Contrast
3.3.4. Minimum Contrast Operation
3.4. Gaussian Transformation of Feature
3.5. Loss Function
4. Experiments
4.1. Evaluation Metrics
4.1.1. Intersection over Union
4.1.2. Probability of Detection
4.1.3. False-Alarm Rate
4.1.4. Parameters
4.1.5. Floating-Point Operations
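Two of the metrics above, IoU (Section 4.1.1) and the false-alarm rate (Section 4.1.3), are standard pixel-level quantities and can be sketched as follows; Pd (Section 4.1.2) is target-level and additionally requires connected-component labelling, which is omitted here. The function name and scaling are illustrative, not the paper's exact code.

```python
import numpy as np

def pixel_metrics(pred, gt):
    """Pixel-level IoU and false-alarm rate for binary masks in {0, 1}.

    Assumes gt contains at least one target pixel.
    """
    tp = np.sum((pred == 1) & (gt == 1))  # correctly detected target pixels
    fp = np.sum((pred == 1) & (gt == 0))  # false-alarm pixels
    fn = np.sum((pred == 0) & (gt == 1))  # missed target pixels
    iou = tp / (tp + fp + fn)             # Intersection over Union
    fa = fp / pred.size                   # false alarms per pixel (tables report a scaled value)
    return float(iou), float(fa)
```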
4.2. Datasets and Implementation Details
4.3. Comparison to the State-of-the-Art Methods
4.3.1. Quantitative Results
4.3.2. Qualitative Results
4.4. Ablation Study
4.4.1. The Effectiveness of the DSA Mechanism in DSAD Method
4.4.2. The Effectiveness of the DSA Module for IRSTD Models
4.4.3. The Effectiveness of the DSA Module Combined with the DSAD Method
4.5. Inference Experiments on Edge Platforms
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhao, M.; Li, W.; Li, L.; Hu, J.; Ma, P.; Tao, R. Single-frame infrared small-target detection: A survey. IEEE Geosci. Remote Sens. Mag. 2022, 10, 87–119.
- Li, B.; Xiao, C.; Wang, L.; Wang, Y.; Lin, Z.; Li, M.; An, W.; Guo, Y. Dense nested attention network for infrared small target detection. IEEE Trans. Image Process. 2022, 32, 1745–1758.
- Tang, W.; Dai, Q.; Hao, F. An efficient knowledge distillation-based detection method for infrared small targets. Remote Sens. 2024, 16, 3173.
- Li, Z.; Xu, P.; Chang, X.; Yang, L.; Zhang, Y.; Yao, L.; Chen, X. When object detection meets knowledge distillation: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10555–10579.
- Deng, L.; Li, G.; Han, S.; Shi, L.; Xie, Y. Model compression and hardware acceleration for neural networks: A comprehensive survey. Proc. IEEE 2020, 108, 485–532.
- Zhang, M.; Wang, Y.; Guo, J.; Li, Y.; Gao, X.; Zhang, J. IRSAM: Advancing segment anything model for infrared small target detection. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; pp. 233–249.
- Deng, H.; Sun, X.; Liu, M.; Ye, C.; Zhou, X. Small infrared target detection based on weighted local difference measure. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4204–4214.
- Cheng, H.; Zhang, M.; Shi, J.Q. A survey on deep neural network pruning: Taxonomy, comparison, analysis, and recommendations. IEEE Trans. Pattern Anal. Mach. Intell. 2024, early access.
- He, Y.; Xiao, L. Structured pruning for deep convolutional neural networks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 46, 2900–2919.
- Liu, G.; Lin, Z.; Yan, S.; Sun, J.; Yu, Y.; Ma, Y. Robust recovery of subspace structures by low-rank representation. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 171–184.
- Yang, J.; Shen, X.; Xing, J.; Tian, X.; Li, H.; Deng, B.; Huang, J.; Hua, X.-S. Quantization networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019.
- Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531.
- Li, B.; Wang, Y.; Wang, L.; Zhang, F.; Liu, T.; Lin, Z.; An, W.; Guo, Y. Monte Carlo linear clustering with single-point supervision is enough for infrared small target detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 1009–1019.
- Yang, Z.; Li, Z.; Shao, M.; Shi, D.; Yuan, Z.; Yuan, C. Masked generative distillation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 53–69.
- Cao, W.; Zhang, Y.; Gao, J.; Cheng, A.; Cheng, K.; Cheng, J. PKD: General distillation framework for object detectors via Pearson correlation coefficient. Adv. Neural Inf. Process. Syst. 2022, 35, 15394–15406.
- Wang, J.; Chen, Y.; Zheng, Z.; Li, X.; Cheng, M.-M.; Hou, Q. CrossKD: Cross-head knowledge distillation for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 16520–16530.
- Zagoruyko, S.; Komodakis, N. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
- Xu, G.; Liu, Z.; Li, X.; Loy, C.C. Knowledge distillation meets self-supervision. In Proceedings of the European Conference on Computer Vision, Virtual, 23–28 August 2020; pp. 588–604.
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
- Wei, Y.; You, X.; Li, H. Multiscale patch-based contrast measure for small infrared target detection. Pattern Recognit. 2016, 58, 216–226.
- Chen, C.L.P.; Li, H.; Wei, Y.; Xia, T.; Tang, Y.Y. A local contrast method for small infrared target detection. IEEE Trans. Geosci. Remote Sens. 2014, 52, 574–581.
- Zhu, H.; Liu, S.; Deng, L.; Li, Y.; Xiao, F. Infrared small target detection via low-rank tensor completion with top-hat regularization. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1004–1016.
- Chung, W.Y.; Lee, I.H.; Park, C.G. Lightweight infrared small target detection network using full-scale skip connection U-Net. IEEE Geosci. Remote Sens. Lett. 2023, 20, 7000705.
- Guo, T.; Zhou, B.; Luo, F.; Zhang, L.; Gao, X. DMFNet: Dual-encoder multistage feature fusion network for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5614214.
- Li, B.; Wang, L.; Wang, Y.; Wu, T.; Lin, Z.; Li, M.; An, W.; Guo, Y. Mixed-precision network quantization for infrared small target segmentation. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5000812.
- Li, B.; Ying, X.; Li, R.; Liu, Y.; Shi, Y.; Li, M.; Zhang, X.; Hu, M.; Wu, C.; Zhang, Y.; et al. ICPR 2024 competition on resource-limited infrared small target detection challenge: Methods and results. In Proceedings of the International Conference on Pattern Recognition, Kolkata, India, 1–5 December 2024; pp. 62–77.
- Cai, Z.; Xing, S.; Quan, S.; Su, X.; Wang, J. A power-distribution joint optimization arrangement for multi-point source jamming system. Results Eng. 2025, 27, 106856.
- Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Attentional local contrast networks for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9813–9824.
- Wu, X.; Hong, D.; Chanussot, J. UIU-Net: U-Net in U-Net for infrared small object detection. IEEE Trans. Image Process. 2022, 32, 364–376.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
- Zhang, T.; Li, L.; Cao, S.; Pu, T.; Peng, Z. Attention-guided pyramid context networks for detecting infrared small target under complex background. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 4250–4261.
- Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.-W.; Wu, J. UNet 3+: A full-scale connected UNet for medical image segmentation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Barcelona, Spain, 4–8 May 2020; pp. 1055–1059.
- Zhang, M.; Yue, K.; Zhang, J.; Li, Y.; Gao, X. Exploring feature compensation and cross-level correlation for infrared small target detection. In Proceedings of the 30th ACM International Conference on Multimedia, Lisbon, Portugal, 10–14 October 2022; pp. 1857–1865.
- Zhang, M.; Yang, H.; Guo, J.; Li, Y.; Gao, X.; Zhang, J. IRPruneDet: Efficient infrared small target detection via wavelet structure-regularized soft channel pruning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 26–27 February 2024; pp. 7224–7232.
- Zhang, M.; Yue, K.; Li, B.; Guo, J.; Li, Y.; Gao, X. Single-frame infrared small target detection via Gaussian curvature inspired network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5005013.
- Wang, L.; Yoon, K.-J. Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3048–3068.
- Romero, A.; Ballas, N.; Kahou, S.E.; Chassang, A.; Bengio, Y. FitNets: Hints for thin deep nets. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
- Huang, Z.; Wang, N. Like what you like: Knowledge distill via neuron selectivity transfer. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
- Park, W.; Kim, D.; Lu, Y.; Cho, M. Relational knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3967–3976.
- Peng, B.; Jin, X.; Liu, J.; Li, D.; Wu, Y.; Liu, Y.; Zhou, S.; Zhang, Z. Correlation congruence for knowledge distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5007–5016.
- Zhang, W.; Feng, W.; Li, M.; Lyu, S.; Xu, T.-B. A saliency-transformer combined knowledge distillation guided network for infrared small target detection. In Proceedings of the International Conference on Signal and Information Processing, Networking and Computers, Beijing, China, 15–17 December 2022; pp. 88–95.
- Xue, J.; Li, J.; Han, Y.; Wang, Z.; Deng, C.; Xu, T. Feature-based knowledge distillation for infrared small target detection. IEEE Geosci. Remote Sens. Lett. 2024, 21, 6005305.
- Dai, Y.; Li, X.; Zhou, F.; Qian, Y.; Chen, Y.; Yang, J. One-stage cascade refinement networks for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5000917.
- Duchi, J.; Hazan, E.; Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 2011, 12, 2121–2159.
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada, 8–14 December 2019; pp. 8026–8037.
Method | Params | FLOPs | IoU (NUDT-SIRST) | Pd (NUDT-SIRST) | Fa (NUDT-SIRST) | IoU (NUAA-SIRST) | Pd (NUAA-SIRST) | Fa (NUAA-SIRST) |
---|---|---|---|---|---|---|---|---|
DNA-num16 | 4.70 M | 14.19 G | 86.13% | 98.52% | 0.250 | 75.86% | 96.58% | 1.38 |
DNA-num4 | 0.30 M | 0.97 G | 79.68% | 97.35% | 1.13 | 74.29% | 95.82% | 2.25 |
KD [12] | 0.30 M | 0.97 G | 80.70% | 96.93% | 1.25 | 74.38% | 95.06% | 1.31 |
NST [38] | 0.30 M | 0.97 G | 83.15% | 97.46% | 0.859 | 75.13% | 95.44% | 1.38 |
AT [17] | 0.30 M | 0.97 G | 81.39% | 97.04% | 1.20 | 74.96% | 95.82% | 1.49 |
KDSVD [18] | 0.30 M | 0.97 G | 82.74% | 97.20% | 0.657 | 74.51% | 96.19% | 1.80 |
RKD [39] | 0.30 M | 0.97 G | 81.78% | 97.35% | 1.04 | 75.23% | 95.06% | 1.54 |
CCKD [40] | 0.30 M | 0.97 G | 81.73% | 96.19% | 0.703 | 73.53% | 94.68% | 2.37 |
MGD [14] | 0.30 M | 0.97 G | 83.04% | 97.99% | 1.35 | 73.34% | 95.44% | 3.02 |
PKD [15] | 0.30 M | 0.97 G | 83.62% | 97.99% | 1.27 | 74.65% | 96.20% | 1.25 |
CrossKD [16] | 0.30 M | 0.97 G | 83.36% | 97.79% | 1.13 | 74.66% | 95.81% | 1.57 |
DSAD (ours) | 0.30 M | 0.97 G | 84.79% | 98.10% | 0.882 | 75.69% | 97.34% | 1.65 |
Method | Params | FLOPs | IoU (NUDT-SIRST) | Pd (NUDT-SIRST) | Fa (NUDT-SIRST) | IoU (NUAA-SIRST) | Pd (NUAA-SIRST) | Fa (NUAA-SIRST) |
---|---|---|---|---|---|---|---|---|
AMFU-num16 | 1.88 M | 22.73 G | 83.21% | 97.57% | 0.855 | 75.62% | 95.43% | 1.65 |
AMFU-num4 | 0.12 M | 1.48 G | 78.98% | 95.45% | 1.15 | 73.13% | 93.92% | 1.23 |
KD [12] | 0.12 M | 1.48 G | 79.15% | 95.66% | 1.15 | 73.66% | 93.92% | 1.79 |
MGD [14] | 0.12 M | 1.48 G | 79.29% | 96.40% | 1.59 | 74.55% | 94.29% | 1.64 |
PKD [15] | 0.12 M | 1.48 G | 79.89% | 96.72% | 1.88 | 74.86% | 94.68% | 1.28 |
CrossKD [16] | 0.12 M | 1.48 G | 79.36% | 96.19% | 0.850 | 74.15% | 94.67% | 1.33 |
DSAD (ours) | 0.12 M | 1.48 G | 80.79% | 97.14% | 1.73 | 75.26% | 95.06% | 0.920 |
Method | Params | FLOPs | IoU (NUDT-SIRST) | Pd (NUDT-SIRST) | Fa (NUDT-SIRST) | IoU (NUAA-SIRST) | Pd (NUAA-SIRST) | Fa (NUAA-SIRST) |
---|---|---|---|---|---|---|---|---|
DMF-num16 | 11.11 M | 40.21 G | 86.43% | 98.41% | 0.398 | 75.03% | 96.96% | 1.40 |
DMF-num4 | 0.70 M | 2.62 G | 80.94% | 96.93% | 1.29 | 72.29% | 95.82% | 1.20 |
KD [12] | 0.70 M | 2.62 G | 81.50% | 97.67% | 1.39 | 72.57% | 95.82% | 1.38 |
MGD [14] | 0.70 M | 2.62 G | 82.74% | 97.99% | 1.06 | 73.14% | 96.19% | 2.61 |
PKD [15] | 0.70 M | 2.62 G | 83.40% | 97.88% | 0.597 | 73.58% | 96.09% | 1.51 |
CrossKD [16] | 0.70 M | 2.62 G | 83.70% | 98.09% | 3.86 | 73.33% | 96.20% | 1.38 |
DSAD (ours) | 0.70 M | 2.62 G | 84.30% | 98.20% | 0.489 | 74.16% | 96.20% | 1.33 |
Method | Params | FLOPs | IoU (NUDT-SIRST) | Pd (NUDT-SIRST) | Fa (NUDT-SIRST) | IoU (NUAA-SIRST) | Pd (NUAA-SIRST) | Fa (NUAA-SIRST) |
---|---|---|---|---|---|---|---|---|
DNA-num16 | 4.70 M | 14.19 G | 86.13% | 98.52% | 0.250 | 75.86% | 96.58% | 1.38 |
DNA-num4 | 0.30 M | 0.97 G | 79.68% | 97.35% | 1.13 | 74.29% | 95.82% | 2.25 |
1×SE(T,S) | 0.30 M | 0.97 G | 82.21% | 97.46% | 0.480 | 73.88% | 95.44% | 2.27 |
×SE(T,S) | 0.30 M | 0.97 G | 83.31% | 97.35% | 1.03 | 74.74% | 96.57% | 1.42 |
×SE(T,S) | 0.30 M | 0.97 G | 84.79% | 98.10% | 0.882 | 75.69% | 97.34% | 1.65 |
Method | Params (None) | FLOPs (None) | IoU (None) | Pd (None) | Fa (None) | Params (+DSA) | FLOPs (+DSA) | IoU (+DSA) | Pd (+DSA) | Fa (+DSA) |
---|---|---|---|---|---|---|---|---|---|---|
DNA-num16 (teacher) | 4.70 M | 14.19 G | 86.13% | 98.52% | 0.250 | 4.71 M | 14.05 G | 87.42% | 99.05% | 0.322 |
DNA-num4 (student) | 0.30 M | 0.97 G | 79.68% | 97.35% | 1.13 | 0.31 M | 0.92 G | 81.34% | 98.10% | 0.951 |
AMFU-num16 (teacher) | 1.88 M | 22.73 G | 83.21% | 97.57% | 0.855 | 1.88 M | 22.69 G | 85.87% | 98.09% | 0.609 |
AMFU-num4 (student) | 0.12 M | 1.48 G | 78.98% | 95.45% | 1.15 | 0.12 M | 1.47 G | 79.33% | 96.30% | 1.18 |
DMF-num16 (teacher) | 11.11 M | 40.21 G | 86.43% | 98.41% | 0.398 | 11.12 M | 40.21 G | 86.84% | 98.73% | 0.625 |
DMF-num4 (student) | 0.70 M | 2.62 G | 80.94% | 96.93% | 1.29 | 0.71 M | 2.62 G | 82.21% | 97.99% | 0.936 |
Method | Params | FLOPs | IoU (NUDT-SIRST) | Pd (NUDT-SIRST) | Fa (NUDT-SIRST) | IoU (NUAA-SIRST) | Pd (NUAA-SIRST) | Fa (NUAA-SIRST) |
---|---|---|---|---|---|---|---|---|
DNA-num16 | 4.70 M | 14.19 G | 86.13% | 98.52% | 0.250 | 75.86% | 96.58% | 1.38 |
DNA-num4(DSAD) | 0.30 M | 0.97 G | 84.79% | 98.10% | 0.882 | 75.69% | 97.34% | 1.65 |
+DNA-num4(DSAD) | 0.31 M | 0.92 G | 84.80% | 98.20% | 0.575 | 75.88% | 98.10% | 1.99 |
AMFU-num16 | 1.88 M | 22.73 G | 83.21% | 97.57% | 0.855 | 75.62% | 95.43% | 1.65 |
AMFU-num4(DSAD) | 0.12 M | 1.48 G | 80.79% | 97.14% | 1.73 | 75.26% | 95.06% | 0.920 |
+AMFU-num4(DSAD) | 0.12 M | 1.47 G | 81.78% | 97.35% | 1.15 | 75.38% | 95.31% | 1.10 |
DMF-num16 | 11.11 M | 40.21 G | 86.43% | 98.41% | 0.398 | 75.03% | 96.96% | 1.40 |
DMF-num4(DSAD) | 0.70 M | 2.62 G | 84.30% | 98.20% | 0.489 | 74.16% | 96.20% | 1.33 |
+DMF-num4(DSAD) | 0.71 M | 2.62 G | 86.02% | 98.73% | 0.301 | 75.00% | 96.96% | 1.48 |
Edge Platform | Metric | DNA-num16 | DNA-num4 | AMFU-num16 | AMFU-num4 | DMF-num16 | DMF-num4 |
---|---|---|---|---|---|---|---|
— | FLOPs | 14.19 G | 0.97 G | 22.73 G | 1.48 G | 40.21 G | 2.62 G |
— | Params | 4.70 M | 0.30 M | 1.88 M | 0.12 M | 11.11 M | 0.70 M |
NVIDIA AGX (32 TOPS) | Inf. time | 45.84 ms | 17.24 ms | 72.59 ms | 27.98 ms | 98.61 ms | 33.25 ms |
NVIDIA AGX (32 TOPS) | Decline | — | 28.60 ms | — | 44.61 ms | — | 65.36 ms |
NVIDIA AGX (32 TOPS) | Speedup | — | 2.66× | — | 2.59× | — | 2.96× |
HUAWEI Ascend-310B (20 TOPS) | Inf. time | 55.00 ms | 19.84 ms | 82.40 ms | 35.24 ms | 114.64 ms | 40.90 ms |
HUAWEI Ascend-310B (20 TOPS) | Decline | — | 35.16 ms | — | 47.16 ms | — | 73.74 ms |
HUAWEI Ascend-310B (20 TOPS) | Speedup | — | 2.77× | — | 2.34× | — | 2.80× |
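The Decline and Speedup rows in the edge-platform table follow directly from the reported latencies. A short check for the NVIDIA AGX column (the dictionary below simply restates the table's teacher/student latencies; computed figures match the table up to rounding):

```python
# Teacher vs. student inference latency (ms) on NVIDIA AGX, from the table.
agx = {"DNA": (45.84, 17.24), "AMFU": (72.59, 27.98), "DMF": (98.61, 33.25)}
for name, (teacher_ms, student_ms) in agx.items():
    decline = teacher_ms - student_ms   # absolute latency saved
    speedup = teacher_ms / student_ms   # relative acceleration
    print(f"{name}: decline {decline:.2f} ms, speedup {speedup:.2f}x")
```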
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, Y.; Li, B.; Zhang, G.; Chen, J.; Deng, S.; Zhang, H. DSAD: Multi-Directional Contrast Spatial Attention-Driven Feature Distillation for Infrared Small Target Detection. Remote Sens. 2025, 17, 3466. https://doi.org/10.3390/rs17203466