Article

Advancing SAR Target Recognition Through Hierarchical Self-Supervised Learning with Multi-Task Pretext Training

Electrical & Computer Engineering Department, Tuskegee University, Tuskegee, AL 36088, USA
*
Author to whom correspondence should be addressed.
Sensors 2026, 26(1), 122; https://doi.org/10.3390/s26010122
Submission received: 28 October 2025 / Revised: 9 December 2025 / Accepted: 16 December 2025 / Published: 24 December 2025

Abstract

Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems face significant challenges due to limited labeled data and the persistent domain gap between synthetic and measured imagery. This paper presents a comprehensive self-supervised learning (SSL) framework that eliminates the dependency on synthetic data while achieving state-of-the-art performance through multi-task pretext training and extensive downstream classifier evaluation. We systematically evaluate our SSL framework across diverse downstream classifiers spanning different computational paradigms and architectural families: traditional machine learning approaches (SVM, Random Forest, XGBoost, Gradient Boosting), deep convolutional neural networks (ResNet, U-Net, MobileNet, EfficientNet), and a generative adversarial network. Extensive experiments on the SAMPLE dataset under rigorous evaluation protocols demonstrate that SSL significantly improves SAR ATR performance, with SVM achieving 99.63% accuracy, ResNet18 reaching 97.40%, and Random Forest 99.26%. Our multi-task SSL framework employs nine carefully designed pretext tasks covering geometric invariance, signal robustness, and multi-scale analysis. Cross-validation experiments confirm the generalizability and robustness of these findings, and a rigorous comparison with a SimCLR baseline shows that task-based SSL outperforms contrastive learning for SAR ATR. This work establishes a new paradigm for SAR ATR that leverages the inherent structure of radar data without synthetic augmentation, providing practical guidelines for deploying SSL-based SAR ATR systems and a foundation for future domain-specific self-supervised learning research in remote sensing applications.
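To make the abstract's pipeline concrete, the sketch below illustrates the general pattern it describes: pretraining an encoder on a label-free pretext task (here, 90-degree rotation prediction, a common geometric-invariance pretext), then reusing the learned representation with a downstream SVM classifier. This is a minimal illustration on synthetic toy data, not the paper's actual architecture, tasks, or SAMPLE dataset; the chip generator, encoder size, and `embed` helper are all assumptions made for the example.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for SAR target chips: three "target classes" of 8x8
# magnitude patches (hypothetical data, not the SAMPLE dataset).
def make_chips(n_per_class=60, size=8):
    X, y = [], []
    for c in range(3):
        for _ in range(n_per_class):
            X.append(rng.normal(loc=c, scale=0.5, size=(size, size)))
            y.append(c)
    return np.array(X), np.array(y)

chips, labels = make_chips()

# Pretext task (geometric invariance): predict how many 90-degree
# rotations were applied -- requires no class labels at all.
rot_X, rot_y = [], []
for chip in chips:
    k = int(rng.integers(0, 4))
    rot_X.append(np.rot90(chip, k).ravel())
    rot_y.append(k)
rot_X, rot_y = np.array(rot_X), np.array(rot_y)

# Small MLP trained only on the pretext labels acts as the encoder.
encoder = MLPClassifier(hidden_layer_sizes=(32,), max_iter=800, random_state=0)
encoder.fit(rot_X, rot_y)

# Reuse the hidden layer as the learned representation
# (ReLU is MLPClassifier's default activation).
def embed(X_flat):
    return np.maximum(X_flat @ encoder.coefs_[0] + encoder.intercepts_[0], 0.0)

feats = embed(chips.reshape(len(chips), -1))
X_tr, X_te, y_tr, y_te = train_test_split(
    feats, labels, test_size=0.3, random_state=0, stratify=labels)

# Downstream classifier on frozen SSL features.
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"downstream SVM accuracy on toy data: {acc:.2f}")
```

The paper's framework combines nine such pretext tasks and evaluates many downstream classifiers; this sketch shows only the single-task skeleton of that idea.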
Keywords: machine learning; deep learning; self-supervised learning (SSL); remote sensing; computer vision; signal processing; synthetic aperture radar (SAR); automatic target recognition (ATR)

Share and Cite

MDPI and ACS Style

Siam, M.A.; Noor, D.F.; Ndoye, M.; Khan, J.F. Advancing SAR Target Recognition Through Hierarchical Self-Supervised Learning with Multi-Task Pretext Training. Sensors 2026, 26, 122. https://doi.org/10.3390/s26010122

AMA Style

Siam MA, Noor DF, Ndoye M, Khan JF. Advancing SAR Target Recognition Through Hierarchical Self-Supervised Learning with Multi-Task Pretext Training. Sensors. 2026; 26(1):122. https://doi.org/10.3390/s26010122

Chicago/Turabian Style

Siam, Md Al, Dewan Fahim Noor, Mandoye Ndoye, and Jesmin Farzana Khan. 2026. "Advancing SAR Target Recognition Through Hierarchical Self-Supervised Learning with Multi-Task Pretext Training" Sensors 26, no. 1: 122. https://doi.org/10.3390/s26010122

APA Style

Siam, M. A., Noor, D. F., Ndoye, M., & Khan, J. F. (2026). Advancing SAR Target Recognition Through Hierarchical Self-Supervised Learning with Multi-Task Pretext Training. Sensors, 26(1), 122. https://doi.org/10.3390/s26010122

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
