MHD-Protonet: Margin-Aware Hard Example Mining for SAR Few-Shot Learning via Dual-Loss Optimization
Abstract
1. Introduction
- We introduce a margin-aware hard example mining strategy that dynamically identifies and refines hard queries by enforcing geometric constraints on SAR embeddings, improving class separability in the presence of inter-class similarity.
- We propose a dual-loss optimization approach that combines cross-entropy and margin loss, effectively reducing misclassification errors and enhancing robustness to noise and intra-class variability.
- We provide a comprehensive experimental evaluation on standard SAR datasets under few-shot learning settings, demonstrating that MHD-ProtoNet achieves significant performance improvements, with an accuracy of 76.80% on five-way one-shot tasks.
- We illustrate the effectiveness and efficiency of MHD-Protonet in real-time SAR classification, showing strong generalization from minimal labeled data while maintaining low computational complexity.
2. Related Work
2.1. Meta-Learning and Few-Shot Learning
2.2. Enhancing Prototypes in Few-Shot Learning
2.3. Adapting Few-Shot Learning to SAR Image Classification
2.4. Bridging the Gap with MHD-Protonet
3. Methodology
3.1. Proposed Framework: MHD-ProtoNet
- Input Sampling: Each few-shot learning episode begins with the sampling of a support set (a few labeled examples per class) and a query set (unlabeled examples to be classified). This episodic setup mimics the meta-testing conditions during training and is standard practice in few-shot learning frameworks [13].
- Feature Extraction: All support and query images are passed through a shared ConvNet64 encoder to extract embeddings. This lightweight convolutional architecture has been widely adopted in SAR few-shot learning settings due to its efficiency and effectiveness in representing radar-specific features [13,18].
- Prototype Computation: For each class in the support set, a prototype is computed by averaging the embeddings of its support examples. These prototypes act as reference vectors in the embedding space and serve as the basis for classification.
- Query Evaluation and Distance Computation: Each query embedding is compared to all class prototypes using the Euclidean distance. The closest prototype determines the predicted label for the query.
- Hard Example Detection and Margin Refinement: In the embedding space (as highlighted in Figure 2), the model identifies hard example queries that lie closer to an incorrect prototype than to their true prototype plus a predefined margin. These typically result from class similarity or noise-induced variability. For these hard cases, the model enforces a geometric margin by increasing the distance to the nearest incorrect prototype by a fixed amount. This margin-aware adjustment refines the decision boundary, encouraging separation between confusing classes.
- Loss Computation: MHD-ProtoNet optimizes two loss components:
  - Cross-Entropy Loss ($\mathcal{L}_{\text{CE}}$): applied to all query samples to encourage correct classification.
  - Hard Example Margin Loss ($\mathcal{L}_{\text{HEM}}$): applied only to hard examples that violate the margin condition, encouraging geometric separation by pulling queries closer to their correct prototype and pushing them away from incorrect ones.
- Model Optimization: The two loss terms are combined into a total objective function, and the model parameters, including the feature extractor, are updated via backpropagation using gradient-based optimization. A minimal sketch of one such training episode follows this list.
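To make the episode flow concrete, here is a minimal PyTorch sketch of one training episode under the assumptions above. The function and tensor names, the fixed `margin`, and the weight `lambda_hem` are illustrative choices, not the authors' exact implementation (in particular, the full model learns the margin, per Section 5.5).

```python
import torch
import torch.nn.functional as F

def episode_loss(encoder, support, support_labels, query, query_labels,
                 n_way, margin=0.5, lambda_hem=1.0):
    # Embed support and query images with the shared ConvNet64 encoder.
    z_s = encoder(support)                       # (n_way * k_shot, d)
    z_q = encoder(query)                         # (n_query, d)

    # Prototype per class: mean of that class's support embeddings.
    protos = torch.stack([z_s[support_labels == c].mean(0)
                          for c in range(n_way)])           # (n_way, d)

    # Squared Euclidean distance from every query to every prototype.
    dists = torch.cdist(z_q, protos) ** 2        # (n_query, n_way)

    # Cross-entropy over negative distances (standard ProtoNet head).
    loss_ce = F.cross_entropy(-dists, query_labels)

    # Margin-aware hard example mining: a query is "hard" when its nearest
    # incorrect prototype is not at least `margin` farther than the true one.
    d_true = dists.gather(1, query_labels[:, None]).squeeze(1)
    d_wrong = dists.clone()
    d_wrong.scatter_(1, query_labels[:, None], float('inf'))
    d_nearest_wrong = d_wrong.min(dim=1).values
    violation = F.relu(d_true - d_nearest_wrong + margin)   # hinge term
    hard = violation > 0

    # HEM loss only on margin violators; zero if the episode has none.
    loss_hem = violation[hard].mean() if hard.any() else dists.new_zeros(())

    return loss_ce + lambda_hem * loss_hem
```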
3.2. Hard Example Mining and Margin Enforcement
3.2.1. Hard Example Mining
- Support Set: A set of labeled samples used to calculate class prototypes.
- Query Set: A set of unlabeled samples that the model will classify based on the learned prototypes.
3.2.2. Margin Enforcement
- Pulls the query embedding closer to the correct prototype $c_{y_i}$;
- Pushes it away from the nearest incorrect prototype $c_{j^\ast}$.
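Put formally, in our notation (not necessarily the paper's: $f_\theta$ for the encoder, $c_k$ for the prototype of class $k$, $d(\cdot,\cdot)$ for the embedding distance, $m$ for the margin), a query $(x_i, y_i)$ is flagged as hard when

```latex
d\bigl(f_\theta(x_i), c_{y_i}\bigr) + m \;>\; \min_{j \neq y_i} d\bigl(f_\theta(x_i), c_j\bigr),
```

i.e., when the nearest incorrect prototype $c_{j^\ast}$ is not at least a margin $m$ farther away than the true prototype $c_{y_i}$.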
3.3. Dual-Loss Formulation
3.3.1. Cross-Entropy Loss
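The cross-entropy term follows the standard prototypical-network head [13]: class posteriors are a softmax over negative embedding distances, and the loss averages the negative log-likelihood over the query set $Q$:

```latex
p_\theta(y = k \mid x)
= \frac{\exp\bigl(-d(f_\theta(x), c_k)\bigr)}
       {\sum_{k'} \exp\bigl(-d(f_\theta(x), c_{k'})\bigr)},
\qquad
\mathcal{L}_{\text{CE}}
= -\frac{1}{|Q|} \sum_{(x, y) \in Q} \log p_\theta(y \mid x).
```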
3.3.2. Hard Example Margin (HEM) Loss
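A hinge-style formulation consistent with the pull/push behavior described in Section 3.2 (a plausible reconstruction in our notation; the paper's exact equation may differ) penalizes only the hard set $\mathcal{H}$, and the total objective combines the two terms with the weight $\lambda$ studied in Section 5.5.3:

```latex
\mathcal{L}_{\text{HEM}}
= \frac{1}{|\mathcal{H}|} \sum_{(x_i, y_i) \in \mathcal{H}}
\max\!\Bigl(0,\;
d\bigl(f_\theta(x_i), c_{y_i}\bigr)
- \min_{j \neq y_i} d\bigl(f_\theta(x_i), c_j\bigr)
+ m\Bigr),
\qquad
\mathcal{L} = \mathcal{L}_{\text{CE}} + \lambda \, \mathcal{L}_{\text{HEM}}.
```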
3.4. MHD-ProtoNet Algorithm
Algorithm 1: MHD-ProtoNet training and inference.
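A condensed sketch of the episodic training loop summarized in Algorithm 1, reusing `episode_loss` from the Section 3.1 snippet; the sampler interface, episode count, and learning rate are illustrative placeholders, not the authors' reported settings.

```python
import torch

def train(encoder, episode_sampler, n_way=5, n_episodes=10000, lr=1e-3):
    optim = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(n_episodes):
        # Sample a fresh few-shot episode (support + query sets).
        support, s_lbl, query, q_lbl = episode_sampler(n_way)
        # Combined cross-entropy + hard-example-margin objective.
        loss = episode_loss(encoder, support, s_lbl, query, q_lbl, n_way)
        optim.zero_grad()
        loss.backward()
        optim.step()
```

At inference, no losses are computed: each query is simply assigned the label of its nearest prototype.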
4. Experimental Setup
4.1. Dataset Description
4.2. Few-Shot Learning Setup
- A support set with 1 image per class (5 images total);
- A query set with 15 images per class (75 images total).
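For illustration, a minimal sampler matching this five-way one-shot configuration. It assumes `data` maps each class label to an indexable collection of image tensors; the dataset layout and names are our assumptions, not the paper's code.

```python
import random
import torch

def sample_episode(data, n_way=5, k_shot=1, n_query=15):
    # Draw n_way classes, then k_shot support and n_query query images each.
    classes = random.sample(list(data.keys()), n_way)
    support, s_lbl, query, q_lbl = [], [], [], []
    for lbl, c in enumerate(classes):
        idx = random.sample(range(len(data[c])), k_shot + n_query)
        support += [data[c][i] for i in idx[:k_shot]]
        s_lbl += [lbl] * k_shot
        query += [data[c][i] for i in idx[k_shot:]]
        q_lbl += [lbl] * n_query
    return (torch.stack(support), torch.tensor(s_lbl),
            torch.stack(query), torch.tensor(q_lbl))
```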
4.3. Implementation Setup
5. Experimental Results
5.1. Main Results
5.2. Per-Class Performance
5.3. Embedding Space Visualization (Five-Way One-Shot)
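For reference, a typical way to produce such an embedding visualization with scikit-learn's t-SNE; the perplexity and plot styling are illustrative choices, not necessarily those used in the paper.

```python
import matplotlib.pyplot as plt
import torch
from sklearn.manifold import TSNE

def plot_embeddings(encoder, images, labels, perplexity=30):
    # Embed the images, then project the embeddings to 2D with t-SNE.
    with torch.no_grad():
        z = encoder(images).cpu().numpy()        # (n, d) embeddings
    z2d = TSNE(n_components=2, perplexity=perplexity).fit_transform(z)
    plt.scatter(z2d[:, 0], z2d[:, 1], c=labels, cmap='tab10', s=12)
    plt.title('t-SNE of query embeddings (five-way one-shot)')
    plt.show()
```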
5.4. Stability Across Episodes
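The "mean ± interval" numbers in the results tables are conventionally obtained by averaging accuracy over many test episodes with a 95% confidence interval; a short sketch under that assumption (1.96 is the normal-approximation z-value).

```python
import numpy as np

def mean_ci95(episode_accuracies):
    # Mean episode accuracy with a 95% confidence half-width.
    acc = np.asarray(episode_accuracies, dtype=float)
    mean = acc.mean()
    half_width = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))
    return mean, half_width   # report as mean ± half_width
```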
5.5. Ablation Study
5.5.1. Component-Wise Contributions
- ProtoNet (Baseline): Standard prototypical network using only cross-entropy loss.
- +Margin Only: Incorporates margin-aware hard example mining with a fixed margin, but without dual-loss optimization.
- +Dual Loss Only: Applies both cross-entropy and margin loss, without hard example mining.
- MHD-ProtoNet (Full): The proposed full model combining both margin mining and dual-loss optimization, with a learnable margin parameter.
- Adding margin-aware hard example mining alone resulted in an improvement over the baseline, demonstrating its effectiveness in separating visually similar classes.
- Dual-loss optimization contributed a +3.44% boost, highlighting its role in learning more stable and noise-resistant embeddings.
- The full MHD-ProtoNet achieved the highest performance at 76.80%, a 7.42-point gain over the 69.38% baseline. This underscores the complementary benefits of integrating both mechanisms and the advantage of using a learnable margin parameter.
5.5.2. Margin Parameter Sensitivity
5.5.3. Loss Weight ($\lambda$) Sensitivity
6. Analysis and Discussion
6.1. Key Observations
- Separate visually similar classes (e.g., tanks vs. armored trucks) by enforcing geometric margins between prototypes.
- Suppress intra-class scatter caused by speckle noise and viewpoint variations through dual-loss optimization.
6.2. Component Contributions
6.3. Comparative Analysis with Existing Methods
6.4. Limitations and Future Directions
- Dataset Generalization: Evaluations are confined to MSTAR, which lacks diversity in target types. Testing on datasets like OpenSARShip or SEN1-2 could validate broader applicability.
- Fixed Margin Assumption: The learnable margin adapts to task difficulty but remains global. Class-specific margins, informed by radar cross-section (RCS) signatures, could better capture hierarchical relationships.
- Computational Overhead: Dual-loss optimization increases training complexity. Future work could explore adaptive loss weighting or pruning techniques to reduce inference latency.
- Transfer Learning for Feature Enhancement: Utilize pre-trained models on large-scale SAR or related imaging datasets as feature extractors. Fine-tune these models on specific SAR datasets to adapt the features to target recognition tasks. This provides a robust initial set of features and reduces dependency on large amounts of labeled SAR data [45,46].
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
SAR | Synthetic Aperture Radar
ATR | Automatic Target Recognition
MSTAR | Moving and Stationary Target Acquisition and Recognition
CNN | Convolutional Neural Network
ReLU | Rectified Linear Unit
ProtoNet | Prototypical Network
ST-PN | Spatial Transformed Prototypical Network
ADMM-GCN | ADMM-based Graph Convolutional Network
LST-ACGAN | Label Smoothing and Triplet Loss-based Auxiliary Classification GAN
DGP-Net | Dense Graph Prototype Network
GANs | Generative Adversarial Networks
FSL | Few-Shot Learning
MAML | Model-Agnostic Meta-Learning
HEM | Hard Example Margin
$\mathcal{L}_{\text{CE}}$ | Cross-Entropy Loss
$\mathcal{L}_{\text{HEM}}$ | Hard Example Margin Loss
$\mathcal{L}_{G}$ | Global Loss
HIN | Hybrid Inference Network
t-SNE | t-distributed Stochastic Neighbor Embedding
Meta-SGD | Meta-learning with Stochastic Gradient Descent (SGD)
References
- Zhang, R.; Wang, Z.; Li, Y.; Wang, J.; Wang, Z. FewSAR: A Few-Shot SAR Image Classification Benchmark. arXiv 2023, arXiv:2306.09592. Available online: https://arxiv.org/abs/2306.09592 (accessed on 12 February 2024).
- Li, Y.; Zhang, H.; Xue, X.; Jiang, Y.; Shen, Q. Deep Learning for Remote Sensing Image Classification: A Survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1264.
- Zhang, Y.; Sun, X.; Sun, H.; Zhang, Z.; Diao, W.; Fu, K. High Resolution SAR Image Classification with Deeper Convolutional Neural Network. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 2374–2377.
- Geng, J.; Wang, H.; Fan, J.; Ma, X. SAR Image Classification via Deep Recurrent Encoding Neural Networks. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2255–2269.
- El Housseini, A.; Toumi, A.; Khenchaf, A. Deep Learning for Target Recognition from SAR Images. In Proceedings of the 2017 Seminar on Detection Systems Architectures and Technologies (DAT), Algiers, Algeria, 20–22 February 2017; pp. 1–5.
- Toumi, A.; Cexus, J.-C.; Khenchaf, A.; Tartivel, A. Transfer Learning on CNN Architectures for Ship Classification on SAR Images. In Proceedings of the Sea Tech Week—Session Remote Sensing, Brest, France, 12–16 October 2020. Available online: https://hal.science/hal-03109596 (accessed on 5 April 2024).
- Toumi, A.; Cexus, J.-C.; Khenchaf, A. Comparative Performances of CNN Models for SAR Targets Classification. In Proceedings of the 2024 IEEE 7th International Conference on Advanced Technologies, Signal and Image Processing (ATSIP), Sousse, Tunisia, 11–13 July 2024; pp. 122–127.
- Toumi, A.; Cexus, J.-C.; Khenchaf, A.; Abid, M. A Combined CNN-LSTM Network for Ship Classification on SAR Images. Sensors 2024, 24, 7954.
- Kong, J.; Zhang, F. SAR Target Recognition with Generative Adversarial Network (GAN)-Based Data Augmentation. In Proceedings of the 2021 13th International Conference on Advanced Infocomm Technology (ICAIT), Yanji, China, 15–18 October 2021; pp. 215–218.
- Tang, J.; Zhang, F.; Zhou, Y.; Yin, Q.; Hu, W. A Fast Inference Network for SAR Target Few-Shot Learning Based on Improved Siamese Networks. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 1212–1215.
- Khenchaf, Y.; Toumi, A. Siamese Neural Network for Automatic Target Recognition Using Synthetic Aperture Radar. In Proceedings of the IGARSS 2023—2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 16–21 July 2023; pp. 7503–7506.
- Finn, C.; Abbeel, P.; Levine, S. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. arXiv 2017, arXiv:1703.03400. Available online: https://arxiv.org/abs/1703.03400 (accessed on 19 April 2024).
- Snell, J.; Swersky, K.; Zemel, R. Prototypical Networks for Few-Shot Learning. arXiv 2017, arXiv:1703.05175. Available online: https://arxiv.org/abs/1703.05175 (accessed on 21 April 2024).
- Li, Z.; Zhou, F.; Chen, F.; Li, H. Meta-SGD: Learning to Learn Quickly for Few-Shot Learning. arXiv 2017, arXiv:1707.09835. Available online: https://arxiv.org/pdf/1707.09835 (accessed on 25 April 2024).
- Zhao, P.; Huang, L.; Xin, Y.; Guo, J.; Pan, Z. Multi-Aspect SAR Target Recognition Based on Prototypical Network with a Small Number of Training Samples. Sensors 2021, 21, 4333.
- Yu, X.; Yu, H.; Liu, Y.; Ren, H. Enhanced Prototypical Network with Customized Region-Aware Convolution for Few-Shot SAR ATR. Remote Sens. 2024, 16, 3563.
- Wang, Y.; Yao, Q.; Kwok, J.T.; Ni, L.M. Generalizing from a Few Examples: A Survey on Few-Shot Learning. ACM Comput. Surv. 2020, 53, 1–34. Available online: https://arxiv.org/pdf/1904.05046 (accessed on 18 August 2024).
- Wang, N.; Jin, W.; Bi, H.; Xu, C.; Gao, J. A Survey on Deep Learning for Few-Shot PolSAR Image Classification. Remote Sens. 2024, 16, 4632.
- He, K.; Pu, N.; Lao, M.; Lew, M.S. Few-Shot and Meta-Learning Methods for Image Understanding: A Survey. Int. J. Multimed. Inf. Retr. 2023, 12, 14.
- Fu, K.; Zhang, T.; Zhang, Y.; Wang, Z.; Sun, X. Few-Shot SAR Target Classification via Meta-Learning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
- Ravi, S.; Larochelle, H. Optimization as a Model for Few-Shot Learning. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017. Available online: https://openreview.net/pdf?id=rJY0-Kcll (accessed on 30 August 2024).
- Demertzis, K.; Iliadis, L. GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspectral Image Analysis and Classification. Algorithms 2020, 13, 61.
- Zhang, X.; Luo, Y. Feature Transformation-Based Few-Shot Class-Incremental Learning. Algorithms 2025, 18, 422.
- Li, A.; Huang, W.; Lan, X.; Feng, J.; Li, Z.; Wang, L. Boosting Few-Shot Learning with Adaptive Margin Loss. arXiv 2020, arXiv:2005.13826. Available online: https://arxiv.org/abs/2005.13826 (accessed on 15 September 2024).
- Arik, S.O.; Pfister, T. ProtoAttend: Attention-Based Prototypical Learning. arXiv 2019, arXiv:1902.06292. Available online: https://arxiv.org/abs/1902.06292 (accessed on 27 November 2024).
- Liu, T.; Ke, Z.; Li, Y.; Silamu, W. Knowledge-Enhanced Prototypical Network with Class Cluster Loss for Few-Shot Relation Classification. PLoS ONE 2023, 18, e0286915.
- Hamzaoui, M.; Chapel, L.; Pham, M.T.; Lefèvre, S. A Hierarchical Prototypical Network for Few-Shot Remote Sensing Scene Classification. In Proceedings of the International Conference on Pattern Recognition and Artificial Intelligence, Paris, France, 1–3 June 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 208–220.
- Cai, J.; Zhang, Y.; Guo, J.; Zhao, X.; Lv, J.; Hu, Y. ST-PN: A Spatial Transformed Prototypical Network for Few-Shot SAR Image Classification. Remote Sens. 2022, 14, 2019.
- Patel, K.; Bhatt, C.; Mazzeo, P.L. Improved Ship Detection Algorithm from Satellite Images Using YOLOv7 and Graph Neural Network. Algorithms 2022, 15, 473.
- Jin, J.; Xu, Z.; Zheng, N.; Wang, F. Graph-Based Few-Shot Learning for Synthetic Aperture Radar Automatic Target Recognition with Alternating Direction Method of Multipliers. Remote Sens. 2025, 17, 1179.
- Xu, C.; Gao, L.; Su, H.; Zhang, J.; Wu, J.; Yan, W. Label Smoothing Auxiliary Classifier Generative Adversarial Network with Triplet Loss for SAR Ship Classification. Remote Sens. 2023, 15, 4058.
- Zhou, X.; Wei, Q.; Zhang, Y. DGP-Net: Dense Graph Prototype Network for Few-Shot SAR Target Recognition. arXiv 2023, arXiv:2302.09584. Available online: https://arxiv.org/abs/2302.09584 (accessed on 4 January 2025).
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A Unified Embedding for Face Recognition and Clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823.
- Shrivastava, A.; Gupta, A.; Girshick, R. Training Region-Based Object Detectors with Online Hard Example Mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 761–769.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Wu, C.Y.; Manmatha, R.; Smola, A.J.; Krahenbuhl, P. Sampling Matters in Deep Embedding Learning. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2840–2848.
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A Simple Framework for Contrastive Learning of Visual Representations. In Proceedings of the International Conference on Machine Learning (ICML), Virtual, 13–18 July 2020; pp. 1597–1607. Available online: http://proceedings.mlr.press/v119/chen20a.html (accessed on 10 January 2025).
- Wang, Y.; Wu, X.M.; Li, Q.; Gu, J.; Xiang, W.; Zhang, L.; Li, V.O. Large Margin Few-Shot Learning. arXiv 2018, arXiv:1807.02872. Available online: https://arxiv.org/abs/1807.02872 (accessed on 18 January 2025).
- Liu, B.; Cao, Y.; Lin, Y.; Li, Q.; Zhang, Z.; Long, M.; Hu, H. Negative Margin Matters: Understanding Margin in Few-Shot Classification. arXiv 2020, arXiv:2003.12060. Available online: https://arxiv.org/abs/2003.12060 (accessed on 22 January 2025).
- Liao, L.; Du, L.; Zhang, W.; Chen, J. Adaptive Max-Margin One-Class Classifier for SAR Target Discrimination in Complex Scenes. Remote Sens. 2022, 14, 2078.
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006.
- Hermans, A.; Beyer, L.; Leibe, B. In Defense of the Triplet Loss for Person Re-Identification. arXiv 2017, arXiv:1703.07737. Available online: https://arxiv.org/abs/1703.07737 (accessed on 17 September 2024).
- Defense Advanced Research Projects Agency (DARPA); Air Force Research Laboratory (AFRL). The Air Force Moving and Stationary Target Recognition (MSTAR) Database. 2014. Available online: https://www.sdms.afrl.af.mil/index.php?collection=mstar (accessed on 24 May 2025).
- Sung, F.; Yang, Y.; Zhang, L.; Xiang, T.; Torr, P.H.S.; Hospedales, T.M. Learning to Compare: Relation Network for Few-Shot Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 1199–1208.
- Lu, C.; Li, W. Ship Classification in High-Resolution SAR Images via Transfer Learning with Small Training Dataset. Sensors 2019, 19, 63.
- Huang, Z.; Pan, Z.; Lei, B. Transfer Learning with Deep Convolutional Neural Network for SAR Target Classification with Limited Labeled Data. Remote Sens. 2017, 9, 907.
- Singh, A.; Singh, V.K. Exploring Deep Learning Methods for Classification of SAR Images: Towards NextGen Convolutions via Transformers. arXiv 2023, arXiv:2303.15852. Available online: https://arxiv.org/abs/2303.15852 (accessed on 2 November 2024).
- Shorten, C.; Khoshgoftaar, T.M. A Survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60.
Class | Description |
---|---|
BMP2 | Infantry fighting vehicle |
T72 | Main battle tank |
BTR70, BTR_60 | Armored personnel carriers |
BRDM_2 | Armored reconnaissance vehicle |
ZSU_23_4 | Self-propelled anti-aircraft gun |
2S1 | Self-propelled howitzer |
ZIL131 | Military transport truck |
T62 | Main battle tank |
D7 | Bulldozer (engineering vehicle) |
Class | BTR70 | ZSU_23_4 | BMP2 | ZIL131 | T62 | Total
---|---|---|---|---|---|---
Images | 233 | 299 | 233 | 299 | 299 | 1363
Class | 2S1 | BRDM_2 | D7 | T72 | BTR_60 | Total
---|---|---|---|---|---|---
Images | 274 | 274 | 274 | 196 | 195 | 1213
Model | Five-Way One-Shot | Five-Way Five-Shot |
---|---|---|
ProtoNet | 69.38 ± 1.02 | 83.47 ± 0.32 |
MHD-ProtoNet | 76.80 ± 0.21 | 87.67 ± 0.13 |
Model | True Class | 2S1 | BRDM_2 | BTR_60 | D7 | T72
---|---|---|---|---|---|---
ProtoNet | 2S1 | 90.48 | 1.08 | 0.04 | 0.07 | 8.34
| BTR_60 | 0.12 | 96.84 | 0.57 | 2.39 | 0.07
| T72 | 0.02 | 0.48 | 0.33 | 0.00 | 99.17
| BRDM_2 | 0.00 | 45.97 | 39.32 | 13.38 | 1.32
| D7 | 0.63 | 0.94 | 0.30 | 98.11 | 0.02
MHD-ProtoNet | 2S1 | 99.62 | 0.16 | 0.06 | 0.08 | 0.07
| BTR_60 | 0.00 | 31.86 | 66.99 | 0.96 | 0.19
| T72 | 0.00 | 0.23 | 1.54 | 0.00 | 98.23
| BRDM_2 | 0.00 | 84.38 | 14.15 | 1.25 | 0.22
| D7 | 0.13 | 0.34 | 0.21 | 99.25 | 0.06
Model Variant | Accuracy (%)
---|---
ProtoNet (Baseline) | 69.38
+Margin Only |
+Dual Loss Only |
MHD-ProtoNet (Full) | 76.80
Method | Five-Way One-Shot | Five-Way Five-Shot |
---|---|---|
ProtoNet [13] | 69.38 ± 1.02 | 83.47 ± 0.10 |
ST-PN [28] | 72.54 ± 1.15 | 86.15 ± 0.84 |
ADMM-GCN [30] | 61.79 ± 0.56 | 74.01 ± 0.53 |
RelationNet [44] | 61.81 ± 1.31 | 73.08 ± 1.13 |
DGP-Net [32] | 68.60 ± 0.42 | 76.80 ± 0.35 |
MHD-ProtoNet | 76.80 ± 0.21 | 87.67 ± 0.13 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).