Category-Aware Two-Stage Divide-and-Ensemble Framework for Sperm Morphology Classification
Abstract
1. Introduction
- A two-stage classification framework is proposed that first classifies samples into two major categories before detailed classification, improving model robustness and prediction accuracy.
- A multi-stage ensemble voting mechanism is introduced to increase classification reliability by allowing models to cast both primary and secondary votes.
- Multiple deep learning architectures are evaluated, with NFNet-based models identified as particularly effective for sperm morphology classification.
- The benefits of ensemble learning are demonstrated, showing that using an even-numbered ensemble with a structured voting strategy leads to superior performance compared to traditional odd-numbered ensembles.
2. Materials and Methods
2.1. Dataset and Preprocessing
2.2. Two-Stage Classification Pipeline
2.2.1. Stage 1: Splitter Model
- Category 1: Sperm abnormalities primarily related to the head and neck region. This includes 12 classes: AmorfHead, AsymmetricNeck, DoubleHead, NarrowAcrosome, PinHead, PyriformHead, RoundHead, TaperedHead, ThickNeck, ThinNeck, TwistedNeck, and VacuolatedHead.
- Category 2: Normal sperm morphology and tail-related abnormalities. This includes 6 classes: CurlyTail, DoubleTail, LongTail, Normal, ShortTail, and TwistedTail.
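The routing logic described by the two categories above can be sketched as follows. The model objects are placeholders (any callable returning a class index), not the authors' implementation; only the class groupings follow the paper.

```python
# Sketch of the two-stage pipeline: a binary splitter routes each image
# to one of two category-specific classifiers.

CATEGORY_1 = ["AmorfHead", "AsymmetricNeck", "DoubleHead", "NarrowAcrosome",
              "PinHead", "PyriformHead", "RoundHead", "TaperedHead",
              "ThickNeck", "ThinNeck", "TwistedNeck", "VacuolatedHead"]
CATEGORY_2 = ["CurlyTail", "DoubleTail", "LongTail", "Normal",
              "ShortTail", "TwistedTail"]

def two_stage_predict(image, splitter, clf_cat1, clf_cat2):
    """Stage 1 picks the major category (1 or 2); Stage 2 picks the fine class."""
    category = splitter(image)
    if category == 1:
        return CATEGORY_1[clf_cat1(image)]   # 12 head/neck classes
    return CATEGORY_2[clf_cat2(image)]       # 6 tail/normal classes
```

Because Stage 2 classifiers only ever see samples from their own category, each solves an easier 12-way or 6-way problem instead of the full 18-way task.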
2.2.2. Stage 2: Category-Specific Ensemble Classifiers
- ViT-Huge (CLIP LAION-2B): A vision transformer pretrained on approximately 2 billion image–text pairs, achieving robust generalization capabilities without fine-tuning [28]. Due to its extensive pretraining, it captures diverse visual features well-suited to sperm image variability.
- ViT-Large (CLIP OpenAI): A smaller ViT variant trained on the CLIP dataset by OpenAI [29], known for effective semantic understanding and robustness to image variability. Its moderate size makes it well-suited for fine-tuning on the dataset used in this study.
- ViT-Large (ImageNet-21k): A purely supervised vision transformer pretrained on the extensive ImageNet-21k dataset (14 million images, 21,843 classes) [30]. This model demonstrated strong baseline accuracy for fine-grained sperm classification tasks.
- DeepMind NFNet-F4: A high-performance CNN architecture known for robust feature extraction without normalization layers, enabling stable training and excellent transfer learning capabilities [31]. Its strengths lie in capturing localized morphological details critical for identifying specific abnormalities.
2.3. Model Architectures and Training Strategy
Training Details
- Data Augmentation: Augmentations (random flips, rotations, affine transforms, and random erasing) were applied only to the training set, increasing its effective size by 3.2 times. The validation set remained unchanged.
- Multi-Phase Training Strategy: A three-phase curriculum was employed for fine-tuning the pretrained model on the target dataset. This multi-phase training strategy—consisting of an initial warm-up, followed by head-only training, and finally gradual unfreezing of the backbone—is designed to stabilize training and maximize transfer learning performance. Similar phased fine-tuning approaches have been advocated in prior studies (in both general transfer learning and biomedical imaging contexts) to avoid catastrophic forgetting and to better leverage pretrained features [34,35]. All phases were executed sequentially as a single continuous training schedule, without resetting the optimizer or weights between phases. Each phase and its rationale are detailed below.
- Phase 1: Warm-Up Full-Network Fine-Tuning (Stabilization): All network layers were briefly fine-tuned (1–2 epochs) with a low learning rate, stabilizing pretrained weights and adjusting initial feature representations to the dataset.
- Phase 2: Freeze Backbone and Train Classifier Head (Linear Probing): The backbone was frozen, training only the classifier head. This allowed the model to rapidly learn decision boundaries specific to sperm morphology while preserving general pretrained features.
- Phase 3: Gradual Unfreezing (Differential Learning Rates): Layers of the backbone were gradually unfrozen from deepest to shallowest. Differential learning rates (higher for the classifier head, lower for backbone layers) facilitated controlled adaptation to the new task.
- Training Protocol and Hyperparameters: Training phases proceeded sequentially without optimizer resets. The AdamW optimizer with cosine annealing learning rate scheduling was used. Mixed-precision training reduced memory consumption, permitting larger batch sizes. Training continued up to 150 epochs with early stopping (patience of 12 epochs), selecting the checkpoint with the highest validation accuracy.
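The three-phase schedule above can be expressed as a simple, framework-agnostic sketch. Layer names, epoch counts, and learning rates here are illustrative assumptions, not the paper's exact values; in practice each phase would map onto optimizer parameter groups and `requires_grad` flags.

```python
# Minimal sketch of the three-phase fine-tuning curriculum: warm-up,
# linear probing, then deepest-first gradual unfreezing with
# differential learning rates (all values illustrative).

def build_phase_schedule(backbone_layers):
    """Return a list of (phase_name, trainable_parts, lr_map) tuples."""
    phases = []
    # Phase 1: brief full-network warm-up at a low learning rate.
    phases.append(("warmup", ["head"] + backbone_layers, {"all": 1e-5}))
    # Phase 2: freeze the backbone, train only the classifier head.
    phases.append(("linear_probe", ["head"], {"head": 1e-3}))
    # Phase 3: unfreeze backbone layers from deepest to shallowest,
    # keeping the backbone learning rate below the head's.
    unfrozen = []
    for layer in reversed(backbone_layers):   # deepest (closest to head) first
        unfrozen.append(layer)
        phases.append(("unfreeze_" + layer, ["head"] + list(unfrozen),
                       {"head": 1e-4, "backbone": 1e-5}))
    return phases

schedule = build_phase_schedule(["block1", "block2", "block3"])
```

Running the phases sequentially over one continuous schedule, as the paper does, means the optimizer state carries across phase boundaries rather than being reset.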
2.4. Evaluation Metrics and Ensemble Decision
2.5. Two-Stage Ensemble Fusion Strategy
3. Experimental Results
3.1. Splitter Model and Split Category Analysis
3.2. Experimental Results of Individual Models Applied After Splitting
3.3. Choosing the Right Models for Ensemble Methods
3.3.1. Handling Draws in Ensemble Predictions
- Step 1 (Primary Voting Round): Each model votes for its top predicted class.
- Step 2 (Secondary Voting Round): If no class wins an outright majority, the second-choice votes from each model are included in the tally.
- Step 3 (Fallback Decision): If a tie persists after both voting rounds, the final decision defers to the model with the highest validation accuracy (NFNet-F4 in this case). This step is rarely needed but ensures a definitive prediction.
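The three-step draw-handling procedure can be sketched as follows, assuming each model supplies a ranked (top-1, top-2) prediction pair; the function name and tie-breaking details are illustrative, not the authors' exact code.

```python
from collections import Counter

def ensemble_vote(ranked_preds, fallback_index=0):
    """ranked_preds: one (top1, top2) label pair per model.
    fallback_index: position of the highest-validation-accuracy model
    (NFNet-F4 in the paper), consulted only if both rounds tie."""
    # Step 1: primary round -- count each model's top choice.
    primary = Counter(p[0] for p in ranked_preds)
    top, votes = primary.most_common(1)[0]
    if votes > len(ranked_preds) // 2:        # outright majority
        return top
    # Step 2: secondary round -- add second-choice votes to the tally.
    combined = primary + Counter(p[1] for p in ranked_preds)
    best = combined.most_common()
    if len(best) == 1 or best[0][1] > best[1][1]:
        return best[0][0]
    # Step 3: fallback -- defer to the strongest individual model.
    return ranked_preds[fallback_index][0]
```

Note that the secondary round is what makes an even-numbered ensemble workable: a 2-2 split in the primary round is usually resolved once second choices are counted, so the fallback is rarely reached.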
3.3.2. An Ablation Study to Find the Optimum Architecture
3.4. Confusion Matrix-Based Results
4. Discussion
4.1. Comparison with Baseline Results on the Hi-LabSpermMorpho Dataset
4.2. Effect of Splitting and Ensemble Methods on Accuracy
4.3. Future Work and Limitations
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- World Health Organization. Infertility Prevalence Estimates, 1990–2021; World Health Organization: Geneva, Switzerland, 2023.
- Aktas, A.; Serbes, G.; Yigit, M.H.; Aydin, N.; Uzun, H.; Ilhan, H.O. Hi-LabSpermMorpho: A Novel Expert-Labeled Dataset with Extensive Abnormality Classes for Deep Learning-Based Sperm Morphology Analysis. IEEE Access 2024, 12, 196070–196091. [Google Scholar] [CrossRef]
- Yániz, J.; Alquézar-Baeta, C.; Yagüe-Martínez, J.; Alastruey-Benedé, J.; Palacín, I.; Boryshpolets, S.; Kholodnyy, V.; Gadêlha, H.; Pérez-Pe, R. Expanding the limits of computer-assisted sperm analysis through the development of open software. Biology 2020, 9, 207. [Google Scholar] [CrossRef]
- Finelli, R.; Leisegang, K.; Tumallapalli, S.; Henkel, R.; Agarwal, A. The validity and reliability of computer-aided semen analyzers in performing semen analysis: A systematic review. Transl. Androl. Urol. 2021, 10, 3069. [Google Scholar] [CrossRef]
- Ilhan, H.O.; Serbes, G.; Aydin, N. Automated sperm morphology analysis approach using a directional masking technique. Comput. Biol. Med. 2020, 122, 103845. [Google Scholar] [CrossRef] [PubMed]
- Javadi, S.; Mirroshandel, S.A. A novel deep learning method for automatic assessment of human sperm images. Comput. Biol. Med. 2019, 109, 182–194. [Google Scholar] [CrossRef]
- Chang, V.; Garcia, A.; Hitschfeld, N.; Härtel, S. Gold-standard for computer-assisted morphological sperm analysis. Comput. Biol. Med. 2017, 83, 143–150. [Google Scholar] [CrossRef]
- Shaker, F.; Monadjemi, S.A.; Alirezaie, J.; Naghsh-Nilchi, A.R. A dictionary learning approach for human sperm heads classification. Comput. Biol. Med. 2017, 91, 181–190. [Google Scholar] [CrossRef]
- Ilhan, H.O.; Sigirci, I.O.; Serbes, G.; Aydin, N. A fully automated hybrid human sperm detection and classification system based on mobile-net and the performance comparison with conventional methods. Med. Biol. Eng. Comput. 2020, 58, 1047–1068. [Google Scholar] [CrossRef] [PubMed]
- Aktas, A.; Serbes, G.; Ilhan, H.O. The Performance Analysis of Convolutional Neural Networks and Vision Transformers in the Classification of Sperm Morphology. In Proceedings of the 2023 8th International Conference on Computer Science and Engineering (UBMK), Burdur, Turkiye, 13–15 September 2023; pp. 330–335. [Google Scholar]
- Başaran, E.; Cömert, Z.; Şengür, A.; Budak, Ü.; Çelik, Y.; Toğaçar, M. Chronic tympanic membrane diagnosis based on deep convolutional neural network. In Proceedings of the 2019 4th International Conference on Computer Science and Engineering (UBMK), Samsun, Turkey, 11–15 September 2019; pp. 1–4. [Google Scholar]
- Sertkaya, M.E.; Ergen, B.; Togacar, M. Diagnosis of eye retinal diseases based on convolutional neural networks using optical coherence images. In Proceedings of the 2019 23rd International Conference Electronics, Palanga, Lithuania, 17–19 June 2019; pp. 1–5. [Google Scholar]
- An, G.; Akiba, M.; Omodaka, K.; Nakazawa, T.; Yokota, H. Hierarchical deep learning models using transfer learning for disease detection and classification based on small number of medical images. Sci. Rep. 2021, 11, 4250. [Google Scholar] [CrossRef] [PubMed]
- Hu, M.; Wu, B.; Lu, D.; Xie, J.; Chen, Y.; Yang, Z.; Dai, W. Two-step hierarchical neural network for classification of dry age-related macular degeneration using optical coherence tomography images. Front. Med. 2023, 10, 1221453. [Google Scholar] [CrossRef]
- Kowsari, K.; Sali, R.; Ehsan, L.; Adorno, W.; Ali, A.; Moore, S.; Amadi, B.; Kelly, P.; Syed, S.; Brown, D. Hmic: Hierarchical medical image classification, a deep learning approach. Information 2020, 11, 318. [Google Scholar] [CrossRef]
- Spencer, L.; Fernando, J.; Akbaridoust, F.; Ackermann, K.; Nosrati, R. Ensembled Deep Learning for the Classification of Human Sperm Head Morphology. Adv. Intell. Syst. 2022, 4, 2200111. [Google Scholar] [CrossRef]
- Iqbal, I.; Mustafa, G.; Ma, J. Deep learning-based morphological classification of human sperm heads. Diagnostics 2020, 10, 325. [Google Scholar] [CrossRef]
- Riordon, J.; McCallum, C.; Sinton, D. Deep learning for the classification of human sperm. Comput. Biol. Med. 2019, 111, 103342. [Google Scholar] [CrossRef] [PubMed]
- Ilhan, H.O.; Serbes, G. Sperm morphology analysis by using the fusion of two-stage fine-tuned deep networks. Biomed. Signal Process. Control 2022, 71, 103246. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Chandra, S.; Gourisaria, M.K.; Gm, H.; Konar, D.; Gao, X.; Wang, T.; Xu, M. Prolificacy assessment of spermatozoan via state-of-the-art deep learning frameworks. IEEE Access 2022, 10, 13715–13727. [Google Scholar] [CrossRef]
- Romero, M.; Finke, J.; Rocha, C. A top-down supervised learning approach to hierarchical multi-label classification in networks. Appl. Netw. Sci. 2022, 7, 8. [Google Scholar] [CrossRef]
- Asafuddoula, M.; Verma, B.; Zhang, M. A divide-and-conquer-based ensemble classifier learning by means of many-objective optimization. IEEE Trans. Evol. Comput. 2017, 22, 762–777. [Google Scholar] [CrossRef]
- World Health Organization. WHO Laboratory Manual for the Examination and Processing of Human Semen; World Health Organization: Geneva, Switzerland, 2021.
- Wang, S.; Li, X.; Li, J.; Wang, G.; Sun, X.; Zhu, B.; Qiu, H.; Yu, M.; Shen, S.; Zhang, T.; et al. Faceid-6m: A large-scale, open-source faceid customization dataset. arXiv 2025, arXiv:2503.07091. [Google Scholar]
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 8748–8763. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Brock, A.; De, S.; Smith, S.L.; Simonyan, K. High-performance large-scale image recognition without normalization. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 1059–1071. [Google Scholar]
- Khan, A.A.; Chaudhari, O.; Chandra, R. A review of ensemble learning and data augmentation models for class imbalanced problems: Combination, implementation and evaluation. Expert Syst. Appl. 2024, 244, 122778. [Google Scholar] [CrossRef]
- Supriyadi, M.R.; Samah, A.B.A.; Muliadi, J.; Awang, R.A.R.; Ismail, N.H.; Majid, H.A.; Othman, M.S.B.; Hashim, S.Z.B.M. A systematic literature review: Exploring the challenges of ensemble model for medical imaging. BMC Med. Imaging 2025, 25, 128. [Google Scholar] [CrossRef] [PubMed]
- Howard, J.; Ruder, S. Universal language model fine-tuning for text classification. arXiv 2018, arXiv:1801.06146. [Google Scholar] [CrossRef]
- Cheng, P.C.; Chiang, H.H.K. Diagnosis of salivary gland tumors using transfer learning with fine-tuning and gradual unfreezing. Diagnostics 2023, 13, 3333. [Google Scholar] [CrossRef] [PubMed]
- Ganaie, M.A.; Hu, M.; Malik, A.K.; Tanveer, M.; Suganthan, P.N. Ensemble deep learning: A review. Eng. Appl. Artif. Intell. 2022, 115, 105151. [Google Scholar] [CrossRef]
- Maurício, J.; Domingues, I.; Bernardino, J. Comparing vision transformers and convolutional neural networks for image classification: A literature review. Appl. Sci. 2023, 13, 5521. [Google Scholar] [CrossRef]
- Yüzkat, M.; Ilhan, H.O.; Aydin, N. Multi-model CNN fusion for sperm morphology analysis. Comput. Biol. Med. 2021, 137, 104790. [Google Scholar] [CrossRef]
- Guo, Y.; Li, J.; Hong, K.; Wang, B.; Zhu, W.; Li, Y.; Lv, T.; Wang, L. Automated Deep Learning Model for Sperm Head Segmentation, Pose Correction, and Classification. Appl. Sci. 2024, 14, 11303. [Google Scholar] [CrossRef]
- Jabbari, H.; Bigdeli, N. New conditional generative adversarial capsule network for imbalanced classification of human sperm head images. Neural Comput. Appl. 2023, 35, 19919–19934. [Google Scholar] [CrossRef]
- Sapkota, N.; Zhang, Y.; Li, S.; Liang, P.; Zhao, Z.; Zhang, J.; Zha, X.; Zhou, Y.; Cao, Y.; Chen, D.Z. Shmc-Net: A Mask-Guided Feature Fusion Network for Sperm Head Morphology Classification. In Proceedings of the 2024 IEEE International Symposium on Biomedical Imaging (ISBI), Athens, Greece, 27–30 May 2024; pp. 1–5. [Google Scholar]
- Liu, R.; Wang, M.; Wang, M.; Yin, J.; Yuan, Y.; Liu, J. Automatic microscopy analysis with transfer learning for classification of human sperm. Appl. Sci. 2021, 11, 5369. [Google Scholar] [CrossRef]
| Class | BesLab | Histoplus | GBL |
|---|---|---|---|
Amorf Head | 3572 | 2861 | 1537 |
Tapered Head | 1399 | 1356 | 1150 |
Round Head | 257 | 415 | 201 |
Pyriform Head | 979 | 1843 | 1123 |
Pin Head | 782 | 423 | 118 |
Vacuolated Head | 1697 | 604 | 509 |
Double Head | 48 | 35 | 52 |
Narrow Acrosome | 2055 | 1992 | 1355 |
Twisted Neck | 1154 | 1566 | 855 |
Thick Neck | 1978 | 1340 | 934 |
Thin Neck | 192 | 173 | 180 |
Asymmetrical Neck | 366 | 492 | 458 |
Long Tail | 42 | 128 | 106 |
Twisted Tail | 706 | 2502 | 2277 |
Curly Tail | 1445 | 1385 | 955 |
Double Tail | 200 | 183 | 81 |
Short Tail | 991 | 538 | 359 |
Normal | 599 | 433 | 381 |
Total | 18,462 | 18,269 | 12,631 |
| Model | Binary Accuracy (%) |
|---|---|
ViT-Huge (CLIP, LAION-2B) | 93.0 |
ViT-Large (CLIP, OpenAI) | 94.0 |
ViT-Large (ImageNet-21k) | 97.3 |
DeepMind NFNet-F4 | 96.2 |
| Dataset | Binary Accuracy (%) |
|---|---|
DB3_v1_BesLab | 97.3 |
DB3_v1_Histoplus | 98.3 |
DB3_v1_GBL | 97.5 |
| Model | DB3_v1_BesLab Cat-1 | DB3_v1_BesLab Cat-2 | DB3_v1_Histoplus Cat-1 | DB3_v1_Histoplus Cat-2 | DB3_v1_GBL Cat-1 | DB3_v1_GBL Cat-2 |
|---|---|---|---|---|---|---|
Ensemble Method | 67.92% | 83.91% | 68.01% | 85.10% | 62.81% | 85.09% |
DeepMind NFNet-F4 | 65.90% | 78.21% | 65.95% | 82.34% | 62.00% | 82.28% |
ViT-Huge (CLIP LAION-2B) | 60.40% | 79.97% | 62.67% | 84.09% | 58.11% | 83.02%
ViT-Large (CLIP OpenAI) | 60.37% | 82.29% | 63.13% | 83.65% | 57.47% | 82.70% |
ViT-Large (ImageNet-21k) | 59.91% | 78.34% | 64.72% | 82.10% | 60.25% | 81.80% |
| Configuration | DB3_v1_BesLab (%) | DB3_v1_Histoplus (%) | DB3_v1_GBL (%) |
|---|---|---|---|
Splitter Model + Ensemble | 69.43 | 71.34 | 68.41 |
Splitter Model + Best Model per Category | 67.55 | 69.61 | 67.21 |
Ensemble Without Splitting | 67.49 | 69.37 | 67.09 |
Single Best Performer Model (NFNet-F4) | 66.20 | 68.35 | 66.39 |
| Class | BesLab P(%) | BesLab R(%) | BesLab F1(%) | BesLab Sup. | Histoplus P(%) | Histoplus R(%) | Histoplus F1(%) | Histoplus Sup. | GBL P(%) | GBL R(%) | GBL F1(%) | GBL Sup. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
AmorfHead | 61.41 | 62.74 | 62.07 | 3572 | 60.59 | 62.96 | 61.75 | 2859 | 57.77 | 57.58 | 57.67 | 1537 |
AsymmetricNeck | 31.19 | 9.29 | 14.32 | 366 | 38.64 | 17.28 | 23.88 | 492 | 37.82 | 19.65 | 25.86 | 458 |
DoubleHead | 91.67 | 22.92 | 36.67 | 48 | 100.00 | 11.43 | 20.51 | 35 | 80.00 | 30.77 | 44.44 | 52 |
NarrowAcrosome | 63.43 | 70.01 | 66.56 | 2054 | 64.93 | 68.88 | 66.85 | 1992 | 62.73 | 70.55 | 66.41 | 1355 |
PinHead | 98.45 | 97.19 | 97.81 | 782 | 98.80 | 97.16 | 97.97 | 423 | 97.87 | 78.63 | 87.20 | 117 |
PyriformHead | 63.33 | 60.22 | 61.74 | 978 | 68.99 | 78.01 | 73.22 | 1842 | 66.80 | 73.82 | 70.14 | 1123 |
RoundHead | 72.88 | 50.19 | 59.45 | 257 | 66.35 | 67.47 | 66.91 | 415 | 60.87 | 55.72 | 58.18 | 201 |
TaperedHead | 68.96 | 71.77 | 70.33 | 1399 | 68.49 | 68.29 | 68.39 | 1356 | 67.31 | 72.70 | 69.90 | 1150 |
ThickNeck | 65.15 | 65.91 | 65.53 | 1977 | 63.31 | 62.31 | 62.81 | 1340 | 56.28 | 58.03 | 57.14 | 934 |
ThinNeck | 40.00 | 12.50 | 19.05 | 192 | 46.67 | 20.23 | 28.23 | 173 | 43.27 | 25.00 | 31.69 | 180 |
TwistedNeck | 75.76 | 80.42 | 78.02 | 1154 | 78.09 | 80.64 | 79.35 | 1565 | 74.33 | 74.85 | 74.59 | 855 |
VacuolatedHead | 60.16 | 66.29 | 63.08 | 1697 | 59.21 | 52.15 | 55.46 | 604 | 52.95 | 52.85 | 52.90 | 509 |
CurlyTail | 86.29 | 90.30 | 88.25 | 1443 | 85.05 | 86.71 | 85.87 | 1384 | 79.25 | 80.00 | 79.62 | 955 |
DoubleTail | 88.46 | 69.00 | 77.53 | 200 | 85.23 | 69.40 | 76.51 | 183 | 77.27 | 41.98 | 54.40 | 81 |
LongTail | 33.33 | 7.14 | 11.76 | 42 | 56.25 | 14.06 | 22.50 | 128 | 71.43 | 9.43 | 16.67 | 106 |
Normal | 79.46 | 87.96 | 83.49 | 598 | 76.39 | 66.51 | 71.11 | 433 | 78.14 | 68.68 | 73.11 | 380 |
ShortTail | 84.90 | 81.13 | 82.97 | 991 | 78.13 | 68.40 | 72.94 | 538 | 63.67 | 51.25 | 56.79 | 359 |
TwistedTail | 70.39 | 64.31 | 67.21 | 706 | 83.91 | 90.59 | 87.12 | 2498 | 83.91 | 91.13 | 87.37 | 2277 |
Macro Avg | 68.62 | 59.40 | 61.43 | 18,456 | 71.06 | 60.14 | 62.30 | 18,260 | 67.32 | 56.26 | 59.12 | 12,629 |
Weighted Avg | 68.84 | 69.43 | 68.77 | 18,456 | 70.67 | 71.34 | 70.60 | 18,260 | 67.73 | 68.41 | 67.54 | 12,629 |
Overall Accuracy: 69.43% (DB3_v1_BesLab), 71.34% (DB3_v1_Histoplus), 68.41% (DB3_v1_GBL)
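The macro and weighted averages reported per dataset follow the standard definitions: the macro average treats every class equally, while the weighted average weights each class's metric by its support, so frequent classes dominate. A minimal sketch:

```python
# Macro vs. support-weighted averaging of a per-class metric
# (e.g., the per-class F1 scores reported for each dataset).

def macro_avg(per_class):
    """per_class: list of (metric_value, support) pairs."""
    return sum(m for m, _ in per_class) / len(per_class)

def weighted_avg(per_class):
    total_support = sum(s for _, s in per_class)
    return sum(m * s for m, s in per_class) / total_support
```

This is why the macro F1 sits well below the weighted F1 here: rare classes such as LongTail and ThinNeck score poorly and drag the macro average down, while the weighted average tracks overall accuracy more closely.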
Share and Cite
Turkoglu, A.K.; Serbes, G.; Uzun, H.; Aktas, A.; Yigit, M.H.; Ilhan, H.O. Category-Aware Two-Stage Divide-and-Ensemble Framework for Sperm Morphology Classification. Diagnostics 2025, 15, 2234. https://doi.org/10.3390/diagnostics15172234