Symmetry-Aware SwinUNet with Integrated Attention for Transformer-Based Segmentation of Thyroid Ultrasound Images
Abstract
1. Introduction
- A symmetry-aware SwinUNet with spatial attention for thyroid nodule segmentation.
- A window-based hierarchical Swin Transformer encoder that preserves spatial symmetry while modeling global and local information concurrently.
- An architecture designed for the low contrast, speckle noise, and inter-patient variability of thyroid ultrasound images.
- Spatial attention mechanisms that focus on relevant anatomical regions and suppress background responses (a minimal sketch of such a gate follows this list).
- Significant gains over the baseline model: +15.38% IoU and +12.05% Dice.
- A large-scale multi-epoch training study (10–800 epochs) with stable performance improvements.
- Consistent gains in precision, recall, and F1-score with longer training.
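The spatial attention referred to above can be illustrated with an additive attention gate in the style of Attention U-Net. This is a minimal PyTorch sketch, not the paper's exact design: the module name, channel widths, and the assumption that the gating signal already matches the skip feature's spatial size are all illustrative choices.

```python
# A minimal sketch of an additive attention gate (Attention U-Net style).
# Module name, channel widths, and the same-spatial-size assumption for the
# gating signal are illustrative, not the paper's exact design.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """A decoder-derived gating signal re-weights an encoder skip feature,
    passing relevant anatomical regions and damping background."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # skip projection
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # gate projection
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # attention map

    def forward(self, skip, gate):
        # Upsample `gate` beforehand if its spatial size differs from `skip`.
        attn = torch.relu(self.theta(skip) + self.phi(gate))
        attn = torch.sigmoid(self.psi(attn))   # per-pixel weights in (0, 1)
        return skip * attn                     # background responses are suppressed

# Usage: gate a 28 x 28 skip feature with a decoder signal of the same size.
skip = torch.randn(1, 512, 28, 28)
gate = torch.randn(1, 512, 28, 28)
print(AttentionGate(512, 512, 256)(skip, gate).shape)  # torch.Size([1, 512, 28, 28])
```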
2. Related Work
2.1. Deep Learning for Thyroid Scintigraphy and Ultrasound Segmentation
2.2. Hybrid and Residual U-Net Architectures for Thyroid Imaging
2.3. Advanced Hybrid, Transformer, and Attention-Based Models
3. Proposed Method

3.1. Flowchart
3.2. Algorithm
**Algorithm 1** End-to-End Training Algorithm for SwinUNet-Based Medical Image Segmentation

```text
Input:   Medical image I ∈ ℝ^(H × W × 3)
         Ground-truth mask M ∈ ℝ^(H × W × 1)
Output:  Segmentation prediction P ∈ ℝ^(H × W × C)
         Trained model parameters

 1: Initialize SwinUNet model with attention gates
 2: Load pre-trained Swin Transformer weights
 3: for each epoch e = 1 to max_epochs do
 4:     for each batch (I, M) ∈ training_data do
 5:         // Forward pass
 6:         features ← SwinEncoder(I)
 7:         attended_features ← AttentionGates(features)
 8:         P ← Decoder(attended_features)
 9:         // Loss computation
10:         loss ← CombinedLoss(P, M)
11:         // Backward pass
12:         optimizer.zero_grad()
13:         loss.backward()
14:         optimizer.step()
15:     end for
16:     // Validation phase
17:     val_metrics ← Evaluate(model, validation_data)
18:     if val_metrics.improved then
19:         save_checkpoint(model)
20:     end if
21: end for
22: return trained SwinUNet_withAttention model
```
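A runnable PyTorch rendering of the loop above. The algorithm does not spell out its loss weighting, optimizer, or checkpoint criterion, so the BCE-plus-Dice `CombinedLoss`, the AdamW optimizer, the IoU-based checkpointing, and all names (`train`, `evaluate`, the checkpoint path) are illustrative assumptions; `model` stands in for the SwinUNet with attention gates.

```python
# Minimal, runnable sketch of Algorithm 1's training loop. Loss weighting,
# optimizer, checkpoint criterion, and all names are assumptions.
import torch
import torch.nn as nn

class CombinedLoss(nn.Module):
    """BCE + soft Dice: a common segmentation combination; the paper's
    exact loss is not reproduced here."""
    def __init__(self, smooth: float = 1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.smooth = smooth

    def forward(self, logits, target):
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum(dim=(2, 3))
        denom = prob.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
        dice = (2 * inter + self.smooth) / (denom + self.smooth)
        return self.bce(logits, target) + (1 - dice.mean())

@torch.no_grad()
def evaluate(model, loader, device):
    """Global IoU over the validation set at a 0.5 threshold."""
    model.eval()
    inter = union = 0.0
    for images, masks in loader:
        images, masks = images.to(device), masks.to(device)
        pred = (torch.sigmoid(model(images)) > 0.5).float()
        inter += (pred * masks).sum().item()
        union += ((pred + masks) > 0).float().sum().item()
    return inter / max(union, 1.0)

def train(model, train_loader, val_loader, max_epochs=800, lr=1e-4, device="cuda"):
    model.to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    criterion = CombinedLoss()
    best_iou = 0.0
    for epoch in range(1, max_epochs + 1):
        model.train()
        for images, masks in train_loader:           # batch (I, M)
            images, masks = images.to(device), masks.to(device)
            logits = model(images)                   # encoder -> gates -> decoder
            loss = criterion(logits, masks)
            optimizer.zero_grad()                    # backward pass
            loss.backward()
            optimizer.step()
        iou = evaluate(model, val_loader, device)    # validation phase
        if iou > best_iou:                           # checkpoint on improvement
            best_iou = iou
            torch.save(model.state_dict(), "best_swinunet_attention.pt")
    return model
```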

3.3. Hierarchical Feature Extraction Process
3.3.1. Decoder Architecture for Medical Image Segmentation
3.3.2. Swin Transformer Block Architecture
3.3.3. Swin Transformer Encoder and Attention Mechanism
3.3.4. Decoder Architecture and Attention Integration
3.4. Experimental Setup and Evaluation Metrics
3.4.1. Dataset, Preparation and Preprocessing
3.4.2. Preprocessing, Standardization, and Label Encoding
3.4.3. Data Augmentation Strategy
Segmentation quality is scored with two overlap metrics, defined below:

- Intersection over Union (IoU)
- Dice Similarity Coefficient
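For a predicted mask P and ground-truth mask G, these are the standard set-overlap definitions (the paper's own notation is not reproduced here):

```latex
\mathrm{IoU}(P,G) = \frac{|P \cap G|}{|P \cup G|},
\qquad
\mathrm{Dice}(P,G) = \frac{2\,|P \cap G|}{|P| + |G|}
```

For binary masks, Dice coincides with the F1-score, which is why the +12.05% F1 gain reported in Section 4.1 and the +12.05% Dice gain in the highlights refer to the same quantity.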
3.5. Complete SwinUNet Architecture for Medical Image Segmentation

4. Results and Discussion
4.1. Performance Evaluation
Segmentation performance across training checkpoints:

| Epoch | Precision | Recall | F1-Score | Accuracy | IoU | AUC |
|---|---|---|---|---|---|---|
| 10 | 0.7752 | 0.8150 | 0.7631 | 0.9512 | 0.6760 | 0.9835 |
| 20 | 0.8104 | 0.8357 | 0.7882 | 0.9541 | 0.7020 | 0.9857 |
| 50 | 0.8250 | 0.8481 | 0.8009 | 0.9560 | 0.7100 | 0.9864 |
| 100 | 0.8368 | 0.8602 | 0.8154 | 0.9584 | 0.7160 | 0.9871 |
| 300 | 0.8512 | 0.8701 | 0.8327 | 0.9610 | 0.7300 | 0.9886 |
| 800 | 0.8705 | 0.8913 | 0.8551 | 0.9691 | 0.7800 | 0.9902 |





Per-checkpoint gains over the previous listed epoch (epoch 10 is the reference row, so its entries are blank):

| Epoch | Δ Precision | Δ Recall | Δ F1-Score | Δ Accuracy | Δ IoU | Δ AUC |
|---|---|---|---|---|---|---|
| 10 | - | - | - | - | - | - |
| 20 | 0.0352 | 0.0207 | 0.0251 | 0.0029 | 0.026 | 0.0022 |
| 50 | 0.0146 | 0.0124 | 0.0127 | 0.0019 | 0.008 | 0.0007 |
| 100 | 0.0118 | 0.0121 | 0.0145 | 0.0024 | 0.006 | 0.0007 |
| 300 | 0.0144 | 0.0099 | 0.0173 | 0.0026 | 0.014 | 0.0015 |
| 800 | 0.0193 | 0.0212 | 0.0224 | 0.0081 | 0.05 | 0.0016 |

Best value achieved for each metric (all reached at epoch 800):

| Metric | Best Value | Epoch |
|---|---|---|
| Precision | 0.8705 | 800 |
| Recall | 0.8913 | 800 |
| F1-score | 0.8551 | 800 |
| Accuracy | 0.9691 | 800 |
| IoU | 0.780 | 800 |
| AUC | 0.9902 | 800 |

Relative improvement from epoch 10 to epoch 800:

| Metric | Epoch 10 | Epoch 800 | % Increase |
|---|---|---|---|
| Precision | 0.7752 | 0.8705 | +12.3% |
| Recall | 0.815 | 0.8913 | +9.37% |
| F1-score | 0.7631 | 0.8551 | +12.05% |
| Accuracy | 0.9512 | 0.9691 | +1.88% |
| IoU | 0.676 | 0.780 | +15.38% |
| AUC | 0.9835 | 0.9902 | +0.68% |
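The % Increase column is the relative gain (late − early) / early; a quick check in Python, with the values hard-coded from the tables above:

```python
# The "% Increase" column is the relative gain (late - early) / early.
# Values hard-coded from the checkpoint table above.
epoch_10  = {"precision": 0.7752, "recall": 0.8150, "f1": 0.7631,
             "accuracy": 0.9512, "iou": 0.676, "auc": 0.9835}
epoch_800 = {"precision": 0.8705, "recall": 0.8913, "f1": 0.8551,
             "accuracy": 0.9691, "iou": 0.780, "auc": 0.9902}
for k in epoch_10:
    gain = (epoch_800[k] - epoch_10[k]) / epoch_10[k] * 100
    print(f"{k:<9} +{gain:.2f}%")   # e.g. iou +15.38%, f1 +12.06%
```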
4.2. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zuo, X.; Zhang, Y.; Wang, L. Federated learning via multi-attention guided UNet for thyroid nodule segmentation of ultrasound images. Neural Netw. 2025, 181, 106754. [Google Scholar] [CrossRef]
- Grani, G.; Sponziello, M.; Filetti, S.; Durante, C. Thyroid nodules: Diagnosis and management. Nat. Rev. Endocrinol. 2024, 20, 715–728. [Google Scholar] [CrossRef]
- Munsterman, R.; van der Velden, T.; Jansen, K. 3D Ultrasound Segmentation of Thyroid. WFUMB Ultrasound Open 2024, 2, 100055. [Google Scholar] [CrossRef]
- Li, X.; Chen, Y.; Liu, Z. DMSA-UNet for medical image segmentation. Knowl.-Based Syst. 2024, 299, 112050. [Google Scholar] [CrossRef]
- Li, X.; Fu, C.; Xu, S.; Sham, C.-W. Thyroid Ultrasound Image Database and Marker Mask Inpainting Method for Research and Development. Ultrasound Med. Biol. 2024, 50, 509–519. [Google Scholar] [CrossRef] [PubMed]
- Chaphekar, M.; Chandrakar, O. An improved deep learning model with hybrid architectures for thyroid disease classification and diagnosis. J. Neonatal Surg. 2025, 14, 1151–1162. [Google Scholar] [CrossRef]
- Wang, J.; Zheng, N.; Wan, H.; Yao, Q.; Jia, S.; Zhang, X.; Fu, S.; Ruan, J.; He, G.; Ouyang, N.; et al. Deep learning models for thyroid nodules diagnosis of fine-needle aspiration biopsy: A retrospective, prospective, multicentre study in China. Lancet Digit. Health 2024, 6, e458–e469. [Google Scholar] [CrossRef]
- Yadav, N.; Dass, R.; Virmani, J. A systematic review of machine learning based thyroid tumor characterisation using ultrasonographic images. J. Ultrasound 2024, 27, 209–224. [Google Scholar] [CrossRef]
- Cantisani, V.; Bojunga, J.; Durante, C.; Dolcetti, V.; Pacini, P. Multiparametric Ultrasound Evaluation of Thyroid Nodules. Ultraschall Med. 2025, 46, 14–35. [Google Scholar] [CrossRef]
- Gulame, M.B.; Dixit, V.V. Hybrid deep learning assisted multi-classification: Grading of malignant thyroid nodules. Int. J. Numer. Meth. Biomed. Engng 2024, 40, e3824. [Google Scholar] [CrossRef]
- Lu, X.; Chen, G.; Li, J.; Hu, X.; Sun, F. MAGCN: A Multiple Attention Graph Convolution Networks for Predicting Synthetic Lethality. IEEE/ACM Trans. Comput. Biol. Bioinform. 2023, 20, 2681–2689. [Google Scholar] [CrossRef]
- Xu, C.; Liu, W.; Chen, Y.; Ding, X. A Supervised Case-Based Reasoning Approach for Explainable Thyroid Nodule Diagnosis. Knowl.-Based Syst. 2022, 251, 109200. [Google Scholar] [CrossRef]
- Nie, X.; Zhou, X.; Tong, T.; Lin, X.; Wang, L.; Zheng, H.; Li, J.; Xue, E.; Chen, S.; Zheng, M.; et al. N-Net: A novel dense fully convolutional neural network for thyroid nodule segmentation. Front. Neurosci. 2022, 16, 872601. [Google Scholar] [CrossRef]
- Pan, S.; Liu, X.; Xie, N.; Zhang, Y.; Chen, L.; Li, H. EG-TransUNet: A transformer-based U-Net with enhanced and guided models for biomedical image segmentation. BMC Bioinform. 2023, 24, 85. [Google Scholar] [CrossRef] [PubMed]
- Dong, P.; Zhang, R.; Li, J.; Liu, C.; Liu, W.; Hu, J.; Yang, Y.; Li, X. An ultrasound image segmentation method for thyroid nodules based on dual-path attention mechanism-enhanced UNet++. BMC Med. Imaging 2024, 24, 341. [Google Scholar] [CrossRef]
- Chen, Y.; Zhang, X.; Li, D.; Park, H.; Li, X.; Liu, P.; Jin, J.; Shen, Y. Automatic Segmentation of Thyroid with the Assistance of the Devised Boundary Improvement Based on Multicomponent Small Dataset. Appl. Intell. 2023, 53, 19708–19723. [Google Scholar] [CrossRef] [PubMed]
- Das, D.; Iyengar, M.S.; Majdi, M.S.; Rodriguez, J.J.; Alsayed, M. Deep Learning for Thyroid Nodule Examination: A Technical Review. Artif. Intell. Rev. 2024, 57, 47. [Google Scholar] [CrossRef]
- Ma, X.; Sun, B.; Liu, W.; Sui, D.; Chen, J.; Tian, Z. AMSeg: A Novel Adversarial Architecture Based Multi-Scale Fusion Framework for Thyroid Nodule Segmentation. IEEE Access 2023, 11, 72911–72924. [Google Scholar] [CrossRef]
- Beyyala, A.; Priya, R.; Choudari, S.R.; Bhavani, R. Swin Transformer and Attention Guided Thyroid Nodule Segmentation on Ultrasound Images. Ingénierie Systèmes D’information 2024, 29, 75–81. [Google Scholar] [CrossRef]
- Yang, W.T.; Ma, B.Y.; Chen, Y. A Narrative Review of Deep Learning in Thyroid Imaging: Current Progress and Future Prospects. Quant. Imaging Med. Surg. 2024, 14, 2069–2088. [Google Scholar] [CrossRef]
- Sureshkumar, V.; Jaganathan, D.; Ravi, V.; Velleangiri, V.; Ravi, P. A Comparative Study on Thyroid Nodule Classification Using Transfer Learning Methods. Open Bioinform. J. 2024, 17, e18750362305982. [Google Scholar] [CrossRef]
- Sabouri, M.; Ahamed, S.; Asadzadeh, A.; Avval, A.H.; Bagheri, S.; Arabi, M.; Zakavi, S.R.; Askari, E.; Rasouli, A.; Aghaee, A.; et al. Thyroidiomics: An Automated Pipeline for Segmentation and Classification of Thyroid Pathologies from Scintigraphy Images. In Proceedings of the 12th European Workshop on Visual Information Processing (EUVIP), Geneva, Switzerland, 8–11 September 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Mau, M.A.; Krusen, M.; Ernst, F. Automatic Thyroid Scintigram Segmentation Using U-Net. In Bildverarbeitung für die Medizin 2025; Palm, C., Breininger, K., Deserno, T., Handels, H., Maier, A., Maier-Hein, K.H., Tolxdorff, T.M., Eds.; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2025; pp. 229–234. [Google Scholar] [CrossRef]
- Ludwig, M.; Ludwig, B.; Mikuła, A.; Biernat, S.; Rudnicki, J.; Kaliszewski, K. The Use of Artificial Intelligence in the Diagnosis and Classification of Thyroid Nodules: An Update. Cancers 2023, 15, 708. [Google Scholar] [CrossRef] [PubMed]
- Chi, J.; Li, Z.; Sun, Z.; Yu, X.; Wang, H. Hybrid transformer UNet for thyroid segmentation from ultrasound scans. Comput. Biol. Med. 2023, 153, 106453. [Google Scholar] [CrossRef]
- Peng, B.; Lin, W.; Zhou, W.; Bai, Y.; Luo, A.; Xie, S.; Yin, L. Enhanced Pediatric Thyroid Ultrasound Image Segmentation Using DC-Contrast U-Net. BMC Med. Imaging 2024, 24, 275. [Google Scholar] [CrossRef]
- Haribabu, K.; Prasath, R.; Praveen Joe, I.R. MLRT-UNet: An Efficient Multi-Level Relation Transformer-Based U-Net for Thyroid Nodule Segmentation. Comput. Model. Eng. Sci. 2025, 143, 413–448. [Google Scholar] [CrossRef]
- Pavithra, S.; Yamuna, G.; Arunkumar, R. Deep Learning Method for Classifying Thyroid Nodules Using Ultrasound Images. In Proceedings of the 2022 International Conference on Smart Technologies and Systems for Next Generation Computing (ICSTSN), Villupuram, India, 25–26 March 2022; pp. 1–6. [Google Scholar] [CrossRef]
- Zeng, Y.; Zhang, Y.; Gong, N.; Li, M.; Wang, M. Research on Thyroid CT Image Segmentation Based on U-Shaped Convolutional Neural Network. Proc. SPIE 2023, 12705, 127051I. [Google Scholar] [CrossRef]
- Chen, Y.; Li, D.; Zhang, X.; Liu, F.; Shen, Y. A Devised Thyroid Segmentation with Multi-Stage Modification Based on Super-Pixel U-Net under Insufficient Data. Ultrasound Med. Biol. 2023, 49, 1728–1741. [Google Scholar] [CrossRef]
- Liu, X.; Hu, Y.; Chen, J. Hybrid CNN-Transformer model for medical image segmentation with pyramid convolution and multi-layer perceptron. Biomed. Signal Process. Control 2023, 86, 105331. [Google Scholar] [CrossRef]
- Inan, N.G.; Kocadağlı, O.; Yıldırım, D.; Meşe, İ.; Kovan, Ö. Multi-class classification of thyroid nodules from automatic segmented ultrasound images: Hybrid ResNet-based U-Net convolutional neural network approach. Comput. Methods Programs Biomed. 2024, 243, 107921. [Google Scholar] [CrossRef]
- Arepalli, L.; Kasukiurthi, V.R.; Dabbiru, M. Channel Boosted Convolutional Neural Network with SegNet-Based Segmentation for an Automatic Prediction of Thyroid Cancer. Soft Comput. 2025, 29, 2399–2415. [Google Scholar] [CrossRef]
- Xu, Y.; Quan, R.; Xu, W.; Huang, Y.; Chen, X.; Liu, F. Advances in Medical Image Segmentation: A Comprehensive Review of Traditional, Deep Learning and Hybrid Approaches. Bioengineering 2024, 11, 1034. [Google Scholar] [CrossRef]
- Al-Mukhtar, F.H.; Ali, A.A.; Al-Dahan, Z.T. Joint Segmentation and Classification. Zanco J. Pure Appl. Sci. 2024, 35, 60–71. [Google Scholar] [CrossRef]
- Yang, D.; Li, Y.; Yu, J. Multi-Task Thyroid Tumor Segmentation Based on the Joint Loss Function. Biomed. Signal Process. Control 2023, 79, 104249. [Google Scholar] [CrossRef]
- Xu, P. Research on thyroid nodule segmentation using an improved U-Net network. Rev. Int. Métodos Numér. Cálc. Diseño Ing. 2024, 40, 1–7. [Google Scholar] [CrossRef]
- Ozcan, A.; Tosun, Ö.; Donmez, E.; Sanwal, M. Enhanced-TransUNet for Ultrasound Segmentation of Thyroid Nodules. Biomed. Signal Process. Control 2024, 95, 106472. [Google Scholar] [CrossRef]
- Shao, J.; Pan, T.; Fan, L.; Li, Z.; Yang, J.; Zhang, S.; Zhang, J.; Chen, D.; Zhu, X.; Chen, H.; et al. FCG-Net: An innovative full-scale connected network for thyroid nodule segmentation in ultrasound images. Biomed. Signal Process. Control 2023, 86, 105048. [Google Scholar] [CrossRef]
- Hu, R.; Wang, H.; Zhang, S.; Zhang, W.; Xu, P. Improved U-Net Segmentation Model for Thyroid Nodules. IAENG Int. J. Comput. Sci. 2025, 52, 1407–1416. Available online: https://www.iaeng.org/IJCS/issues_v52/issue_5/index.html (accessed on 1 May 2025).
- Zheng, T.; Qin, H.; Cui, Y.; Wang, R.; Zhao, W.; Zhang, S.; Geng, S.; Zhao, L. Segmentation of thyroid glands and nodules in ultrasound images using the improved U-Net architecture. BMC Med. Imaging 2023, 23, 56. [Google Scholar] [CrossRef]
- Yetginler, B.; Atacak, İ. An Improved V-Net Model for Thyroid Nodule Segmentation. Appl. Sci. 2025, 15, 3873. [Google Scholar] [CrossRef]
- Zhang, J.; Qin, Q.; Ye, Q.; Ruan, T. ST-UNet: Swin Transformer Boosted U-Net with Cross-Layer Feature Enhancement for Medical Image Segmentation. Comput. Biol. Med. 2023, 153, 106516. [Google Scholar] [CrossRef] [PubMed]
- Li, X.; Pang, S.; Zhang, R.; Zhu, J.; Fu, X.; Tian, Y.; Gao, J. ATTransUNet: An Enhanced Hybrid Transformer Architecture for Ultrasound and Histopathology Image Segmentation. Comput. Biol. Med. 2023, 152, 106365. [Google Scholar] [CrossRef] [PubMed]
- Yang, C.; Ashraf, M.A.; Riaz, M.; Umwanzavugaye, P.; Chipusu, K.; Huang, H.; Xu, Y. Improving diagnostic precision in thyroid nodule segmentation from ultrasound images with a self-attention mechanism-based Swin U-Net model. Front. Oncol. 2025, 15, 1456563. [Google Scholar] [CrossRef] [PubMed]
- Ajilisa, O.A.; Jagathy Raj, V.P.; Sabu, M.K. Segmentation of thyroid nodules from ultrasound images using convolutional neural network architectures. J. Intell. Fuzzy Syst. 2022, 43, 687–705. [Google Scholar] [CrossRef]
- TN3K: Thyroid Nodule Region Segmentation Dataset. Available online: https://www.kaggle.com/datasets/tjahan/tn3k-thyroid-nodule-region-segmentation-dataset?select=trainval-mask (accessed on 16 July 2025).
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar] [CrossRef]
- Gong, H.; Chen, J.; Chen, G.; Li, H.; Li, G.; Chen, F. Thyroid region prior guided attention for ultrasound segmentation of thyroid nodules. Comput. Biol. Med. 2023, 155, 106389. [Google Scholar] [CrossRef]
- Pan, H.; Zhou, Q.; Latecki, L.J. SGUNET: Semantic Guided UNET for Thyroid Nodule Segmentation. In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021; pp. 630–634. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015; Navab, N., Hornegger, J., Wells, W., Frangi, A., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9351, pp. 234–241. [Google Scholar] [CrossRef]
- Prochazka, A.; Zeman, J. Thyroid Nodule Segmentation in Ultrasound Images Using U-Net with ResNet Encoder: Achieving State-of-the-Art Performance on All Public Datasets. AIMS Med. Sci. 2025, 12, 124–144. [Google Scholar] [CrossRef]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
- Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 3431–3440. [Google Scholar]
- Feng, S.; Zhao, H.; Shi, F.; Cheng, X.; Wang, M.; Ma, Y.; Xiang, D.; Zhu, W.; Chen, X. CPFNet: Context Pyramid Fusion Network for Medical Image Segmentation. IEEE Trans. Med. Imaging 2020, 39, 3008–3018. [Google Scholar] [CrossRef]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Computer Vision–ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11211, pp. 801–818. [Google Scholar] [CrossRef]
- Gong, H.; Chen, G.; Wang, R.; Xie, X.; Mao, M.; Yu, Y.; Chen, F.; Li, G. Multi-Task Learning for Thyroid Nodule Segmentation with Thyroid Region Prior. In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021; pp. 257–261. [Google Scholar] [CrossRef]

Encoder feature hierarchy (stage outputs of the Swin backbone for a 224 × 224 input):

| Stage | Output Size | Feature Depth | Feature Type |
|---|---|---|---|
| 1 | 56 × 56 × C | Low | Shallow (local textures) |
| 2 | 28 × 28 × 512 | Medium | Mid-level patterns |
| 3 | 14 × 14 × 512 | High | Deep semantic features |
| 4 | 7 × 7 × 1024 | Very high | Global contextual features |
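The spatial sizes follow the standard Swin schedule: 4 × 4 patch embedding, then 2 × 2 patch merging between stages. The 224 × 224 input is inferred from the stage-1 resolution rather than stated here; a quick check:

```python
# Spatial schedule of a Swin encoder: 4x4 patch embedding, then 2x2 patch
# merging after each stage (assumed 224 x 224 input, inferred from stage 1).
size = 224 // 4                      # stage 1: 224 / 4 = 56
for stage in range(1, 5):
    print(f"stage {stage}: {size} x {size}")
    size //= 2                       # patch merging halves the resolution
# -> 56 x 56, 28 x 28, 14 x 14, 7 x 7, matching the table above
```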

Comparison with prior segmentation models on the TN3K dataset (mean ± standard deviation):

| Model | Dataset | Accuracy (%) | IoU (%) | Dice (%) |
|---|---|---|---|---|
| TransUNet [48] | TN3K train | 96.86 ± 0.05 | 69.26 ± 0.55 | 81.84 ± 1.09 |
| TRFE+ [49] | TN3K train | 97.04 ± 0.10 | 71.38 ± 0.43 | 83.30 ± 0.26 |
| SGUNet [50] | TN3K train | 96.54 ± 0.09 | 66.05 ± 0.43 | 79.55 ± 0.86 |
| U-Net [51] | TN3K train | 96.46 ± 0.11 | 65.99 ± 0.66 | 79.51 ± 1.31 |
| ResUNet [52] | TN3K train | 97.18 ± 0.03 | 75.09 ± 0.22 | 83.77 ± 0.20 |
| SegNet [53] | TN3K train | 96.72 ± 0.12 | 66.54 ± 0.85 | 79.91 ± 1.69 |
| FCN [54] | TN3K train | 96.92 ± 0.04 | 68.18 ± 0.25 | 81.08 ± 0.50 |
| CPFNet [55] | TN3K train | 97.17 ± 0.06 | 70.50 ± 0.39 | 82.70 ± 0.78 |
| Deeplabv3+ [56] | TN3K train | 97.19 ± 0.05 | 70.60 ± 0.49 | 82.77 ± 0.98 |
| TRFE [57] | TN3K train | 96.71 ± 0.07 | 68.33 ± 0.68 | 81.19 ± 1.35 |
| SwinUNet_withAttention (ours) | TN3K train | 96.91 ± 0.00 | 78.00 ± 0.00 | 87.60 ± 0.00 |