A Review on Deep Learning Methods for Glioma Segmentation, Limitations, and Future Perspectives
Abstract
1. Introduction
- A comparison of more than 80 Deep Learning methods for glioma segmentation up to 2025, providing insights into their effectiveness and efficiency.
- An analysis of the number of tunable parameters of each method and their adaptability to clinical applications.
- An identification of current trends and limitations within the field, along with a proposal for future research directions and suggestions for improvement.
Literature Search Strategy
- “glioma segmentation”;
- “brain tumor” AND “deep learning”;
- “glioma” AND “CNN”, “U-Net”, “Transformer”, “Vision Transformer”, “SAM”, “multimodal segmentation”, “survey”.
2. Glioma Segmentation
2.1. Glioma
2.2. MRI for Glioma Segmentation
3. Deep Learning Methods for Glioma Segmentation
3.1. CNN-Based
3.2. Pure Transformer
3.3. Hybrid CNN-Transformer
3.4. Clinical Deployment
4. Performance Analysis
4.1. Glioma Databases
4.2. Segmentation Metrics
For the Dice similarity coefficient, Dice(A, B) = 2|A ∩ B| / (|A| + |B|):
- A is the set of voxels in the ground truth segmentation.
- B is the set of voxels in the predicted segmentation.
- |A| and |B| are the number of voxels in each set.
- |A ∩ B| is the number of voxels that both segmentations share (overlap).
For the 95th-percentile Hausdorff distance (HD95):
- A and B are the sets of boundary points from the ground truth and predicted segmentations, respectively.
- d(a, b) is the Euclidean distance between points a and b.
- min over b in B of d(a, b) is the minimum distance from a point a to any point in B.
- HD95 denotes the 95th percentile of all such minimum distances.
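The two metrics above can be sketched directly from their definitions. The following is a minimal illustration (not taken from any reviewed method): Dice over boolean voxel masks, and HD95 over boundary point sets using brute-force pairwise distances, computed symmetrically over both sets as is common practice in the BraTS evaluations.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice(A, B) = 2|A ∩ B| / (|A| + |B|) over boolean voxel masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

def hd95(a_points, b_points):
    """95th percentile of the minimum boundary distances, both directions."""
    a = np.asarray(a_points, dtype=float)
    b = np.asarray(b_points, dtype=float)
    # Pairwise Euclidean distances between the two boundary point sets.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # for each point in A, distance to nearest point in B
    d_ba = d.min(axis=0)  # for each point in B, distance to nearest point in A
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)
```

For large 3D volumes, production implementations typically replace the brute-force distance matrix with a distance transform, but the definitions are identical.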
4.3. Quantitative Review of Existing Methods
4.4. Candidates for Real Deployment
- (C1) Robustness to Data Variability: A method is considered robust if it has been validated on at least two different datasets or incorporates specific domain generalization techniques to ensure stable performance across varied clinical data.
- (C2) Postoperative/Low-Quality Data Handling: This condition marks methods designed or proven to handle challenging data, such as images with post-surgical changes or those from low-field or older imaging scanners, which can be encountered in clinical workflows.
- (C3) Computational Efficiency: A model is deemed efficient for this comparison if it has fewer than 50 million trainable parameters. Where available, we also include specific metrics like GFLOPs and inference latency to provide a clearer picture of the required computational resources.
- (C4) All Tumor Regions Delineated: This indicates whether the model provides segmentation for all three key sub-regions (WT, TC, and ET), as comprehensive delineation is essential for accurate diagnosis and treatment planning.
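Criterion (C3) reduces to a simple count of trainable parameters against the 50 M threshold. As a minimal sketch (the layer shapes below are hypothetical; in practice the count would be read from the framework, e.g. summing parameter tensor sizes of a trained model):

```python
from math import prod

def count_params(layer_shapes):
    """Total trainable parameters, given the shapes of all weight tensors."""
    return sum(prod(shape) for shape in layer_shapes)

def meets_c3(layer_shapes, threshold=50_000_000):
    """(C3): fewer than 50 million trainable parameters."""
    return count_params(layer_shapes) < threshold

# Hypothetical example: one 3x3 conv with 64 input and 64 output channels.
conv_shapes = [(3, 3, 64, 64)]
```

By this criterion, methods such as GBT-SAM (6.4 M trainable parameters) or MAT (11.7 M parameters) fall well inside the budget.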
Method | Description + Strengths | Limitations | C1 | C2 | C3 | C4 |
---|---|---|---|---|---|---|
DenseUNet+ [44] | CNN-based. Effective 4-modality MRI fusion; Weighted connections reduce semantic gaps; ROI-focused training for efficiency; Lower computational complexity for inference ( s in convolutional layers). | Longer training time than baseline UNet; ResBlock coefficients require further optimization. | ✓ | ✓ | ✓ | |
Kuntiyellannagari et al. [135] | CNN-based. Advanced noise reduction via hybrid filter; Ensemble of three models for improved accuracy; Novel optimization algorithm (MAVOA) refines results. | Limited generalizability and interpretability; High computational demand from its hybrid filter. | ✓ | ✓ | ||
Enhanced Unet [142] | CNN-based. Good generalization across multiple datasets; Optimized and simple architecture without complex additions; High computational efficiency ( s per image). | Exclusive reliance on the FLAIR modality; Needs validation on a wider range of glioma grades; Only provides results for WT tumor region. | ✓ | |||
CBAM-TransUNet [146] | Hybrid CNN-Transformer. Combines U-Net, Swin Transformer, and CBAM; Attention module at the bottleneck to focus on key features; Includes a thorough robustness and ablation analysis. | Aggressive data cropping; Use of a fixed, non-optimized weight in the loss function; Limited reproducibility. | ✓ | |||
Futrega et al. [83] | Hybrid CNN-Transformer. Extensive ablation study; Includes a highly optimized post-processing strategy. | Highly specialized for BraTS21; Only provides results for WT tumor region. | ✓ | |||
UNetFormer [76] | Hybrid CNN-Transformer. Self-supervised pretraining for 3D medical images; Fully reproducible; Flexible architecture offering an accuracy and efficiency ( GFLOPs) trade-off. | Pretraining requires massive datasets and high-end hardware; Pretraining effectiveness tested only on CT scans. | ✓ | ✓ ||
GBT-SAM [111] | Hybrid CNN-Transformer. Generalization across four tumor domains; Multimodal integration of full mp-MRI; Modeling 3D inter-slice correlation; Parameter-efficient method (6.4 M trainable parameters). | Dependent on bounding box prompts; Architecture specialized for four-modality MRI; Only provides results for WT tumor region. | ✓ | ✓ | ✓ | |
CFNet [74] | Hybrid CNN-Transformer. Modules for coarse-to-fine multimodal feature fusion; Rigorous ablation studies; Fully reproducible; High computational efficiency (35.47 GFLOPs). | Fails to address the impact of MRI artifacts; Operates on 2D slices, losing direct 3D volumetric context. | ✓ | |||
MAT [148] | Hybrid CNN-Transformer. Three-dimensional architecture using axial attention and self-distillation training that improves performance on small datasets; Smoother segmentation boundaries; Fully reproducible; Lightweight approach (11.7 M parameters). | Non-isotropic image resizing during preprocessing can distort anatomical geometry; Limited ablation studies. | ✓ | ✓ | ||
Arouse-Net [103] | Hybrid CNN-Transformer. Attention mechanism specifically enhances tumor edges; Effective use of dilated convolutions to expand the receptive field of the model; High computational efficiency (1 s inference time). | Insufficient experimental validation; Limited generalizability; Lacks reproducibility. | ✓ | ✓ |
4.5. Discussion
5. Conclusions
5.1. Recent Developments and Current Limitations
5.2. Future Directions
- Enhancing generalization across domains and datasets.
- Minimizing computational cost without compromising accuracy.
- Refining hybrid architectures for clinical integration.
- Domain Generalization techniques aim to train a single, robust model on data from multiple sources that can generalize well to completely unseen hospitals without needing retraining. This involves methods like advanced data augmentation, feature alignment, and disentanglement to learn domain-invariant features.
- Federated Learning offers a paradigm-shifting solution by training a shared global model across multiple institutions without ever centralizing or sharing sensitive patient data. Each institution trains the model locally, and only the model updates (weights or gradients) are sent to a central server for aggregation. This approach not only preserves data privacy but also naturally exposes the model to a diverse range of data, which has been shown to significantly boost the performance and robustness of brain tumor segmentation models. Validating models trained with these methods across multiple independent institutions (cross-institutional validation) is becoming the gold standard for proving their real-world clinical readiness.
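The federated aggregation step described above is commonly instantiated as federated averaging (FedAvg): each institution trains locally, and the server combines the resulting weights, weighted by local dataset size. A minimal sketch with hypothetical scalar weights (real systems exchange full network state dicts and add secure aggregation):

```python
import numpy as np

def local_update(weights, gradients, lr=0.01):
    """One simplified local training step at a single institution."""
    return {k: w - lr * gradients[k] for k, w in weights.items()}

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: average each parameter, weighted by
    the number of local training samples at each institution."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum((n / total) * cw[k] for cw, n in zip(client_weights, client_sizes))
        for k in keys
    }

# Two hypothetical institutions contributing equally sized datasets.
hospital_a = {"w": np.array([1.0])}
hospital_b = {"w": np.array([3.0])}
global_model = fedavg([hospital_a, hospital_b], client_sizes=[100, 100])
```

Only the model parameters (or gradients) cross institutional boundaries; the patient images never leave each site, which is what makes the approach attractive for multi-center glioma studies.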
Author Contributions
Funding
Informed Consent Statement
Conflicts of Interest
References
- Weller, M.; Wick, W.; Aldape, K.; Brada, M.; Berger, M.; Pfister, S.M.; Nishikawa, R.; Rosenthal, M.; Wen, P.Y.; Stupp, R.; et al. Glioma. Nat. Rev. Dis. Prim. 2015, 1, 1–18. [Google Scholar] [CrossRef]
- Harshini, E.; Chinnam, S.K. SWIN BTS Using Deep Learning. In Proceedings of the 2024 Second International Conference on Data Science and Information System (ICDSIS), Hassan, India, 17–18 May 2024; pp. 1–8. [Google Scholar]
- Hatamizadeh, A.; Nath, V.; Tang, Y.; Yang, D.; Roth, H.R.; Xu, D. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In Proceedings of the International MICCAI Brainlesion Workshop, Virtual, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 272–284. [Google Scholar]
- Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 2014, 34, 1993–2024. [Google Scholar] [CrossRef]
- El Hachimy, I.; Kabelma, D.; Echcharef, C.; Hassani, M.; Benamar, N.; Hajji, N. A comprehensive survey on the use of deep learning techniques in glioblastoma. Artif. Intell. Med. 2024, 154, 102902. [Google Scholar] [CrossRef]
- Ranjbarzadeh, R.; Caputo, A.; Tirkolaee, E.B.; Ghoushchi, S.J.; Bendechache, M. Brain tumor segmentation of MRI images: A comprehensive review on the application of artificial intelligence tools. Comput. Biol. Med. 2023, 152, 106405. [Google Scholar] [CrossRef]
- Zhu, J.; Qi, Y.; Wu, J. Medical sam 2: Segment medical images as video via segment anything model 2. arXiv 2024, arXiv:2408.00874. [Google Scholar] [CrossRef]
- Ghadimi, D.J.; Vahdani, A.M.; Karimi, H.; Ebrahimi, P.; Fathi, M.; Moodi, F.; Habibzadeh, A.; Khodadadi Shoushtari, F.; Valizadeh, G.; Mobarak Salari, H.; et al. Deep Learning-Based Techniques in Glioma Brain Tumor Segmentation Using Multi-Parametric MRI: A Review on Clinical Applications and Future Outlooks. J. Magn. Reson. Imaging 2025, 61, 1094–1109. [Google Scholar] [CrossRef] [PubMed]
- Biratu, E.S.; Schwenker, F.; Debelee, T.G.; Kebede, S.R.; Negera, W.G.; Molla, H.T. Enhanced region growing for brain tumor MR image segmentation. J. Imaging 2021, 7, 22. [Google Scholar] [CrossRef]
- Krishnapriya, S.; Karuna, Y. A survey of deep learning for MRI brain tumor segmentation methods: Trends, challenges, and future directions. Health Technol. 2023, 13, 181–201. [Google Scholar] [CrossRef]
- Kaur, R.; Doegar, A. Brain tumor segmentation using deep learning: Taxonomy, survey and challenges. In Brain Tumor MRI Image Segmentation Using Deep Learning Techniques; Elsevier: Amsterdam, The Netherlands, 2022; pp. 225–238. [Google Scholar]
- Chauhan, P.; Lunagaria, M.; Verma, D.K.; Vaghela, K.; Diwan, A.; Patole, S.; Mahadeva, R. Analyzing Brain Tumour Classification Techniques: A Comprehensive Survey. IEEE Access 2024, 12, 136389–136407. [Google Scholar] [CrossRef]
- Saeedi, S.; Rezayi, S.; Keshavarz, H.; Niakan Kalhori, S.R. MRI-based brain tumor detection using convolutional deep learning methods and chosen machine learning techniques. BMC Med. Inform. Decis. Mak. 2023, 23, 16. [Google Scholar] [CrossRef]
- Biratu, E.S.; Schwenker, F.; Ayano, Y.M.; Debelee, T.G. A survey of brain tumor segmentation and classification algorithms. J. Imaging 2021, 7, 179. [Google Scholar] [CrossRef] [PubMed]
- Umarani, C.M.; Gollagi, S.; Allagi, S.; Sambrekar, K.; Ankali, S.B. Advancements in deep learning techniques for brain tumor segmentation: A survey. Inform. Med. Unlocked 2024, 50, 101576. [Google Scholar] [CrossRef]
- Wang, P.; Yang, Q.; He, Z.; Yuan, Y. Vision transformers in multi-modal brain tumor MRI segmentation: A review. Meta-Radiol. 2023, 1, 100004. [Google Scholar] [CrossRef]
- Magadza, T.; Viriri, S. Deep learning for brain tumor segmentation: A survey of state-of-the-art. J. Imaging 2021, 7, 19. [Google Scholar] [CrossRef]
- Rasool, N.; Bhat, J.I. Glioma brain tumor segmentation using deep learning: A review. In Proceedings of the 2023 10th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 15–17 March 2023; pp. 484–489. [Google Scholar]
- Raghavendra, U.; Gudigar, A.; Paul, A.; Goutham, T.; Inamdar, M.A.; Hegde, A.; Devi, A.; Ooi, C.P.; Deo, R.C.; Barua, P.D.; et al. Brain tumor detection and screening using artificial intelligence techniques: Current trends and future perspectives. Comput. Biol. Med. 2023, 163, 107063. [Google Scholar] [CrossRef]
- Abidin, Z.U.; Naqvi, R.A.; Haider, A.; Kim, H.S.; Jeong, D.; Lee, S.W. Recent deep learning-based brain tumor segmentation models using multi-modality magnetic resonance imaging: A prospective survey. Front. Bioeng. Biotechnol. 2024, 12, 1392807. [Google Scholar] [CrossRef]
- Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2010, 8, 336–341. [Google Scholar] [CrossRef]
- Porz, N.; Bauer, S.; Pica, A.; Schucht, P.; Beck, J.; Verma, R.K.; Slotboom, J.; Reyes, M.; Wiest, R. Multi-modal glioblastoma segmentation: Man versus machine. PLoS ONE 2014, 9, e96873. [Google Scholar] [CrossRef]
- Bakas, S.; Reyes, M.; Jakab, A.; Bauer, S.; Rempfler, M.; Crimi, A.; Shinohara, R.T.; Berger, C.; Ha, S.M.; Rozycki, M.; et al. Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge. arXiv 2018, arXiv:1811.02629. [Google Scholar] [CrossRef]
- Weller, M.; Wen, P.Y.; Chang, S.M.; Dirven, L.; Lim, M.; Monje, M.; Reifenberger, G. Glioma. Nat. Rev. Dis. Prim. 2024, 10, 33. [Google Scholar] [CrossRef] [PubMed]
- Claes, A.; Idema, A.J.; Wesseling, P. Diffuse glioma growth: A guerilla war. Acta Neuropathol. 2007, 114, 443–458. [Google Scholar] [CrossRef]
- Esteban-Rodríguez, I.; López-Muñoz, S.; Blasco-Santana, L.; Mejías-Bielsa, J.; Gordillo, C.H.; Jiménez-Heffernan, J.A. Cytological features of diffuse and circumscribed gliomas. Cytopathology 2024, 35, 534–544. [Google Scholar] [CrossRef]
- Chen, J.; Dahiya, S.M. Update on circumscribed gliomas and glioneuronal tumors. Surg. Pathol. Clin. 2020, 13, 249–266. [Google Scholar] [CrossRef]
- Chen, R.; Smith-Cohn, M.; Cohen, A.L.; Colman, H. Glioma subclassifications and their clinical significance. Neurotherapeutics 2017, 14, 284–297. [Google Scholar] [CrossRef]
- Wirsching, H.G.; Weller, M. Glioblastoma. Malignant Brain Tumors: State-of-the-Art Treatment; Springer: Berlin/Heidelberg, Germany, 2017; pp. 265–288. [Google Scholar]
- Davis, M.E. Glioblastoma: Overview of disease and treatment. Clin. J. Oncol. Nurs. 2016, 20, S2. [Google Scholar] [CrossRef] [PubMed]
- Bai, J.; Varghese, J.; Jain, R. Adult glioma WHO classification update, genomics, and imaging: What the radiologists need to know. Top. Magn. Reson. Imaging 2020, 29, 71–82. [Google Scholar] [CrossRef]
- Adewole, M.; Rudie, J.D.; Gbdamosi, A.; Toyobo, O.; Raymond, C.; Zhang, D.; Omidiji, O.; Akinola, R.; Suwaid, M.A.; Emegoakor, A.; et al. The brain tumor segmentation (brats) challenge 2023: Glioma segmentation in sub-saharan africa patient population (brats-africa). arXiv 2023, arXiv:2305.19369v1. [Google Scholar]
- Kazerooni, A.F.; Khalili, N.; Liu, X.; Haldar, D.; Jiang, Z.; Zapaishchykova, A.; Pavaine, J.; Shah, L.M.; Jones, B.V.; Sheth, N.; et al. BraTS-PEDs: Results of the multi-consortium international pediatric brain tumor segmentation challenge 2023. arXiv 2024, arXiv:2407.08855. [Google Scholar] [CrossRef]
- Despotović, I.; Goossens, B.; Philips, W. MRI segmentation of the human brain: Challenges, methods, and applications. Comput. Math. Methods Med. 2015, 2015, 450341. [Google Scholar] [CrossRef]
- Treister, D.; Kingston, S.; Hoque, K.E.; Law, M.; Shiroishi, M.S. Multimodal magnetic resonance imaging evaluation of primary brain tumors. Semin. Oncol. 2014, 41, 478–495. [Google Scholar] [CrossRef] [PubMed]
- Khorasani, A.; Kafieh, R.; Saboori, M.; Tavakoli, M.B. Glioma segmentation with DWI weighted images, conventional anatomical images, and post-contrast enhancement magnetic resonance imaging images by U-Net. Phys. Eng. Sci. Med. 2022, 45, 925–934. [Google Scholar] [CrossRef]
- Katti, G.; Ara, S.A.; Shireen, A. Magnetic resonance imaging (MRI)—A review. Int. J. Dent. Clin. 2011, 3, 65–70. [Google Scholar]
- Diana-Albelda, C.; Alcover-Couso, R.; García-Martín, Á.; Bescos, J. How SAM Perceives Different mp-MRI Brain Tumor Domains? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 4959–4970. [Google Scholar]
- Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, 17–21 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 424–432. [Google Scholar]
- Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
- Vu, M.H.; Nyholm, T.; Löfstedt, T. TuNet: End-to-end hierarchical brain tumor segmentation using cascaded networks. In Proceedings of the Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 5th International Workshop, BrainLes 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, 17 October 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 174–186. [Google Scholar]
- Wang, G.; Li, W.; Ourselin, S.; Vercauteren, T. Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks. In Proceedings of the Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: Third International Workshop, BrainLes 2017, Held in Conjunction with MICCAI 2017, Quebec City, QC, Canada, 14 September 2017; Springer: Berlin/Heidelberg, Germany, 2018; pp. 178–190. [Google Scholar]
- Raza, R.; Bajwa, U.I.; Mehmood, Y.; Anwar, M.W.; Jamal, M.H. dResU-Net: 3D deep residual U-Net based brain tumor segmentation from multimodal MRI. Biomed. Signal Process. Control 2023, 79, 103861. [Google Scholar] [CrossRef]
- Çetiner, H.; Metlek, S. DenseUNet+: A novel hybrid segmentation approach based on multi-modality images for brain tumor segmentation. J. King Saud-Univ.-Comput. Inf. Sci. 2023, 35, 101663. [Google Scholar] [CrossRef]
- Isensee, F.; Petersen, J.; Klein, A.; Zimmerer, D.; Jaeger, P.F.; Kohl, S.; Wasserthal, J.; Koehler, G.; Norajitra, T.; Wirkert, S.; et al. nnu-net: Self-adapting framework for u-net-based medical image segmentation. arXiv 2018, arXiv:1809.10486. [Google Scholar]
- Ding, Y.; Yu, X.; Yang, Y. RFNet: Region-aware fusion network for incomplete multi-modal brain tumor segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 19–25 June 2021; pp. 3975–3984. [Google Scholar]
- Huang, Z.; Lin, L.; Cheng, P.; Peng, L.; Tang, X. Multi-modal brain tumor segmentation via missing modality synthesis and modality-level attention fusion. arXiv 2022, arXiv:2203.04586. [Google Scholar]
- Wang, Y.; Zhang, Y.; Hou, F.; Liu, Y.; Tian, J.; Zhong, C.; Zhang, Y.; He, Z. Modality-pairing learning for brain tumor segmentation. In Proceedings of the Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, 4 October 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 230–240. [Google Scholar]
- Liu, Y.; Mu, F.; Shi, Y.; Cheng, J.; Li, C.; Chen, X. Brain tumor segmentation in multimodal MRI via pixel-level and feature-level image fusion. Front. Neurosci. 2022, 16, 1000587. [Google Scholar] [CrossRef]
- Tong, J.; Wang, C. A dual tri-path CNN system for brain tumor segmentation. Biomed. Signal Process. Control 2023, 81, 104411. [Google Scholar] [CrossRef]
- Syazwany, N.S.; Nam, J.H.; Lee, S.C. MM-BiFPN: Multi-modality fusion network with Bi-FPN for MRI brain tumor segmentation. IEEE Access 2021, 9, 160708–160720. [Google Scholar] [CrossRef]
- Wang, Y.; Chen, J.; Bai, X. Gradient-assisted deep model for brain tumor segmentation by multi-modality MRI volumes. Biomed. Signal Process. Control 2023, 85, 105066. [Google Scholar] [CrossRef]
- Myronenko, A. 3D MRI brain tumor segmentation using autoencoder regularization. In Proceedings of the Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 4th International Workshop, BrainLes 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 16 September 2018; Springer: Berlin/Heidelberg, Germany, 2019; pp. 311–320. [Google Scholar]
- Sahoo, A.K.; Parida, P.; Muralibabu, K.; Dash, S. An improved DNN with FFCM method for multimodal brain tumor segmentation. Intell. Syst. Appl. 2023, 18, 200245. [Google Scholar] [CrossRef]
- Oktay, O. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar] [CrossRef]
- Akbar, A.S.; Fatichah, C.; Suciati, N. Unet3D with multiple atrous convolutions attention block for brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Virtual, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 182–193. [Google Scholar]
- Zhao, L.; Ma, J.; Shao, Y.; Jia, C.; Zhao, J.; Yuan, H. MM-UNet: A multimodality brain tumor segmentation network in MRI images. Front. Oncol. 2022, 12, 950706. [Google Scholar] [CrossRef]
- Xing, Z.; Yu, L.; Wan, L.; Han, T.; Zhu, L. NestedFormer: Nested modality-aware transformer for brain tumor segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 140–150. [Google Scholar]
- Liang, J.; Yang, C.; Zhong, J.; Ye, X. BTSwin-Unet: 3D U-shaped symmetrical Swin transformer-based network for brain tumor segmentation with self-supervised pre-training. Neural Process. Lett. 2023, 55, 3695–3713. [Google Scholar] [CrossRef]
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 205–218. [Google Scholar]
- Sagar, A. EMSViT: Efficient multi scale vision transformer for biomedical image segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Virtual, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 39–51. [Google Scholar]
- Peiris, H.; Hayat, M.; Chen, Z.; Egan, G.; Harandi, M. A robust volumetric transformer for accurate 3D tumor segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 162–172. [Google Scholar]
- Wei, C.; Ren, S.; Guo, K.; Hu, H.; Liang, J. High-resolution Swin transformer for automatic medical image segmentation. Sensors 2023, 23, 3420. [Google Scholar] [CrossRef] [PubMed]
- Peiris, H.; Hayat, M.; Chen, Z.; Egan, G.; Harandi, M. Hybrid window attention based transformer architecture for brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Singapore, 18 September 2022; pp. 173–182. [Google Scholar]
- Karimijafarbigloo, S.; Azad, R.; Kazerouni, A.; Ebadollahi, S.; Merhof, D. Mmcformer: Missing modality compensation transformer for brain tumor segmentation. In Proceedings of the Medical Imaging with Deep Learning, Paris, France, 3–5 July 2024; pp. 1144–1162. [Google Scholar]
- Liang, J.; Yang, C.; Zeng, M.; Wang, X. TransConver: Transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images. Quant. Imaging Med. Surg. 2022, 12, 2397. [Google Scholar] [CrossRef]
- Li, X.; Ma, S.; Tang, J.; Guo, F. TranSiam: Fusing multimodal visual features using transformer for medical image segmentation. arXiv 2022, arXiv:2204.12185. [Google Scholar] [CrossRef]
- Wenxuan, W.; Chen, C.; Meng, D.; Hong, Y.; Sen, Z.; Jiangyun, L. Transbts: Multimodal brain tumor segmentation using transformer. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; pp. 109–119. [Google Scholar]
- Jun, E.; Jeong, S.; Heo, D.W.; Suk, H.I. Medical transformer: Universal brain encoder for 3D MRI analysis. arXiv 2021, arXiv:2104.13633. [Google Scholar] [CrossRef] [PubMed]
- Li, S.; Sui, X.; Luo, X.; Xu, X.; Liu, Y.; Goh, R. Medical image segmentation using squeeze-and-expansion transformers. arXiv 2021, arXiv:2105.09511. [Google Scholar]
- Fang, F.; Yao, Y.; Zhou, T.; Xie, G.; Lu, J. Self-supervised multi-modal hybrid fusion network for brain tumor segmentation. IEEE J. Biomed. Health Inform. 2021, 26, 5310–5320. [Google Scholar] [CrossRef]
- Zhou, T.; Canu, S.; Vera, P.; Ruan, S. Feature-enhanced generation and multi-modality fusion based deep neural network for brain tumor segmentation with missing MR modalities. Neurocomputing 2021, 466, 102–112. [Google Scholar] [CrossRef]
- Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H.R.; Xu, D. Unetr: Transformers for 3d medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 574–584. [Google Scholar]
- Cheng, Y.; Zheng, Y.; Wang, J. CFNet: Automatic multi-modal brain tumor segmentation through hierarchical coarse-to-fine fusion and feature communication. Biomed. Signal Process. Control 2025, 99, 106876. [Google Scholar] [CrossRef]
- Liu, J.; Zheng, J.; Jiao, G. Transition Net: 2D backbone to segment 3D brain tumor. Biomed. Signal Process. Control 2022, 75, 103622. [Google Scholar] [CrossRef]
- Hatamizadeh, A.; Xu, Z.; Yang, D.; Li, W.; Roth, H.; Xu, D. Unetformer: A unified vision transformer model and pre-training framework for 3d medical image segmentation. arXiv 2022, arXiv:2204.00631. [Google Scholar]
- Pham, Q.D.; Nguyen-Truong, H.; Phuong, N.N.; Nguyen, K.N.; Nguyen, C.D.; Bui, T.; Truong, S.Q. Segtransvae: Hybrid cnn-transformer with regularization for medical image segmentation. In Proceedings of the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), Kolkata, India, 28–31 March 2022; pp. 1–5. [Google Scholar]
- Li, J.; Wang, W.; Chen, C.; Zhang, T.; Zha, S.; Wang, J.; Yu, H. TransBTSV2: Towards better and more efficient volumetric segmentation of medical images. arXiv 2022, arXiv:2201.12785. [Google Scholar] [CrossRef]
- Nalawade, S.; Ganesh, C.; Wagner, B.; Reddy, D.; Das, Y.; Yu, F.F.; Fei, B.; Madhuranthakam, A.J.; Maldjian, J.A. Federated learning for brain tumor segmentation using MRI and transformers. In Proceedings of the International MICCAI Brainlesion Workshop, Virtual, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 444–454. [Google Scholar]
- Shi, Y.; Micklisch, C.; Mushtaq, E.; Avestimehr, S.; Yan, Y.; Zhang, X. An ensemble approach to automatic brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Virtual, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 138–148. [Google Scholar]
- Jia, Q.; Shu, H. Bitr-unet: A cnn-transformer combined network for mri brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Virtual, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 3–14. [Google Scholar]
- Dobko, M.; Kolinko, D.I.; Viniavskyi, O.; Yelisieiev, Y. Combining CNNs with transformer for multimodal 3D MRI brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Virtual, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 232–241. [Google Scholar]
- Futrega, M.; Milesi, A.; Marcinkiewicz, M.; Ribalta, P. Optimized U-Net for brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Virtual, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 15–29. [Google Scholar]
- Yang, H.; Shen, Z.; Li, Z.; Liu, J.; Xiao, J. Combining global information with topological prior for brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Virtual, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 204–215. [Google Scholar]
- Gai, D.; Zhang, J.; Xiao, Y.; Min, W.; Zhong, Y.; Zhong, Y. RMTF-Net: Residual mix transformer fusion net for 2D brain tumor segmentation. Brain Sci. 2022, 12, 1145. [Google Scholar] [CrossRef]
- Zhang, Y.; He, N.; Yang, J.; Li, Y.; Wei, D.; Huang, Y.; Zhang, Y.; He, Z.; Zheng, Y. mmformer: Multimodal medical transformer for incomplete multimodal learning of brain tumor segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 107–117. [Google Scholar]
- Wang, P.; Liu, S.; Peng, J. AST-Net: Lightweight hybrid transformer for multimodal brain tumor segmentation. In Proceedings of the 2022 26th International Conference on Pattern Recognition (ICPR), Montréal, QC, Canada, 21–25 August 2022; pp. 4623–4629. [Google Scholar]
- Huang, L.; Zhu, E.; Chen, L.; Wang, Z.; Chai, S.; Zhang, B. A transformer-based generative adversarial network for brain tumor segmentation. Front. Neurosci. 2022, 16, 1054948. [Google Scholar] [CrossRef]
- Zhou, T. Modality-level cross-connection and attentional feature fusion based deep neural network for multi-modal brain tumor segmentation. Biomed. Signal Process. Control. 2023, 81, 104524. [Google Scholar] [CrossRef]
- Vatanpour, M.; Haddadnia, J. TransDoubleU-Net: Dual Scale Swin Transformer With Dual Level Decoder for 3D Multimodal Brain Tumor Segmentation. IEEE Access 2023, 11, 125511–125518. [Google Scholar] [CrossRef]
- Pang, X.; Zhao, Z.; Wang, Y.; Li, F.; Chang, F. LGMSU-Net: Local Features, Global Features, and Multi-Scale Features Fused the U-Shaped Network for Brain Tumor Segmentation. Electronics 2022, 11, 1911. [Google Scholar] [CrossRef]
- Liang, J.; Yang, C.; Zeng, L. 3D PSwinBTS: An efficient transformer-based Unet using 3D parallel shifted windows for brain tumor segmentation. Digit. Signal Process. 2022, 131, 103784. [Google Scholar] [CrossRef]
- Tian, W.; Li, D.; Lv, M.; Huang, P. Axial attention convolutional neural network for brain tumor segmentation with multi-modality MRI scans. Brain Sci. 2022, 13, 12. [Google Scholar] [CrossRef] [PubMed]
- Gao, H.; Miao, Q.; Ma, D.; Liu, R. Deep mutual learning for brain tumor segmentation with the fusion network. Neurocomputing 2023, 521, 213–220. [Google Scholar] [CrossRef]
- Lu, Y.; Chang, Y.; Zheng, Z.; Sun, Y.; Zhao, M.; Yu, B.; Tian, C.; Zhang, Y. GMetaNet: Multi-scale ghost convolutional neural network with auxiliary MetaFormer decoding path for brain tumor segmentation. Biomed. Signal Process. Control. 2023, 83, 104694. [Google Scholar] [CrossRef]
- Zeineldin, R.A.; Karar, M.E.; Elshaer, Z.; Coburger, J.; Wirtz, C.R.; Burgert, O.; Mathis-Ullrich, F. Explainable hybrid vision transformers and convolutional network for multimodal glioma segmentation in brain MRI. Sci. Rep. 2024, 14, 3713. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Alcover-Couso, R.; Escudero-Viñolo, M.; SanMiguel, J.C.; Bescós, J. Gradient-based Class Weighting for Unsupervised Domain Adaptation in Dense Prediction Visual Tasks. arXiv 2024, arXiv:2407.01327. [Google Scholar] [CrossRef]
- Alcover-Couso, R.; SanMiguel, J.C.; Escudero-Viñolo, M.; Martínez, J.M. Layer-wise Model Merging for Unsupervised Domain Adaptation in Segmentation Tasks. arXiv 2024, arXiv:2409.15813. [Google Scholar] [CrossRef]
- Alcover-Couso, R.; Escudero-Vinolo, M.; SanMiguel, J.C.; Martinez, J.M. Soft labelling for semantic segmentation: Bringing coherence to label down-sampling. arXiv 2023, arXiv:2302.13961. [Google Scholar]
- Cheng, J.; Kuang, H.; Yang, S.; Yue, H.; Liu, J.; Wang, J. Segmentation-Guided Deep Learning for Glioma Survival Risk Prediction with Multimodal MRI. Big Data Min. Anal. 2025, 8, 364–382. [Google Scholar] [CrossRef]
- Wu, S.; Chen, Z.; Sun, P. 3D U-TFA: A deep convolutional neural network for automatic segmentation of glioblastoma. Biomed. Signal Process. Control. 2025, 99, 106829. [Google Scholar] [CrossRef]
- Li, H.; Qi, X.; Hu, Y.; Zhang, J. Arouse-Net: Enhancing Glioblastoma Segmentation in Multi-Parametric MRI with a Custom 3D Convolutional Neural Network and Attention Mechanism. Mathematics 2025, 13, 160. [Google Scholar] [CrossRef]
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 4015–4026. [Google Scholar]
- Zhang, K.; Liu, D. Customized segment anything model for medical image segmentation. arXiv 2023, arXiv:2304.13785. [Google Scholar] [CrossRef]
- Cheng, J.; Ye, J.; Deng, Z.; Chen, J.; Li, T.; Wang, H.; Su, Y.; Huang, Z.; Chen, J.; Jiang, L.; et al. SAM-Med2D. arXiv 2023. [Google Scholar] [CrossRef]
- Alcover-Couso, R.; Escudero-Viñolo, M.; SanMiguel, J.C.; Bescos, J. VLMs meet UDA: Boosting Transferability of Open Vocabulary Segmentation with Unsupervised Domain Adaptation. arXiv 2024, arXiv:2412.09240. [Google Scholar] [CrossRef]
- Montalvo, J.; García-Martín, Á.; Carballeira, P.; SanMiguel, J.C. Unsupervised Class Generation to Expand Semantic Segmentation Datasets. arXiv 2025, arXiv:2501.02264. [Google Scholar] [CrossRef]
- Wu, J.; Ji, W.; Liu, Y.; Fu, H.; Xu, M.; Xu, Y.; Jin, Y. Medical sam adapter: Adapting segment anything model for medical image segmentation. arXiv 2023, arXiv:2304.12620. [Google Scholar] [CrossRef]
- Ma, J.; He, Y.; Li, F.; Han, L.; You, C.; Wang, B. Segment anything in medical images. Nat. Commun. 2024, 15, 654. [Google Scholar] [CrossRef] [PubMed]
- Diana-Albelda, C.; Alcover-Couso, R.; García-Martín, Á.; Bescos, J.; Escudero-Viñolo, M. GBT-SAM: A Parameter-Efficient Depth-Aware Model for Generalizable Brain tumour Segmentation on mp-MRI. arXiv 2025, arXiv:2503.04325. [Google Scholar]
- Ravi, N.; Gabeur, V.; Hu, Y.T.; Hu, R.; Ryali, C.; Ma, T.; Khedr, H.; Rädle, R.; Rolland, C.; Gustafson, L.; et al. SAM 2: Segment anything in images and videos. arXiv 2024, arXiv:2408.00714. [Google Scholar]
- Domadia, S.G.; Thakkar, F.N.; Ardeshana, M.A. Recent advancement in learning methodology for segmenting brain tumor from magnetic resonance imaging—A review. Multimed. Tools Appl. 2023, 82, 34809–34845. [Google Scholar] [CrossRef]
- Kurmukov, A.; Dalechina, A.; Saparov, T.; Belyaev, M.; Zolotova, S.; Golanov, A.; Nikolaeva, A. Challenges in building of deep learning models for glioblastoma segmentation: Evidence from clinical data. In Public Health and Informatics; IOS Press: Amsterdam, The Netherlands, 2021; pp. 298–302. [Google Scholar]
- Bonada, M.; Rossi, L.F.; Carone, G.; Panico, F.; Cofano, F.; Fiaschi, P.; Garbossa, D.; Di Meco, F.; Bianconi, A. Deep learning for MRI segmentation and molecular subtyping in glioblastoma: Critical aspects from an emerging field. Biomedicines 2024, 12, 1878. [Google Scholar] [CrossRef]
- Perkuhn, M.; Stavrinou, P.; Thiele, F.; Shakirin, G.; Mohan, M.; Garmpis, D.; Kabbasch, C.; Borggrefe, J. Clinical evaluation of a multiparametric deep learning model for glioblastoma segmentation using heterogeneous magnetic resonance imaging data from clinical routine. Investig. Radiol. 2018, 53, 647–654. [Google Scholar] [CrossRef]
- Bianconi, A.; Rossi, L.F.; Bonada, M.; Zeppa, P.; Nico, E.; De Marco, R.; Lacroce, P.; Cofano, F.; Bruno, F.; Morana, G.; et al. Deep learning-based algorithm for postoperative glioblastoma MRI segmentation: A promising new tool for tumor burden assessment. Brain Inform. 2023, 10, 26. [Google Scholar] [CrossRef]
- Cepeda, S.; Romero, R.; Luque, L.; García-Pérez, D.; Blasco, G.; Luppino, L.T.; Kuttner, S.; Esteban-Sinovas, O.; Arrese, I.; Solheim, O.; et al. Deep learning-based postoperative glioblastoma segmentation and extent of resection evaluation: Development, external validation, and model comparison. Neuro-Oncol. Adv. 2024, 6, vdae199. [Google Scholar] [CrossRef]
- Holtzman Gazit, M.; Faran, R.; Stepovoy, K.; Peles, O.; Shamir, R.R. Post-operative glioblastoma multiforme segmentation with uncertainty estimation. Front. Hum. Neurosci. 2022, 16, 932441. [Google Scholar] [CrossRef]
- Lotan, E.; Zhang, B.; Dogra, S.; Wang, W.; Carbone, D.; Fatterpekar, G.; Oermann, E.; Lui, Y. Development and practical implementation of a deep learning–based pipeline for automated pre-and postoperative glioma segmentation. Am. J. Neuroradiol. 2022, 43, 24–32. [Google Scholar] [CrossRef] [PubMed]
- Hochreuter, K.M.; Ren, J.; Nijkamp, J.; Korreman, S.S.; Lukacova, S.; Kallehauge, J.F.; Trip, A.K. The effect of editing clinical contours on deep-learning segmentation accuracy of the gross tumor volume in glioblastoma. Phys. Imaging Radiat. Oncol. 2024, 31, 100620. [Google Scholar] [CrossRef] [PubMed]
- Bertels, J.; Eelbode, T.; Berman, M.; Vandermeulen, D.; Maes, F.; Bisschops, R.; Blaschko, M.B. Optimizing the dice score and jaccard index for medical image segmentation: Theory and practice. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 92–100. [Google Scholar]
- Zhao, C.; Shi, W.; Deng, Y. A new Hausdorff distance for image matching. Pattern Recognit. Lett. 2005, 26, 581–586. [Google Scholar] [CrossRef]
- Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M.; et al. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository. J. Digit. Imaging 2013, 26, 1045–1057. [Google Scholar] [CrossRef] [PubMed]
- Pontén, F.; Jirström, K.; Uhlen, M. The Human Protein Atlas—a tool for pathology. J. Pathol. 2008, 216, 387–393. [Google Scholar] [CrossRef]
- Weinstein, J.N.; Collisson, E.A.; Mills, G.B.; Shaw, K.R.; Ozenberger, B.A.; Ellrott, K.; Shmulevich, I.; Sander, C.; Stuart, J.M. The cancer genome atlas pan-cancer analysis project. Nat. Genet. 2013, 45, 1113–1120. [Google Scholar] [CrossRef]
- Pati, S.; Baid, U.; Zenk, M.; Edwards, B.; Sheller, M.; Reina, G.A.; Foley, P.; Gruzdev, A.; Martin, J.; Albarqouni, S.; et al. The federated tumor segmentation (fets) challenge. arXiv 2021, arXiv:2105.05874. [Google Scholar] [CrossRef]
- Satushe, V.; Vyas, V.; Metkar, S.; Singh, D.P. Advanced CNN Architecture for Brain Tumor Segmentation and Classification using BraTS-GOAT 2024 Dataset. Curr. Med. Imaging 2025, e15734056344235. [Google Scholar] [CrossRef] [PubMed]
- Bonato, B.; Nanni, L.; Bertoldo, A. Advancing precision: A comprehensive review of MRI segmentation datasets from brats challenges (2012–2025). Sensors 2025, 25, 1838. [Google Scholar] [CrossRef] [PubMed]
- Ghaffari, M.; Sowmya, A.; Oliver, R. Automated brain tumor segmentation using multimodal brain scans: A survey based on models submitted to the BraTS 2012–2018 challenges. IEEE Rev. Biomed. Eng. 2019, 13, 156–168. [Google Scholar] [CrossRef]
- Binney, N.; Hyde, C.; Bossuyt, P.M. On the origin of sensitivity and specificity. Ann. Intern. Med. 2021, 174, 401–407. [Google Scholar] [CrossRef]
- Müller, D.; Soto-Rey, I.; Kramer, F. Towards a guideline for evaluation metrics in medical image segmentation. BMC Res. Notes 2022, 15, 210. [Google Scholar] [CrossRef]
- LaBella, D.; Schumacher, K.; Mix, M.; Leu, K.; McBurney-Lin, S.; Nedelec, P.; Villanueva-Meyer, J.; Shapey, J.; Vercauteren, T.; Chia, K.; et al. Brain tumor segmentation (brats) challenge 2024: Meningioma radiotherapy planning automated segmentation. arXiv 2024, arXiv:2405.18383. [Google Scholar] [CrossRef]
- Moawad, A.W.; Janas, A.; Baid, U.; Ramakrishnan, D.; Saluja, R.; Ashraf, N.; Jekel, L.; Amiruddin, R.; Adewole, M.; Albrecht, J.; et al. The Brain Tumor Segmentation-Metastases (BraTS-METS) Challenge 2023: Brain Metastasis Segmentation on Pre-treatment MRI. arXiv 2024, arXiv:2306.00838v3. [Google Scholar]
- Kuntiyellannagari, B.; Dwarakanath, B. Glioma segmentation using hybrid filter and modified African vulture optimization. Bull. Electr. Eng. Inform. 2025, 14, 1447–1455. [Google Scholar] [CrossRef]
- Akil, M.; Saouli, R.; Kachouri, R. Fully automatic brain tumor segmentation with deep learning-based selective attention using overlapping patches and multi-class weighted cross-entropy. Med. Image Anal. 2020, 63, 101692. [Google Scholar]
- Chang, Y.; Zheng, Z.; Sun, Y.; Zhao, M.; Lu, Y.; Zhang, Y. DPAFNet: A residual dual-path attention-fusion convolutional neural network for multimodal brain tumor segmentation. Biomed. Signal Process. Control. 2023, 79, 104037. [Google Scholar] [CrossRef]
- Li, X.; Jiang, Y.; Li, M.; Zhang, J.; Yin, S.; Luo, H. MSFR-Net: Multi-modality and single-modality feature recalibration network for brain tumor segmentation. Med. Phys. 2023, 50, 2249–2262. [Google Scholar] [CrossRef] [PubMed]
- Rastogi, D.; Johri, P.; Donelli, M.; Kadry, S.; Khan, A.A.; Espa, G.; Feraco, P.; Kim, J. Deep learning-integrated MRI brain tumor analysis: Feature extraction, segmentation, and Survival Prediction using Replicator and volumetric networks. Sci. Rep. 2025, 15, 1437. [Google Scholar] [CrossRef]
- de los Reyes, A.M.; Lord, V.H.; Buemi, M.E.; Gandía, D.; Déniz, L.G.; Alemán, M.N.; Suárez, C. Combined use of radiomics and artificial neural networks for the three-dimensional automatic segmentation of glioblastoma multiforme. Expert Syst. 2024, 41, e13598. [Google Scholar] [CrossRef]
- Beser-Robles, M.; Castellá-Malonda, J.; Martínez-Gironés, P.M.; Galiana-Bordera, A.; Ferrer-Lozano, J.; Ribas-Despuig, G.; Teruel-Coll, R.; Cerdá-Alberich, L.; Martí-Bonmatí, L. Deep learning automatic semantic segmentation of glioblastoma multiforme regions on multimodal magnetic resonance images. Int. J. Comput. Assist. Radiol. Surg. 2024, 19, 1743–1751. [Google Scholar] [CrossRef] [PubMed]
- Amri, Y.; Slama, A.B.; Mbarki, Z.; Selmi, R.; Trabelsi, H. Automatic Glioma Segmentation Based on Efficient U-Net Model using MRI Images. Intell.-Based Med. 2025, 11, 100216. [Google Scholar] [CrossRef]
- Pinaya, W.H.; Tudosiu, P.D.; Gray, R.; Rees, G.; Nachev, P.; Ourselin, S.; Cardoso, M.J. Unsupervised brain imaging 3D anomaly detection and segmentation with transformers. Med. Image Anal. 2022, 79, 102475. [Google Scholar] [CrossRef]
- Hu, Z.; Li, L.; Sui, A.; Wu, G.; Wang, Y.; Yu, J. An efficient R-transformer network with dual encoders for brain glioma segmentation in MR images. Biomed. Signal Process. Control. 2023, 79, 104034. [Google Scholar] [CrossRef]
- Wen, L.; Sun, H.; Liang, G.; Yu, Y. A deep ensemble learning framework for glioma segmentation and grading prediction. Sci. Rep. 2025, 15, 4448. [Google Scholar] [CrossRef]
- Chen, X.; Yang, L. Brain tumor segmentation based on CBAM-TransUNet. In Proceedings of the 1st ACM Workshop on Mobile and Wireless Sensing for Smart Healthcare, Sydney, Australia, 21 October 2022; pp. 33–38. [Google Scholar]
- Hou, Q.; Peng, Y.; Wang, Z.; Wang, J.; Jiang, J. MFD-Net: Modality Fusion Diffractive Network for Segmentation of Multimodal Brain Tumor Image. IEEE J. Biomed. Health Inform. 2023, 27, 5958–5969. [Google Scholar] [CrossRef] [PubMed]
- Liu, C.; Kiryu, H. 3D medical axial transformer: A lightweight transformer model for 3D brain tumor segmentation. In Proceedings of the Medical Imaging with Deep Learning, Paris, France, 3–5 July 2024; pp. 799–813. [Google Scholar]
- Deng, G.; Zou, K.; Ren, K.; Wang, M.; Yuan, X.; Ying, S.; Fu, H. SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Vancouver, BC, Canada, 8–12 October 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 368–377. [Google Scholar]
Year | Total Cases | Train | Validation | Test |
---|---|---|---|---|
2012 | 50 | 35 | N/A | 15 |
2013 | 60 | 35 | N/A | 25 |
2014 | 238 | 200 | N/A | 38 |
2015 | 253 | 200 | N/A | 53 |
2016 | 391 | 200 | N/A | 191 |
2017 | 477 | 285 | 46 | 146 |
2018 | 542 | 285 | 66 | 191 |
2019 | 626 | 335 | 125 | 166 |
2020 | 660 | 369 | 125 | 166 |
2021 | 2040 | 1251 | 219 | 570 |
2022 | 2000 | 1251 | 219 | 530 |
2023 | 2040 | 1251 | 219 | 570 |
2024 | 2200 | 1540 | 220 | 440 |
Id | Algorithm | Dice(WT)↑ | Dice(ET)↑ | Dice(TC)↑ | HD95↓ | #Params(M)↓ | Dataset |
---|---|---|---|---|---|---|---|
3 | Wang et al. [42] | 90.50 | 78.59 | 83.78 | 16.50 | 20.00 * | Brats17 |
4 | Att. Unet [55] | 87.24 | 74.51 | 76.85 | 7.61 | 34.90 | Brats18 |
6 | Myronenko [53] | 91.00 | 82.33 | 86.68 | 5.10 | 7.70 * | Brats18 |
8 | DenseMultiOCM [136] | 86.00 | 73.20 | 73.33 | 8.75 | 35.00 * | Brats18 |
39 | DPAFNet [137] | 90.50 | 79.50 | 83.90 | 5.05 | 28.00 * | Brats18 |
41 | MSFR-Net [138] | 90.90 | 80.70 | 85.80 | 4.82 | 37.85 | Brats18 |
1 | 3D U-Net [39] | 85.36 | 72.21 | 71.05 | 12.94 | 19.00 | Brats19 |
2 | V-Net [40] | 85.18 | 72.43 | 73.46 | 8.92 | 37.70 | Brats19 |
7 | TuNet [41] | 90.34 | 78.42 | 81.12 | 4.77 | 30.50 * | Brats19 |
50 | Tong et al. [50] | 88.50 | 75.10 | 77.60 | | 61.84 | Brats19 |
86 | AR-CA [120] | 83.00 | 72.00 | 84.00 | | 15.00 * | Brats19 |
5 | nnUnet [45] | 90.70 | 81.40 | 84.80 | 5.95 | 41.20 | Brats20 |
9 | RFNet [46] | 86.98 | 61.47 | 78.23 | | 35.00 * | Brats20 |
12 | Modality-pairing [48] | 92.40 | 86.30 | 89.80 | 4.92 | 42.00 * | Brats20 |
17 | MM-BiFPN [51] | 83.58 | 77.95 | 81.47 | | 21.41 | Brats20 |
20 | MAF-Net [47] | 88.00 | 41.80 | 67.90 | | 75.00 * | Brats20 |
26 | Dres-Unet [43] | 86.60 | 80.04 | 83.57 | | 30.47 | Brats20 |
42 | MM-Unet [57] | 85.00 | 76.20 | 76.50 | 8.47 | 111.40 | Brats20 |
43 | Liu et al. [49] | 89.50 | 77.45 | 81.78 | 6.40 | 64.00 * | Brats20 |
61 | Sahoo et al. [54] | 90.36 | 85.75 | | | 17.00 * | Brats20 |
62 | GAM-Net [52] | 89.91 | 75.80 | 84.02 | 5.30 | 88.00 * | Brats20 |
78 | Kuntiyellannagari et al. [135] | 97.00 | 91.00 | 96.00 | | 147.00 * | Brats20 |
84 | Rastogi et al. [139] | 87.56 | 86.46 | 86.66 | | 40.00 * | Brats20 |
87 | GBManalizer [140] | 89.33 | 80.69 | 83.72 | 6.84 | 0.004 | Brats20 |
88 | RH-GlioSeg-nnU-Net [118] | 88.00 | 78.00 | 72.00 | | 41.20 * | Brats20 |
35 | MAAB [56] | 89.07 | 78.02 | 80.73 | 19.59 | 24.00 * | Brats21 |
64 | Dense Unet+ [44] | 95.80 | 93.70 | 95.50 | 6.92 | | Brats23 |
75 | Beser-Robles et al. [141] | 71.00 | 81.00 | 79.00 | 19.00 | | Brats23 |
79 | Enhanced Unet [142] | 91.87 | | | 4.12 | 46.00 * | Brats23 |
Id | Algorithm | Dice(WT)↑ | Dice(ET)↑ | Dice(TC)↑ | HD95↓ | #Params(M)↓ | Dataset |
---|---|---|---|---|---|---|---|
28 | Pinaya et al. [143] | 61.70 | | | | 48.91 * | Brats18 |
77 | MMCFormer [65] | 85.00 | 64.70 | 79.20 | 8.57 | | Brats18 |
27 | BTSwin-Unet [59] | 90.28 | 78.38 | 81.73 | 5.18 | 35.60 | Brats19 |
29 | EMSViT [61] | 90.28 | 79.24 | 82.23 | 5.49 | 47.50 * | Brats19 |
56 | Swin-Unet [60] | 86.89 | 75.62 | 76.63 | 7.89 | 27.10 | Brats19 |
45 | NestedFormer [58] | 92.20 | 80.00 | 86.40 | 5.05 | 10.48 | Brats20 |
44 | VT-Unet [62] | 92.24 | 86.31 | 89.53 | 3.51 | 87.00 | Brats21 |
58 | HRSTNet-4 [63] | 91.90 | 82.92 | 87.62 | 8.94 | 266.33 | Brats23 |
72 | CR-Swin2-VT [64] | 91.38 | 81.71 | 85.40 | 9.97 | 90.00 * | Brats23 |
Id | Algorithm | Dice(WT)↑ | Dice(ET)↑ | Dice(TC)↑ | HD95↓ | #Params(M)↓ | Dataset |
---|---|---|---|---|---|---|---|
40 | ERTN [144] | 83.20 | 73.59 | 77.93 | 5.13 | 95.00 * | Brats17 |
16 | Zhou et al. [72] | 82.90 | 59.10 | 74.90 | 7.10 | 37.00 * | Brats18 |
46 | mmFormer [86] | 89.60 | 85.80 | 77.60 | 7.85 | 106.00 | Brats18 |
47 | LGMSU-Net [91] | 87.35 | 69.02 | | | | Brats18 |
53 | Tongxue Zhou [89] | 86.50 | 87.00 | 79.40 | 3.60 | 36.00 | Brats18 |
10 | TransBTS [68] | 88.42 | 77.58 | 80.96 | 7.17 | 30.60 | Brats19 |
14 | SegTran [70] | 89.50 | 74.00 | 81.70 | | 93.10 | Brats19 |
15 | Fang et al. [71] | 92.67 | 83.54 | 89.47 | 1.95 | 15.37 | Brats19 |
19 | Transition-Net [75] | 91.25 | 74.85 | 84.46 | 14.15 | 44.00 * | Brats19 |
22 | TransConver [66] | 90.19 | 78.40 | 82.57 | 4.74 | 9.00 | Brats19 |
25 | TransBTSv2 [68] | 90.42 | 80.24 | 84.87 | 4.87 | 15.30 | Brats19 |
55 | Gao et al. [94] | 90.10 | 80.10 | 84.00 | 4.73 | 5.90 | Brats19 |
57 | GMetaNet [95] | 90.20 | 78.40 | 82.50 | 4.84 | 6.10 | Brats19 |
73 | TransXAI [96] | 88.20 | 74.50 | 78.20 | 6.19 | 87.00 * | Brats19 |
13 | Medical Transformer [69] | 87.33 | 58.82 | 69.69 | | 2.41 | Brats20 |
18 | UneTR [73] | 89.90 | 78.80 | 84.20 | 5.25 | 92.58 | Brats20 |
23 | TranSiam [67] | 89.34 | | | 5.65 | 7.98 | Brats20 |
30 | Nalawade et al. [79] | 87.40 | 72.10 | 77.30 | 27.09 | 64.00 * | Brats20 |
38 | RMTF-Net [85] | 81.80 | | | | 59.00 * | Brats20 |
51 | AST-Net [87] | 90.40 | 77.80 | 84.20 | 14.23 | 10.50 | Brats20 |
52 | Huang et al. [88] | 90.30 | 70.80 | 81.50 | 15.99 | 144.86 | Brats20 |
67 | TransDoubleU-Net [90] | 92.87 | 79.16 | 86.51 | 10.77 | 93.00 * | Brats20 |
68 | GSG U-net [145] | 91.28 | 85.88 | 85.82 | 5.39 | 60.00 * | Brats20 |
76 | SwinBTS [2] | 95.06 | 85.36 | 83.30 | 10.03 | 64.00 * | Brats20 |
80 | SGS-Net [101] | 89.79 | 76.12 | 76.29 | | 85.00 * | Brats20 |
82 | CFNet [74] | 91.60 | 90.29 | 90.46 | 1.51 | 49.18 | Brats20 |
21 | Unet Former [76] | 93.22 | 88.80 | 92.10 | 8.49 | 58.96 | Brats21 |
24 | SegTransVAE [77] | 90.52 | 85.48 | 92.60 | 4.10 | 44.70 | Brats21 |
31 | Shi et al. [80] | 89.15 | 81.94 | 73.81 | 13.09 | 143.00 * | Brats21 |
32 | BiTr-Unet [81] | 90.97 | 81.87 | 84.34 | 13.01 | 43.50 | Brats21 |
33 | Dobko et al. [82] | 86.98 | 84.96 | 92.56 | 10.05 | 94.00 * | Brats21 |
34 | Futrega et al. [83] | 91.63 | | | | 43.00 * | Brats21 |
36 | Swin UneTR [3] | 92.60 | 85.80 | 88.50 | 5.21 | 61.98 | Brats21 |
37 | COTRNet [84] | 89.34 | 77.60 | 80.21 | 17.24 | 46.51 | Brats21 |
48 | CBAM-TransUNet [146] | 93.08 | 87.76 | 91.49 | 4.01 | 96.00 | Brats21 |
49 | 3D PSwinBTS [92] | 92.64 | 82.62 | 86.72 | 10.78 | 20.40 | Brats21 |
54 | AABTS-Net [93] | 91.10 | 77.70 | 83.80 | 4.42 | 75.00 * | Brats21 |
60 | Med-SA [109] | 88.70 | | | 9.50 | 13.00 | Brats22 |
59 | SAM [104] | 74.60 | | | 27.51 | 636.00 | Brats23 |
63 | SAM-Med2D [106] | 82.90 | | | 16.20 | 636.00 | Brats23 |
65 | MFD-Net [147] | 92.70 | 85.40 | 88.70 | 7.76 | 105.00 * | Brats23 |
66 | SAMed [105] | 77.30 | | | 19.07 | 18.81 | Brats23 |
69 | MAT [148] | 93.21 | 85.05 | 91.91 | 4.77 | 11.70 | Brats23 |
70 | MedSAM [110] | 83.60 | | | 14.90 | 636.00 | Brats23 |
71 | SAM-U [149] | 81.00 | | | 17.26 | 0.00 | Brats23 |
74 | Diana-Albelda et al. [38] | 61.90 | | | 32.00 | 12.00 | Brats23 |
81 | 3D U-TFA [102] | 95.06 | 85.36 | 83.30 | 10.03 | 11.31 * | Brats23 |
83 | GBT-SAM [111] | 93.54 | | | | 6.40 | Brats23 |
85 | Arouse-Net [103] | 93.50 | 89.30 | 89.50 | | 45.00 * | Brats23 |
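For reference, the two metrics reported throughout the tables above can be computed from binary masks as in the following minimal NumPy/SciPy sketch. It follows the definitions given in Section 4.2 (Dice as voxel overlap, HD95 as the 95th percentile of pooled boundary-to-boundary nearest distances) and assumes isotropic 1 mm voxels; it is an illustrative variant, not the official BraTS evaluation tooling.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def dice(a, b):
    """Dice = 2|A∩B| / (|A| + |B|) for boolean voxel masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def _surface(mask):
    """Boundary voxels: the foreground minus its one-voxel erosion."""
    m = mask.astype(bool)
    return np.argwhere(m & ~binary_erosion(m))

def hd95(a, b):
    """95th percentile of the pooled directed boundary distances (in voxels)."""
    pa, pb = _surface(a), _surface(b)
    d_ab = cKDTree(pb).query(pa)[0]  # each boundary point of A to nearest of B
    d_ba = cKDTree(pa).query(pb)[0]  # and vice versa
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```

Note that some published results instead take the maximum of the two directed 95th-percentile distances rather than pooling them, which is one reason HD95 values are not always directly comparable across rows.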
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Diana-Albelda, C.; García-Martín, Á.; Bescos, J. A Review on Deep Learning Methods for Glioma Segmentation, Limitations, and Future Perspectives. J. Imaging 2025, 11, 269. https://doi.org/10.3390/jimaging11080269