Review

Advances in Artificial Intelligence for Glioblastoma Radiotherapy Planning and Treatment

1 School of Engineering Medicine, Texas A&M Institute of Biosciences and Technology, Houston, TX 77030, USA
2 Center for Genomics and Precision Medicine, Texas A&M Institute of Biosciences and Technology, Houston, TX 77030, USA
3 ICON Clinical Research, 8307 Gault Lane, San Antonio, TX 78209, USA
4 Vivian L. Smith Department of Neurosurgery, UTHealth Houston, Houston, TX 77030, USA
5 Radiation Oncology Department, Centre Léon Bérard, 69373 Lyon, France
6 CentraleSupélec, University of Paris-Saclay, 91190 Gif-sur-Yvette, France
7 TheraPanacea, 7 bis Boulevard Bourdon, 75004 Paris, France
* Author to whom correspondence should be addressed.
Cancers 2025, 17(23), 3762; https://doi.org/10.3390/cancers17233762
Submission received: 13 October 2025 / Revised: 15 November 2025 / Accepted: 20 November 2025 / Published: 25 November 2025
(This article belongs to the Special Issue Advances in Diagnostics and Treatments for Glioblastoma)

Simple Summary

Artificial intelligence holds the promise of enhanced glioblastoma radiotherapy by improving segmentation accuracy, incorporating biologically informed mathematical modeling, and integrating radiogenomic data for personalized treatment planning. Deep learning-based auto segmentation can achieve high accuracy with reduced interobserver variability, while tumor growth modeling can enable biologically guided, patient-specific dose mapping. Radiogenomic approaches combine imaging and molecular data to noninvasively predict biomarker status and support individualized therapy. However, clinical translation remains limited by the need for large multi-institutional datasets, interpretability, and standardized validation protocols. Emerging advances, such as adaptive radiotherapy, multimodal data incorporation, and foundation models, offer real-time adaptability and further personalization in glioblastoma treatment.

Abstract

Glioblastoma is an aggressive central nervous system tumor characterized by diffuse infiltration. Despite substantial advances in oncology, survival outcomes have shown little improvement over the past three decades. Radiotherapy remains a cornerstone of treatment; however, it faces several challenges, including considerable inter-observer variability in clinical target volume delineation, dose constraints associated with adjacent organs at risk, and the persistently poor prognosis of affected patients. Recent advances in artificial intelligence, particularly deep learning, have shown promise in automating radiation therapy mapping to improve consistency, accuracy, and efficiency. This narrative review explores current auto segmentation frameworks, dose mapping, and biologically informed radiotherapy planning guided by multimodal imaging and mathematical modeling. Studies have demonstrated reproducible tumor segmentations with Dice similarity coefficients (DSCs) exceeding 0.90, planning times reduced to minutes, and emerging predictive capabilities for treatment response. Radiogenomic integration has enabled imaging-based classification of critical biomarkers with high accuracy, reinforcing the potential of deep learning models in personalized radiotherapy. Despite these innovations, deployment into clinical practice remains limited, primarily due to insufficient external validation and single-institution training datasets. This review emphasizes the importance of large, annotated imaging datasets, multi-institutional collaboration, and biologically explainable modeling to successfully translate deep learning into glioblastoma radiation planning and longitudinal monitoring.

1. Essentials

  • Deep learning-based auto segmentation models achieve high accuracy and substantially reduced inter-observer variability in glioblastoma radiotherapy planning
  • Biologically informed mathematical modeling integrates tumor growth dynamics with imaging, enabling personalized radiotherapy dose mapping strategies
  • Radiogenomic models integrating imaging and molecular data predict the status of key biomarkers, supporting non-invasive tumor subtyping and personalized therapy
  • The need for multi-institutional datasets, model interpretability, and standardized validation protocols remains a critical barrier to clinical adoption of artificial intelligence-guided radiotherapy
  • Recent advances, including adaptive radiotherapy, multimodal integration, and foundation models, enable personalization and real-time adaptability in glioblastoma radiotherapy

2. Introduction

Glioblastoma is a highly invasive brain tumor whose prognosis has remained largely static over the past three decades, with a median survival of 14 months despite aggressive treatment; its lethality is further underscored by 5-year survival rates near 5% [1]. Standard first-line treatment for high-grade tumors entails maximal safe resection followed by concurrent temozolomide and radiation therapy (RT) for 3–6 weeks, with temozolomide continued for an additional 6 months. Tumor treating fields (TTF) therapy, an emerging adjunct modality that delivers alternating electric fields to disrupt mitosis, has shown modest survival benefits in select patients [2,3]. Treatment challenges include tumor heterogeneity and diffuse infiltration, difficulty in defining precise tumor margins, and resistance to standard therapies [1].
The current clinical workflow for glioblastoma management typically begins with a neurological exam and imaging such as MRI or PET/CT scans for initial diagnosis [4]. This is followed by surgical biopsy or maximal safe resection of the tumor and subsequent histopathological and molecular characterization of the tumor and surrounding tissue [5]. Aiming to maximize tumor resection volume while preserving healthy brain function, clinicians utilize tumor visualization and cortical mapping methods during surgery, such as ultrasound, fluorescent dyes, and intraoperative neuroanatomical navigation systems [6]. According to patient age, Karnofsky performance status, and tumor classification, clinicians develop a treatment plan utilizing a combination of temozolomide chemotherapy, palliative care such as corticosteroids, and RT, where RT planning plays a major role.
The current standard of care for RT in glioblastoma management follows European Organisation for Research and Treatment of Cancer and Radiation Therapy Oncology Group guidelines to generate gross tumor volume (GTV), clinical target volume (CTV), and planning target volume (PTV) RT dose maps. European standards require delineation of the T1-weighted contrast-enhancing lesion plus a 2 cm margin, while the Radiation Therapy Oncology Group includes FLAIR or T2-weighted abnormalities with a 2 cm margin [7]. An example of a standardized, manually segmented RT dose map for a patient with glioblastoma is shown in Figure 1.
Radiotherapy planning can be conceptually divided into three principal stages: treatment preparation, treatment delivery, and, when required, treatment adaptation.
  • Treatment preparation encompasses the delineation of target volumes and organs at risk, the adoption of dose prescriptions, and the determination of treatment plan through simulation.
  • Treatment delivery involves the fractionation of the simulated plan into multiple sessions and the systematic administration of radiation according to the established plan.
  • Treatment adaptation entails the continuous monitoring of treatment execution and the modification of the plan when anatomical or physiological changes compromise the ability of the initial plan to satisfy predefined dosimetric and clinical constraints.
Because glioblastomas grow aggressively and often infiltrate parenchyma near critical brain structures, accurate tumor delineation is essential for effective RT. Unfortunately, conventional CTV segmentation remains time-consuming and prone to user variability. Artificial intelligence (AI), particularly deep learning (DL) methods such as convolutional neural networks (CNNs), offers an opportunity to improve the accuracy, efficiency, and reproducibility of this process [8]. In simple terms, DL models mimic how the human brain recognizes patterns by using artificial neural networks composed of interconnected nodes analogous to neurons, while convolutional neural networks specifically learn visual features from medical images to identify and outline tumor regions more consistently.
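The core operation underlying such networks — sliding a small filter across an image to produce a feature map — can be sketched in a few lines of NumPy (a toy illustration; a real CNN learns its kernels from data rather than using the hand-set edge filter shown here):

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Valid' 2D cross-correlation, the basic operation of a CNN layer.
    Each output pixel is a weighted sum of a local image patch."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-gradient kernel responds at the boundary of a bright region,
# loosely analogous to the low-level edge features a segmentation CNN learns.
image = np.zeros((5, 6))
image[:, 3:] = 1.0                     # bright region on the right half
edge_kernel = np.array([[-1.0, 1.0]])  # hand-set gradient filter (illustrative)
fmap = convolve2d(image, edge_kernel)  # activates only at the edge column
```

In a trained network, many such kernels are stacked in layers and their weights are learned by backpropagation, so the features become tuned to tumor-relevant image structure rather than generic edges.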
Numerous recent studies have attempted to optimize and automate the radiotherapy planning process [9]. The aim is to enable patient-specific segmentation and dose escalation by leveraging MRI/CT imaging and eventually uncover further dose principles through boosting and local dose control on the basis of radiomics and genetic biomarkers [10].
This review functions as a survey of research publications, exploring recent methodological advances, their progress over the last ten years, current implementations, and future directions. While several commercial products for tumor auto delineation are presently ready for clinical deployment, a majority of the approaches discussed remain in early adoption stages. For predictive modeling in particular, regulatory certification will prove to be a significant challenge; early adoption hinges upon extensive clinical evidence with cross-institutional validation. Where possible, the authors highlight whether each study was retrospective or prospective, the size of training/testing data, and testing cohorts. In parallel, we discuss emerging trends for AI in glioblastoma RT planning, including the rise of large-scale foundation models, which promise to reduce inter-observer variability and streamline clinical workflows.
During the last decade, DL models have emerged as a viable solution for the purposes of glioblastoma diagnosis, patient risk stratification, treatment dose escalation, and tumor and organs at risk auto segmentation. Deep learning architecture offers the ability to extract local and global trends among robust datasets without the necessity for quantitative engineering features, which are required for many supervised machine learning (ML) models, such as traditional classifiers or regressors. This review aims to critically evaluate the applications of deep learning in the development of innovative and personalized radiotherapy treatment strategies for glioblastoma, with a particular emphasis on their potential to enhance precision, adaptability, and clinical outcomes.

3. Current Challenges in Glioblastoma Management and Treatment

While CNN architectures have dominated the field of RT planning and auto segmentation, advancing these algorithms for broad, clinically meaningful use requires addressing limitations in this workflow. One of the main limitations arises from the diffuse infiltration of glioblastoma, which leads to ambiguous tumor margins that make it difficult to distinguish between diseased and healthy tissue [11]. Standard RT planning relies heavily on expert manual or semi-automated segmentation of tumor and edema margins using MRI, a process that is subjective due to the heterogeneity of glioblastoma [12]. This subjectivity is worsened by inconsistent imaging techniques between institutions and interpretational differences among clinicians, leading to significant variability in tumor delineation and RT field definition. This complicates accurate tumor segmentation and classification, hampering both clinical decision-making and the development of reliable AI models. As a result, the quality and consistency of the ground truth used for training DL models are limited, impacting model generalizability and performance.
In an effort to create consensus contouring guidelines for glioblastoma, a panel of 10 academic radiation oncologists specializing in brain tumor treatment contoured CTVs on four glioblastoma cases independently before convening to review their contours [13]. Variations across these experts ranged from the definition of T1C and T2-FLAIR signals (with kappa statistics of just 0.69 and 0.74, respectively) to expansions based on delineation of barriers to spread and preferred anatomic pathways of spread. Similarly, in re-irradiation settings, it is extremely challenging to define the extent of disease when tumor recurrences arise in a milieu of radiation necrosis (or treatment effect). Despite careful clinical annotation, model training can easily be distorted by outliers in datasets [14], compromising target coverage, normal tissue sparing, or both. A closely related challenge to the standardization of RT treatment volumes is variability in imaging techniques. Not only do hardware (vendor, magnetic field strength and gradients, receiver coil geometry, etc.) and software (sequence acquisition and reconstruction algorithms) matter, but deformation correction, intensity normalization, and validation of the robustness, saliency, and sensitivity of models generated from such imaging datasets also require close attention and careful benchmarking.
Furthermore, conventional RT approaches are largely static, utilizing only pre-treatment imaging as the basis for planning and failing to account for the dynamic changes that can occur during the course of and in response to therapy. Tumor volumes, peritumoral edema, and treatment-induced effects such as necrosis or pseudoprogression can alter the landscape of the tumor significantly, yet standard RT protocols are not designed to adapt to these temporal variations [15]. This can result in either under-treatment, where areas of true tumor progression fall outside of the planned target volume, or over-treatment, where unnecessary radiation is delivered to normal tissue. This can ultimately lead to post-treatment cognitive dysfunction in 30–50% of patients 6 months after RT [16]. Lastly, the growing availability of molecular data, such as transcriptomics and epigenomics, has not yet been fully incorporated into clinical workflows for RT planning. This data would provide valuable insights into tumor heterogeneity and treatment response, but the practical constraints of integrating large quantities of genomics data into RT planning still represent a critical gap in the field.

4. Tumor Delineation and Auto Segmentation

CNNs represent the most popular methods for auto segmentation; simplified inputs and outputs for such models are illustrated in Figure 1. This architecture and its variants have largely been considered state-of-the-art in computer vision and medical image processing for the last decade [17]. A variety of optimized CNN models (e.g., DeepMedic, ResNet, Seg-Net) are publicly available and serve as the initial building blocks in many of the studies outlined in this paper [18,19]. The most popular CNN architecture for auto segmentation is U-Net, a model that leverages an encoder–decoder framework for both downsampling and upsampling of data to preserve local and global image characteristics, thereby reducing noise while enhancing pertinent structural features [17]. Several commercial solutions for organs at risk segmentation also rely on similar technologies [20].
The performance of such models is standardized and compared primarily using the Dice similarity coefficient (DSC), which calculates the spatial overlap between two sets, such as segmentations, normalized by the sum of elements comprising both sets, such as the total pixels occupied by the segmentations. A score of 1 designates perfect overlap between the predicted and ground truth volumes, which are typically manual segmentations or clinically approved auto segmentations, and 0 indicates no overlap [21].
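As a concrete illustration, the DSC of two binary segmentation masks — 2|A ∩ B| / (|A| + |B|) — can be computed in a few lines of NumPy (a minimal sketch; the function name and toy masks are illustrative, not drawn from any cited study):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 2D "segmentations": a predicted mask partially overlapping ground truth
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True   # 4 x 4 ground truth region (16 pixels)
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True    # 4 x 4 prediction, shifted; 3 x 3 overlap (9 pixels)
print(dice_similarity(pred, truth))  # 2 * 9 / (16 + 16) = 0.5625
```

For the toy masks above, the two 16-pixel squares share a 9-pixel overlap, giving a DSC of 0.5625; identical masks would score 1.0.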
The most common challenges for training and deploying CNNs include the lack of large, annotated datasets of high quality and reproducibility. This has been addressed in recent years with large consortia of imaging data, such as the annual Medical Image Computing and Computer-Assisted Intervention Society (MICCAI) Brain Tumor Segmentation (BRATS) challenge. This international dataset consists of shared multimodal MRIs, annotations, clinical outcomes, and expert-generated segmentations for three subregions: complete tumor, core tumor, and enhancing tumor. The BRATS challenge has played a critical role in advancing glioblastoma auto segmentation models over the past decade [22]. Early iterations of the MICCAI BRATS challenge in 2012 and 2013 involved 20 algorithms trained on 65 multi-contrast MRI scans from both low- and high-grade glioma patients, with DSC performance ranging from 0.74 to 0.85 [22,23]. At that time, deep learning methods had not yet been established as state-of-the-art technology for segmentation and were not in use, as generative probabilistic methods were heavily favored [22]. As of 2018, clinicians still outperformed auto segmentation models, largely due to the limited size of annotated datasets available for training [24]. Despite this, CNNs were soon recognized by BRATS organizers as the leading architecture for segmentation and treatment planning in both 2017 and 2018 [25,26]. Since then, BRATS has continued to advance state-of-the-art computer vision for auto segmentation of gliomas. The 2021 benchmark pooled preoperative MRI data stacks from 2040 patients across multiple institutions. The organizers also introduced the methylated-DNA-protein-cysteine methyltransferase methylation status challenge, inviting participants to train and validate radiogenomic predictions across a diverse clinical dataset [27].
In 2022, the winning ensemble primarily leveraged existing frameworks such as DeepSeg, DeepSCAN, and nnU-Net, a self-configuring method that can reportedly be trained and deployed on a variety of segmentation tasks [28]. For whole tumor, enhancing tumor, and tumor core, the DSCs were 0.93, 0.88, and 0.88, respectively, on the BRATS testing dataset [29]. A similar ensemble was developed in 2023, again using the popular nnU-Net framework; notably, the model also implemented data augmentation by generating synthetic MRI training data using generative adversarial networks. This approach was found to mitigate class imbalances through the addition of numerous unique tumor locations and compositions, allowing for high generalizability and DSCs of 0.90, 0.85, and 0.87 for whole tumor, enhancing tumor, and tumor core, respectively [30]. Most recently, the BRATS 2024 challenge placed increasing emphasis on post-treatment imaging, including annotated resection cavity, non-enhancing tumor core, and non-enhancing T2/FLAIR hyperintensity. For this task, Ferreira, Moradi, and colleagues developed the top-performing model, once again using their nnU-Net synthetic data ensemble [31].
In recent years, DL models have continued to advance, offering improved precision, speed, and reproducibility in tumor-delineating auto segmentation. Numerous studies have since evaluated various CNN-based architectures (Table 1), including cascaded 3D Fully CNNs [32], hybrid ensemble models such as Incremental XCNet [33], and artificial neural networks validated on large institutional and public datasets [17]. Tools like AutoRANO demonstrated near-perfect intraclass correlation for volumetric tumor metrics using U-Net-based architectures [34]. One retrospective study trained a CNN model using diffusion tensor imaging to predict microscopic tumor infiltration margins, aligning with standardized guidelines for CTV delineation [7]. A recent multi-reader study found that deep neural networks reduced inter-reader variability and segmentation time in stereotactic radiosurgery planning compared to manual expert contours [21]. Architectural innovations, including attention-enhanced CNNs [35], densely connected micro-block Fully CNNs [36], holistically nested neural networks [37], and multiple U-Net variants [38], have achieved high DSCs while reducing processing time to mere seconds in some cases. Earlier studies using BRATS datasets also explored strategies like kernel optimization [39], cascaded inputs [40], multimodal MRI integration [41], and dual-patch batch normalization techniques [42], all contributing to improvements in segmentation accuracy and model efficiency. A recent top performer called PKMI-Net was developed to automate segmentation of GTV, high-dose CTV, and low-dose CTV using non-contrast CT, multisequence MRI, and medical record inputs. The model was trained on 148 patients across four institutions and tested on 11 cases with histologically suspected glioblastomas. PKMI-Net achieved DSCs of 0.94, 0.95, and 0.92 for GTV, high-dose CTV, and low-dose CTV, respectively, resulting in an overall DSC of 0.95. 
All outputs were deemed clinically acceptable without requiring revision. The architecture used a two-stage U-Net framework where the initial GTV segmentation informed subsequent high-dose CTV and low-dose CTV predictions, improving contextual accuracy across planning volumes [43].
In conclusion, DL continues to show strong potential for automating segmentation, margin detection, and RT planning. Advances in autoencoders and multi-layer CNNs have largely supplanted fully connected architectures. Many recent models are capable of automatically segmenting tumors with processing times as short as 20 s per patient [9]. Further, novel architectures, particularly diffusion models, are being investigated and show promise by coupling segmentation maps with uncertainty quantification. Nonetheless, high-quality labeling and collaborative data sharing remain essential to advance these technologies toward clinical implementation [9].

5. Personalized and Biologically Informed Tumor-Progression Radiotherapy

In conventional glioblastoma RT, the PTV is generally defined by applying an isotropic expansion to the CTV. This expansion is intended to account for potential errors in target delineation, setup uncertainties, and patient motion, thereby ensuring adequate dose coverage of the tumor. Such CTV approximation can result in centimeter-level errors in PTV definition, limiting treatment accuracy and increasing radiation exposure to healthy tissues [44]. Because tumor boundaries are difficult to define, clinicians often apply binary dose escalation protocols, maximizing dose to the core while minimizing dose to surrounding areas, even though microscopic infiltration frequently extends beyond visible margins [45].
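The isotropic CTV-to-PTV expansion described above amounts to a uniform morphological dilation of the CTV mask. A simplified NumPy sketch (margins in voxel units standing in for the millimeter margins used clinically; the function name is illustrative):

```python
import numpy as np

def isotropic_expand(mask: np.ndarray, margin_voxels: int) -> np.ndarray:
    """Expand a binary mask outward by a uniform margin (city-block distance),
    a simplified stand-in for the isotropic CTV-to-PTV expansion.
    Assumes the mask does not touch the array border (np.roll wraps around)."""
    out = mask.astype(bool)
    for _ in range(margin_voxels):
        grown = out.copy()
        for axis in range(out.ndim):
            for shift in (-1, 1):
                grown |= np.roll(out, shift, axis=axis)
        out = grown
    return out

# A single-voxel "CTV" expanded by a 2-voxel margin becomes a diamond of
# 13 voxels (all points within city-block distance 2 of the seed).
ctv = np.zeros((9, 9), dtype=bool)
ctv[4, 4] = True
ptv = isotropic_expand(ctv, margin_voxels=2)
```

Clinical systems apply the same idea in 3D with anatomy-aware refinements (e.g., trimming the expansion at anatomical barriers to spread), which a purely geometric dilation does not capture.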
A large-scale modeling study used data from 124 glioblastoma patients in The Cancer Genome Atlas alongside 397 from the UCSF Glioma Dataset to identify relationships among tumor proliferation, infiltration, and molecular pathway activation. A patient-specific growth model was created using contrast-enhanced T1 and T2/FLAIR MRI inputs, outputting tumor growth predictions within 4–7 min. The model was validated on 30 patients by comparing predicted recurrence volumes with those defined by standard radiation oncology practices. Findings reinforced that many recurrences occur beyond standard CTV margins, emphasizing the need for biologically grounded treatment planning. Deployable models must be time-efficient, highly validated, and compatible with clinical computational infrastructure to support practical integration [45]. As an extension of this concept, imaging-derived tumor sub-regions or spatial habitats can be computationally extracted from the relative intensities of pixels within multi-parametric datasets (e.g., T1, T1C, T2, and FLAIR) and correlated with genomic and molecular features [46]. In principle, these imaging correlates derived from larger MR sequence datasets could populate an assortment of signatures that map to each of the hallmarks of cancer [47], such that personalized interventions can be tailored to specific pathways and/or cellular processes. From an RT standpoint, however, the identification of tumor sub-regions harboring inherently aggressive phenotypes (biological target volumes) may enable a degree of rational personalization of RT treatment volumes, expansions, and doses that improves upon what is currently available.
Tumor infiltration beyond visible imaging margins could be more effectively accounted for through the integration of tumor growth models, which not only capture the spatio-temporal dynamics of glioblastoma progression but also enhance the biological interpretability of treatment planning. Biologically informed, personalized RT frameworks can derive these insights by integrating patient-specific tumor dynamics with multimodal imaging. A recent study outlined the use of a Bayesian ML model to infer tumor cell density using a reaction-diffusion model based on the Fisher-Kolmogorov equation [48]. This model incorporated preoperative MRI and FET-PET imaging to estimate microscopic infiltration beyond MRI-visible regions. In a clinical population study, the personalized RT plans derived from these inferred tumor densities showed tumor coverage comparable to standard RT while sparing more healthy tissue. Furthermore, the regions of high tumor cell density aligned with known radioresistant areas, suggesting that such biologically guided maps could inform dose escalation strategies. This approach demonstrates the feasibility of individualized treatment design with clinically available imaging modalities; however, external validity and generalizability are questionable given a small testing cohort of just 8 patients [48]. Although such a model compares favorably with DL approaches in terms of explainability and biological interpretability, tradeoffs between interpretability and performance often limit its utility relative to the winning models in the annual BRATS challenge [24].
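The Fisher-Kolmogorov model referenced above describes normalized tumor cell density u via ∂u/∂t = D∇²u + ρu(1 − u), where D is the diffusion (infiltration) coefficient and ρ the proliferation rate. A minimal one-dimensional explicit finite-difference sketch (parameter values are illustrative, not fitted to patient data):

```python
import numpy as np

def fisher_kolmogorov_step(u, D, rho, dx, dt):
    """One explicit finite-difference step of du/dt = D*d2u/dx2 + rho*u*(1-u).
    u is normalized tumor cell density in [0, 1]; zero-flux boundaries."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2        # reflecting (Neumann) boundary
    lap[-1] = (u[-2] - u[-1]) / dx**2
    return u + dt * (D * lap + rho * u * (1.0 - u))

# Seed a small tumor "core" and simulate diffusion plus logistic proliferation.
x = np.linspace(0.0, 10.0, 201)          # spatial grid (cm, illustrative)
u = np.exp(-((x - 5.0) ** 2) / 0.1)      # initial density bump
initial_mass = u.sum()
for _ in range(500):                     # dt kept below the stability limit dx^2/(2D)
    u = fisher_kolmogorov_step(u, D=0.01, rho=0.1, dx=x[1] - x[0], dt=0.01)
```

Calibrating D and ρ per patient from serial imaging is what makes such models "personalized"; the inferred low-density infiltration front, invisible on MRI, is the quantity proposed to guide CTV margins and dose escalation.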
PET imaging is less emphasized in our review, primarily given the fact that most of the cited models were trained and validated using CT- and MRI-based modalities. This likely directly reflects the data included within the annual BRATS challenge, which has never included PET imaging data. However, the relative potential impact for such spatially localizing, metabolically informative imaging should not be understated, as it is vitally important for management of glioblastoma [49]. Thus, if PET is to be effectively integrated into existing DL frameworks, it is crucial to create large, shared consortia in addition to corresponding patient CT and MRI.

6. Modification of Treatment, Patient Response Prediction, and Triage During Therapy

Despite advances in RT techniques, clinical outcomes for many patients with glioblastoma remain poor, underscoring the limitations of current treatment strategies. Timely identification of patients at heightened risk for unfavorable outcomes during the course of RT is essential for improving therapeutic efficacy, reducing the likelihood of treatment interruptions, and mitigating the associated burden on healthcare systems. ML offers a transformative opportunity in this regard, as it enables the systematic integration and analysis of large-scale, multimodal clinical datasets. By uncovering complex patterns that are often indiscernible to conventional statistical approaches, ML-based models have the potential to support early risk stratification, guide adaptive treatment strategies, and ultimately contribute to more personalized and effective patient care [50]. Efforts to characterize local tumor infiltration, predict dose distributions based on individualized anatomy and prescription, and ultimately forecast overall clinical outcomes have gained increasing attention in recent years [8].
A recent multi-institutional study utilized leave-one-out cross validation to train and evaluate a patch-based CNN using multi-parameter MRI data stacks from 229 glioblastoma patients to classify regions of interest as either high- or low-infiltration. Patients with high-infiltration regions were found to be 8.13 to 19.48 times more likely to experience tumor recurrence than those with low-infiltration regions [51]. This highlights the need for datasets annotated at the level of specific regions. DL is only as good as the data it is trained on, and its continued improvement and novel insights could potentially be catalyzed by further stratification of tumor-infiltrating regions. DL has also been used in research to rapidly segment patient images. DeepMedic was applied retrospectively to assess high-grade glioma recurrence, offering segmentation-derived insights [18]. The study suggested that reirradiation is safe and effective in glioblastoma treatment, demonstrating the utility of auto segmentation models in treatment planning and optimization.
A large clinical study evaluated the effectiveness of an ML-based triage system utilizing electronic medical record data for predicting acute care needs during RT and chemoradiation. The algorithm assessed 963 outpatient adult RT or chemoradiation treatment courses to identify patients with a ≥10% risk of requiring acute care, defined as emergency department visits or hospital admissions during treatment. Of these, 311 courses were randomized to either standardized weekly or required biweekly clinical evaluations. Patients identified as high-risk by the ML model and assigned to the intensified clinical follow-up experienced a significant reduction in acute care visits, dropping from 22.3% to 12.3% compared to those receiving standard care (p = 0.02). The model demonstrated strong predictive value, with a receiver operating characteristic area under the curve (AUC) of 0.851, supporting its potential as a tool for real-time patient management during therapy [52].
In a separate study focused on treatment response prediction, an ML model stratified patients undergoing RT as likely responders or non-responders based on radiomic features. The model output a response probability, which was compared to clinician assessments, and applied a decision threshold of 67% to classify patients as responders versus non-responders. The model achieved an accuracy of 75% with an AUC of 0.74, outperforming clinician assessments, which achieved an accuracy of 54% and an AUC of 0.56. These findings underscore the ability of ML frameworks to integrate complex imaging and clinical data to outperform physician-predicted therapeutic response [53]. Integrated end-to-end workflows have also been developed. One example combined automated glioblastoma segmentation and ensemble-based survival prediction into a single pipeline trained on BRATS-2020 data. The model classified patients as long-term (>12 months) or short-term (<12 months) survivors with AUCs of 0.86 and 0.72 on BRATS-2020 and institutional datasets, respectively. The auto segmentation component achieved a DSC of 0.91, supporting the model’s utility in streamlining the entire planning process from segmentation through prognosis [50]. One study further demonstrated the potential for mapping dose escalation and achieved PTV coverage comparable to manual segmentation, though training and testing were greatly limited due to small sample sizes of 95 and 15 patients, respectively [54].
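The AUC values reported across these studies can be computed directly from ranked model scores via the Mann-Whitney interpretation of the ROC curve; a minimal sketch with toy labels and scores (not from any cited study):

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case receives a higher score than a randomly
    chosen negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores for responders (1) vs. non-responders (0)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = roc_auc(labels, scores)  # 8 of 9 positive-negative pairs ranked correctly
```

Because the AUC is threshold-independent, it complements accuracy figures such as the 75% reported above, which depend on the chosen decision cutoff.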
Together, these studies highlight how AI-guided tools can assist not only in pre-treatment planning but also in monitoring and adjusting care during the treatment course [55]. By identifying patients who may benefit from closer follow-up or alternative treatment strategies, these models offer the potential to improve clinical outcomes, reduce treatment-related complications, and optimize healthcare resource utilization.

7. Radiogenomics and Non-Invasive Biomarker Integration

Radiogenomics represents a growing intersection between imaging, ML, and genomic data, offering the potential to guide personalized treatment strategies in glioblastoma. This approach is particularly promising given the heterogeneity of glioblastoma, which limits the predictive utility of traditional histopathology and challenges the generalizability of fixed RT protocols.
Multiple ML frameworks have been employed to predict genetic mutations and classify glioma subtypes based on imaging features. One study trained a residual CNN model using 406 preoperative brain MRIs to predict isocitrate dehydrogenase mutation status, achieving a testing accuracy of 85.7% [56]. The ability to infer genotype from imaging suggests a reciprocal potential, where known mutation status could be integrated into AI models to inform RT planning [56]. This is especially relevant given that patients with isocitrate dehydrogenase-mutated glioblastoma have demonstrated significantly longer overall survival and progression-free survival compared to isocitrate dehydrogenase-wildtype cases (overall survival of 39 months vs. 14 months), independent of treatment status [57]. Similarly, methylated-DNA-protein-cysteine methyltransferase promoter methylation has been associated with increased responsiveness to alkylating agents, including temozolomide, highlighting the importance of genomic markers in therapeutic decision-making [19].
A random forest-based radiomics model was developed for glioma grading using contrast-enhanced T1-weighted MRI from a training cohort of 101 patients. Testing on an independent cohort of 50 patients from two external institutions yielded an AUC of 0.898, with 84% sensitivity, 76% specificity, and 80% accuracy. The highest-performing model combined DL features extracted by a simple CNN architecture, VGG16, with traditional radiomic features, outperforming either input modality alone [58]. In another study, support vector machine classification combined with synthetic minority over-sampling differentiated high-grade (grades 3 and 4) from low-grade (grades 1 and 2) gliomas with 94–96% accuracy, further supporting the utility of hybrid AI approaches [59]. Recently, a DL model was cross-validated on 357 patients with isocitrate dehydrogenase-wildtype glioblastoma using pre-operative multiparametric MRI. Notably, the model also incorporated radiogenomic features derived from genetic sequencing data, enabling spatial mapping of critical gene mutations, including NF1, TP53, PTEN, and EGFR. The multimodal framework was compared with an MRI-only CNN model as well as a radiogenomic-only support vector machine, outperforming both with AUCs of 0.70–0.92 across 13 different biomarkers [60]. Together with the aforementioned studies, these results further bolster the case for combined models that leverage both imaging and heterogeneous molecular-level data.
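As an illustration of the hybrid approach described above, the sketch below concatenates stand-in "deep" and "radiomic" feature vectors and trains a random forest on fully synthetic data. The feature dimensions, labels, and signal structure are invented for demonstration and do not reproduce the cited models:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
deep_feats = rng.normal(size=(n, 32))       # stand-in for CNN (e.g., VGG16) embeddings
radiomic_feats = rng.normal(size=(n, 10))   # stand-in for handcrafted radiomics

# Synthetic grade labels correlated with one feature from each modality
y = (deep_feats[:, 0] + radiomic_feats[:, 0] + rng.normal(0, 0.5, n) > 0).astype(int)

# Hybrid model: simple feature-level fusion by concatenation
X = np.hstack([deep_feats, radiomic_feats])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
```

Because the synthetic label depends on both modalities, a classifier trained on the concatenated vector can exploit signal that either feature set alone would miss, which is the intuition behind the hybrid designs above.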
Beyond imaging, non-invasive biomarker techniques such as liquid biopsy are being explored for diagnostic and therapeutic monitoring purposes. Circulating tumor DNA and microRNA can provide insights into tumor status, although their reliability remains limited: transport is impeded by the blood–brain barrier, and intratumoral phenotypic heterogeneity is masked when assessment relies on a cumulative metric diluted in the systemic circulation. Brain biopsy remains the gold standard but is invasive and carries sampling-related risks. Novel systemic immune-inflammation indices offer non-invasive alternative biomarkers to benchmark clinical grade, glioma subtype, and patient prognosis [61]. Additionally, serum levels of exosomal microRNA, along with specific microRNA expression profiles (e.g., miR-21, miR-181c, miR-195, miR-196b), may serve as prognostic biomarkers to predict glioma status and treatment outcomes [10,62,63,64,65]. Furthermore, the integration of radiogenomics with precision population cancer medicine offers a novel approach for comprehensive and longitudinal monitoring of patients, enhancing individualized care and treatment stratification (Figure 2). Initiatives like the Children’s Brain Tumor Tissue Consortium are advancing this goal by curating large-scale shared data repositories. Complementary bioinformatics tools, such as NetworkAnalyst, OmicsNet, Cytoscape, and AlphaFold, facilitate exploration of protein–protein interactions, while multimodal approaches combining MRI, genomics, metabolomics, and AI-based imaging offer a multidimensional view of tumor biology [66].
Radiogenomics and non-invasive biomarker integration provide a promising framework for future glioblastoma treatment personalization. Continued expansion of multi-institutional datasets, model validation across diverse patient populations, and incorporation of comprehensive molecular profiles into imaging-based models will be essential for translating these technologies into clinical care. Modernized data stewardship practices and incentives for pooling population-scale datasets are a crucial step toward building a balanced data ecosystem and models that are representative of the populations they serve [67,68]. The scarcity of validated outcomes data and high-quality ground truth also poses a significant hurdle to model validation, particularly for more nuanced outcomes beyond progression or survival [69].
A systematic review of 14 radiogenomics studies reported AUC values ranging from 0.74 to 0.91 but found no consistent patterns based on imaging modality. Modalities used across the studies included T1, T1C, T2, FLAIR, DTI, DWI, spectroscopy, Dynamic Susceptibility Contrast, and Dynamic Contrast Enhanced-MRI. AI techniques included support vector machine, diagonal linear and quadratic discriminant analysis, semi-supervised learning, and CNNs. All studies included MRI as a required input, and all models were trained on single-institution datasets with limited sample sizes (8–37 patients), increasing the risk for protocol-specific biases and overfitting [70]. It is worth noting that the real-world impact of these models on patient survival is unknown, as they have yet to be implemented into clinical practice.

8. Interpretability and Explainability in Deep Learning Models

DL models, often referred to as “black-box” systems, have demonstrated impressive performance in RT planning for glioblastoma; however, a key barrier to clinical adoption remains the lack of consistent performance and generalization. This is further complicated by the absence of interpretability and explainability of these models, in particular when determining treatment plans and responses [23,71]. For example, the challenge of inter-institutional variability was demonstrated in a CNN model trained on 44 glioblastoma patients across two institutions. When tested within the same institution, DSCs reached 0.72 and 0.76. However, when validated across institutions, performance dropped to 0.68 and 0.59, highlighting the need for large, diverse datasets to develop models capable of generalizing across clinical settings [72]. Beyond dataset size, sources of bias such as heterogeneity in imaging protocols, scanner physics, operator technique, patient demographics, and post-processing software must also be considered. Incorporating harmonization strategies and bias-aware model training will therefore be as critical as expanding dataset diversity in addressing systematic variability.
Typically, the most accurate methods, such as DL, are the least transparent, while more transparent methods, such as decision trees, perform less well [73]. The explainability of AI systems is essential for fostering trust among medical professionals and could play a vital role in facilitating AI integration into clinical practice. Clinicians need a clear understanding of how automated algorithms arrive at specific segmentation, dose planning, or treatment response predictions in order to trust and effectively utilize these outputs in patient care [74]. Fortunately, the risks of unsupervised, black-box predictions are inherently mitigated by rigorous assessment of dose maps by the RT team, with adherence to strict Radiation Therapy Oncology Group guidelines for glioblastoma, given that physicians can simply adjust the auto-generated plans as necessary. However, lack of transparency poses a significant challenge for incorporating predictions, such as treatment response, that cannot be readily verified by physicians. While interpretability is important, precedent from genomic clinical decision support shows that black-box algorithms can still clear regulatory hurdles and achieve widespread use when backed by strong evidence that they accomplish the intended purpose. Thus, emphasis should be placed on rigorous prospective validation, unbiased benchmarking, and tools such as Shapley value plots that provide insight into model reasoning in the absence of full explainability.
Saliency mapping approaches, including class activation mapping, gradient-weighted class activation mapping, and integrated gradients, have been introduced to generate post hoc explanations of model predictions [65,75,76]. For example, gradient-weighted class activation mapping uses the gradients of a target concept, such as tumor segmentation, flowing into the final convolutional layer to produce a spatial heatmap that, when overlaid on MRI or CT images, visually indicates which localized regions most influenced the prediction. These visualizations not only enable clinicians to verify that the AI is focusing on clinically plausible anatomic patterns but also flag instances where the model may be inappropriately influenced by artifacts or non-tumor structures. Furthermore, these visualizations lend insight when models fail and help improve model generalization by identifying dataset bias.
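The Grad-CAM weighting scheme described above can be sketched in a few lines of numpy, assuming the activations and gradients of the final convolutional layer have already been extracted from a trained network (here both are random stand-ins rather than outputs of a real model):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each feature map by its spatially averaged gradient,
    take the weighted sum, and apply ReLU to keep positive evidence only.
    activations, gradients: arrays of shape (K, H, W) from the final conv layer."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k: global-average-pooled grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over the K maps
    cam = np.maximum(cam, 0)                          # ReLU
    return cam / cam.max() if cam.max() > 0 else cam  # normalize to [0, 1]

K, H, W = 4, 8, 8
rng = np.random.default_rng(1)
acts = np.abs(rng.normal(size=(K, H, W)))   # stand-in feature-map activations
grads = np.ones((K, H, W))                  # stand-in gradients of the target score
heatmap = grad_cam(acts, grads)             # upsample to image size for overlay in practice
```

The resulting coarse heatmap is what gets bilinearly upsampled and overlaid on the MRI or CT slice for clinician review.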
Other methods, namely Shapley additive explanations and Local Interpretable Model-Agnostic Explanations, are perturbation-based and model-agnostic, requiring only model inputs and outputs [77]. For example, if a model predicts that a patient has a high risk of recurrence, Shapley additive explanations assign each feature of the dataset (tumor size, patient age, treatment history, etc.) an importance value for that particular prediction, showing how much each factor contributed to it. The method works by evaluating the model over many combinations of these features and fairly distributing the influence each one has on the final output. While these methods have been widely used for tabular or radiomics data, emerging studies are adapting Shapley additive explanation values to highlight influential image features or radiomic descriptors relevant to segmentation boundaries or radiogenomic predictions.
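The coalition-based attribution underlying Shapley additive explanations can be computed exactly for a tiny model. The sketch below enumerates all feature coalitions for a hypothetical linear recurrence-risk score; the features, weights, and baseline values are invented for illustration:

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.
    Features outside a coalition are held at their baseline values."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                z = baseline.copy()
                z[list(S)] = x[list(S)]
                without_i = f(z)      # coalition S without feature i
                z[i] = x[i]
                with_i = f(z)         # coalition S plus feature i
                phi[i] += weight * (with_i - without_i)
    return phi

# Hypothetical linear risk model over (tumor size, age, prior-treatment flag)
coef = np.array([0.5, 0.1, 0.8])
f = lambda z: float(coef @ z)
x = np.array([3.0, 60.0, 1.0])        # the patient being explained
base = np.array([2.0, 55.0, 0.0])     # reference (baseline) patient
phi = shapley_values(f, x, base)
```

For a linear model the attributions reduce to coef_i * (x_i - baseline_i), and they always sum to f(x) - f(baseline). Exact enumeration is exponential in the number of features, which is why practical toolkits approximate these values by sampling.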
Despite significant promise, several risks are still associated with some of the proposed AI-assisted RT workflows. Namely, overfitting persists as a pertinent issue, especially when models are trained on limited or single-institution datasets. Models that appear to be top performers during validation may show poor external generalizability when applied to unseen data, potentially leading to inaccurate target volumes. Such errors might lead to underdosing infiltrative tumor margins. Perhaps the most pressing challenge lies in automation bias: the inherent tendency to gravitate towards the AI-generated dose map, particularly when physicians are faced with time constraints or when the model has an exceedingly strong historical track record. Thus, it is imperative to maintain rigorous verification based on standardized RT guidelines.
Recent research also explores uncertainty quantification in model outputs, either through Bayesian DL or by applying Monte Carlo dropout during inference. This produces probabilistic segmentation maps or confidence intervals for dose predictions, supporting clinicians in assessing which automated outputs warrant further scrutiny or consensus review. In a recent study, two Bayesian DL models were assessed alongside eight uncertainty measures on a sizeable cross-institutional dataset of 292 PET/CT scans in an RT workflow for oropharyngeal cancer. The models accurately estimated the quality of the DL segmentation in 86.6% of cases; more importantly, they successfully flagged regions of interest and cases where the DL framework produced low-certainty outputs and therefore an increased probability of poor performance [78].
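A minimal numpy sketch of Monte Carlo dropout, using a random toy network rather than a trained segmentation model: dropout is kept active at inference, and the spread of repeated stochastic predictions serves as the uncertainty estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random toy weights standing in for a trained two-layer network
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(8, 1))

def forward(x, p=0.5):
    """One stochastic forward pass with dropout left active at inference."""
    h = np.maximum(x @ W1, 0)                 # ReLU hidden layer
    mask = rng.random(h.shape) < (1 - p)      # Bernoulli keep mask
    h = h * mask / (1 - p)                    # inverted-dropout scaling
    return 1 / (1 + np.exp(-(h @ W2)))        # sigmoid output in (0, 1)

x = rng.normal(size=(1, 16))                  # stand-in input features
samples = np.array([forward(x) for _ in range(100)])  # T = 100 stochastic passes
mean, std = samples.mean(), samples.std()     # prediction and its uncertainty
```

A high standard deviation across passes flags an output that warrants closer human review, which is exactly the triage role uncertainty estimates played in the cited study.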
Overall, there is a push to integrate these explainability tools into AI-based RT planning software as standard features. Transparent outputs allow for cross-checking, identify hidden model biases, and reveal avenues for model optimization and re-training, thereby increasing the likelihood of clinician uptake.
Although interpretability and model performance are crucial for clinical deployment, the importance of ethical and regulatory considerations cannot be overstated. Data privacy remains a significant challenge given the use of multimodal data, including imaging, patient history, and genomic profiles. Further, multi-institutional data sharing adds another layer of complexity in transmitting anonymized patient data while maintaining high-quality annotated labels. Clear data sharing guidelines must be established and routinely audited to ensure strict HIPAA compliance. Liability and clinical responsibility also become a concern when integrating AI within any clinical decision-making workflow. Physician documentation is crucial in establishing transparency, as responsibility for the output segmentation ultimately lies with the care team.

9. AI-Driven Solutions and Current Trends in Technology

While CNNs remain the standard for RT planning in glioblastoma, alternative AI-driven solutions are also being developed to address the challenges in the current workflow. For example, intraoperative imaging combined with AI-driven segmentation could enhance tumor boundary detection in real time. In RT planning, DL models trained on consensus-derived contours could standardize target definition and reduce inter-observer variability. Real-time multimodal imaging can be combined with DL models to predict tumor trajectory and adjust treatment dynamically. ML applied to multi-omics data could help further characterize tumors, helping guide RT planning and treatment personalization. Such integrative approaches could pave the way for AI-driven, personalized RT planning that addresses the current challenges that physicians face in glioblastoma management and treatment.
CNNs are limited by their reliance on local receptive fields when segmenting tumors with poorly defined or infiltrative margins, but recent advances have incorporated self-attention mechanisms and transformer-based architectures to capture long-range dependencies and contextual information, improving boundary delineation [79]. Additionally, consensus learning frameworks can integrate outputs from multiple models or annotators to reduce interobserver variability and enhance segmentation reliability [80]. DL-based harmonization and normalization techniques can also minimize the impact of scanner- and protocol-related variability, enhancing reproducibility across institutions [81]. Collectively, these approaches address key limitations of traditional CNNs and represent an important step toward robust, generalizable AI-driven tumor segmentation.
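The long-range dependency modeling that self-attention adds can be sketched as scaled dot-product attention over a handful of token embeddings (random stand-ins for image-patch features here, not a full transformer):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every position attends to all others,
    so long-range context enters each output token regardless of distance."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # pairwise similarity
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)              # row-wise softmax weights
    return A @ V, A                                    # context-mixed tokens, weights

rng = np.random.default_rng(0)
n_tokens, d = 6, 4                                     # e.g., 6 image patches, dim 4
X = rng.normal(size=(n_tokens, d))                     # stand-in patch embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Unlike a convolution, each row of the attention matrix spans the entire input, which is the mechanism that lets transformer-based segmenters relate spatially distant tumor regions.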
Another promising solution involves integrating multi-omics data into ML frameworks to guide RT planning and improve tumor classification. For example, the integrative glioblastoma subtype classifier leveraged both gene expression (transcriptomic) and DNA methylation (epigenomic) data in a multi-omics model [82]. Using Random Forest for feature selection and Nearest Shrunken Centroid for classification, the classifier achieved a high mean AUC of 0.96, outperforming classifiers built on either data modality alone. The authors utilized only five features per subtype and were able to produce a highly accurate, cost-effective model from large-scale genomics data. This approach provides a template for merging multi-omics data with imaging-driven predictions to enhance tumor classification and segmentation accuracy. It can also allow for patient-specific risk stratification and dose personalization, especially as the accessibility of high-throughput molecular testing improves.
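A hedged sketch of the two-stage design described above, using synthetic stand-in "omics" features rather than the published data: a random forest ranks features, and a shrunken nearest-centroid classifier is fit on the top five:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 50))                   # stand-in multi-omics feature matrix
# Synthetic subtype labels driven by just two of the 50 features
y = (X[:, 0] - X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: rank features with a random forest, keep the top 5
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
top = np.argsort(rf.feature_importances_)[::-1][:5]

# Stage 2: shrunken nearest-centroid classifier on the selected features
nsc = NearestCentroid(shrink_threshold=0.1).fit(Xtr[:, top], ytr)
acc = nsc.score(Xte[:, top], yte)
```

Restricting the centroid model to a handful of selected features is what makes this kind of classifier cheap to translate into a targeted molecular assay.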
Next, adaptive radiotherapy accounts for temporal changes in tumor position, volume, and response over the course of treatment. Through multimodal imaging with PET or MRI, adaptive radiotherapy captures high-resolution datasets to precisely evaluate anatomical changes in tumor shapes, borders, and locations throughout treatment [83,84]. State-of-the-art systems can quickly process real-time imaging data and even optimize radiation beam placement and intensity, making in-session adjustments to the treatment plan and accommodating daily variations in the patient’s anatomy [85]. Continuous tracking offers real-time feedback, enabling rapid correction if there are any unexpected factors or significant deviations from the original treatment plan. Guevara et al. examined whether adaptive radiotherapy could reduce the RT dose with the aim of preserving post-RT cognitive function [86]. They evaluated 10 glioblastoma patients who had previously received six weeks of RT without adaptation and simulated weekly plans that adjusted the dosage according to the shrinking tumor. While still covering the cancerous tissue, the mean and maximum doses delivered to the hippocampus and brain were significantly reduced in the adapted plans. Therefore, incorporating adaptive radiotherapy into pre-existing CNN architectures can address the limitations of static treatment, possibly mitigating the neurocognitive side effects of RT for patients.
Lastly, the emergence of foundation models and latent diffusion architectures represents the latest trend in AI for glioblastoma treatment. Foundation models are large-scale models pre-trained on millions of images; they require fewer labeled examples for downstream tasks and demonstrate improved standardization and data efficiency [87]. For example, the Segment Anything foundation model, trained only on object segmentation in 2D photographs, was evaluated on the BRATS challenge and achieved high accuracy for interactive glioma MRI segmentation [88]. In parallel, latent diffusion models generate 3D multi-modal brain MRIs and their corresponding masks to augment scarce datasets. Diffusion models are trained by adding noise to an image in a series of iterative steps and learning to reverse the corruption, so that at generation time a noise vector is gradually denoised into an image. This methodology allows them to capture complex, high-dimensional structures and generate synthetic images from the underlying data distribution, boosting both the quantity and quality of training data for complex tasks like tumor segmentation [89]. These generative models underpin newer pipelines for auto segmentation and synthetic-CT/field optimization, complementing foundation models and feeding downstream dose-prediction networks. Together, these solutions highlight the growing role of DL in advancing and redefining the glioblastoma treatment workflow.
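The forward (noising) half of diffusion training admits a compact closed form. The sketch below noises a random stand-in "image" under a linear schedule and shows how its correlation with the clean input decays with the step index; this decaying corruption is exactly the process a trained model learns to invert:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)        # linear per-step noise schedule
alphas_bar = np.cumprod(1 - betas)        # cumulative signal-retention factor

def q_sample(x0, t):
    """Forward diffusion: noise a clean image x0 directly to step t
    using the closed-form marginal of the stepwise noising chain."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * noise

x0 = rng.normal(size=(16, 16))            # stand-in for a 2D MRI slice
x_mid = q_sample(x0, T // 2)              # partially noised
x_end = q_sample(x0, T - 1)               # nearly pure noise

# Correlation with the clean image decays as t grows
corr_mid = np.corrcoef(x0.ravel(), x_mid.ravel())[0, 1]
corr_end = np.corrcoef(x0.ravel(), x_end.ravel())[0, 1]
```

A denoising network trained to predict the injected noise at each step can then run this chain backwards, turning sampled noise into synthetic images for data augmentation.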

10. Conclusions

Deep learning models have the power to dramatically streamline clinical workflow and are already being deployed to do so. Manual segmentation is repetitive, tedious, and time-consuming for physicians, and convolutional neural networks offer an ideal framework to rapidly automate this task with minimal risk, given standardized treatment guidelines and strict interprofessional review. In contrast, the outlook for treatment response prediction and personalized artificial intelligence-generated therapy regimens remains uncertain because substantial clinical data, likely from randomized clinical trials, will be needed to ensure patient safety and efficacy before any changes are made to treatment guidelines. Still, deep learning can dramatically optimize clinical workflow through auto segmentation, freeing physicians to spend more time with their patients, where their time matters most.
Cross-institutional data annotation, sharing, and validation are crucial steps toward improving the performance and generalizability of deep learning models for glioblastoma radiotherapy planning. These collaborative efforts account for inter-institutional variability in imaging acquisition protocols, hardware, and preprocessing methods, which otherwise increase the risk of model overfitting. This point is made clear by comparing the internal validation metrics presented in Table 1. For example, ref. [37] achieved performance similar to that of [32] despite training on data from 10 patients compared to 1652, respectively [32,37]. The latter is clearly more likely to perform favorably in cross-institutional validation due to greater data diversity and model generalizability. Thus, external validation should be considered mandatory for benchmarking such models. By incorporating diverse datasets, models can be better adapted to highly variable clinical scenarios, including those in rural or resource-limited settings where imaging quality and technique may differ significantly. While larger academic hospitals have begun deploying deep learning models trained on proprietary datasets, broader collaboration is essential to ensure equitable access to high-quality, artificial intelligence-assisted care. Further, divergent practices within single institutions and fragmentation across multiple institutions risk lowering generalizability and external validity.
It is important to emphasize that, in the absence of prospective data validated by clinical trials, standard radiotherapy protocols established by the Radiation Therapy Oncology Group must remain the foundation of treatment planning. Such guidelines require 2 cm margins around visible tumor boundaries, and any artificially generated segmentation that reduces this margin would be subject to rigorous clinical review. As such, all segmentation outputs, regardless of whether they are clinician-derived or model-assisted, undergo final approval by the radiation oncology team, including the attending physician, dosimetrists, and medical physicists. Outputs must comply with established standards of care in the absence of compelling evidence substantiated by randomized controlled trials.
In addition to technical concerns, there are also critical workflow and human-factor limitations that arise once artificial intelligence systems are introduced into clinical practice. One key risk is automation bias, where clinicians may place excessive trust in artificially generated contours or treatment plans, potentially accepting results without sufficient review [90]. This becomes especially problematic when the system’s output appears precise but lacks contextual nuance or clinical appropriateness. Poorly designed user interfaces can further amplify this issue by making artificial intelligence tools difficult to navigate or inefficient to use, which can lead to passive acceptance rather than informed engagement. Even when models perform well in controlled environments, they may still fall short of being clinically acceptable if they are not designed to integrate smoothly into real-world workflows, support clinician autonomy, and promote active decision-making. A concise overview of the pros and cons of artificial intelligence-based methods vs. manual methods is provided in Table 2.
Deep learning methods have served as the state-of-the-art architecture in medical computer vision for over a decade. Numerous studies have demonstrated their effectiveness in tumor segmentation, radiogenomic prediction, and dose planning. Vision transformers are now emerging as a disruptive alternative for auto segmentation, with growing evidence suggesting that they can outperform convolutional neural networks owing to improved contextual understanding across spatially distant regions of an image. A significant limitation of vision transformers is their high data demand, requiring expansive, high-quality annotated datasets. As larger annotated imaging datasets become available through shared institutional repositories and as data augmentation techniques continue to evolve, vision transformers are likely to emerge as a viable competitor for auto segmentation in the coming years. This transition will hinge on continued investment in data infrastructure and collaborative research networks.
Ultimately, the integration of biologically informed mathematical modeling, multimodal imaging, and genomic data represents a significant evolution in glioblastoma radiotherapy. By capturing the heterogeneity of tumor infiltration and biological behavior, these frameworks can enhance tumor targeting, minimize toxicity, and advance traditional approaches that have failed to improve patient survival for decades [41]. In parallel, coordinating standards for data curation and artificial intelligence validation within radiation oncology and neuro-oncology communities will be essential for clinical translation. Organizations such as the American Society of Clinical Oncology’s information technology and artificial intelligence initiatives, the National Cancer Institute’s Imaging Data Commons, and the American Society for Radiation Oncology are well-positioned to serve as connective tissue between academic medical centers and large practice groups. Establishing such a formalized community of practice could accelerate adoption, minimize redundancy, and ensure that advances in artificial intelligence for glioblastoma radiotherapy are both reproducible and clinically actionable.

Author Contributions

Conceptualization, R.M., N.R., J.S., S.K. and T.K.P.; methodology, R.M., N.R., J.S., S.K. and T.K.P.; data curation, R.M., N.R., J.S., S.K. and T.K.P.; writing-original draft preparation, R.M., N.R., J.S., A.K., S.K. and T.K.P.; writing-review and editing, R.M., N.R., J.S., K.K.Y., S.P., A.S., A.K., P.J.S., K.S.R., V.G., N.P., S.K. and T.K.P.; visualization, R.M., N.R., J.S., A.K., N.P., S.K. and T.K.P.; supervision, N.P., S.K. and T.K.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the John P. and Kathrine G. McGovern Distinguished Chair endowed professorship to S.K.

Conflicts of Interest

Nikos Paragios is a stockholder and employee of TheraPanacea. Patrick Silva is a paid consultant to Procyon Technologies, LLC.

Abbreviations

AI: Artificial Intelligence
RT: Radiotherapy
CTV: Clinical Target Volume
GTV: Gross Target Volume
PTV: Planning Target Volume
DL: Deep Learning
CNN: Convolutional Neural Network
ML: Machine Learning
DSC: Dice Similarity Coefficient
BRATS: Brain Tumor Segmentation
AUC: Area Under the Curve

References

  1. Kanderi, T.; Munakomi, S.; Gupta, V. Glioblastoma Multiforme. In StatPearls; StatPearls Publishing: Treasure Island, FL, USA, 2025. [Google Scholar]
  2. Khagi, S.; Kotecha, R.; Gatson, N.T.N.; Jeyapalan, S.; Abdullah, H.I.; Avgeropoulos, N.G.; Batzianouli, E.T.; Giladi, M.; Lustgarten, L.; Goldlust, S.A. Recent advances in Tumor Treating Fields (TTFields) therapy for glioblastoma. Oncologist 2024, 30, oyae227. [Google Scholar] [CrossRef]
  3. Stupp, R.; Taillibert, S.; Kanner, A.; Read, W.; Steinberg, D.; Lhermitte, B.; Toms, S.; Idbaih, A.; Ahluwalia, M.S.; Fink, K.; et al. Effect of Tumor-Treating Fields Plus Maintenance Temozolomide vs Maintenance Temozolomide Alone on Survival in Patients With Glioblastoma: A Randomized Clinical Trial. JAMA 2017, 318, 2306–2316. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  4. McKinnon, C.; Nandhabalan, M.; Murray, S.A.; Plaha, P. Glioblastoma: Clinical presentation, diagnosis, and management. BMJ 2021, 374, n1560. [Google Scholar] [CrossRef] [PubMed]
  5. Brown, T.J.; Brennan, M.C.; Li, M.; Church, E.W.; Brandmeir, N.J.; Rakszawski, K.L.; Patel, A.S.; Rizk, E.B.; Suki, D.; Sawaya, R.; et al. Association of the Extent of Resection With Survival in Glioblastoma: A Systematic Review and Meta-analysis. JAMA Oncol. 2016, 2, 1460–1469. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  6. Weller, M.; van den Bent, M.; Preusser, M.; Le Rhun, E.; Tonn, J.C.; Minniti, G.; Bendszus, M.; Balana, C.; Chinot, O.; Dirven, L.; et al. EANO guidelines on the diagnosis and treatment of diffuse gliomas of adulthood. Nat. Rev. Clin. Oncol. 2021, 18, 170–186. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  7. Peeken, J.C.; Molina-Romero, M.; Diehl, C.; Menze, B.H.; Straube, C.; Meyer, B.; Zimmer, C.; Wiestler, B.; Combs, S.E. Deep learning derived tumor infiltration maps for personalized target definition in Glioblastoma radiotherapy. Radiother. Oncol. 2019, 138, 166–172. [Google Scholar] [CrossRef] [PubMed]
  8. Rončević, A.; Koruga, N.; Soldo Koruga, A.; Rončević, R.; Rotim, T.; Šimundić, T.; Kretić, D.; Perić, M.; Turk, T.; Štimac, D. Personalized Treatment of Glioblastoma: Current State and Future Perspective. Biomedicines 2023, 11, 1579. [Google Scholar] [CrossRef]
  9. Bibault, J.-E.; Giraud, P. Deep learning for automated segmentation in radiotherapy: A narrative review. Br. J. Radiol. 2024, 97, 13–20. [Google Scholar] [CrossRef]
  10. Aman, R.A.; Pratama, M.G.; Satriawan, R.R.; Ardiansyah, I.R.; Suanjaya, I.K.A. Diagnostic and Prognostic Values of miRNAs in High-Grade Gliomas: A Systematic Review. F1000Research 2025, 13, 796. [Google Scholar] [CrossRef] [PubMed]
  11. Erices, J.I.; Bizama, C.; Niechi, I.; Uribe, D.; Rosales, A.; Fabres, K.; Navarro-Martínez, G.; Torres, Á.; San Martín, R.; Roa, J.C.; et al. Glioblastoma Microenvironment and Invasiveness: New Insights and Therapeutic Targets. Int. J. Mol. Sci. 2023, 24, 7047. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  12. Fathi Kazerooni, A.; Nabil, M.; Zeinali Zadeh, M.; Firouznia, K.; Azmoudeh-Ardalan, F.; Frangi, A.F.; Davatzikos, C.; Saligheh Rad, H. Characterization of active and infiltrative tumorous subregions from normal tissue in brain gliomas using multiparametric MRI. J. Magn. Reson. Imaging 2018, 48, 938–950. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  13. Kruser, T.J.; Bosch, W.R.; Badiyan, S.N.; Bovi, J.A.; Ghia, A.J.; Kim, M.M.; Solanki, A.A.; Sachdev, S.; Tsien, C.; Wang, T.J.C.; et al. NRG brain tumor specialists consensus guidelines for glioblastoma contouring. J. Neurooncol. 2019, 143, 157–166. [Google Scholar] [CrossRef]
  14. Poel, R.; Rüfenacht, E.; Ermis, E.; Müller, M.; Fix, M.K.; Aebersold, D.M.; Manser, P.; Reyes, M. Impact of random outliers in auto-segmented targets on radiotherapy treatment plans for glioblastoma. Radiat. Oncol. 2022, 17, 170. [Google Scholar] [CrossRef]
  15. Sidibe, I.; Tensaouti, F.; Gilhodes, J.; Cabarrou, B.; Filleron, T.; Desmoulin, F.; Ken, S.; Noël, G.; Truc, G.; Sunyach, M.P.; et al. Pseudoprogression in GBM versus true progression in patients with glioblastoma: A multiapproach analysis. Radiother. Oncol. 2023, 181, 109486. [Google Scholar] [CrossRef] [PubMed]
  16. Cramer, C.K.; Cummings, T.L.; Andrews, R.N.; Strowd, R.; Rapp, S.R.; Shaw, E.G.; Chan, M.D.; Lesser, G.J. Treatment of Radiation-Induced Cognitive Decline in Adult Brain Tumor Patients. Curr. Treat. Options Oncol. 2019, 20, 42. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  17. Kickingereder, P.; Isensee, F.; Tursunova, I.; Petersen, J.; Neuberger, U.; Bonekamp, D.; Brugnara, G.; Schell, M.; Kessler, T.; Foltyn, M.; et al. Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: A multicentre, retrospective study. Lancet Oncol. 2019, 20, 728–740. [Google Scholar] [CrossRef] [PubMed]
  18. Mansoorian, S.; Schmidt, M.; Weissmann, T.; Delev, D.; Heiland, D.H.; Coras, R.; Stritzelberger, J.; Saake, M.; Höfler, D.; Schubert, P.; et al. Reirradiation for recurrent glioblastoma: The significance of the residual tumor volume. J. Neurooncol. 2025, 174, 243–252. [Google Scholar] [CrossRef] [PubMed]
  19. Shaver, M.; Kohanteb, P.; Chiou, C.; Bardis, M.; Chantaduly, C.; Bota, D.; Filippi, C.; Weinberg, B.; Grinband, J.; Chow, D.; et al. Optimizing Neuro-Oncology Imaging: A Review of Deep Learning Approaches for Glioma Imaging. Cancers 2019, 11, 829. [Google Scholar] [CrossRef] [PubMed]
  20. Doolan, P.J.; Charalambous, S.; Roussakis, Y.; Leczynski, A.; Peratikou, M.; Benjamin, M.; Ferentinos, K.; Strouthos, I.; Zamboglou, C.; Karagiannis, E. A clinical evaluation of the performance of five commercial artificial intelligence contouring systems for radiotherapy. Front. Oncol. 2023, 13, 1213068. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  21. Lu, S.-L.; Xiao, F.-R.; Cheng, J.C.-H.; Yang, W.-C.; Cheng, Y.-H.; Chang, Y.-C.; Lin, J.-Y.; Liang, C.-H.; Lu, J.-T.; Chen, Y.-F.; et al. Randomized multi-reader evaluation of automated detection and segmentation of brain tumors in stereotactic radiosurgery with deep neural networks. Neuro-Oncology 2021, 23, 1560–1568. [Google Scholar] [CrossRef]
  22. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024. [Google Scholar] [CrossRef]
  23. Marey, A.; Arjmand, P.; Alerab, A.D.S.; Eslami, M.J.; Saad, A.M.; Sanchez, N.; Umair, M. Explainability, transparency and black box challenges of AI in radiology: Impact on patient care in cardiovascular radiology. Egypt J. Radiol. Nucl. Med. 2024, 55, 183. [Google Scholar] [CrossRef]
  24. Bakas, S.; Reyes, M.; Jakab, A.; Bauer, S.; Rempfler, M.; Crimi, A.; Shinohara, R.T.; Berger, C.; Ha, S.M.; Rozycki, M.; et al. Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge. arXiv 2018, arXiv:1811.02629. [Google Scholar] [CrossRef]
  25. Dora, L.; Agrawal, S.; Panda, R.; Abraham, A. State-of-the-Art Methods for Brain Tissue Segmentation: A Review. IEEE Rev. Biomed. Eng. 2017, 10, 235–249. [Google Scholar] [CrossRef]
  26. Işın, A.; Direkoğlu, C.; Şah, M. Review of MRI-based Brain Tumor Image Segmentation Using Deep Learning Methods. Procedia Comput. Sci. 2016, 102, 317–324. [Google Scholar] [CrossRef]
  27. Baid, U.; Ghodasara, S.; Mohan, S.; Bilello, M.; Calabrese, E.; Colak, E.; Farahani, K.; Kalpathy-Cramer, J.; Kitamura, F.C.; Pati, S.; et al. The RSNA-ASNR-MICCAI-BraTS-2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv 2021, arXiv:2107.02314. [Google Scholar]
  28. Isensee, F.; Jaeger, P.F.; Kohl, S.A.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
  29. Zeineldin, R.A.; Karar, M.E.; Burgert, O.; Mathis-Ullrich, F. Multimodal CNN Networks for Brain Tumor Segmentation in MRI: A BraTS 2022 Challenge Solution. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; Bakas, S., Crimi, A., Baid, U., Malec, S., Pytlarz, M., Baheti, B., Zenk, M., Dorent, R., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 127–137. [Google Scholar]
  30. Ferreira, A.; Solak, N.; Li, J.; Dammann, P.; Kleesiek, J.; Alves, V.; Egger, J. Enhanced Data Augmentation Using Synthetic Data for Brain Tumour Segmentation. In Brain Tumor Segmentation, and Cross-Modality Domain Adaptation for Medical Image Segmentation; Baid, U., Dorent, R., Malec, S., Pytlarz, M., Su, R., Wijethilake, N., Bakas, S., Crimi, A., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 79–93. [Google Scholar]
  31. Moradi, N.; Ferreira, A.; Puladi, B.; Kleesiek, J.; Fatemizadeh, E.; Luijten, G.; Alves, V.; Egger, J. Comparative Analysis of nnUNet and MedNeXt for Head and Neck Tumor Segmentation in MRI-Guided Radiotherapy. In Head and Neck Tumor Segmentation for MR-Guided Applications; Wahid, K.A., Dede, C., Naser, M.A., Fuller, C.D., Eds.; Springer Nature: Cham, Switzerland, 2025; pp. 136–153. [Google Scholar]
  32. Xue, J.; Wang, B.; Ming, Y.; Liu, X.; Jiang, Z.; Wang, C.; Liu, X.; Chen, L.; Qu, J.; Xu, S.; et al. Deep learning–based detection and segmentation-assisted management of brain metastases. Neuro-Oncology 2020, 22, 505–514. [Google Scholar] [CrossRef]
  33. Naceur, M.B.; Saouli, R.; Akil, M.; Kachouri, R. Fully Automatic Brain Tumor Segmentation using End-To-End Incremental Deep Neural Networks in MRI images. Comput. Methods Programs Biomed. 2018, 166, 39–49. [Google Scholar] [CrossRef]
  34. Chang, K.; Beers, A.L.; Bai, H.X.; Brown, J.M.; Ly, K.I.; Li, X.; Senders, J.T.; Kavouridis, V.K.; Boaro, A.; Su, C.; et al. Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bidimensional measurement. Neuro-Oncology 2019, 21, 1412–1422. [Google Scholar] [CrossRef]
  35. Ranjbarzadeh, R.; Bagherian Kasgari, A.; Jafarzadeh Ghoushchi, S.; Anari, S.; Naseri, M.; Bendechache, M. Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images. Sci. Rep. 2021, 11, 10930. [Google Scholar] [CrossRef] [PubMed]
  36. Deng, W.; Shi, Q.; Luo, K.; Yang, Y.; Ning, N. Brain Tumor Segmentation Based on Improved Convolutional Neural Network in Combination with Non-quantifiable Local Texture Feature. J. Med. Syst. 2019, 43, 152. [Google Scholar] [CrossRef]
  37. Zhuge, Y.; Krauze, A.V.; Ning, H.; Cheng, J.Y.; Arora, B.C.; Camphausen, K.; Miller, R.W. Brain tumor segmentation using holistically nested neural networks in MRI images. Med. Phys. 2017, 44, 5234–5243. [Google Scholar] [CrossRef]
  38. Isensee, F.; Kickingereder, P.; Wick, W.; Bendszus, M.; Maier-Hein, K.H. Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; Crimi, A., Bakas, S., Kuijf, H., Menze, B., Reyes, M., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 287–297. [Google Scholar]
  39. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Trans. Med. Imaging 2016, 35, 1240–1251. [Google Scholar] [CrossRef]
  40. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.-M.; Larochelle, H. Brain tumor segmentation with Deep Neural Networks. Med. Image Anal. 2017, 35, 18–31. [Google Scholar] [CrossRef] [PubMed]
  41. Soltaninejad, M.; Zhang, L.; Lambrou, T.; Allinson, N.; Ye, X. Multimodal MRI brain tumor segmentation using random forests with features learned from fully convolutional neural network. arXiv 2017, arXiv:1704.08134. [Google Scholar] [CrossRef]
  42. Hussain, S.; Anwar, S.M.; Majid, M. Brain tumor segmentation using cascaded deep convolutional neural network. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, Republic of Korea, 11–15 July 2017; pp. 1998–2001. [Google Scholar]
  43. Tian, S.; Liu, Y.; Mao, X.; Xu, X.; He, S.; Jia, L.; Zhang, W.; Peng, P.; Wang, J. A multicenter study on deep learning for glioblastoma auto-segmentation with prior knowledge in multimodal imaging. Cancer Sci. 2024, 115, 3415–3425. [Google Scholar] [CrossRef]
  44. Unkelbach, J.; Bortfeld, T.; Cardenas, C.E.; Gregoire, V.; Hager, W.; Heijmen, B.; Jeraj, R.; Korreman, S.S.; Ludwig, R.; Pouymayou, B.; et al. The role of computational methods for automating and improving clinical target volume definition. Radiother. Oncol. 2020, 153, 15–25. [Google Scholar] [CrossRef]
  45. Metz, M.-C.; Ezhov, I.; Peeken, J.C.; Buchner, J.A.; Lipkova, J.; Kofler, F.; Waldmannstetter, D.; Delbridge, C.; Diehl, C.; Bernhardt, D.; et al. Toward image-based personalization of glioblastoma therapy: A clinical and biological validation study of a novel, deep learning-driven tumor growth model. Neurooncol. Adv. 2024, 6, vdad171. [Google Scholar] [CrossRef]
  46. Dextraze, K.; Saha, A.; Kim, D.; Narang, S.; Lehrer, M.; Rao, A.; Narang, S.; Rao, D.; Ahmed, S.; Madhugiri, V.; et al. Spatial habitats from multiparametric MR imaging are associated with signaling pathway activities and survival in glioblastoma. Oncotarget 2017, 8, 112992–113001. [Google Scholar] [CrossRef]
  47. Hanahan, D. Hallmarks of Cancer: New Dimensions. Cancer Discov. 2022, 12, 31–46. [Google Scholar] [CrossRef]
  48. Lipkova, J.; Angelikopoulos, P.; Wu, S.; Alberts, E.; Wiestler, B.; Diehl, C.; Preibisch, C.; Pyka, T.; Combs, S.E.; Hadjidoukas, P.; et al. Personalized Radiotherapy Design for Glioblastoma: Integrating Mathematical Tumor Models, Multimodal Scans, and Bayesian Inference. IEEE Trans. Med. Imaging 2019, 38, 1875–1884. [Google Scholar] [CrossRef]
  49. Hönikl, L.S.; Delbridge, C.; Yakushev, I.; Negwer, C.; Bernhardt, D.; Schmidt-Graf, F.; Meyer, B.; Wagner, A. Assessing the role of FET-PET imaging in glioblastoma recurrence: A retrospective analysis of diagnostic accuracy. Brain Spine 2025, 5, 105599. [Google Scholar] [CrossRef]
  50. Yang, Z.; Zamarud, A.; Marianayagam, N.J.; Park, D.J.; Yener, U.; Soltys, S.G.; Chang, S.D.; Meola, A.; Jiang, H.; Lu, W.; et al. Deep learning-based overall survival prediction in patients with glioblastoma: An automatic end-to-end workflow using pre-resection basic structural multiparametric MRIs. Comput. Biol. Med. 2025, 185, 109436. [Google Scholar] [CrossRef]
  51. Kwak, S.; Akbari, H.; Garcia, J.A.; Mohan, S.; Dicker, Y.; Sako, C.; Matsumoto, Y.; Nasrallah, M.P.; Shalaby, M.; O’Rourke, D.M.; et al. Predicting peritumoral glioblastoma infiltration and subsequent recurrence using deep-learning–based analysis of multi-parametric magnetic resonance imaging. J. Med. Imaging 2024, 11, 054001. [Google Scholar] [CrossRef]
  52. Hong, J.C.; Eclov, N.C.W.; Dalal, N.H.; Thomas, S.M.; Stephens, S.J.; Malicki, M.; Shields, S.; Cobb, A.; Mowery, Y.M.; Niedzwiecki, D.; et al. System for High-Intensity Evaluation During Radiation Therapy (SHIELD-RT): A Prospective Randomized Study of Machine Learning–Directed Clinical Evaluations During Radiation and Chemoradiation. J. Clin. Oncol. 2020, 38, 3652–3661. [Google Scholar] [CrossRef]
  53. Gutsche, R.; Lohmann, P.; Hoevels, M.; Ruess, D.; Galldiks, N.; Visser-Vandewalle, V.; Treuer, H.; Ruge, M.; Kocher, M. Radiomics outperforms semantic features for prediction of response to stereotactic radiosurgery in brain metastases. Radiother. Oncol. 2022, 166, 37–43. [Google Scholar] [CrossRef]
  54. Tsang, D.S.; Tsui, G.; McIntosh, C.; Purdie, T.; Bauman, G.; Dama, H.; Laperriere, N.; Millar, B.-A.; Shultz, D.B.; Ahmed, S.; et al. A pilot study of machine-learning based automated planning for primary brain tumours. Radiat. Oncol. 2022, 17, 3. [Google Scholar] [CrossRef] [PubMed]
  55. Di Nunno, V.; Fordellone, M.; Minniti, G.; Asioli, S.; Conti, A.; Mazzatenta, D.; Balestrini, D.; Chiodini, P.; Agati, R.; Tonon, C.; et al. Machine learning in neuro-oncology: Toward novel development fields. J. Neurooncol. 2022, 159, 333–346. [Google Scholar] [CrossRef]
  56. Chang, K.; Bai, H.X.; Zhou, H.; Su, C.; Bi, W.L.; Agbodza, E.; Kavouridis, V.K.; Senders, J.T.; Boaro, A.; Beers, A.; et al. Residual Convolutional Neural Network for the Determination of IDH Status in Low- and High-Grade Gliomas from MR Imaging. Clin. Cancer Res. 2018, 24, 1073–1081. [Google Scholar] [CrossRef] [PubMed]
  57. Wong, Q.H.-W.; Li, K.K.-W.; Wang, W.-W.; Malta, T.M.; Noushmehr, H.; Grabovska, Y.; Jones, C.; Chan, A.K.-Y.; Kwan, J.S.-H.; Huang, Q.J.-Q.; et al. Molecular landscape of IDH-mutant primary astrocytoma Grade IV/glioblastomas. Mod. Pathol. 2021, 34, 1245–1260. [Google Scholar] [CrossRef]
  58. Ding, J.; Zhao, R.; Qiu, Q.; Chen, J.; Duan, J.; Cao, X.; Yin, Y. Developing and validating a deep learning and radiomic model for glioma grading using multiplanar reconstructed magnetic resonance contrast-enhanced T1-weighted imaging: A robust, multi-institutional study. Quant. Imaging Med. Surg. 2022, 12, 1517–1528. [Google Scholar] [CrossRef]
  59. Zhang, X.; Yan, L.-F.; Hu, Y.-C.; Li, G.; Yang, Y.; Han, Y.; Sun, Y.-Z.; Liu, Z.-C.; Tian, Q.; Han, Z.-Y.; et al. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features. Oncotarget 2017, 8, 47816–47830. [Google Scholar] [CrossRef]
  60. Fathi Kazerooni, A.; Akbari, H.; Hu, X.; Bommineni, V.; Grigoriadis, D.; Toorens, E.; Sako, C.; Mamourian, E.; Ballinger, D.; Sussman, R.; et al. The radiogenomic and spatiogenomic landscapes of glioblastoma and their relationship to oncogenic drivers. Commun. Med. 2025, 5, 55. [Google Scholar] [CrossRef]
  61. Lu, J.; Zhang, Z.-Y.; Zhong, S.; Deng, D.; Yang, W.-Z.; Wu, S.-W.; Cheng, Y.; Bai, Y.; Mou, Y.-G. Evaluating the Diagnostic and Prognostic Value of Peripheral Immune Markers in Glioma Patients: A Prospective Multi-Institutional Cohort Study of 1282 Patients. J. Inflamm. Res. 2025, 18, 7477–7492. [Google Scholar] [CrossRef]
  62. Hasani, F.; Masrour, M.; Jazi, K.; Ahmadi, P.; Hosseini, S.S.; Lu, V.M.; Alborzi, A. MicroRNA as a potential diagnostic and prognostic biomarker in brain gliomas: A systematic review and meta-analysis. Front. Neurol. 2024, 15, 1357321. [Google Scholar] [CrossRef]
  63. Lakomy, R.; Sana, J.; Hankeova, S.; Fadrus, P.; Kren, L.; Lzicarova, E.; Svoboda, M.; Dolezelova, H.; Smrcka, M.; Vyzula, R.; et al. MiR-195, miR-196b, miR-181c, miR-21 expression levels and O-6-methylguanine-DNA methyltransferase methylation status are associated with clinical outcome in glioblastoma patients. Cancer Sci. 2011, 102, 2186–2190. [Google Scholar] [CrossRef] [PubMed]
  64. Lan, F.; Yue, X.; Xia, T. Exosomal microRNA-210 is a potentially non-invasive biomarker for the diagnosis and prognosis of glioma. Oncol. Lett. 2020, 19, 1967–1974. [Google Scholar] [CrossRef]
  65. Zhou, Q.; Liu, J.; Quan, J.; Liu, W.; Tan, H.; Li, W. MicroRNAs as potential biomarkers for the diagnosis of glioma: A systematic review and meta-analysis. Cancer Sci. 2018, 109, 2651–2659. [Google Scholar] [CrossRef] [PubMed]
  66. Velu, U.; Singh, A.; Nittala, R.; Yang, J.; Vijayakumar, S.; Cherukuri, C.; Vance, G.R.; Salvemini, J.D.; Hathaway, B.F.; Grady, C.; et al. Precision Population Cancer Medicine in Brain Tumors: A Potential Roadmap to Improve Outcomes and Strategize the Steps to Bring Interdisciplinary Interventions. Cureus 2024, 16, e71305. [Google Scholar] [CrossRef] [PubMed]
  67. Silva, P.J.; Silva, P.A.; Ramos, K.S. Genomic and Health Data as Fuel to Advance a Health Data Economy for Artificial Intelligence. BioMed Res. Int. 2025, 2025, 6565955. [Google Scholar] [CrossRef]
  68. Silva, P.J.; Rahimzadeh, V.; Powell, R.; Husain, J.; Grossman, S.; Hansen, A.; Hinkel, J.; Rosengarten, R.; Ory, M.G.; Ramos, K.S. Health equity innovation in precision medicine: Data stewardship and agency to expand representation in clinicogenomics. Health Res. Policy Syst. 2024, 22, 170. [Google Scholar] [CrossRef]
  69. Silva, P.; Janjan, N.; Ramos, K.S.; Udeani, G.; Zhong, L.; Ory, M.G.; Smith, M.L. External control arms: COVID-19 reveals the merits of using real world evidence in real-time for clinical and public health investigations. Front. Med. 2023, 10, 1198088. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  70. d’Este, S.H.; Nielsen, M.B.; Hansen, A.E. Visualizing Glioma Infiltration by the Combination of Multimodality Imaging and Artificial Intelligence, a Systematic Review of the Literature. Diagnostics 2021, 11, 592. [Google Scholar] [CrossRef]
  71. Holzinger, A.; Langs, G.; Denk, H.; Zatloukal, K.; Müller, H. Causability and explainability of artificial intelligence in medicine. WIREs Data Min. Knowl. 2019, 9, e1312. [Google Scholar] [CrossRef] [PubMed]
  72. AlBadawy, E.A.; Saha, A.; Mazurowski, M.A. Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing. Med. Phys. 2018, 45, 1150–1158. [Google Scholar] [CrossRef] [PubMed]
  73. Bologna, G.; Hayashi, Y. Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning. J. Artif. Intell. Soft Comput. Res. 2017, 7, 265–286. [Google Scholar] [CrossRef]
  74. Cui, S.; Traverso, A.; Niraula, D.; Zou, J.; Luo, Y.; Owen, D.; El Naqa, I.; Wei, L. Interpretable artificial intelligence in radiology and radiation oncology. Br. J. Radiol. 2023, 96, 20230142. [Google Scholar] [CrossRef]
  75. Sangwan, H. Quantifying Explainable Ai Methods in Medical Diagnosis: A Study in Skin Cancer. medRxiv 2024. [Google Scholar] [CrossRef]
  76. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2020, 128, 336–359. [Google Scholar] [CrossRef]
  77. Lundberg, S.; Lee, S.-I. A Unified Approach to Interpreting Model Predictions. arXiv 2017, arXiv:1705.07874. [Google Scholar] [CrossRef]
  78. Sahlsten, J.; Jaskari, J.; Wahid, K.A.; Ahmed, S.; Glerean, E.; He, R.; Kann, B.H.; Mäkitie, A.; Fuller, C.D.; Naser, M.A.; et al. Application of simultaneous uncertainty quantification and segmentation for oropharyngeal cancer use-case with Bayesian deep learning. Commun. Med. 2024, 4, 110. [Google Scholar] [CrossRef]
  79. Alruily, M.; Mahmoud, A.A.; Allahem, H.; Mostafa, A.M.; Shabana, H.; Ezz, M. Enhancing Breast Cancer Detection in Ultrasound Images: An Innovative Approach Using Progressive Fine-Tuning of Vision Transformer Models. Int. J. Intell. Syst. 2024, 2024, 6528752. [Google Scholar] [CrossRef]
  80. Marin, T.; Zhuo, Y.; Lahoud, R.M.; Tian, F.; Ma, X.; Xing, F.; Moteabbed, M.; Liu, X.; Grogg, K.; Shusharina, N.; et al. Deep learning-based GTV contouring modeling inter- and intra-observer variability in sarcomas. Radiother. Oncol. 2022, 167, 269–276. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  81. Abbasi, S.; Lan, H.; Choupan, J.; Sheikh-Bahaei, N.; Pandey, G.; Varghese, B. Deep learning for the harmonization of structural MRI scans: A survey. Biomed. Eng. Online 2024, 23, 90. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  82. Ensenyat-Mendez, M.; Íñiguez-Muñoz, S.; Sesé, B.; Marzese, D.M. iGlioSub: An integrative transcriptomic and epigenomic classifier for glioblastoma molecular subtypes. BioData Min. 2021, 14, 42. [Google Scholar] [CrossRef]
  83. Dona Lemus, O.M.; Cao, M.; Cai, B.; Cummings, M.; Zheng, D. Adaptive Radiotherapy: Next-Generation Radiotherapy. Cancers 2024, 16, 1206. [Google Scholar] [CrossRef]
  84. Weykamp, F.; Meixner, E.; Arians, N.; Hoegen-Saßmannshausen, P.; Kim, J.-Y.; Tawk, B.; Knoll, M.; Huber, P.; König, L.; Sander, A.; et al. Daily AI-Based Treatment Adaptation under Weekly Offline MR Guidance in Chemoradiotherapy for Cervical Cancer 1: The AIM-C1 Trial. J. Clin. Med. 2024, 13, 957. [Google Scholar] [CrossRef]
  85. Vuong, W.; Gupta, S.; Weight, C.; Almassi, N.; Nikolaev, A.; Tendulkar, R.D.; Scott, J.G.; Chan, T.A.; Mian, O.Y. Trial in Progress: Adaptive RADiation Therapy with Concurrent Sacituzumab Govitecan (SG) for Bladder Preservation in Patients with MIBC (RAD-SG). Int. J. Radiat. Oncol. 2023, 117, e447–e448. [Google Scholar] [CrossRef]
  86. Guevara, B.; Cullison, K.; Maziero, D.; Azzam, G.A.; De La Fuente, M.I.; Brown, K.; Valderrama, A.; Meshman, J.; Breto, A.; Ford, J.C.; et al. Simulated Adaptive Radiotherapy for Shrinking Glioblastoma Resection Cavities on a Hybrid MRI-Linear Accelerator. Cancers 2023, 15, 1555. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  87. Paschali, M.; Chen, Z.; Blankemeier, L.; Varma, M.; Youssef, A.; Bluethgen, C.; Langlotz, C.; Gatidis, S.; Chaudhari, A. Foundation Models in Radiology: What, How, Why, and Why Not. Radiology 2025, 314, e240597. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  88. Putz, F.; Beirami, S.; Schmidt, M.A.; May, M.S.; Grigo, J.; Weissmann, T.; Schubert, P.; Höfler, D.; Gomaa, A.; Hassen, B.T.; et al. The Segment Anything foundation model achieves favorable brain tumor auto-segmentation accuracy in MRI to support radiotherapy treatment planning. Strahlenther. Onkol. 2025, 201, 255–265. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  89. Kebaili, A.; Lapuyade-Lahorgue, J.; Vera, P.; Ruan, S. Multi-modal MRI synthesis with conditional latent diffusion models for data augmentation in tumor segmentation. Comput. Med. Imaging Graph. 2025, 123, 102532. [Google Scholar] [CrossRef] [PubMed]
  90. Baroudi, H.; Brock, K.K.; Cao, W.; Chen, X.; Chung, C.; Court, L.E.; El Basha, M.D.; Farhat, M.; Gay, S.; Gronberg, M.P.; et al. Automated Contouring and Planning in Radiation Therapy: What Is ‘Clinically Acceptable’? Diagnostics 2023, 13, 667. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Example segmentation for radiotherapy for a patient with glioblastoma. Contours are drawn in red for the GTV outlined on T1c; green for the GTV outlined on FLAIR; purple for CTV60, defined as GTV-T1c + margin (tucked away from the brainstem); outer red for PTV60, defined as CTV60 + 3 mm margin; light blue for CTV50, defined as GTV-FLAIR + margin tucked away from the contralateral brain because the falx cerebri serves as an anatomic barrier; and salmon for PTV50, defined as CTV50 + 3 mm margin. (A) T1-weighted MRI (axial view); (B) FLAIR MRI (axial view); (C) isodose level color key (cGy); (D) CT (axial view); (E) CT (coronal view); (F) CT (sagittal view); (G) pictorial representation of convolutional neural networks in radiotherapy planning for glioblastoma.
Cancers 17 03762 g001
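The CTV-to-PTV expansions described in the caption (e.g., PTV60 = CTV60 + 3 mm) are isotropic geometric growths of a binary structure mask. The following is a minimal sketch of such a margin expansion using a Euclidean distance transform, assuming scipy is available; the function name `expand_margin` and the toy single-voxel CTV are illustrative, and the sketch omits the anatomic-barrier cropping (brainstem, falx cerebri) that the clinical contours in Figure 1 apply.

```python
import numpy as np
from scipy import ndimage

def expand_margin(mask, margin_mm, voxel_mm):
    """Grow a binary structure mask isotropically by margin_mm.

    Computes, for every voxel, the Euclidean distance (in mm) to the
    nearest voxel of the structure, then keeps all voxels within the
    requested margin. `voxel_mm` is the per-axis voxel spacing.
    """
    # Distance from each background voxel to the nearest structure voxel;
    # structure voxels themselves get distance 0.
    dist = ndimage.distance_transform_edt(~mask, sampling=voxel_mm)
    return dist <= margin_mm

# Toy CTV: a single voxel at the centre of a 1 mm-isotropic grid.
ctv = np.zeros((9, 9, 9), dtype=bool)
ctv[4, 4, 4] = True
# Expanding by 3 mm yields a discrete sphere of radius 3 around it.
ptv = expand_margin(ctv, margin_mm=3.0, voxel_mm=(1.0, 1.0, 1.0))
```

In practice the expanded mask would then be intersected with anatomic constraints (e.g., cropped at the brainstem), as described for CTV60 and CTV50 above.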
Figure 2. Personalized precision medicine for patient-specific radiotherapy treatment.
Cancers 17 03762 g002
Table 1. Summary of DL models for auto-segmentation of brain tumors from 2016–2025.
| Author | Year | Study/Model | Input Data | Patients/Plans | Performance (DSC) | Notable Features |
|---|---|---|---|---|---|---|
| Peeken et al. [7] | 2019 | CNN | DTI; FLAIR | 33 | NA | Microscopic infiltration mapping aligned with clinical RT guidelines |
| Lu et al. [21] | 2021 | 3D U-Net + DeepMedic ensemble | CECT; T1C | 1288 | 0.86–0.90 | Real-time clinical SRS workflow integration |
| Xue et al. [32] | 2020 | Cascaded 3D FCN | 3D-T1-MPRAGE | 1652 | 0.85 | High-volume validation |
| Naceur et al. [33] | 2018 | Incremental XCNet | T1C; T1; T2; FLAIR | 210 | 0.88 | Novel parallel CNNs + ELOBA_λ training; 20.87 s/segmentation |
| Kickingereder et al. [17] | 2019 | ANN | T1C; T2 | 455 | 0.89–0.93 | Segmentation output < 1 min; longitudinal validation |
| Chang et al. [34] | 2019 | AutoRANO (U-Net) | T1C; FLAIR | 843 | 0.94 | Outputs RANO metrics for volumetric tracking |
| Ranjbarzadeh et al. [35] | 2021 | Cascaded CNN w/attention | T1; T1C; T2; FLAIR | 285 | 0.92, 0.87, 0.91 * | Reduced training time by 80% |
| Deng et al. [36] | 2019 | FCNN + DMDF | T1; T1C; T2; FLAIR | 100 | 0.91 | Segmentation output < 1 s |
| Zhuge et al. [37] | 2017 | HNN | T1; T1C; T2; FLAIR | 10 | 0.83 | Weighted fusion; 10 h training |
| Isensee et al. [38] | 2018 | Modified U-Net | T1C; T1; T2; FLAIR | 220 | 0.90, 0.80, 0.73 * | Top performer that year |
| Pereira et al. [39] | 2016 | CNN (3 × 3 kernel) | T1C; T1; T2; FLAIR | 65 | 0.88, 0.83, 0.77 * | BRATS-2013 top performer & BRATS-2015 2nd overall |
| Havaei et al. [40] | 2017 | Cascaded CNN (2nd DNN input) | T1C; T1; T2; FLAIR | 65 | NA | Reduced segmentation time by 30-fold |
| Soltaninejad et al. [41] | 2017 | RF + FCN | T1C; T1; T2; FLAIR | 65 | 0.88, 0.80, 0.73 * | RF & FCN ensemble |
| Hussain et al. [42] | 2018 | Deep CNN w/dual-patch input | T1C; T1; T2; FLAIR | 274 | 0.87, 0.89, 0.92 * | Novel batch normalization |
| Tian et al. [43] | 2024 | 3D U-Net | NCCT; T1C; FLAIR | 148 | 0.92, 0.87, 0.91 * | Novel two-stage 3D U-Net |
| Moradi et al. [31] | 2025 | nnU-Net ensemble | T2; FLAIR | 150 | 0.83 | Synthetic training data; BRATS-2023 and 2024 top performer |
* Whole tumor, tumor core, enhancing tumor, respectively. T1: T1-weighted MRI; T2: T2-weighted MRI; T1C: T1-weighted MRI with contrast; FLAIR: T2 fluid-attenuated inversion recovery MRI; CECT: contrast-enhanced computed tomography; NCCT: non-contrast computed tomography; RANO: response assessment in neuro-oncology; SRS: stereotactic radiosurgery; 3D-T1-MPRAGE: three-dimensional T1-weighted magnetization-prepared rapid gradient echo; DTI: diffusion tensor imaging; NA: results and full text not publicly available.
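Most studies in Table 1 report performance as the Dice similarity coefficient (DSC), the overlap metric DSC = 2|A ∩ B| / (|A| + |B|) between a predicted and a reference segmentation mask. As a reference for how such values are computed, here is a minimal NumPy sketch; the function name `dice_coefficient` and the toy masks are illustrative, not taken from any cited study.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    Returns 1.0 for perfect overlap and 0.0 for disjoint masks.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: a 4-voxel prediction against a 6-voxel reference,
# overlapping in 4 voxels -> DSC = 2*4 / (4 + 6) = 0.8
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(round(dice_coefficient(a, b), 2))  # prints 0.8
```

Reported DSC in the 0.83–0.94 range, as in Table 1, therefore indicates substantial but not complete voxel-wise agreement with the reference contour.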
Table 2. Summary of pros and cons of manual vs. AI-based radiation treatment planning.
| Task | Manual: Pros | Manual: Cons | AI: Pros | AI: Cons |
|---|---|---|---|---|
| Auto-segmentation | Physician-driven; context-aware decisions; customizable per patient | Time-consuming; intra-/inter-observer variability; inconsistent delineation standards | Faster segmentation; improved consistency; scalable across cases | May miss subtle anatomical nuances; limited generalizability; requires clinician verification |
| Dose planning | Greater human oversight; flexible for complex cases | Labor-intensive; less reproducible; susceptible to planning variability | High-speed prediction; more standardized plans; learns from large datasets | May not account for patient-specific anatomy; potential overfitting to training data |
| Biologically informed RT | Direct clinical judgment in biomarker relevance; custom-tailored escalation decisions | Limited by available validated biomarkers; not easily reproducible across institutions | Can integrate complex genomic/radiomic data; identifies non-obvious dose–response patterns | Models may lack transparency; biological relevance not always clinically validated |
| Treatment response prediction | Based on clinician experience and medical history | Subjective and inconsistent; cannot scale or track subtle data patterns | Detects hidden correlations; potential for early outcome forecasting | Risk of bias; generalizability across cohorts is limited |
| Radiogenomics integration | Precision when available; personalized to patient | Not feasible at large scale; requires multidisciplinary interpretation | Merges imaging and genetic data; hypothesis-generating at population level | Often exploratory; lacks consistent clinical validation |
| Interpretability/decision support | Transparent reasoning; based on clinical logic | May overlook complex data relationships; hard to scale to multi-omic inputs | Synthesizes multimodal data; suggests patterns not obvious to clinicians | Often black-box models; may reduce clinician trust if unexplained |

Master, R.; Rubin, N.; Sampson, J.; Yadav, K.K.; Pandita, S.; Sabbagh, A.; Krishnan, A.; Silva, P.J.; Ramos, K.S.; Gregoire, V.; et al. Advances in Artificial Intelligence for Glioblastoma Radiotherapy Planning and Treatment. Cancers 2025, 17, 3762. https://doi.org/10.3390/cancers17233762