1. Introduction
Monte Carlo (MC) simulation occupies a central place in the field of radiotherapy because it provides the most faithful representation of radiation transport available to modern computational science [1,2]. Since the earliest applications of computational physics in medicine, researchers have sought methods that can accurately describe how photons, electrons, protons, and heavy ions deposit energy in human tissue. MC simulation emerged as the preferred approach because it does not rely on simplified analytical approximations but instead models individual particle interactions based on well-established physical probability distributions [3,4]. Through the accumulation of many simulated particle histories, it is possible to reconstruct a three-dimensional dose distribution that closely reflects the true behavior of radiation in the body. This capability has allowed MC simulation to become a benchmark for validating new treatment planning systems [5,6], commissioning linear accelerators [7], and exploring advanced radiotherapy techniques [8].
As radiotherapy has evolved toward increasingly precise and conformal treatments, the importance of accurate dose calculation has grown accordingly. Modern treatment modalities require the dose distribution to be predicted with high fidelity in the presence of complex anatomical features, such as air cavities, bone interfaces, irregular surfaces, and internal organ motion [9,10,11,12]. Analytical dose calculation algorithms can struggle under these conditions because they often rely on assumptions that break down in heterogeneous regions. MC simulation, by contrast, retains accuracy even in challenging scenarios because it models the relevant physics directly. For this reason, MC simulation is frequently considered the reference standard for research in radiotherapy physics and is often used as the final authority when comparing or validating alternative dose calculation algorithms [13].
Figure 1 shows an overview of the major applications of MC simulation in radiotherapy. These applications include beam modeling, patient dose calculation, treatment planning system verification, imaging and dosimetry research, and advanced studies involving novel radiation delivery techniques. The diagram illustrates how MC methods contribute to different stages of the radiotherapy process and highlights their central role in ensuring accurate and reliable computation of dose.
Despite these strengths, MC simulation has historically been limited by its computational demands. A typical simulation must track millions or even billions of particle histories to reduce statistical uncertainty to clinically acceptable levels. Each particle undergoes numerous potential interactions, and each of these interactions must be sampled from probability distributions derived from theoretical or experimental cross-section data. Carrying out this process for a large particle population requires significant computational resources and time. On a conventional computing system, a single high-resolution MC calculation may require several minutes to complete. Although shorter runtimes are possible with highly parallel architectures [14], such resources are not always accessible in routine clinical settings.
These computational constraints have become a major challenge as radiotherapy shifts toward workflows that require rapid feedback. One of the clearest examples is image-guided adaptive radiotherapy [15]. In this approach, the patient is imaged immediately before treatment, and the treatment plan is adapted to reflect the current anatomical state. Dose calculation must therefore be performed quickly [16], often within the short window of time in which the patient is positioned on the treatment couch. Classical MC simulation typically cannot complete a full dose calculation within this timeframe. As a result, the use of MC simulation in adaptive workflows has been limited to research settings and retrospective analysis rather than real-time clinical decision-making.
The increasing incorporation of artificial intelligence (AI) into radiotherapy has created new opportunities to address this limitation [17]. AI, particularly in the form of deep learning, has demonstrated the ability to learn complex spatial and physical relationships from large datasets. When trained on MC-generated dose distributions, deep learning models can reproduce many essential features of a full simulation while requiring only a fraction of the computation time. Once trained, such models can evaluate dose almost instantly, which opens the possibility of performing near-real-time dose calculations during treatment planning or adaptive workflows [18].
The combination of AI and MC simulation provides a path toward computational frameworks that are both accurate and efficient. AI can act as a surrogate for rapid dose estimation, while MC simulation remains the authoritative standard against which predictions are validated [19]. This hybrid strategy has the potential to transform radiotherapy computation by enabling real-time treatment adaptation, improving the speed of planning, and supporting the study of advanced delivery techniques. The purpose of this entry is to present an overview of these hybrid methods, their scientific foundations, and their relevance to the future of radiotherapy practice and research.
2. MC Simulation in Radiotherapy
MC simulation is widely regarded as the most accurate method for computing radiation dose in radiotherapy because it directly models the underlying physics that governs particle interactions in matter. The method originated from early computational work in the mid-twentieth century and has since become indispensable in fields such as high-energy physics, nuclear engineering, radiation protection, and medical physics [20]. In radiotherapy, its central objective is to predict how photons, electrons, or other therapeutic particles deposit energy within patient anatomy, which is essential for ensuring that treatment delivers a curative dose to the tumor while sparing surrounding healthy tissues.
The defining feature of MC simulation is its reliance on statistical sampling to recreate the random nature of radiation transport [21]. When a photon or electron beam enters the body, each particle follows a unique trajectory determined by probabilistic interactions such as scattering, energy loss, absorption, and the generation of secondary particles. MC simulation explicitly reproduces these processes by sampling interaction probabilities from accurate cross-section data. When repeated over many particle histories, this produces a detailed and physically realistic description of the dose distribution. Unlike analytical methods, which approximate dose deposition based on simplified equations, MC simulation naturally handles the full complexity of real clinical scenarios, including heterogeneities, irregular geometries, and complex beam arrangements [22].
MC methods gained widespread use in radiotherapy with the development of general-purpose simulation packages, among which the EGSnrc system has become especially prominent [23]. EGSnrc includes a comprehensive set of photon and electron interaction models and provides users with tools that can accurately simulate clinical linear accelerator beams. One of its key components, BEAMnrc [24], allows the treatment head of a linear accelerator to be modeled by combining modular elements that represent targets, scattering foils, primary collimators, multi-leaf collimators, and other beam-shaping structures. This detailed modeling produces a phase space file that describes the energy, position, and direction of particles emerging from the accelerator, thereby representing the output field that reaches the patient.
The next stage in the simulation process typically uses DOSXYZnrc [25], which imports the phase space file and computes the dose distribution within a three-dimensional phantom or patient-specific geometry. These geometries are often derived from computed tomography scans that have been converted into voxelized material maps. The conversion process accounts for the density and composition of tissues so that the correct interactions are applied during the simulation. Through this workflow, MC simulation supports precise modeling of dose for both megavoltage photon beams and therapeutic electron beams, and it has been used extensively in research to evaluate new treatment techniques and validate clinical planning systems [26,27]. For example, Figure 2 illustrates the depth dose enhancement ratio (DDER) obtained from MC simulations for gold nanoparticle-enhanced radiotherapy. Results are shown for a 10 MV flattening filter (FF) photon beam (Figure 2a) and a flattening filter-free (FFF) photon beam at varying gold nanoparticle concentrations. The DDER is defined as the ratio of the depth dose with nanoparticles to the depth dose in water (without nanoparticles) at the same point in the phantom [27].
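Under this definition, computing the DDER from two matched depth-dose curves is a pointwise division, as the short sketch below shows. The numerical values are made up for illustration and are not taken from Figure 2.

```python
def dder(dose_with_np, dose_water):
    """Depth dose enhancement ratio at matched depths.

    Both arguments are depth-dose curves sampled at the same depths;
    in practice the values come from two MC runs, with and without
    nanoparticles in the phantom.
    """
    if len(dose_with_np) != len(dose_water):
        raise ValueError("curves must be sampled at the same depths")
    return [d_np / d_w for d_np, d_w in zip(dose_with_np, dose_water)]

# Illustrative (made-up) values: a modest enhancement near the surface.
ratios = dder([1.05, 1.10, 1.02, 1.00], [1.00, 1.00, 1.00, 1.00])
```

A DDER above 1 at a given depth indicates local dose enhancement from the nanoparticles; a value of 1 means no change relative to water.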
Although the accuracy of MC simulation is unparalleled, its computational cost is substantial because each particle history is independent and must be simulated separately. A clinically robust simulation often requires millions of histories to ensure that statistical noise is sufficiently low across the entire dose distribution. In regions where the dose is low or steep gradients exist, even more histories may be required to achieve acceptable precision. The need for such large computations leads to long runtimes when using a single processor, and while parallelization can reduce these times [28], access to large computing clusters is often impractical in clinical environments.
This computational burden becomes particularly problematic in workflows that demand rapid results. For example, in image-guided adaptive radiotherapy, clinicians acquire new imaging immediately before treatment and must determine whether the existing plan remains valid. If the anatomy has shifted or deformed, a new dose calculation may be needed to assess whether the planned dose is still appropriate. Performing a full MC simulation in this context would require several minutes, which exceeds the time available during a treatment session. As a result, MC simulation has been used primarily in research or retrospective studies rather than as a routine component of adaptive clinical practice.
Despite this limitation, MC simulation continues to serve as the foundation for accurate dosimetry in radiotherapy. It plays a critical role in the calibration and commissioning of radiotherapy systems, in the assessment of new treatment planning algorithms, and in studies that require a high degree of physical accuracy. Furthermore, MC-generated datasets are essential for training AI models that aim to approximate dose distributions [29,30]. These models rely on MC simulation as the source of truth against which predictions are compared.
The integration of MC simulation with AI in hybrid frameworks reflects the recognition that neither method alone can fully meet the demands of modern radiotherapy. MC provides accuracy but lacks speed, while AI provides speed but requires MC for validation and training. Understanding the principles, strengths, and limitations of MC simulation is therefore essential for appreciating how these hybrid systems achieve a balance between computational efficiency and dosimetric reliability.
3. AI Methods for Approximating MC Dose Calculation
AI has emerged as a powerful tool for enhancing radiotherapy computation because it can learn complex relationships from data and reproduce them with remarkable speed. Among its many applications in medical physics, one of the most promising is the approximation of MC-generated dose distributions [31]. This is possible because dose patterns produced by MC simulation, although governed by complex physics, contain spatial and statistical structures that can be recognized and emulated by advanced learning algorithms. Once trained, these AI models can produce dose estimates that resemble MC calculations while requiring only a fraction of the computation time [32]. Such a capability directly addresses a major limitation of MC methods and supports the practical use of accurate dosimetry in time-sensitive radiotherapy settings.
Deep learning is the principal branch of AI that enables this form of dose prediction [33]. Deep learning models consist of layered computations that progressively extract features from input data. When applied to radiotherapy, these inputs often include computed tomography images, voxelized phantoms, spatial dose masks, and representations of radiation beam parameters. Deep learning models can identify patterns within these data that correspond to the behavior of dose deposition, such as how tissue density affects electron scatter or how beam geometry influences energy distribution. This ability to learn both local and global relationships makes deep learning especially suitable for reproducing the three-dimensional dose patterns generated by MC simulation [34].
Figure 3 illustrates the architecture of Deep Profiler, a multi-task deep neural network that incorporates radiomics into its training to generate an image-derived fingerprint. This fingerprint enables prediction of time-to-event treatment outcomes while approximating conventional radiomic features. The model was validated using an independent study cohort [33].
Among deep learning architectures, the three-dimensional U-Net has become one of the most widely used for dose prediction [35]. The U-Net structure consists of an encoding path that compresses information from the input data and a decoding path that reconstructs this information into a high-resolution output. The encoding stages allow the model to recognize broad contextual structures, such as regions of differing density or large-scale anatomical contours. The decoding stages then restore the spatial detail required to produce a voxel-by-voxel representation of dose. Through skip connections that link corresponding levels of the encoding and decoding paths, the model preserves fine anatomical details while also integrating information about larger spatial patterns.
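The size bookkeeping behind this encoder-decoder symmetry can be made concrete. The sketch below, assuming a 64-voxel axis and four pooling levels (a common but not universal choice), computes the spatial extent of the feature maps at each level and shows why skip connections require matching encoder and decoder sizes.

```python
def unet3d_shapes(size=64, levels=4):
    """Spatial extent of feature maps along one axis of a 3D U-Net.

    Each encoder level halves the grid and each decoder level doubles it,
    so the skip connection at a given level is valid only because the
    encoder and decoder sizes there are identical. Channel counts are
    omitted for brevity.
    """
    if size % (2 ** levels) != 0:
        raise ValueError("input size must be divisible by 2**levels")
    encoder = [size >> i for i in range(levels + 1)]   # e.g. 64,32,16,8,4
    decoder = list(reversed(encoder))                   # e.g. 4,8,16,32,64
    return encoder, decoder

enc, dec = unet3d_shapes()
```

The divisibility check mirrors a practical constraint: inputs such as the 64 × 64 × 64 tensor of Figure 4 are chosen (or padded) so that repeated halving never produces a fractional grid.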
Figure 4 shows the architecture of an attention-gated three-dimensional U-Net model for three-dimensional dose distribution prediction. The model accepts a 12-channel tensor of size 64 × 64 × 64 as input to predict the complete 3D dose distribution in a head-and-neck cancer treatment plan [35].
To improve prediction accuracy, many implementations use a cascaded design consisting of two sequential U-Net models [36]. The first model generates a coarse approximation of the dose distribution based on the major anatomical features and beam geometry. The second model refines this estimate by focusing on the regions where finer detail is needed, such as near the penumbra or in heterogeneous tissues. This two-stage process mirrors how a human observer might first understand the broad features of a dose distribution and then examine finer structures. The cascaded approach also stabilizes the training process by allowing the first network to learn general dose patterns before the second network attempts to correct local discrepancies.
Training deep learning models for dose prediction requires extensive datasets that pair anatomical information with accurate MC-generated dose distributions. These datasets must reflect the diversity of clinical situations that the model encounters. To achieve this, researchers often employ methods to expand the variety of available training examples. For example, anatomical deformation techniques can modify computed tomography images to simulate changes in organ position or shape [37]. Random sampling of beam angles, field sizes, and energy levels broadens the representation of beam geometries. This augmented diversity helps the model generalize to new patients and prevents it from overfitting to a narrow set of training examples.
The learning process is guided by loss functions that quantify the difference between the predicted dose distribution and the reference MC result. These functions may incorporate simple voxel-wise differences or more sophisticated spatial comparisons that encourage the model to reproduce both global dose patterns and local gradients. During training, the deep learning model repeatedly updates its internal parameters to reduce the loss, gradually aligning its predictions with the MC ground truth. The training process continues until the model reaches a point where further improvements are minimal, indicating that it has learned the target mapping from anatomy and beam configuration to dose distribution [38].
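A minimal sketch of such a loss, combining a voxel-wise mean absolute difference with a simple finite-difference gradient term, is shown below. The flat 1D lists, the specific gradient term, and the weight `w_grad` are illustrative choices only; production implementations operate on 3D tensors inside a deep learning framework.

```python
def dose_loss(pred, ref, w_grad=0.5):
    """Voxel-wise MAE plus a 1D gradient-difference term.

    `pred` and `ref` are flat lists of voxel doses (a real implementation
    would use 3D tensors and framework autograd); `w_grad` weights the
    gradient term that penalizes mismatched local dose falloff.
    """
    n = len(pred)
    mae = sum(abs(p - r) for p, r in zip(pred, ref)) / n
    grad = sum(
        abs((pred[i + 1] - pred[i]) - (ref[i + 1] - ref[i]))
        for i in range(n - 1)
    ) / (n - 1)
    return mae + w_grad * grad

loss = dose_loss([1.0, 0.8, 0.5], [1.0, 0.9, 0.5])
```

The gradient term is what pushes the network toward reproducing steep dose falloff correctly, not just matching voxel values on average.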
Once trained, AI models can produce dose predictions in a time frame suitable for real-time or interactive clinical use. A complete three-dimensional dose distribution can often be generated in seconds using standard computational hardware. This level of speed transforms the possibilities for adaptive radiotherapy, interactive plan exploration, and rapid dose verification. Although the AI prediction is not identical to the MC simulation, it is sufficiently accurate for many planning and decision-making tasks. When higher precision is required, MC simulation can still be performed selectively, retaining its role as the definitive standard.
Representative implementations reported in the literature have used training datasets ranging from approximately 50 to several hundred patients, depending on the treatment site and modality [31,32,33,34,35,36]. Three-dimensional U-Net–based architectures trained on MC–generated dose distributions have demonstrated mean absolute dose differences on the order of 1–3% of prescription dose and gamma pass rates exceeding 90–95% under 3%/3 mm criteria in selected clinical scenarios [35,36]. Training is typically performed using GPU-based systems over periods ranging from several hours to a few days, whereas inference time for full three-dimensional dose prediction is generally on the order of seconds [31,32]. These findings illustrate the feasibility of AI as a surrogate model while underscoring the importance of physics-based validation in out-of-distribution conditions.
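The gamma pass rate cited above can be illustrated with a simplified one-dimensional, global 3%/3 mm implementation. Clinical gamma evaluation is three-dimensional and interpolated, so the brute-force sketch below is only a conceptual stand-in.

```python
import math

def gamma_pass_rate(ref, pred, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Fraction of reference points with gamma <= 1 (global 3%/3 mm).

    `ref` and `pred` are 1D dose profiles on the same grid; the dose
    criterion is taken relative to the reference maximum (global
    normalization). A brute-force search over comparison points is
    used, which is fine for short profiles.
    """
    d_max = max(ref)
    passed = 0
    for i, d_ref in enumerate(ref):
        best = math.inf
        for j, d_pred in enumerate(pred):
            dist = (i - j) * spacing_mm
            ddiff = d_pred - d_ref
            gamma_sq = (dist / dta_mm) ** 2 + (ddiff / (dd * d_max)) ** 2
            best = min(best, gamma_sq)
        if best <= 1.0:
            passed += 1
    return passed / len(ref)

# Small deviations well inside 3% of the maximum all pass.
rate = gamma_pass_rate([1.0, 0.9, 0.5, 0.2], [1.0, 0.88, 0.52, 0.2])
```

The key property of gamma analysis is visible here: a point can pass either because the dose agrees locally or because a nearby point within the distance-to-agreement tolerance has matching dose.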
In terms of computational requirements, training three-dimensional deep learning models for dose prediction is commonly performed on modern GPU hardware, as reported in recent AI-based radiotherapy dose prediction studies [35,36]. Once trained, deployment does not require high-performance computing infrastructure and can often be executed on standard clinical workstations equipped with a single GPU [31,32,35]. Transferability across equipment or treatment sites typically requires either retraining or fine-tuning of the model using site-specific MC datasets, particularly when beam energy spectra, treatment planning systems, or imaging protocols differ. Reported limitations and failure modes are generally associated with out-of-distribution anatomy, uncommon beam configurations, or insufficient representation of extreme cases in the training dataset [31,34]. These considerations highlight that while AI-based models are computationally efficient at inference, their robustness depends on appropriate dataset design and validation.
AI does not aim to replace MC simulation entirely. Instead, it provides a complementary tool that extends the usefulness of MC methods into clinical scenarios where time has historically been a limiting factor. By enabling fast approximation of dose distributions, AI allows clinicians and researchers to benefit from MC-level accuracy without the corresponding computational delay [39]. This synergy between data-driven learning and physics-based simulation forms the basis for hybrid frameworks, which combine both methods to achieve efficient and reliable dose calculation in modern radiotherapy.
3.1. Limitations and Uncertainty in AI-Based Dose Prediction
Despite their computational advantages, AI-based dose prediction methods have important limitations that must be explicitly recognized [31,34]. A primary concern is limited generalization outside the training domain. Deep learning models are inherently data-driven and may perform reliably only when new patient anatomy, beam geometry, and imaging characteristics fall within the distribution of anatomical variability, target volume size, organ-at-risk geometry, and beam configuration parameters represented in the training dataset. In practice, similarity is evaluated indirectly through validation performance on independent datasets, statistical distribution comparisons of anatomical and dosimetric features, and uncertainty estimation metrics that identify out-of-distribution cases. When applied to different treatment sites, alternative linear accelerator models, or uncommon beam geometries, predictive accuracy may degrade due to domain shift [32].
Furthermore, the reliability of AI models depends critically on the quality and representativeness of the MC-generated datasets used for training [31,34]. Any systematic bias, insufficient anatomical diversity, or restricted range of beam parameters in the training data can be propagated into the learned model. In low-dose regions, steep dose gradients, or extreme heterogeneities, AI models may produce unstable or non-physical predictions because they do not explicitly enforce radiation transport physics [34].
Another ongoing challenge is interpretability. Deep neural networks operate as high-dimensional function approximators, and the internal reasoning behind a specific voxel-level prediction is often difficult to explain, which may complicate clinical validation and regulatory acceptance [31].
For these reasons, hybrid AI–MC frameworks retain MC simulation as the reference standard for verification, uncertainty assessment, and correction in scenarios where AI predictions may be less reliable [40,41]. Explicit consideration of these limitations is essential for the responsible clinical integration of AI-assisted dose calculation. Emerging approaches assess domain shift using feature-space distance metrics, uncertainty thresholds, or degradation in gamma index performance on validation cohorts, thereby providing quantitative indicators of when retraining or model adaptation may be necessary.
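As a concrete (and deliberately crude) stand-in for such feature-space distance metrics, the sketch below flags new cases whose scalar feature value, e.g., a hypothetical target volume, lies far outside the training distribution in z-score terms. Real systems use richer multivariate features and calibrated thresholds.

```python
from statistics import mean, stdev

def ood_flags(train_features, new_features, z_threshold=3.0):
    """Flag new cases whose scalar feature (e.g., target volume in cc)
    lies more than `z_threshold` standard deviations from the training
    mean. This is a crude univariate stand-in for richer feature-space
    distance metrics such as Mahalanobis distance."""
    mu, sigma = mean(train_features), stdev(train_features)
    return [abs(x - mu) / sigma > z_threshold for x in new_features]

# Hypothetical training-set target volumes (cc) and two new cases:
# one typical, one far outside the training range.
train = [100, 110, 95, 105, 98, 102, 107, 99]
flags = ood_flags(train, [103, 250])
```

A flagged case would not necessarily be rejected; in a hybrid framework it would instead be routed to full MC recalculation or human review.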
3.2. General Implementation Framework for AI–MC Dose Prediction
Although specific implementations vary across institutions and treatment sites, AI-assisted MC dose prediction typically follows a structured workflow [31,34,40]. First, a dataset is generated consisting of patient imaging (e.g., CT scans), structure contours, beam configuration parameters, and corresponding high-fidelity MC dose distributions [31,40]. These MC datasets serve as ground truth for supervised learning.
The deep learning model, often based on three-dimensional U-Net or cascaded architectures, is trained to map anatomical and beam inputs to voxel-level dose output [35,36]. Training is performed using stochastic gradient–based optimization with loss functions that quantify voxel-wise dose differences and spatial consistency relative to the MC reference [34,38].
Reported dataset sizes vary substantially depending on the clinical task and anatomical complexity [31,34]. For relatively constrained treatment scenarios, such as site-specific stereotactic radiosurgery (SRS) or superficial electron treatments with limited beam configurations and reduced anatomical variability, models have been trained using datasets on the order of several dozen cases when the parameter space is restricted [35,36]. In contrast, for more heterogeneous treatment sites such as head-and-neck, pelvic, or multi-field intensity-modulated radiotherapy (IMRT), where target volumes, organ-at-risk geometries, and beam arrangements exhibit greater diversity, substantially larger datasets, often in the hundreds of cases, are typically required to achieve stable generalization [31,34]. Dataset scale, therefore, reflects task complexity, anatomical variability, and beam configuration diversity rather than a fixed numerical threshold.
Following training, validation is conducted on independent test datasets to evaluate metrics such as mean absolute dose difference, gamma pass rate, and dose–volume histogram agreement [35,36]. When deployed clinically, the trained model performs rapid inference (typically seconds) [31,32], after which hybrid systems may apply selective MC recalculation in regions identified as having high uncertainty or clinical importance [40].
Scalability across anatomical sites or treatment platforms generally requires either retraining or fine-tuning of the network using site-specific MC datasets, particularly when beam spectra, planning systems, or imaging protocols differ. Transferability is therefore achievable but depends on the appropriate representation of the target domain in the training data.
4. Hybrid AI and MC Frameworks for Dose Calculation
The integration of AI with MC simulation has led to structured hybrid computational pipelines in which data-driven prediction and physics-based recalculation are explicitly linked within a unified workflow [40,41]. Rather than operating as independent techniques, AI and MC serve complementary and sequential roles. AI provides rapid surrogate dose estimation based on models trained with high-fidelity MC data, while MC simulation is incorporated for targeted verification, refinement, and uncertainty management [42]. The defining characteristic of hybrid systems is therefore not the parallel use of two methods, but their coordinated interaction within adaptive and planning workflows.
Hybrid approaches are organized around three operational components: (i) AI-based full-volume dose prediction; (ii) uncertainty or risk assessment to identify regions requiring higher precision; and (iii) selective MC recalculation for verification or correction. This structured interaction enables computational acceleration without relinquishing the physical rigor associated with MC simulation.
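These three components can be expressed as a control-flow sketch. The functions `ai_predict`, `ai_uncertainty`, and `mc_recalculate` are placeholders, stubbed here with fixed values so the pipeline runs end to end; no real model or MC engine is implied.

```python
def hybrid_dose(ct_volume, beams, uncertainty_threshold=0.05):
    """Hybrid AI-MC pipeline: (i) fast AI prediction, (ii) voxel-wise
    uncertainty assessment, (iii) selective MC recalculation in the
    voxels whose uncertainty exceeds the threshold."""
    dose = ai_predict(ct_volume, beams)                 # (i) seconds
    sigma = ai_uncertainty(ct_volume, beams)            # (ii) per voxel
    risky = [i for i, s in enumerate(sigma) if s > uncertainty_threshold]
    for i in risky:                                     # (iii) targeted MC
        dose[i] = mc_recalculate(ct_volume, beams, voxel=i)
    return dose, risky

# Minimal stubs so the control flow is executable end to end.
def ai_predict(ct, beams):            return [1.0, 0.8, 0.5, 0.2]
def ai_uncertainty(ct, beams):        return [0.01, 0.02, 0.10, 0.01]
def mc_recalculate(ct, beams, voxel): return 0.55      # "true" MC value

dose, recalculated = hybrid_dose(None, None)
```

The efficiency argument is visible in the structure: MC cost scales with the number of flagged voxels rather than the full volume, while the unflagged voxels keep the fast AI estimate.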
A typical hybrid workflow begins with the acquisition of patient imaging, such as computed tomography or cone beam computed tomography, which is converted into a voxelized phantom containing material and density information. Beam parameters are provided to the AI model, which produces a preliminary dose distribution. This estimate supports interactive tasks, including beam configuration exploration, anatomical adaptation assessment, and preliminary plan evaluation [43]. Regions identified as high-risk, such as steep dose gradients or heterogeneous tissue interfaces, may then undergo targeted MC recalculation to verify or refine the AI prediction [44]. This selective verification strategy reduces the overall computational burden while preserving accuracy in clinically critical areas.
In representative hybrid implementations, AI is first used to generate a full three-dimensional dose estimate within seconds, followed by selective MC recalculation in regions identified as having elevated uncertainty or steep gradients [40,41]. Such strategies have been reported to substantially reduce overall computational time compared with full-volume MC recalculation while maintaining clinically acceptable dosimetric agreement. These implementations exemplify a structured division of labor between rapid surrogate modeling and high-fidelity physical verification.
Hybrid systems also incorporate iterative feedback mechanisms. Newly generated MC simulations can be added to the training dataset to improve model robustness and reduce prediction error over time [45]. This dynamic relationship enables continuous refinement of the AI surrogate while preserving MC simulation as the ground truth for quality assurance and recalibration.
Hybrid AI–MC frameworks can be categorized into several functional models:
- (i) AI surrogate models with post hoc MC verification;
- (ii) AI-guided selective MC recalculation in regions of elevated uncertainty;
- (iii) Iterative refinement systems in which AI predictions and MC simulations inform each other during training or adaptive workflows.
Although implementation details vary, these models share a common objective: extending the practical usability of MC-based dosimetry in computationally constrained environments.
Applications of hybrid computation span multiple treatment modalities. In photon therapy, hybrid systems facilitate rapid evaluation of beam angle or energy modifications while reserving MC simulation for verification near organs-at-risk or in heterogeneous regions [46]. In electron therapy, where dose distributions are sensitive to surface irregularities and air gaps, AI enables rapid estimation of daily anatomical variations, with MC confirmation applied when high precision is required [47]. In adaptive radiotherapy, AI supports rapid dose recalculation and workflow efficiency, while MC simulation provides independent verification of updated plans and reconstructed dose distributions [48].
Figure 5 illustrates the complementary roles of AI-driven automation and MC-based verification within adaptive treatment processes.
Hybrid frameworks, therefore, represent an integration of computational acceleration and physics-based modeling rather than a replacement of one method by the other. By combining rapid surrogate prediction with selective high-fidelity recalculation, these systems aim to support efficient planning, adaptive decision-making, and research applications while maintaining dosimetric rigor.
Uncertainty Quantification in Hybrid AI–MC Frameworks
Uncertainty quantification (UQ) is a critical consideration in the clinical deployment of AI-assisted dose calculation [31]. In radiotherapy, treatment decisions are sensitive to relatively small variations in predicted dose, particularly near organs-at-risk and in regions of steep dose gradients [49]. Consequently, understanding not only the predicted dose distribution but also the associated uncertainty is essential for safe clinical integration.
In AI-based dose prediction, uncertainty may arise from several sources [31]. Data uncertainty reflects variability in patient anatomy, imaging quality, and beam configurations represented in the training dataset. Model uncertainty arises from limitations in the neural network architecture and its learned parameters, particularly when applied to out-of-distribution cases. In addition, numerical and statistical uncertainty remains inherent in the MC simulations used to generate training data, especially when simulations are performed with reduced particle histories to accelerate data generation [40].
Several methodological approaches have been proposed to estimate uncertainty in deep learning models, including ensemble modeling, MC dropout techniques, Bayesian neural networks, and variance estimation across repeated stochastic forward passes [31]. Such approaches can provide voxel-wise uncertainty maps that identify regions where predictions may be less reliable. In hybrid AI–MC frameworks, these uncertainty indicators can be used to guide selective MC recalculation in high-risk regions, thereby balancing computational efficiency with dosimetric reliability [40].
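The variance-over-forward-passes idea can be sketched directly. Here `noisy_prediction` is a stub standing in for one pass of a dropout-enabled network; in practice each pass is a full network evaluation with dropout kept active at inference time.

```python
import random
from statistics import mean, pvariance

def uncertainty_map(predict_once, n_passes=30, rng=None):
    """Voxel-wise mean and variance over repeated stochastic forward
    passes (as in MC dropout). `predict_once` stands in for one pass
    of a dropout-enabled network; high variance marks voxels where the
    model is unsure."""
    rng = rng or random.Random(0)
    samples = [predict_once(rng) for _ in range(n_passes)]
    voxels = list(zip(*samples))               # group values per voxel
    means = [mean(v) for v in voxels]
    variances = [pvariance(v) for v in voxels]
    return means, variances

def noisy_prediction(rng):
    # Stub: voxel 1 is "hard" (high stochastic variability), voxel 0 easy.
    return [1.0 + rng.gauss(0, 0.01), 0.5 + rng.gauss(0, 0.2)]

means, variances = uncertainty_map(noisy_prediction)
```

In a hybrid framework, the voxels with the largest variance would be exactly those forwarded to selective MC recalculation.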
By explicitly incorporating uncertainty assessment into hybrid systems, clinicians can better determine when AI predictions are sufficient and when high-fidelity MC verification is required [40]. This integration strengthens the safety, transparency, and robustness of AI-assisted radiotherapy workflows.
5. Applications and Future Directions
Hybrid AI-MC methods have considerable potential to influence many aspects of radiotherapy practice and research. Their ability to combine physical accuracy with computational efficiency creates opportunities to improve treatment planning, support adaptive decision-making, and enable new treatment paradigms that require rapid and reliable dose estimation. As radiotherapy continues to advance in complexity, hybrid frameworks offer a means to keep computational methods aligned with clinical needs. They also provide a foundation for future innovations that require both fast computation and high-fidelity modeling.
One of the most significant applications of hybrid methods is image-guided adaptive radiotherapy. In adaptive workflows, patient anatomy is imaged immediately before treatment, and the dose distribution must often be recalculated to determine whether the existing plan remains appropriate [
49]. Even small changes in the position of organs or tumor volume can alter the delivered dose in meaningful ways [
50]. Classical MC simulation cannot generate a new dose distribution quickly enough for real-time decision-making during a treatment session. AI, however, can approximate the effect of anatomical changes within seconds. This enables clinicians to rapidly evaluate whether the dose coverage of the target remains adequate or whether nearby organs at risk are receiving increased exposure. Once this initial assessment is made, MC simulation can be used selectively to confirm the dose in areas where the consequences of error are most significant [
51]. This workflow provides both speed and reliability, which are essential for practical clinical adaptation. For example, AI-based dose recalculation following cone-beam CT acquisition has been reported to be achievable within seconds, whereas full MC recalculation for adaptive quality assurance may require several minutes, depending on the number of particle histories and the hardware configuration [
49,
51]. Hybrid approaches that apply MC verification selectively can therefore maintain total adaptation time within clinically reported on-couch decision windows of approximately 10–15 min [
51]. These findings illustrate how hybrid computation aligns computational constraints with adaptive workflow requirements. At the same time, reported studies indicate that models trained for one anatomical site do not automatically generalize to others without retraining or fine-tuning, underscoring the importance of site-specific data representation for reliable cross-domain deployment.
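The selective-verification idea described above can be sketched as a simple triage routine: AI predictions run everywhere, and full MC recalculation is scheduled only for the most uncertain regions that still fit within the on-couch time budget. All region names, per-region timings, and thresholds below are hypothetical illustrations, not clinically validated values.

```python
# Hypothetical sketch: route regions to fast AI prediction everywhere and
# schedule full MC verification only where uncertainty is high, without
# exceeding the on-couch adaptive decision window.

AI_SECONDS_PER_REGION = 2            # AI inference is near-instantaneous
MC_SECONDS_PER_REGION = 180          # full MC recalculation is expensive
ON_COUCH_BUDGET_SECONDS = 15 * 60    # ~10-15 min on-couch decision window

def plan_verification(regions, uncertainty_threshold=0.05):
    """Select regions for MC verification, highest uncertainty first,
    stopping before the cumulative time exceeds the on-couch budget."""
    elapsed = len(regions) * AI_SECONDS_PER_REGION   # AI runs on all regions
    to_verify = []
    candidates = sorted(regions.items(), key=lambda kv: kv[1], reverse=True)
    for name, unc in candidates:
        fits = elapsed + MC_SECONDS_PER_REGION <= ON_COUCH_BUDGET_SECONDS
        if unc > uncertainty_threshold and fits:
            to_verify.append(name)
            elapsed += MC_SECONDS_PER_REGION
    return to_verify, elapsed

# Example: relative uncertainty per region (illustrative values only).
regions = {"target": 0.02, "spinal_cord": 0.09, "parotid": 0.06, "lung": 0.03}
verify, total_time = plan_verification(regions)
```

Under these assumed timings, two high-uncertainty regions are verified by MC while total adaptation time stays well inside the budget, mirroring the trade-off described in the text.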
Hybrid frameworks also support a more efficient and interactive approach to treatment planning. Traditional planning often requires planners to test different beam configurations repeatedly, each requiring a new dose calculation. In some cases, this process can take hours or days. With hybrid methods, planners can explore different configurations in near real time. AI provides immediate feedback by predicting how a dose is likely to be distributed, which encourages a more exploratory and informed planning process [
52,
53]. Once a promising configuration is identified, MC simulation can be used to validate the result. This enables planners to benefit from the accuracy of MC without the time delays that would otherwise prevent its use during the early stages of planning.
Electron therapy is another area where hybrid methods may provide important advantages. Electron dose is highly sensitive to surface irregularities and tissue heterogeneities, which makes accurate dose calculation especially challenging [
54]. Hybrid systems can address this challenge by using AI to perform fast approximations of electron dose distributions. These preliminary estimates allow clinicians to evaluate whether a given electron field is suitable for the patient’s anatomy on a specific day. MC simulation can then be applied to confirm the accuracy of the prediction or to explore the effects of subtle anatomical variations [
55]. This approach may be particularly valuable for treatments in which the tumor or treatment area lies close to the skin surface.
Photon therapy also benefits from hybrid computation. In many treatments, including those for prostate, lung, and head and neck cancers, complex anatomical relationships influence how photons scatter through tissue and how dose accumulates. AI can provide rapid insight into how changes in beam angle, field size, or energy affect the dose distribution [
56]. MC simulation can then be used to verify dose in regions where accuracy is critical, such as near organs at risk. This combined strategy enables a more detailed and efficient approach to plan assessment.
Hybrid AI–MC methods are also promising for emerging research areas, including ultra-high-dose-rate radiotherapy [
57]. Ultra-high-dose-rate treatments create unique physical and biological conditions that differ from those of conventional radiotherapy. Understanding these conditions often requires sophisticated modeling of particle interactions [
58]. MC simulation is the natural tool for such tasks, but the complexity of the parameter space can make large-scale investigation time-consuming. AI can accelerate this exploratory process by providing rapid approximations that help researchers identify promising regions of interest, which can then be studied in greater detail with MC simulation [
59,
60]. This approach may support the development of new treatment strategies and improve understanding of the underlying biological effects.
Looking toward the future, the evolution of hybrid dose calculation frameworks is likely to include several important developments. One direction is the incorporation of physics-informed learning, in which AI models are structured to obey the physical principles governing radiation transport. This approach may reduce the amount of training data needed and improve the reliability of predictions in situations that differ from the training examples [
61]. Another direction involves more advanced methods for quantifying uncertainty. Understanding where AI predictions are robust and where they require MC confirmation is essential for integrating hybrid systems safely into clinical workflows. Research in uncertainty estimation may help clinicians identify regions of higher risk and decide when a full MC simulation is necessary [
62].
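A physics-informed loss of the kind described above might, for example, combine a data-fidelity term with a soft penalty that discourages predictions violating a known physical behavior. The sketch below uses simple exponential depth attenuation beyond the dose maximum as a stand-in physics constraint; the attenuation coefficient, penalty weight, and one-dimensional depth-dose setting are illustrative assumptions rather than a published formulation.

```python
import numpy as np

def physics_informed_loss(pred, target, depths, mu=0.05, w_phys=0.1):
    """Data-fidelity loss plus a soft physics penalty.

    The penalty encourages the predicted depth-dose beyond its maximum to
    follow exponential attenuation exp(-mu * depth); mu and w_phys are
    illustrative hyperparameters, not measured coefficients.
    """
    data_loss = np.mean((pred - target) ** 2)

    # Beyond the predicted dose maximum, compare against an idealized
    # exponential falloff anchored at the peak value.
    peak = int(np.argmax(pred))
    expected = pred[peak] * np.exp(-mu * (depths[peak:] - depths[peak]))
    phys_loss = np.mean((pred[peak:] - expected) ** 2)

    return data_loss + w_phys * phys_loss
```

During training, the physics term adds gradient pressure toward physically plausible falloff even in regions where training data are sparse, which is one mechanism by which such constraints can reduce data requirements.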
Hybrid systems may also be extended to other treatment modalities such as proton therapy [
63,
64]. Proton beams require careful modeling of range, scattering, and energy deposition patterns. AI models that incorporate aspects of these physical behaviors could assist in the rapid prediction of proton dose, while MC simulation remains essential for validation. Furthermore, hybrid frameworks may eventually integrate biological response modeling by combining physical dose information with predictions of tissue response or treatment outcome [
65].
The broad potential of hybrid AI and MC systems reflects a larger trend in radiotherapy toward computational approaches that support precision and adaptability. As these methods continue to mature, they may contribute to more personalized treatment strategies, improved accuracy in challenging anatomical regions, and greater confidence in the safety and effectiveness of advanced treatment techniques [
66]. Hybrid frameworks, therefore, represent an important step toward a more responsive and computationally sophisticated future in radiotherapy [
67].
Table 1 summarizes the major clinical and research applications of hybrid AI and MC methods in radiotherapy. The table highlights how these approaches support adaptive treatment, improve planning efficiency, enhance accuracy in photon and electron therapies, and enable exploration of emerging modalities such as ultra-high-dose-rate irradiation and proton therapy.
5.1. Model Maintenance and Lifecycle Management
The long-term clinical deployment of hybrid AI–MC models requires structured lifecycle management aligned with existing radiotherapy QA frameworks. Model maintenance is generally event-driven rather than time-driven. Retraining or fine-tuning is typically considered when significant changes occur in the data-generating environment, such as the introduction of a new linear accelerator model, substantial modification of beam energy spectra, changes in treatment planning system algorithms, or alterations in CT imaging protocols that affect Hounsfield unit calibration or image reconstruction characteristics.
Routine equipment maintenance that does not materially alter beam characteristics or imaging parameters does not necessarily mandate full model retraining; however, post-maintenance validation using established dosimetric benchmarks and independent QA datasets is advisable. Similarly, periodic performance monitoring may be performed using prospective validation cases to detect potential degradation in prediction accuracy over time.
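Such periodic performance monitoring could be sketched as a simple drift check over a rolling window of prospective validation cases, for example using gamma pass rates; the window size, the action threshold, and the pass-rate values below are illustrative assumptions, not recommended clinical tolerances.

```python
# Hypothetical monitoring sketch: track a dosimetric metric (e.g. the gamma
# pass rate) on prospective validation cases and flag drift when the recent
# window falls below an action threshold.

from statistics import mean

def check_for_drift(pass_rates, window=5, action_threshold=95.0):
    """Return True if the mean pass rate over the most recent `window`
    prospective cases falls below the action threshold."""
    if len(pass_rates) < window:
        return False                 # not enough prospective cases yet
    return mean(pass_rates[-window:]) < action_threshold

# Illustrative history of gamma pass rates (%) with gradual degradation.
history = [98.1, 97.6, 98.4, 97.9, 96.2, 95.0, 94.1, 93.8, 94.4, 93.9]
drift = check_for_drift(history)
```

A flagged drift would trigger the investigation, revalidation, or retraining steps described above, rather than any automatic model update.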
Model updating procedures should follow institutional governance policies consistent with medical device QA standards. This includes documentation of retraining datasets, validation metrics, independent verification using MC recalculation or phantom measurements where appropriate, and formal clinical approval prior to redeployment. In practice, implementation typically involves collaboration between medical physicists, clinical scientists, and data specialists to ensure both computational integrity and regulatory compliance.
Thus, retraining frequency is not defined by a universal schedule but is triggered by demonstrable changes in system characteristics, imaging protocols, or observed degradation in validation performance. This approach aligns hybrid AI–MC model maintenance with established radiotherapy QA principles rather than treating retraining as a routine or automatic procedure.
5.2. Clinical Governance and Professional Responsibility
The clinical updating or retraining of hybrid AI–MC models should be conducted within a structured governance framework rather than treated as a routine technical adjustment. In most institutional settings, this process involves multidisciplinary collaboration between medical physicists, clinical scientists, and professionals with expertise in data science or machine learning. The specific distribution of responsibilities may vary by institution, but retraining and model validation should occur under formal quality assurance oversight consistent with existing radiotherapy QA standards.
Updated models should undergo documented validation using independent test datasets, dosimetric verification (including MC recalculation or phantom-based measurements where appropriate), and review by qualified clinical physicists prior to clinical deployment. Final approval for clinical use should follow institutional medical device governance processes and align with local regulatory requirements.
Thus, retraining is not considered a technically trivial procedure but rather a controlled model update requiring documented validation, peer review, and clinical authorization before implementation in patient care.
6. Conclusions
Hybrid AI and MC methods represent an important advancement in the development of computational tools for radiotherapy. Their integration offers a practical response to the long-standing challenge of achieving both accuracy and speed in dose calculation. MC simulation continues to provide the most reliable description of radiation transport available to medical physics, but its computational demands limit its use in situations where rapid feedback is required. AI, by contrast, can approximate complex spatial patterns in a data-driven manner, learning statistical relationships from high-fidelity MC-generated datasets. By combining these two approaches, hybrid frameworks preserve the physical reliability of MC methods while enabling dose estimation at speeds that align with modern clinical workflows.
The adoption of hybrid frameworks has implications for many stages of the radiotherapy process. In treatment planning, hybrid systems make it possible to explore beam configurations interactively and to assess their dosimetric impact without waiting for lengthy simulations. In adaptive radiotherapy, hybrid computation enables immediate evaluation of how anatomical changes influence dose, which supports real-time decision-making during treatment sessions. In research, these frameworks facilitate the investigation of emerging treatment techniques by allowing large parameter spaces to be explored efficiently while still relying on accurate physics-based modeling for verification.
The continued development of hybrid AI and MC methods is likely to strengthen the role of computation in guiding clinical decisions. Advances in machine learning architecture, improved methods for quantifying uncertainty, and deeper incorporation of physical principles into AI models will contribute to more robust and more transparent frameworks. These developments may allow hybrid systems to handle increasingly complex clinical conditions and to provide more nuanced information about dose distribution and treatment quality.
As radiotherapy continues to evolve toward greater personalization and precision, the importance of accurate and efficient dose calculation will only grow. Hybrid AI and MC frameworks offer a promising foundation for meeting these demands. They provide a path toward treatment systems that are both responsive and scientifically rigorous, and they illustrate the value of combining physics-based simulation with data-driven modeling. The field is still in a state of active development, but the potential of these hybrid approaches to advance radiotherapy practice and research is substantial and likely to influence the discipline for many years to come.