Dose Calculation Algorithms for External Radiation Therapy: An Overview for Practitioners

Radiation therapy (RT) is a constantly evolving therapeutic technique; improvements are continuously being introduced in both methodological and practical aspects. Among the features that have undergone a huge evolution in recent decades, dose calculation algorithms are still rapidly changing. This process is propelled by the awareness that the agreement between the delivered and calculated doses is of paramount relevance in RT, since it can largely affect clinical outcomes. The aim of this work is to provide an overall picture of the main dose calculation algorithms currently used in RT, summarizing their underlying physical models and mathematical bases, and highlighting their strengths and weaknesses with reference to the most recent studies on algorithm comparisons. This handy guide is meant to provide a clear and concise overview of the topic, helping clinical medical physicists to perform their responsibilities more effectively and efficiently, increasing patient benefits and improving the overall quality of the management of radiation treatment.


Introduction
Technological advances in external beam radiation therapy (RT) allow high-precision dose delivery to the target volume and improved control of side effects [1][2][3].
A key requirement of modern RT techniques is the accurate calculation of dose, in order to assure treatment quality and consistency. Attaining an accurate and fast dose distribution calculation is indeed one of the most challenging tasks in modern RT. Improvements in dose calculation accuracy could positively affect the effectiveness of treatments, mainly in highly heterogeneous tissues, such as the lungs, where detailed secondary particle tracking is required [4]. This task can be very time-consuming and may lead to unaffordable computational burdens in clinical practice. Since the first works on three-dimensional (3D) dose calculation in the early 1970s [5,6], the evolution of computing power has allowed a deep rethinking of dose calculation algorithms: radiation transport calculations now reconstruct energy deposition patterns ever more accurately within a clinically acceptable computation time.
Here, we provide a general overview of modern dose calculation algorithms, through a brief description and discussion of their critical issues.

Dose Calculation Algorithm for Photon Beams
A photon dose calculation algorithm must model the energy deposition pattern induced in patient tissues by a clinical RT X-ray beam emitted from a radiation source, most commonly a linear accelerator (LINAC); the tissues are characterized by electron densities derived from computed tomography (CT) imaging of the patient. The accelerator head represents the radiation source, and patient tissues and organs stand for the interaction targets. Photons interact with matter through stochastic processes that depend on photon energy and on the atomic number and density of the medium through which they travel. The principal types of photon interaction in the megavoltage energy range are the photoelectric effect, Compton scattering and pair production, which attenuate the beam according to the Lambert-Beer law [7] with an overall linear attenuation coefficient (µ) given by the sum of the single-process attenuation coefficients.
Modern dose calculation algorithms (Figure 1) for photon beams, the so-called model-based algorithms, in contraposition to correction-based algorithms, are based on the total energy released in matter (TERMA), which is the total energy per unit of target mass released at the primary photon interaction point [8]. The differential TERMA energy distribution T_E(r) is related to the energy differential fluence ψ_E(r) of the primary photon beam by the following relation:

T_E(r) = (µ/ρ)(r, E) ψ_E(r),

where ρ(r) is the local density, estimated from the CT number value, and (µ/ρ)(r, E) is the mass attenuation coefficient at r. Integrating over the beam spectrum, we obtain the TERMA [8]:

T(r) = ∫ T_E(r) dE.

If the released energy were absorbed locally, the TERMA would coincide with the absorbed dose. Conversely, if secondary energy transport is considered, the dose in a homogeneous medium can be derived as the following convolution:

D(r) = ∫ dE ∫ T_E(s) h(r − s, E) d³s,

where h(s, E) is the normalized point spread function, or "dose spread kernel" (DSK), at energy E and position s relative to the point of scattering of the primary photon [8].
The DSK describes the energy transport by the secondary particles, including singly and multiply scattered photons, photoelectrons, Compton electrons, electron-positron pairs, bremsstrahlung photons, photonuclear particles and so on. For inhomogeneous media, the translation invariance is broken and a single DSK can no longer account for the secondary transport throughout the target. In this case, a full superposition integral needs to be carried out:

D(r) = ∫ dE ∫ T_E(s) k(r, s, E) d³s,

where k(r, s, E) is the normalized energy delivered at point r by secondary particles associated with a primary photon of energy E interacting at s.
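As a purely illustrative sketch of the TERMA-plus-kernel idea (not any clinical algorithm), the following snippet builds a 1D TERMA from an exponentially attenuated fluence and convolves it with a normalized toy kernel; `mu`, `psi0` and the kernel shape are invented values.

```python
import numpy as np

# Hypothetical 1D illustration: TERMA from an exponentially attenuated
# primary fluence, then dose as a convolution with a normalized kernel.
# mu, psi0 and the kernel shape are illustrative, not data for any real beam.

def terma_1d(depth_cm, mu=0.05, psi0=1.0, rho=1.0):
    """TERMA T(z) = (mu/rho) * psi(z), with psi(z) = psi0 * exp(-mu*z)."""
    psi = psi0 * np.exp(-mu * depth_cm)
    return (mu / rho) * psi

z = np.linspace(0.0, 30.0, 301)   # depth grid, cm
T = terma_1d(z)

# Normalized "dose spread kernel" along depth: a crude forward-peaked
# exponential standing in for a Monte Carlo-derived kernel.
s = np.linspace(0.0, 5.0, 51)
h = np.exp(-2.0 * s)
h /= h.sum()                      # kernel deposits all of its energy

# Superposition: D[n] = sum_j T[n-j] * h[j]
D = np.convolve(T, h)[: z.size]

# The normalized kernel conserves energy, up to truncation at the far edge.
print(T.sum(), D.sum())
```

Because the kernel is normalized, the integrated dose matches the integrated TERMA apart from the small amount of energy the convolution pushes past the end of the grid.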

Dose Spread Kernel
Monte Carlo (MC) simulation is the most practical technique for the calculation of energy deposition kernels. Kernels for monoenergetic photons in water have been calculated by Mohan et al. [9], Ahnesjo et al. [8] and Mackie et al. [10,11]. Different types of kernels, depending on the irradiation geometry [12,13] and technique [14][15][16], have been defined.
The approach used for the computation of the previously introduced DSK considers an incoming monoenergetic, infinitely narrow photon beam that is perpendicularly incident on a homogeneous medium (e.g., water). The beam is forced to interact at a fixed point of the target volume. All subsequent interaction events, including secondary particle paths, are simulated via the MC method [11], depicting a detailed energy deposition pattern. The DSK is nothing more than a pre-calculated function providing the average fraction of the energy carried by a single incoming photon that is absorbed in each small volume element around the primary interaction site. Each dose deposition value is calculated as the arithmetic average over many simulated histories, typically of the order of millions.
The DSK calculated in water can be rescaled to approximate the DSK in another homogeneous medium with an identical atomic number but different density [8]:

h_m(s, E) = (ρ_m/ρ_w)³ h_w((ρ_m/ρ_w) s, E),

where ρ_w and ρ_m are the water and medium densities, respectively. Mohan et al. [9] introduced a density scaling method to be applied directly to the DSK. It assumes that energy transport through the kernel from the interaction point s to the point of interest P at r occurs along a straight line, introducing ray tracing into the kernel. Along each ray, the contribution of the kernel is scaled with the average electron density encountered along the line connecting s and r. This leads to a distortion of the kernel shape: the scaled kernel appears shrunk in the direction of a high-density local inhomogeneity and stretched in the direction of a low-density one. An efficient way to deal with the DSK approach is provided by the collapsed cone convolution (CCC) algorithm, introduced in 1989 by A. Ahnesjo [17], which remains one of the most widespread algorithms in treatment planning systems (TPS). In this approach, the convolution is performed by assuming that all energy released from volume elements on the axes of coaxial cones of equal solid angle is rectilinearly transported, attenuated and deposited in elements along those axes. Density scaling of the kernels is performed during the convolution procedure to account for inhomogeneities in the irradiated volume. The CCC approach offers good agreement with measurements, except where lateral charged-particle equilibrium is missing in low-density media.
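The density scaling idea can be sketched numerically. The snippet below is a hedged illustration only: `kernel_water` is a made-up radial kernel, not Ahnesjo's actual parametrization, and the check simply confirms that rescaling distances by the density ratio, with the corresponding volumetric factor, conserves the deposited energy.

```python
import numpy as np

# Sketch of density scaling for a normalized per-unit-volume kernel:
# in a medium of density rho_m, distances shrink/stretch by
# c = rho_m / rho_w relative to water, and the kernel value scales by
# c**3 so the total deposited energy is preserved. kernel_water is an
# illustrative shape, not real MC data.

def kernel_water(r_cm):
    """Toy radial dose spread kernel in water (arbitrary units)."""
    return np.exp(-1.5 * r_cm) / np.maximum(r_cm, 1e-3) ** 2

def kernel_scaled(r_cm, rho_m, rho_w=1.0):
    """Density-scaled kernel: h_m(r) = c^3 * h_w(c * r), c = rho_m/rho_w."""
    c = rho_m / rho_w
    return c ** 3 * kernel_water(c * r_cm)

# Numerical energy-conservation check: integrate over spherical shells.
r = np.linspace(0.01, 20.0, 20000)
dr = r[1] - r[0]
E_water = np.sum(4 * np.pi * r**2 * kernel_water(r)) * dr
E_lung = np.sum(4 * np.pi * r**2 * kernel_scaled(r, rho_m=0.3)) * dr
print(E_water, E_lung)
```

For a lung-like density of 0.3 g/cm³ the scaled kernel is stretched to roughly three times the spatial extent of the water kernel while integrating to the same total energy.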

Pencil Beam Kernel
In TPS, a further simplification has been introduced by the "pencil beam (PB) kernel", obtained by integrating the DSK along the direction of propagation of the incoming narrow beam. The resulting PB kernel is subsequently convolved with the cross-sectional 2D primary fluence. The PB kernel can be obtained either through MC simulations [9,18] or through ordinary beam measurements [19,20]. This approach is less accurate than the DSK, but provides a much higher calculation speed; however, the PB algorithm suffers from inaccuracies when tissue heterogeneities are present.
Other approximations have been introduced to downsize the multidimensional scatter integral and to speed up the algorithm: under certain conditions of symmetry, the DSK function can be pre-integrated to form not only a PB kernel but also a slab kernel or a broad-beam kernel [21]. The kernel format is linked to the dimension of the scattering integral: the greater the former, the smaller the latter. Another way to reduce the complexity of the dose integral is the introduction of a polyenergetic kernel, which avoids the explicit energy dependence. Polyenergetic kernels can be calculated as a weighted sum over all the spectral components of the energy fluence.
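A polyenergetic kernel built as a spectrum-weighted sum can be sketched in a few lines; the spectrum and the monoenergetic kernels below are placeholders, not MC-derived data.

```python
import numpy as np

# Sketch: a polyenergetic kernel as the fluence-spectrum-weighted sum of
# monoenergetic kernels. Spectrum and kernel shapes are made up.

energies_MeV = np.array([1.0, 2.0, 4.0, 6.0])
spectrum_weights = np.array([0.4, 0.3, 0.2, 0.1])  # relative energy fluence

r = np.linspace(0.0, 10.0, 101)  # distance from the interaction site, cm

def mono_kernel(r_cm, E_MeV):
    """Toy normalized monoenergetic kernel: higher energies spread further."""
    k = np.exp(-r_cm / (0.5 * E_MeV))
    return k / k.sum()

poly_kernel = sum(w * mono_kernel(r, E)
                  for w, E in zip(spectrum_weights, energies_MeV))

# A weighted sum of normalized kernels stays normalized because the
# spectral weights sum to one.
print(poly_kernel.sum())
```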
To improve the accuracy of the dose estimation, while keeping the computational burden within acceptable levels, different formulations of the PB kernel have been proposed. In particular, the anisotropic analytical algorithm (AAA) is a pencil beam algorithm designed in cylindrical coordinates by Ulmer and Kaissl in 2005 [22] and subsequently improved by Tillikainen et al. in 2008 [23]. It implements a better reconstruction of the lateral energy spread pattern related to inhomogeneity in the transversal plane: the pencil beams, derived from MC simulations, are separated into lateral and depth-directed components. The lateral component is modelled by exponential functions, which permits the accurate modelling of lateral scatter in heterogeneous tissues.

Monte Carlo Methods
The MC calculation method has long been the gold standard for validating the dose calculation algorithms used in clinical radiation therapy [24]. However, long computation times made MC codes incompatible with routine clinical treatment planning. In the last decade, the development of powerful hardware, coupled with the implementation of accurate condensed history techniques [25] and variance reduction techniques [26], allowed the introduction of commercial planning systems based on MC methods [27,28].
The distinguishing feature of MC is tied to the stochastic description of the interactions between primary or secondary particles and the medium. The cross sections of the considered processes define the probability density distributions of the interactions that are randomly sampled.
The initial condition of the simulation is represented by a pre-filled phase space of the photon beam line.
Particle paths are then divided into steps whose length is given by the specific mean free path λ of the particle in the medium; for every step, a random number generator is used to select the occurring interaction. The amount of energy lost in each interaction and all secondary particles produced in the process are recorded. Afterwards, the secondary particles are also tracked, and their subsequent interactions recorded. This process continues until the energy of each particle falls below a cut-off value, at which point the residual kinetic energy is deposited locally. Each simulation process (history) has to be repeated N times; the statistical uncertainty of the computation depends on how many particle histories are simulated and usually decreases as N^(−1/2). This may result in a very intensive calculation, depending on the desired accuracy or the complexity and size of the geometry. Finally, the dose is computed by summing all the energy releases that occurred in every mass element.

Many Monte Carlo codes have been developed for particle transport; those most frequently used in RT beam modeling include BEAM [29], GEANT4 [30], EGS [25], PENELOPE [31] and XVMC [32]. MC-based dose calculation codes all have similar structures. First, an MC code must define a set of descriptive data in order to depict the system: it reproduces the set-up characteristics, modeling the LINAC head with precise details of component dimensions, geometry, locations and material composition. For the patient portion, CT data provide the morphological and chemical information, in terms of mass density, electron density and atomic composition, required to calculate the radiation transport in tissue.
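The history loop described above can be caricatured in one dimension. The sketch below, with illustrative values for `mu` and a deliberately crude local-deposition model (nothing like a production code), samples exponential free paths, applies a cut-off energy, and exhibits the 1/√N behavior of the statistical uncertainty.

```python
import random
import statistics

# Minimal 1D Monte Carlo history loop: monoenergetic photons enter a
# slab, free paths are sampled from the exponential distribution, and
# each interaction deposits half the remaining energy locally (a crude
# stand-in for full secondary transport). mu is illustrative only.

def run_histories(n, mu=0.2, slab_cm=20.0, bins=20, seed=1):
    random.seed(seed)
    width = slab_cm / bins
    dose = [0.0] * bins
    for _ in range(n):
        z, energy = 0.0, 1.0
        while energy > 0.01:                 # cut-off energy
            z += random.expovariate(mu)      # sample a free path
            if z >= slab_cm:                 # photon leaves the slab
                break
            deposit = 0.5 * energy           # crude local deposition
            dose[min(int(z / width), bins - 1)] += deposit
            energy -= deposit
    return [d / n for d in dose]             # dose per history

# Statistical uncertainty falls roughly as 1/sqrt(N): the spread between
# independent runs shrinks as more histories are simulated.
few = [run_histories(2_000, seed=s)[5] for s in range(8)]
many = [run_histories(50_000, seed=s)[5] for s in range(8)]
print(statistics.stdev(few), statistics.stdev(many))
```

With 25 times more histories, the run-to-run spread of the binned dose drops by roughly a factor of five, as the N^(−1/2) scaling predicts.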

Linear Boltzmann Transport Equations
The need for a fast and reliable method for absorbed dose calculation resulted in the exploration of deterministic solutions to the coupled linear Boltzmann transport equations (LBTEs) for photons and secondary species (electrons and positrons). LBTEs define, for each type of particle, the behavior of its phase-space distribution function under the action of a variety of mutual interaction processes. Once the relevant physics, defining the species interactions that occur in the radiation beam, is properly accounted for, the LBTEs form a set of kinetic partial integro-differential equations, the numerical solution of which requires the discretization of the phase space in the spatial, angular and energy domains [33,34]. Once the LBTEs are solved for the particle fluence, the dose can be calculated by integrating the electron fluence against the stopping power for Møller scattering (the only process actually depositing energy in the medium). The advantage of the LBTE solution method over MC methods relies on the fact that deterministic calculations typically run faster than MC simulations, providing detailed information on the radiation field distribution with no issues related to statistical uncertainty [35].
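The contrast with MC can be illustrated on the simplest deterministic transport problem: discretizing the 1D primary-fluence balance dφ/dz = −µφ on a spatial grid and sweeping through it, with no statistical noise. A real LBTE solver also discretizes angle and energy; the values below are illustrative.

```python
import math

# Deterministic sketch: first-order implicit upwind sweep for the 1D
# attenuation equation d(phi)/dz = -mu * phi. The only error source is
# discretization, which vanishes as the grid is refined - there is no
# statistical uncertainty, unlike Monte Carlo.

def solve_attenuation(mu, depth_cm, n_cells, phi0=1.0):
    """Sweep cell by cell with phi_{i+1} = phi_i / (1 + mu*dz)."""
    dz = depth_cm / n_cells
    phi = phi0
    for _ in range(n_cells):
        phi /= 1.0 + mu * dz
    return phi

analytic = math.exp(-0.05 * 20.0)                    # closed-form answer
coarse = solve_attenuation(0.05, 20.0, n_cells=10)   # coarse grid
fine = solve_attenuation(0.05, 20.0, n_cells=1000)   # refined grid
print(coarse, fine, analytic)
```

Refining the grid drives the sweep toward the analytic solution deterministically, which is the sense in which LBTE solvers trade statistical noise for discretization error.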

Dose Calculation Algorithms for Electron Beams
An electron beam crossing a medium undergoes two main processes: angular deflections and energy loss. Angular deflections may occur during both elastic and inelastic interactions. Energy loss is typically due to bremsstrahlung at high electron energy, whereas low-energy electrons mainly lose their energy through atomic ionization or excitation. The energy loss process is quantified by the mass stopping power, which includes the terms associated with any kind of inelastic interaction.
As in photon beams, a number of computational algorithms have been applied to electron beams, including the PB model and the MC method [36].
The core of the PB model for electrons is described in [37][38][39]. The multiple scattering of electrons determines the spatial distribution of the absorbed energy of high-energy electron beams. Typically, the transversal dose spread of electrons is smaller than that of a photon beam, owing to the decreased range of electrons [40,41].
MC-based electron-beam algorithms represent a subset of those introduced for high-energy photon-beam calculations. Of note, the indications for using MC algorithms over PB approaches are more stringent for electron beams than for photons, for two main reasons. First, electron PB models fail to provide sufficient accuracy near sharp inhomogeneities to an even higher degree than photon PB models [42,43]. Second, for electron beams, MC methods are fast enough to be considered for routine treatment planning: since electrons ionize the medium directly, fewer histories are needed to contain the statistical uncertainties.
Finally, the LBTE approach is another viable option that is directly implemented based on the same equations described in Section 1.4 [44].

Dose Calculation Algorithms for Hadrontherapy
Hadrontherapy utilizes relativistic charged particles, such as protons [45] and carbon ions [46], for targeted dose delivery. Although hadrontherapy requires considerably larger and more expensive accelerator facilities, its physical properties offer dosimetric advantages over other irradiation types.
Three processes dominate the interaction of charged particles with matter in the therapeutic energy range [47].
Inelastic Coulomb interaction with electrons is responsible for most of the energy loss of the projectile, while leading to negligible direction changes due to the large mass difference between the projectile and the electrons. The energy loss is referred to as the linear energy transfer (LET), in units of keV/µm. The energy loss of individual ions is stochastic, but can be described for a particle beam using the Bethe equation [48]. In a wide energy range, the energy loss is inversely proportional to the square of the projectile's velocity, v², which leads to a sharp increase in the energy loss towards the end of the range, the so-called Bragg peak [49]. Stochastic processes lead to a widening of this peak, a process known as range straggling; this effect is proportional to 1/√A, with A being the mass of the projectile. According to the Bethe equation, the range of the beam in a given material depends on its initial energy, so that different points in the body can be reached by controlling this energy. In clinical practice, energies corresponding to approximately a 30-cm range in water are used: up to 220 MeV for protons and 430 MeV/u for carbon ions. Several other ions have been explored in the past [50], and helium and oxygen are currently under pre-clinical investigation [51].
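The 1/v² behavior and the resulting Bragg peak can be illustrated with a toy one-dimensional integration. Non-relativistically E ∝ v², so the sketch below takes dE/dx = −k/E in arbitrary units (this is emphatically not the full Bethe equation) and shows the energy loss per step rising sharply at the end of the range.

```python
# Toy illustration of the 1/v^2 behavior: with E ~ v^2 we take
# dE/dx = -k / E (all constants folded into k, arbitrary units).
# Stepping the particle through depth shows the loss per step - the
# local "dose" - rising sharply near the end of the range: the Bragg peak.

def depth_dose(E0, k=1.0, dx=0.01):
    """Return the per-step energy losses until the particle stops."""
    E, losses = E0, []
    while E > 0.0:
        dE = min(k / E * dx, E)   # stopping power grows as E falls
        losses.append(dE)
        E -= dE
    return losses

losses = depth_dose(E0=10.0)
range_steps = len(losses)
# Compare the first and the last tenth of the track:
entrance = sum(losses[: range_steps // 10])
peak = sum(losses[-(range_steps // 10) :])
print(range_steps, entrance, peak)
```

In this toy model the analytic range is E0²/(2k), and the final tenth of the track deposits several times more energy than the entrance tenth, which is the Bragg-peak signature.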
Elastic Coulomb interaction with target nuclei results in little energy loss, but scatters the projectile. The scattering behavior of a particle beam follows multiple Coulomb scattering theory; scattering, too, is proportional to 1/√A. Inelastic nuclear interactions lead to target fragmentation and, for particles heavier than protons, to projectile fragmentation [49]. This causes a loss of primaries, so that with increasing projectile mass and initial energy, fewer primaries actually reach the Bragg peak [52]. Heavy target fragments have a short range and a high LET, whereas light fragments have a longer range than the primaries, leading to the so-called fragment tail. Some target and projectile fragments, such as ¹¹C or ¹⁵O, are short-lived positron emitters and enable beam detection via positron emission tomography [53].
A dose engine for hadrontherapy needs to take these principal effects into account: the energy loss and finite range, the multiple Coulomb scattering, and the nuclear fragmentation, which creates a complex mixed radiation field, especially for heavier particles. Due to the dependence on particle mass, some effects that can be neglected for protons are highly significant for heavy ions and vice versa. In the following, practical considerations are presented for the two dominant dose engine types: analytical pencil beam algorithms and dedicated or general-purpose Monte Carlo algorithms.
To irradiate a volumetric tumor, the beam has to be shaped laterally and longitudinally. Beam energies are stacked to form the 'spread-out Bragg peak' (SOBP), whereas lateral conformity can be achieved using passive scattering and collimators or via active pencil beam scanning (PBS). PBS permits much more conformal delivery and uses a cleaner beam, as less material is in the beam path. Practically all new facilities are equipped with PBS; therefore, PBS is the main focus of this section.
In modern pencil beam scanning, each field consists of several layers with a given primary energy and a corresponding penetration depth. Layer spacing in the beam direction is around 3 mm in water. For heavier ions, such as carbon, the necessary width of the Bragg peaks is achieved with the use of small modulating elements in the beam, the so-called ripple filter [54]. In each layer, a set of beam spots is defined with a spacing of a few mm, depending on the size of the available pencil beam. In total, the number of spots in a field can reach several thousand. In delivery, each layer is traversed by the scanned beam in a pre-computed scan pattern, with the beam residing at a given location until its prescribed dose is achieved. Depending on the scanning strategy and the accelerator's capabilities, the beam is either switched off between spots ("spot scanning") [55] or kept on continuously ("raster scanning") [56].

Tissue Stopping Power and Beam Range
Beam range is of the utmost importance in particle therapy due to the sharp dose gradients at the Bragg peak. Stopping-power estimation for the various tissues in the human body is therefore an integral part of dose calculation. To a good approximation, tissue can be transformed into a water-equivalent space, using ray tracing along the beam axis and a lookup table [57] to convert Hounsfield units (HU) into water-equivalent thickness (WET) values. There is no unique relationship between stopping power and HU in single-energy CT, contributing to considerable range uncertainty [58]. Dual-energy CT has recently been employed to improve stopping power predictions [59], thus permitting a significant reduction in range margins.
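The HU-to-WET conversion can be sketched as follows; the piecewise-linear calibration points are rough illustrative values, not a clinical calibration curve.

```python
# Sketch of the HU -> WET conversion along a beam ray: each voxel's
# Hounsfield value is mapped to a relative stopping power via a lookup
# table (piecewise-linear interpolation), and the water-equivalent
# thickness is accumulated along the ray. Calibration points are toy values.

def hu_to_rsp(hu):
    """Piecewise-linear HU -> relative stopping power (toy calibration)."""
    calib = [(-1000, 0.001), (0, 1.0), (1000, 1.55), (3000, 2.3)]
    if hu <= calib[0][0]:
        return calib[0][1]
    for (h0, r0), (h1, r1) in zip(calib, calib[1:]):
        if hu <= h1:
            return r0 + (r1 - r0) * (hu - h0) / (h1 - h0)
    return calib[-1][1]

def wet_along_ray(hu_values, voxel_mm):
    """Accumulate water-equivalent thickness along one ray (in mm)."""
    return sum(hu_to_rsp(hu) * voxel_mm for hu in hu_values)

# 5 mm of air, 20 mm of soft tissue, 10 mm of bone-like voxels:
ray = [-1000] * 5 + [0] * 20 + [1000] * 10
print(wet_along_ray(ray, voxel_mm=1.0))
```

The example makes the compression effect mentioned above explicit: 35 mm of geometric path through the air and tissue collapses to almost nothing for the air voxels, while bone-like voxels count for more than their geometric thickness.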
This strategy enables dose calculation in a water-equivalent space, which allows for significant simplifications. Errors occur in tissues that differ greatly from water in terms of either elemental composition or density, such as cortical bone or lung tissue. Low-density regions, such as lung tissue, air cavities or the air gap between nozzle elements and the patient, can lead to an underestimation of scattering and beam divergence, as geometrical distances are compressed in the calculation of WET values. This is especially important for protons.
It should be noted that the conversion of the CT to WET is beam-angle-dependent. Consequently, margins against range uncertainty should ideally be anisotropic, extending only in the beam direction. This is an issue in intensity-modulated particle therapy, in which multiple beams are optimized simultaneously.

Pencil Beam Algorithms
A large variety of pencil beam algorithms (PBAs) exist in commercial TPSs and research software. Their varying degrees of complexity balance the trade-off between accuracy and calculation time. These analytical calculations are based on a parametrization of each pencil beam, describing the longitudinal depth dose distribution (DDD) as well as the lateral extent perpendicular to the central beam axis. Typically, the DDD is derived from MC calculations and verified against measurements. It is then tabulated for each energy and forms part of the base data for dose calculation in a specific facility. The lateral extent is described by one or more Gaussians, which also describe beam widening through multiple Coulomb scattering (MCS) and beam divergence. Double or triple Gaussians offer a better MCS approximation and can describe different beam components, such as large-angle scattering from nozzle elements [60] or different beam fragments in ion therapy [61]. The dose at a specific voxel at position x is calculated by superimposing all relevant pencil beam contributions j with fluence F_j, taking into account the respective depth and lateral components d and g:

D(x) = Σ_j F_j · d_j(E_j, z) · g_j(E_j, r, z)

The formulation of the lateral and depth components depends on the actual algorithm, but in general they depend on the initial beam energy E, the lateral distance to the beam axis r, and the water-equivalent depth z.
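A minimal sketch of such a superposition, with a single Gaussian lateral profile and a made-up depth dose table standing in for facility base data:

```python
import math

# Minimal pencil-beam superposition sketch: dose at a voxel is the sum
# over spots of fluence x depth-dose x lateral Gaussian. The DDD table
# and the fixed sigma are illustrative placeholders only.

def depth_dose(z_cm):
    """Toy tabulated DDD with a crude Bragg-peak-like shape at z = 10 cm."""
    table = {0: 0.3, 2: 0.32, 4: 0.35, 6: 0.4, 8: 0.55, 10: 1.0, 11: 0.05, 12: 0.0}
    zs = sorted(table)
    z_cm = min(max(z_cm, zs[0]), zs[-1])
    for z0, z1 in zip(zs, zs[1:]):
        if z_cm <= z1:
            t = (z_cm - z0) / (z1 - z0)
            return table[z0] * (1 - t) + table[z1] * t

def lateral(r_cm, sigma_cm):
    """Normalized 2D Gaussian lateral profile."""
    return math.exp(-r_cm**2 / (2 * sigma_cm**2)) / (2 * math.pi * sigma_cm**2)

def dose_at(x, y, z, spots, sigma_cm=0.5):
    """spots: list of (fluence, spot_x, spot_y) tuples in a scanned layer."""
    total = 0.0
    for fluence, sx, sy in spots:
        r = math.hypot(x - sx, y - sy)
        total += fluence * depth_dose(z) * lateral(r, sigma_cm)
    return total

spots = [(1.0, 0.0, 0.0), (1.0, 0.7, 0.0), (1.0, 0.0, 0.7)]
print(dose_at(0.0, 0.0, 10.0, spots), dose_at(0.0, 0.0, 12.0, spots))
```

A real PBA would additionally sample the DDD at the water-equivalent rather than the geometric depth, and would use depth-dependent, possibly multi-Gaussian lateral widths.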
Heterogeneous density in the beam path poses a significant challenge to pencil beam algorithms. If each beam is calculated along the central axis only, different WET values in the vicinity will not be considered correctly. One possible solution is to split each pencil beam into sub-beams, typically of the same width but of a different weight [60]. The sub-beam weights are fitted to approximate the original pencil beam. Each sub-beam is mapped individually, with its own range estimate at the central sub-beam axis. The accuracy, as well as the computational effort required for this approach, scales with the number of sub-beams [62]. An alternative is to calculate the WET for each dose voxel instead, and to sample each contributing pencil beam at this WET. The drawback of the first method is that each sub-beam still only considers one WET, so that slanted surfaces or other heterogeneities smaller than the sub-beam width still lead to dose errors. The drawback of the second method is that there is only one beam path to each voxel, whereas scattered particles reaching the voxel on curved paths are not considered correctly.

Monte Carlo Algorithms
Similarly to their use in photon therapy (see Section 2.3), Monte Carlo calculations offer higher accuracy than analytical methods in hadrontherapy. MC accurately handles heterogeneous material, scattering and fragmentation, at the cost of a considerably higher computational effort. Several general-purpose codes have been used extensively in particle therapy, including FLUKA [63] and Geant4, along with TOPAS, its therapy-specific derivative [64]. Both codes have features to support particle therapy, such as the capability to import DICOM CT images. These tools feature powerful simulation capabilities and permit very accurate simulations; as they are not dedicated to therapy applications, however, they tend to be slow. Especially for proton therapy, tailored solutions are under development, such as Fred [65], gPMC [66] and MCsquare [67]. In these codes, scoring is adapted to therapy requirements, allowing for faster execution, and careful simplifications are introduced, such as reduced nuclear interaction modeling, the neglect of neutrons or the local deposition of delta-electron energy. This permits streamlined, massively parallel calculations, ideally suited for GPUs, leading to calculation times of minutes for entire fields with clinically acceptable inaccuracies. MC calculation is much more complex for heavier particles due to fragmentation, which produces a significant number of heavy, energetic secondaries that have to be tracked. The dedicated codes mostly focus on protons, and ion beam calculations are considerably more time-consuming.
It should be noted that the outcome of a Monte Carlo code is highly dependent on the quality of the underlying data and models. Cross sections of ions in the therapeutic energy range are not always precisely known, especially for secondary fragments. A recent example is the experiment of Horst et al. [68], which showed that uncertainties in helium cross sections can have a strong dosimetric impact.
Monte Carlo simulations are essential for the prediction of secondary emissions and isotope production that can be used for range verification, such as prompt gamma detection [69] or PET imaging [53,70]. The emissions do not coincide with the Bragg peak, so MC predictions are necessary for comparison with the acquired data.

Relative Biological Effectiveness
It is well known that slow ions, such as alpha particles, have a different radiobiological impact than photons at the same absorbed dose. The concept of relative biological effectiveness (RBE) relates the dose from a given type of radiation to a reference quality, typically high-energy photons:

RBE = D_photon / D_particle,

where the two doses produce the same biological effect. There is a complex relationship between RBE and LET, projectile species, dose level, outcome parameter, as well as cell or tissue type. A large database of cell survival experiments can be found in [71].
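Under the linear-quadratic model, this iso-effect definition can be evaluated directly. The sketch below solves for the photon dose giving the same effect αD + βD² as a given ion dose; the α and β values are illustrative, not taken from any published RBE model.

```python
import math

# Sketch of the RBE definition under the linear-quadratic (LQ) model:
# RBE at an ion dose D_ion is the photon dose producing the same effect
# alpha*D + beta*D^2, divided by D_ion. All alpha/beta values are
# illustrative placeholders.

def iso_effect_photon_dose(effect, alpha_x, beta_x):
    """Solve alpha_x*D + beta_x*D^2 = effect for D (positive root)."""
    return (-alpha_x + math.sqrt(alpha_x**2 + 4 * beta_x * effect)) / (2 * beta_x)

def rbe(d_ion, alpha_ion, beta_ion, alpha_x=0.2, beta_x=0.02):
    effect = alpha_ion * d_ion + beta_ion * d_ion**2
    return iso_effect_photon_dose(effect, alpha_x, beta_x) / d_ion

# A high-LET ion with a larger alpha: RBE > 1, and it falls with dose
# because the quadratic photon term catches up at high doses.
print(rbe(2.0, alpha_ion=0.6, beta_ion=0.02),
      rbe(8.0, alpha_ion=0.6, beta_ion=0.02))
```

This dose dependence is exactly the effect discussed below for the MKM and for high-fraction-dose regimes.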
Protons, with their comparably low LET, are considered from a radiobiological point of view to be similar to photons, and their RBE value in clinical practice is fixed at 1.1. Recently, interest in proton RBE has increased, as the rising LET values at the end of the range are typically placed outside of the target due to the necessary margins. This might lead to an underestimated effect in organs at risk (OARs) [72]. Explicitly considering the RBE also shows an effective range increase of proton beams, as the RBE continues to rise beyond the end of the SOBP [73].
For heavier ions, the RBE is significantly higher and also varies along the SOBP, owing to the mixed beam field resulting from fragmentation. Its complex dependence on several input parameters requires explicit RBE modeling to achieve clinically valid outcomes. Several RBE models have been developed and published, with two in current clinical use: the microdosimetric kinetic model (MKM) [74] and the local effect model (LEM) [75]. LEM is used in all European centers as well as in China, whereas MKM is used in all Japanese centers. Without going into detail, the two models use different assumptions and result in different dose scales, so dose prescriptions are not directly comparable [76]. Historically, the MKM did not include a dose dependence of the RBE, so that for high fraction doses the RBE is over-estimated and the stated doses are higher than would be expected for a comparable effect in photon therapy. On the other hand, this model has been in clinical use for 25 years and has an excellent track record of treating patients with carbon ions [77]. Therefore, even after an updated version of the MKM included dose dependence as a necessary adaptation for PBS [78], the dose scale was retained in order to preserve the significant clinical experience gained in many dose-finding studies and clinical routines.
The LEM, in contrast, was developed directly for PBS as used in the GSI pilot therapy project, and aims to transfer clinical experience from photon therapy. RBE-weighted doses computed by LEM should therefore have the same effect as the same photon dose, but clinical experience in high-fraction doses is limited. Clinical routine still employs the first version of the LEM, whereas the current research version is LEM IV [79]. Recently, RayStation also implemented LEM IV to support helium therapy, and now features LEM I, LEM IV and MKM for the calculation of ion beam doses.
RBE models in general require data on the spectral composition of the beam, which varies with initial energy and penetration depth. For RBE-weighted analytical calculations, these data are loaded as additional base data, themselves derived from Monte Carlo simulations and validated by measurements. The same spectral information can be obtained directly from a Monte Carlo calculation of the treatment field, so MC codes for particle therapy support a variety of RBE models, both in research versions and in the clinically used versions mentioned above [80,81].
It should be noted that the RBE-weighted dose is in general non-linear in the beam weights, with important implications for treatment planning, as efficient linear programming methods are not applicable to these plans.

4D-Dose Calculation
Although scanned hadrontherapy offers very high precision, it is also vulnerable to uncertainties, especially in the beam range. This is particularly true for targets in moving organs, such as lung tumors. The explicit consideration of target motion is therefore more relevant for hadrontherapy than for photons, in order to quantify both the interplay between target motion and beam scanning motion and the motion-induced range changes to the target [82]. A standard approach uses time-resolved computed tomography (4DCT) to calculate quasi-static treatment plans over the course of a breathing cycle. As proposed in [83], the treatment plan can be applied fully to each motion phase and then accumulated in a reference phase, thereby removing the interplay effect. Alternatively, a time sequence can be simulated or measured for a given treatment plan, and each pencil beam can be assigned to a 4DCT phase to explicitly model interplay patterns. Finally, a 4D-dose reconstruction is also possible, taking both delivery log files and measured patient motion into account. Synchronizing motion and delivery for dose reconstruction is often challenging, as most facilities do not foresee an integrated recording of motion with the log files, so no common clock exists [84].
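The phase-assignment step of interplay modeling can be sketched simply: given (simulated) spot delivery times and a breathing period, each spot is mapped to a 4DCT phase, on whose geometry its dose would then be computed. The timing values are invented for illustration.

```python
# Sketch of the phase-assignment step for interplay modeling: each
# spot's delivery time is mapped to a breathing phase index; the spot's
# dose would then be computed on that 4DCT phase's geometry. Timing
# values are invented.

def assign_phases(spot_times_s, period_s=4.0, n_phases=8):
    """Map each spot's delivery time to a breathing phase index."""
    return [int((t % period_s) / period_s * n_phases) for t in spot_times_s]

# 12 spots delivered at a constant 0.5 s per spot:
times = [i * 0.5 for i in range(12)]
phases = assign_phases(times)
print(phases)
```

In practice, spot durations are not constant and the breathing trace is irregular, which is precisely why synchronized log files and motion recordings are needed for 4D-dose reconstruction.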

Calculation Algorithm Possibilities in Commercial Treatment Planning Systems
The development of dose calculation algorithms over the years has aimed at reducing dose uncertainties in clinical treatments as well as computation times. An overall dose uncertainty of less than 5% is recommended by ICRU Report 24 [85]; this means that the uncertainty allowed for the dose calculation step alone is on the order of 2-3% [86].
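The 2-3% budget for the calculation step follows from combining independent uncertainty contributions in quadrature; the sketch below uses hypothetical values for the individual contributions, chosen only to illustrate the arithmetic.

```python
import math

# Hypothetical uncertainty budget (1 s.d., %): independent contributions
# combine in quadrature to give the overall treatment dose uncertainty.
absolute_dosimetry = 2.0   # beam calibration
machine_delivery   = 2.0   # output stability, MLC positioning, etc.
patient_setup      = 2.5   # positioning, anatomy changes
dose_calculation   = 2.5   # algorithm accuracy

total = math.sqrt(sum(u**2 for u in (absolute_dosimetry, machine_delivery,
                                     patient_setup, dose_calculation)))
print(f"{total:.1f}%")  # quadrature sum stays within the ~5% recommendation
```

If the calculation-step uncertainty grew much beyond ~3%, it would dominate the quadrature sum and push the total past the 5% recommendation, which is why that component receives so much attention.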
Algorithms used in commercial TPSs have advanced from homogeneous dose calculations such as pencil beam convolution (PBC) with an equivalent path length for inhomogeneity corrections (the so-called type-A algorithms, according to the classification introduced by Knöös et al. in 2006 [87]); then to convolution/superposition algorithms such as collapsed cone convolution (CCC) and pencil beam algorithms including lateral scaling for lateral electron transport (i.e., the AAA, classified as type B); and finally to the last-generation dose calculation engines, represented by Monte Carlo (MC) and LBTE solvers (type C, according to an updated classification [88]), which model the dose distribution in heterogeneous materials with the highest accuracy. The latest type-C generation, which emerged with developments in information technology, has three main features: (1) improved secondary electron transport modeling, (2) the ability to compute dose deposition in biological tissues containing high-Z materials, and (3) reporting of the dose as dose-to-medium. Precisely for this reason, MC modeling today represents the point of arrival for many treatment planning system manufacturers.
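As a toy illustration of the convolution/superposition principle behind type-B engines, the following 1-D sketch convolves a TERMA-like attenuation curve with a normalized dose deposition kernel. All coefficients are hypothetical; a real TPS uses Monte Carlo generated 3-D kernels with density scaling for heterogeneities.

```python
import numpy as np

# Toy 1-D convolution dose model (water-equivalent geometry, made-up numbers).
depth = np.arange(0, 30, 0.5)                 # cm
mu = 0.05                                      # effective attenuation (1/cm)
terma = np.exp(-mu * depth)                    # total energy released per mass

# Dose deposition kernel representing secondary electron transport;
# a real TPS precomputes this with Monte Carlo, here a crude exponential.
r = np.arange(-5, 5.5, 0.5)                    # cm, kernel support
kernel = np.exp(-np.abs(r) / 0.8)
kernel /= kernel.sum()                         # normalize energy deposition

dose = np.convolve(terma, kernel, mode="same") # superposition of kernels
print(depth[dose.argmax()])                    # maximum sits below the surface
```

Even this crude model reproduces the buildup effect: near the surface the kernel has no upstream material to receive energy from, so the dose maximum is pushed below the skin, which a simple attenuation-only (type-A-like) model cannot capture.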
Several products are currently available for clinical use. Some are offered in a basic format, providing extremely simple and fast calculations, whereas others are very complex and articulated, with numerous applications and secondary tools.
A list of the main commercial TPSs available and their respective algorithm options for photon, electron, proton and ion beams is reported in Table 1.

RayStation System
The RayStation system (RaySearch Laboratories, Stockholm, Sweden) is the most versatile treatment planning system to date, usable for photon, electron, proton, and carbon ion dose calculations and planning. RayStation can be used to plan for electrons/photons on LINACs, for photons on Accuray's Helical TomoTherapy system [89], and for protons and carbon ions on various delivery systems.
RayStation uses both collapsed cone convolution (CCC) and Monte Carlo (MC) algorithms for photon beams, and is equipped with a GPU-based dose engine [90] to speed up the calculation. For electron beam dose calculations, RayStation uses the VMC++ code, a C++ re-implementation of the VMC and XVMC code systems, optimized for 3D dose calculations in voxel geometries [91].
RayStation also supports planning for proton pencil beam scanning (PBS), line scanning (LS), uniform scanning (US), double scattering (DS), and wobbling proton beam delivery systems. For PBS/LS beams, RayStation offers two clinical dose engines: one based on the PBA and the other on MC. MC is especially recommended for patients/cases with large inhomogeneities [92] and is available for beams collimated by blocks or multileaf collimators (MLCs). For US, DS and wobbling beams, RayStation offers a clinical dose engine based on the PBA. The light ion dose calculation in RayStation is also based on the PBA. The computation of RBE-weighted values is implemented according to the local effect model (LEM) or the microdosimetric kinetic model (MKM) [93,94].

Pinnacle System
The Pinnacle system (Philips Radiation Oncology Systems, Fitchburg, MA, USA) is based on a CCC fluence model developed by the Madison group in Wisconsin [95]. The point dose kernels are based on the EGS4 Monte Carlo code [11], and a single kernel is used for both the primary and scatter doses. To speed up the computation, the 'fast convolve' and 'adaptive convolution' methods have been implemented in Pinnacle as faster versions of the CCC model.
For electron beams, Pinnacle employs a standard PBA based on the Hogstrom model, the so-called Fermi-Eyges-Hogstrom (FEH) model [37]. This algorithm was implemented in C language, based on the FORTRAN public domain version of the FEH code developed at the M.D. Anderson Cancer Center, Houston, TX, USA.
For proton therapy, the Pinnacle dose calculation algorithm uses an improved Bortfeld Bragg peak fitting function and enhanced multiple Coulomb scattering based on Monte-Carlo-simulated data, which boosts accuracy over dose algorithms that rely on a water equivalent thickness (WET) approximation alone. The dose calculation includes all components traversed along the proton beam's path, including beamline components (range modulator wheels, degraders), beam-limiting devices (apertures), patient-specific modifiers (compensators) and the patient's anatomy. Pinnacle proton planning is used in European centers, e.g., the Rutherford Cancer Centers in the UK.

Eclipse System
In the Eclipse system (Varian Medical Systems, Palo Alto, CA, USA), several algorithms are available. A pencil beam convolution/superposition algorithm is used either with an equivalent path-length-based method (EPL) for inhomogeneity correction (Eclipse/Mod-Batho) [96] or with a more advanced correction scheme that takes the 3D density distribution into account (Eclipse/ETAR) [87,97]. A third algorithm, the AAA, handles heterogeneities by correcting the photon scatter kernel according to the 3D anisotropic changes in electron density in the neighborhood of the point of interaction. The last algorithm implemented in the Eclipse system is Acuros XB (AXB) [98], which solves the LBTE numerically and shows good agreement with MC-based validation algorithms [99].
Regarding electron beam dose calculations, Eclipse is equipped with the generalized Gaussian pencil beam (GGPB) and electron Monte Carlo (eMC) algorithms. The GGPB incorporates a three-dimensional heterogeneity correction published by Hyödynmaa [100] and improved modeling of radial dose profiles by including large-angle scatter via summing a few Gaussian functions [101].
The eMC algorithm consists of the transport and the electron beam source models. The first model transports electrons and calculates the dose delivered along the particle tracks, whereas the second one describes the electrons and photons that emerge from the LINAC treatment head [102].
The algorithms used in Eclipse for protons are the proton convolution superposition (PCS) and Acuros proton therapy (APT) dose algorithms. The PCS engine is a fluence-based dose model that computes the dose as a convolution of the proton fluence and a dose kernel hardcoded in the system [103,104]. APT is a fast MC module, with simplified radiation transport, benchmarked against MCNPX 2.7a [105]. It provides fast and robust dose calculations for proton therapy by neglecting minor physics processes in order to reduce the computational run-time. APT classifies radiation transport as slowing-down interactions with atomic electrons, elastic nuclear Coulomb scattering, elastic nuclear strong-force scattering or non-elastic nuclear reactions.

Monaco System
Monaco is a treatment planning option offered by Elekta AB (Sweden). The beam model in Monaco relies on the virtual fluence model for photons developed by Fippel [106] and, for electron contamination, the approach developed by Sikora and Alber [107] is used. Monaco presently has a clinical MC engine called XVMC for photons, whereas for electron beams, it is based on the voxel-based MC algorithm (VMC) [108]. A commercial GPU-accelerated MC algorithm (GPUMCD) has been developed [109] for modeling doses for both a standard linear accelerator and for an Elekta MRI-LINAC, for which it can describe the electron return effect in the MRI magnetic field [110].

Xio System
Elekta's XiO system (CMS Inc., Elekta, St. Louis, MO, USA) provides a planning system for particle therapy treatments. The XiO planning system for photons and electrons is, to our knowledge, no longer being developed for clinical use. The XiO TPS uses a pencil beam dose calculation model and dose optimization based on a conjugate gradient algorithm (a gradient descent algorithm) [111].
XiO supports different particle treatment modalities, including broad beam (scattered or wobbled beams), SOBP and spot scanning with intensity-modulated proton therapy (IMPT).

Other Planning Systems
In the photon scenario, we must also include the PB and MC-based dose calculation algorithms of the TPS Elements Multiple Brain Mets SRS (MBM) (Brainlab, Munich, Germany), dedicated to LINAC-based stereotactic radiosurgery. Furthermore, there are two proprietary TPSs, called Precise algorithms, available for CyberKnife units [112,113], with ray tracing (type-A) and MC algorithms, and for TomoTherapy units, with the collapsed cone convolution superposition algorithm (which is evolving toward MC) [114].
Finally, some research TPSs have been developed for the experimental boron neutron capture therapy (BNCT) technique [115,116], a binary therapy which consists of irradiating with thermal neutrons cancer tissue previously enriched with ¹⁰B. The neutron capture produces high-LET ⁷Li and alpha particles, which release the dose in the surrounding tumor cells [117].

Dosimetric Performance of the Commercial Algorithms
Many authors have examined the dosimetric performance of the commercial dose calculation algorithms against measured data under homogeneous and heterogeneous irradiation geometries. According to Lu [118], the accuracy of the algorithms can be ranked in the following descending order: MC and LBTE, CCC, AAA, PBA (Figure 1).
In the photon scenario, Mzenda et al. [119] assessed, in a homogeneous medium for 6 MV and 10 MV photon beams, the accuracy of the RayStation CCC-based dose calculation and the plan quality compared with the then more established Pinnacle TPS. Against treatment machine-measured data, agreement was within 3% for in-field and out-of-field regions and within 10% for buildup and penumbral regions, using a gamma acceptance criterion of 2%/2 mm. Equivalent plan quality was reported compared to Pinnacle. Furthermore, RayStation MC-based dose calculations, tested on both homogeneous and heterogeneous irradiation geometries for 6 MV and 10 MV FFF photon beams, showed good agreement between measured and calculated dose curves: in most cases, agreement was within 2%/2 mm across the high-dose regions of the profiles, whereas out-of-field dose errors were lower than 1% [120].
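Since gamma criteria such as 2%/2 mm recur throughout these comparisons, a minimal 1-D global gamma computation may help fix ideas. The profiles, grid, and criteria below are illustrative only.

```python
import numpy as np

# Minimal 1-D global gamma index (illustrative), 2% / 2 mm criteria.
def gamma_1d(ref_dose, eval_dose, positions, dose_tol=0.02, dist_tol=2.0):
    """gamma <= 1 at a reference point means the evaluated curve agrees
    within the combined dose-difference / distance-to-agreement criterion."""
    d_max = ref_dose.max()
    gammas = np.empty_like(ref_dose)
    for i, (r, d) in enumerate(zip(positions, ref_dose)):
        dose_term = (eval_dose - d) / (dose_tol * d_max)   # global normalization
        dist_term = (positions - r) / dist_tol
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

x = np.linspace(0, 100, 201)                     # mm, 0.5 mm grid
ref = np.exp(-((x - 50) / 15) ** 2)              # toy "measured" profile
calc = np.exp(-((x - 50.5) / 15) ** 2)           # "calculated", shifted 0.5 mm
passing = (gamma_1d(ref, calc, x) <= 1).mean()
print(f"gamma pass rate: {100 * passing:.1f}%")
```

A sub-millimeter spatial shift passes comfortably under a 2 mm distance-to-agreement criterion even where the dose gradient is steep, which is why gamma analysis is preferred over point-wise dose difference for profile comparisons.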
The equivalent dosimetric performance of the Acuros XB algorithm in Eclipse TPS was demonstrated versus the X-ray voxel Monte Carlo (XVMC) algorithm. Tsuruta et al. [121] reported that AXB values agreed with MC values within 3.5% for all dose metrics in a patient cohort of 147 lung cancer patients treated using stereotactic body radiation therapy (SBRT). In a similar cohort, the different types of algorithms of the Eclipse TPS were investigated [98]. The lung SBRT target doses calculated by type-A algorithms were not affected by lung density variations, whereas type-B and type-C algorithms showed systematic correlations between density differences and target dose differences.
Snyder et al. examined the accuracy of Monaco under different conditions [122]. Measurements of percent depth doses (PDDs) and profiles, open-field point doses in homogeneous and heterogeneous media, obliquely incident electron beams and gamma analyses were performed. Point doses in water agreed within 2% with commissioning data in 99.5% and 98.6% of the points computed by MC and CC, respectively. The comparison of PDDs and profiles calculated by Monaco with measured data for photon beams yielded gamma index values lower than 1. All point-dose calculations for the eMC algorithm in water agreed within 4% with measurements, whereas in heterogeneous media, at extended source-to-surface distances, or for electron beams at oblique incidence, deviations were below 5%.
In the electron scenario, and for low- and medium-energy electron dosimetry at oblique incident angles, an acceptable accuracy at the clinical level was also reported by Archibald-Heeren et al. [123] for RayStation's implementation of the VMC++ algorithm.
The results of the eMC algorithm compared with those of the GGPB algorithm in Eclipse TPS were reported by Lawrence et al. [102] in the case of flat and varied surfaces for energies ranging from 6 to 20 MeV electron beams. The eMC algorithm resulted in predicted output changes within 3% for most scenarios and up to 6% at the lowest energy, in comparison with the GGPB algorithm, for which the discrepancies were up to 17% at the lowest energies.
Poor accuracy was reported for the electron PBA in the Pinnacle TPS. In bolus electron conformal therapy on a cylindrical patient phantom, a pencil beam redefinition algorithm (PBRA, an in-house C++ program) achieved at least twice the accuracy of the PBA [124].
In the proton scenario, Alshaikhi et al. [125] tested the effect of varying planning parameters on pencil beam scanning dose distributions of four proton TPSs: Eclipse, Pinnacle, RayStation and XiO. An investigation was made into the implementation and optimal selection of different proton planning parameters by each system.
The accuracy of RaySearch's PBA and MC algorithms for proton therapy, together with clinical validation data on a phantom reproducing animal lung tissue, was presented by Schreuder et al. [126]. The RayStation MC algorithm produced more accurate dose calculations than the PBA for lung targets.
The AcurosPT performance, evaluated by Lin et al. [105], demonstrated acceptable dose distributions for clinical proton beams, within 2 mm/2%, although the use of caution was recommended at very distal depths.

Conclusions and Future Perspective
In the present overview, we provided a general picture of the main dose calculation algorithms currently used in RT as well as the state of the art of the last-generation dose calculation algorithms, summarizing their underlying physical models and mathematical bases, and highlighting their strengths and weaknesses by referring to the most recent studies on algorithm comparisons.
One may wonder what comes next and what the future perspectives are in relation to modern RT techniques.
A subject of increasing interest in RT planning is MR-imaging-based treatment planning [127], which aims at removing the planning CT scan and using a magnetic resonance image (MRI) alone in its place. This would allow straightforward exploitation of the intrinsic multi-spectral nature and higher soft-tissue contrast of MRI compared to CT images [128,129], without the need for cumbersome registration processes [130,131]. Several techniques have been developed in order to introduce an MRI-only RT planning workflow [132]. The main sites clinically tested so far are the brain and prostate, and a commercial solution has been developed by Philips for clinical synthetic CT generation for prostate patients (MRCAT pelvis). Despite the potential benefits of MRI-alone RT, clinical studies testing MRI-only planning techniques with larger patient cohorts are needed to bring MRI-only RT into clinical settings. In addition, the recent introduction of MRI-LINAC facilities requires dedicated dose calculation engines to account for the secondary electron return effect in the MRI magnetic field [133].
Dose calculation methods for hadrontherapy are still developing, as is the entire field. Faster calculation times are still necessary, especially for RBE-weighted doses. Online log-file-based dose reconstructions might enable further steps in adaptive therapy. A further speed-up of MC codes might enable routine treatment planning with increased accuracy, especially for heterogeneous target regions such as the lung. New modalities such as particle arc therapy [134] require support from efficient dose engines, whereas novel delivery strategies such as spatially fractionated [135] or FLASH [136] delivery, with an impact on radiobiological effects, would greatly benefit from explicit modeling, which could build on the existing RBE models already in clinical use.
Although models for the increased efficacy of high-LET irradiation against hypoxia exist [137], they are not integrated into standard models and are not used in clinical practice.
Finally, next-generation dose calculation algorithms are expected to integrate biological criteria and models, such as normal tissue complication probability (NTCP) [138] and tumor control probability (TCP) [139] models, in order to optimize treatment plans on the basis of risk predictions rather than physical dose and dosimetric parameters [140][141][142]. In particular, a promising development for predictive models in the framework of dose engines is the dosiomic approach, characterized by the analysis of spatial patterns in dose distributions [143][144][145][146][147][148][149][150], as opposed to the conventional point-wise parameters of dose-volume histograms (DVHs) [151,152] and their surface and mass variants (e.g., DSH [153,154] and DMH [155]).
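As an illustration of how such a model maps a dose distribution to a risk estimate, the following sketch implements the Lyman-Kutcher-Burman (LKB) NTCP model on a toy differential DVH. The dose bins, volumes, and parameter values (TD50, m, n) are hypothetical and chosen for illustration only; clinical use requires organ- and endpoint-specific fitted parameters.

```python
import math

# Lyman-Kutcher-Burman (LKB) NTCP sketch (illustrative parameters).
def geud(doses, volumes, n):
    """Generalized EUD from a differential DVH; n is the volume parameter."""
    a = 1.0 / n
    return sum(v * d**a for d, v in zip(doses, volumes)) ** (1.0 / a)

def lkb_ntcp(doses, volumes, td50=24.5, m=0.18, n=0.87):
    """NTCP = standard normal CDF of t = (gEUD - TD50) / (m * TD50)."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Toy differential DVH: dose bins (Gy) and the fractional volume in each.
doses   = [5.0, 10.0, 20.0, 30.0]
volumes = [0.4, 0.3, 0.2, 0.1]
print(f"NTCP = {lkb_ntcp(doses, volumes):.3f}")
```

Plan optimization on such a model would adjust beam weights to minimize the NTCP of organs at risk subject to TCP constraints on the target, rather than to meet fixed point-dose or DVH thresholds.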