Review

Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond

by Miriam Santoro 1,2,†, Silvia Strolin 1,†, Giulia Paolani 1,2,*, Giuseppe Della Gala 1, Alessandro Bartoloni 3, Cinzia Giacometti 4, Ilario Ammendolia 4, Alessio Giuseppe Morganti 4,5 and Lidia Strigari 1,*
1 Department of Medical Physics, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
2 Medical Physics Specialization School, Alma Mater Studiorum, University of Bologna, 40138 Bologna, Italy
3 Istituto Nazionale di Fisica Nucleare (INFN) Sezione di Roma 1, 00185 Roma, Italy
4 Department of Radiation Oncology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
5 Department of Experimental, Diagnostic and Specialty Medicine, Alma Mater Studiorum, University of Bologna, 40138 Bologna, Italy
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2022, 12(7), 3223; https://doi.org/10.3390/app12073223
Submission received: 31 January 2022 / Revised: 11 March 2022 / Accepted: 16 March 2022 / Published: 22 March 2022

Featured Application

Computational models based on artificial intelligence (AI) variants have been developed and applied successfully in many areas, both inside and outside of medicine. However, the full potential of AI in the entire radiotherapy workflow is not fully understood, while potential ethical, legal, and skill barriers might limit or postpone the application of AI in support of clinical practice.

Abstract

In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify the AI application field in RT limited to the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow according to the applied AI approaches. AI permits the processing of large quantities of information, data, and images stored in RT oncology information systems, a process that is not manageable for individuals or groups. AI allows the iterative application of complex tasks in large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns were raised, including the need for harmonization while overcoming ethical, legal, and skill barriers.

1. Introduction

Artificial intelligence (AI) is a field of computer science that focuses on developing computer-based approaches that mimic humans’ ability to make decisions and solve problems. From a conceptual standpoint, AI can be divided into two main categories: iterative optimization (IO) [1], also known as computationalism [2], and machine learning (ML), which in turn includes deep learning (DL), resembling a matryoshka scheme. IO algorithms are based on the progressive combination of several scripts that are independently created as parts of the decision-making process of an experienced operator. ML uses statistical functions to develop models that self-learn patterns from collected data and make decisions on new data. ML includes DL, which involves the combination of simple algorithms in a complex, hierarchical, and “deep” architecture, inspired by the connections of neurons in the network-based structure of the human brain.
Currently, AI has been applied in many medical fields, thanks to its ability to find novel solutions for managing very complex, high-dimensional, and multifactorial problems, as commonly encountered in the field of therapy and imaging [3].
Thanks to the extensive development that has occurred in radiotherapy (RT) oncology information systems (OIS), a large quantity of information (e.g., patient assessments, multimodality images, absorbed dose distributions, machine performance, and patient-specific quality assurance) is now available. These databases are reaching dimensions beyond those manageable by individuals or groups, meaning manual quality assurance (QA) of the RT workflow processes and machine status checks is no longer feasible.
AI can allow an integrated and comprehensive assessment of the patient’s condition using all available information in the OIS to support radiation oncology staff. Importantly, a radiation oncologist’s judgment can be more accurate than that of machines, which are often dedicated to a single task (e.g., patient identification, prioritization, monitoring, or patient workflow checks). Radiation oncologists have daily interactions with patients and have the capability to understand their unspoken needs and values. In this sense, AI tools should be considered as computer aids to the staff, who would remain responsible and in charge of patient management.
Furthermore, the rapid development of new AI-based solutions applied to RT will impact the daily clinical practice, such as the (semi-)automatic delineation of normal tissues and target volumes or treatment plan optimization. Nevertheless, AI strategies use advanced statistical techniques and complex algorithms that are often not fully understood by RT staff, with the risk of being employed as “black box” tools [4].
Our overview focuses on the AI-based approaches implemented or proposed in recent years in RT to favor the more conscious use of available solutions. At the same time, we highlight possible barriers to implementation in clinical practice and identify possible countermeasures.

2. Materials and Methods

2.1. Literature Search Strategy

PubMed and Scopus searches were performed using a query string to identify the AI applications in the RT field. The query string in PubMed was the following: (((“artificial intelligence”[Title/Abstract]) OR (“machine learning”[Title/Abstract]) OR (“deep learning”[Title/Abstract])) AND ((“radiotherapy”[Title/Abstract]) OR (“radiation therapy”[Title/Abstract]) OR (“radiation oncology”[Title/Abstract]))) NOT review. Filters: from 1 January 2018 to 1 January 2022.
The query string in Scopus was the following: (TITLE-ABS (“artificial intelligence” OR “machine learning” OR “deep learning”) AND TITLE-ABS (“radiotherapy” OR “radiation therapy” OR “radiation oncology”)) AND (EXCLUDE (DOCTYPE, “re”)) AND (LIMIT-TO (PUBYEAR, 2021) OR LIMIT-TO (PUBYEAR, 2020) OR LIMIT-TO (PUBYEAR, 2019) OR LIMIT-TO (PUBYEAR, 2018)).
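As a small illustration, the boolean query above can be assembled programmatically; the helper below is a minimal sketch (the term lists mirror the search strategy, but the function itself is ours, not part of the published protocol):

```python
# Term groups mirroring the PubMed search strategy described above.
ai_terms = ['"artificial intelligence"[Title/Abstract]',
            '"machine learning"[Title/Abstract]',
            '"deep learning"[Title/Abstract]']
rt_terms = ['"radiotherapy"[Title/Abstract]',
            '"radiation therapy"[Title/Abstract]',
            '"radiation oncology"[Title/Abstract]']

def build_query(ai_terms, rt_terms):
    """Combine the two term groups with OR, join them with AND, exclude reviews."""
    ai_clause = "(" + " OR ".join(f"({t})" for t in ai_terms) + ")"
    rt_clause = "(" + " OR ".join(f"({t})" for t in rt_terms) + ")"
    return f"({ai_clause} AND {rt_clause}) NOT review"

query = build_query(ai_terms, rt_terms)
```

The same term groups can be reused to build the Scopus TITLE-ABS variant by swapping the field tags.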
The search was restricted to the last four years to include only the most recent articles in both cases.

2.2. Study Selection

The PRISMA flow diagram [5] methodology was followed for study selection. Two authors independently reviewed the titles and abstracts to decide on the inclusion into this study. Several papers were manually added after the screening based on excluded review citations, according to [6]. Full articles were retrieved when the abstract included the investigated topic, and only full papers published in English were considered. The data were collected in a database with the following columns for the subsequent data analysis: first author, year, title, inclusion/exclusion issues, AI-based algorithm used, AI-based algorithm classification, AI-based algorithm goal, the main phase of the RT workflow (patient care coordination, multimodality image registration and segmentation, treatment planning, patient positioning, online monitoring, adaptive planning, patient-specific plan, machine QA, image and chart review, outcome prediction). The algorithm classification was grouped into IO, ML, and DL, representing the main categories of the AI strategies [1,2,7]. The main goal was chosen from the phases of the RT patient-based workflow, as illustrated in Figure 1.
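The collected columns can be sketched as a simple record type; the field names below are illustrative stand-ins, not the authors’ actual schema:

```python
from dataclasses import dataclass

@dataclass
class PaperRecord:
    """One row of the screening database described above (illustrative fields)."""
    first_author: str
    year: int
    title: str
    included: bool
    exclusion_reason: str = ""
    algorithm: str = ""        # e.g., "U-Net", "random forest"
    algorithm_class: str = ""  # one of "IO", "ML", "DL"
    goal: str = ""             # e.g., "segmentation", "outcome prediction"
    workflow_phase: str = ""   # e.g., "treatment planning"

rec = PaperRecord("Field", 2021, "Federated learning platform", True,
                  algorithm_class="ML",
                  workflow_phase="patient care coordination")
```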

3. Results and Discussion

3.1. Search Inclusion Criteria and Study Description

The reported PubMed and Scopus searches identified 1824 papers, selected as described in the PRISMA flow diagram (Figure 2).
After removing 691 duplicates, 204 of the 1133 remaining papers were excluded for the following reasons: 45 reviews; 19 not in English; 13 no AI application; 65 no application to clinical RT; 6 comments, commentaries, or letters; 15 conference papers; 3 ethical or philosophical perspectives; 8 harmonization or standardization; 5 surveys; 2 corrections or corrigenda; 6 errata or retracted; 16 editorials or book chapters; 1 general recommendation. In addition, 8 of the 929 papers were excluded after inspection of the full text for the following reasons: 6 overviews, 1 no application to clinical RT, 1 commentary. The details of the excluded papers are reported in Supplementary Materials Table S2.
Out of the 921 included papers reporting the types of methods employed (i.e., IO, ML, or DL), 321 papers (34.9%) used ML methods and 596 (64.7%) used DL methods. Four papers (0.4%) investigated the application of IO approaches to image and chart reviews and the Pinnacle treatment planning system. In particular, the Pinnacle auto-planning module is based on the ability of a digital computer to perform tasks commonly associated with human intelligence through a system of iterative optimization [8].
Figure 3 shows the distribution of the published papers, including IO, ML, and DL methods per year. The figure shows that the number of papers using AI approaches in RT is growing over time. In particular, the number of papers per year using DL shows steeper growth compared with ML approaches.
The numbers of papers reporting applications of AI subgroups according to the different RT workflow steps (highlighted in Figure 1) are reported in Figure 4. Among the AI approaches, DL methods were mostly implemented for the semi-automatic segmentation of organs at risk (OARs) or tumors (254 papers) and for synthetic image generation (127 papers). ML methods were mostly used for outcome prediction (183 papers).

3.2. Reported Application of AI to RT

Out of the identified AI papers (listed in Supplementary Materials Table S1), a subset of relevant papers will be presented as examples of research areas that are well established or consolidated or still under development and applied to the analyzed phases of the RT workflow (Figure 1).

3.2.1. Patient Care Coordination and Optimization

AI has been applied to patient care coordination and optimization to create a data management system for clinical and research processes [9]. The implemented system was trained to extract relevant information by direct connection with structured data and text mining technology, which allowed an objective evaluation of key performance indicators to improve patient care management. ML was also used as a tool for clinical decision support in a multicentric context [10]. In particular, Field et al. [10] developed a platform to coordinate data analysis across RT centers using distributed or federated learning methods after a harmonization phase. Moreover, other examples of complex jobs with custom constraints that AI could optimize include scheduling RT treatment appointments, checking machines, and patient-specific quality assurance (QA).
Moreover, the possibility to analyze OIS data and set alerts based on the delivered monitor units and the number of delivered fractions (ad hoc modified to account for treatment gaps) might also help assess adherence to guidelines [11], especially during pandemic periods. As described in [12], AI methods were also applied to patient face recognition before RT treatment. These solutions reduce the risk of patient misidentification that may occur in a busy RT department, which could lead to delivering an inappropriate RT treatment plan.

3.2.2. Image Registration

Image registration and fusion algorithms are currently applied in RT and are considered a critical component of contour propagation and dose accumulation among consecutive RT treatments, as well as in online or offline adaptive plan optimization. Automated rigid registration or deformable image registration (DIR) often requires manual tuning due to possible relevant changes in patient anatomy and setup compared with the available images. This task is time-consuming but can be aided by AI-based algorithms to assist the radiation oncologist. In [12,13], DL-based methods were implemented to learn similarity metrics for image registration purposes, allowing for fast, user-independent, non-rigid inter-modality registration.
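As a concrete, if classical, example of the kind of similarity metric such methods learn to approximate, histogram-based mutual information can be computed in a few lines; this is an illustrative sketch, not the implementation used in [12,13]:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information, a classic inter-modality
    similarity metric for image registration."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()          # joint intensity distribution
    px = pxy.sum(axis=1)                   # marginal of img_a
    py = pxy.sum(axis=0)                   # marginal of img_b
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                 # high: perfectly aligned
mi_rand = mutual_information(img, rng.random((64, 64)))  # low: unrelated images
```

A registration optimizer would move one image until a metric like this is maximized; DL approaches learn a faster, task-specific surrogate.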
It should be noted that non-negligible uncertainties still exist, potentially affecting applications based on DIR [14]. The validation of the DIR-based algorithms against expert contours is necessary, as recommended by the American Association of Physicists in Medicine’s Task Group (TG) 132 report [15] and as applied in several papers [16,17,18,19,20,21,22,23].

3.2.3. Image Segmentation

AI-based methods have shown potential for medical image segmentation, target detection, and other tasks [24,25,26,27] using manual segmentation performed by expert radiation oncologists as the ground truth.
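Agreement with the expert ground truth is commonly quantified with the Dice similarity coefficient; the following is a minimal sketch on toy binary masks (all values synthetic):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between a predicted and an expert
    (ground-truth) binary mask; 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy masks: the "auto-segmented" contour is shifted 5 voxels from the truth
truth = np.zeros((100, 100), dtype=bool); truth[20:60, 20:60] = True
pred = np.zeros((100, 100), dtype=bool); pred[25:65, 20:60] = True
score = dice_coefficient(pred, truth)
```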
Image segmentation is a process of the RT workflow that shows substantial intra- and inter-observer variability [28]. It is crucial for treatment plan development because RT plans are optimized and judged based on contoured regions and the fulfillment of dose–volume constraints. Therefore, the absorbed dose distribution metrics depend on the accuracy and integrity of contours used to identify target and normal tissues. The possibility of using AI-based segmentation might reduce the inter-observer variability in the delineation of OARs and targets.
Nevertheless, manual segmentation is a time-consuming process, which impacts the work schedule of the RT department. The time required for this task is relevant not only for baseline computed tomography (CT) planning images but also for offline or online adaptive RT, requiring repeated CT images or daily acquired cone–beam CT (CBCT). Indeed, the time delay between image acquisition and manual segmentation can last several minutes, which might be incompatible with the online adaptive process, because in the meantime the targets and OARs might change in volume and position [29]. ML and DL-based methods were applied for automatic image segmentation in 15 and 254 identified papers, respectively, to mimic the expert radiation oncologists’ results in the delineation and identification of organs and tumors (Figure 4).
AI-based segmentation tools have been applied to different anatomical areas (Figure 5a), such as the brain [30,31], head and neck [25,26,27,32,33,34,35,36,37,38,39], thorax (including the segmentation of lungs [40,41,42,43,44], breasts [44,45,46,47], and heart [44,47]), abdomen [24,25,48], and female or male pelvis [29,45,49,50,51,52,53,54]. The use of AI for automatic segmentation might generate contours of lesions and OARs with an expected higher adherence to international guidelines [55,56,57,58,59,60,61,62,63,64,65], reducing the inter-operator variability, especially in a multicentric context.
Several commercial and open-source research tools are now available for auto-segmentation [32,33,34,35,36,39,40,41,47,66,67,68,69]. Among the investigated papers, 138 used a convolutional neural network (CNN), an architecture specifically designed for image recognition and computer vision applications. These techniques were applied in the RT field to segment lesions or OARs across several anatomical areas and image modalities. After CT, the second most frequent imaging modality is MR, thanks to its superior soft-tissue contrast, followed by PET/CT images, which provide metabolic information (Figure 5b).
Ongoing significant efforts are still directed towards improving the efficiency and robustness of automatic delineation and assessment strategies for QA with AI-based segmentation tools [42,70,71,72], requiring ad hoc guidelines [45]. In particular, Maffei et al. [70] developed an ML approach based on the use of a radiomic-feature-based classifier to evaluate the segmentation quality of the heart’s structure. Moreover, van Rooij et al. [71] used spatial probability maps to detect inaccuracies in contour delineation of the head and neck using a DL approach. Finally, DL-based methods were used to evaluate the quality of contouring of OARs and targets in lung cancer patients [42] and salivary glands in head and neck cancer patients [72].

3.2.4. Synthetic Image Generation

During RT, multiparametric MR images can allow more comprehensive characterization of the investigated area, while daily pre-therapy CBCT images are widely used for patient positioning verification. Unfortunately, the intensity values of MR [73] or CBCT [74] images are not directly related to the electron density of the tissues, which represents one of the input data issues in treatment planning. Moreover, CBCT is a practical low-dose image modality, but it suffers from poor image quality compared to CT. Thus, there is a need to derive CT-equivalent information (i.e., “synthetic” CT images), which is mandatory for RT treatment planning and to generate digitally reconstructed radiography (DRR) images that can be used for patient setup verification. Using this strategy, the generation of synthetic images allows fully MR-based planning, while reducing the mismatch that originates during the transfer of contours from one image modality to another and caused by organ motion and changes [75,76,77].
Regarding MR images, synthetic CT image generation based on segmentation and atlases has been proposed [78]. Unfortunately, the first approach is time-consuming, requiring the acquisition and segmentation of multiple MR sequences. At the same time, the atlas-based approach relies on DIR methods and depends on the cohort included in the atlas. In this context, DL approaches are considered the most effective strategy due to their ability to learn complex models without acquiring additional images or solving image-registration issues. DL-based “synthetic” CT image generation was implemented in 89% (Figure 6a) of papers, while ML was applied in 11% of papers (Figure 6b). Both approaches employed multiple 2D or 3D images as input data.
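Synthetic CT generators are typically evaluated against the reference CT using metrics such as the mean absolute error in Hounsfield units; the sketch below uses entirely synthetic data and is only illustrative:

```python
import numpy as np

def mae_hu(synthetic_ct, reference_ct, mask=None):
    """Mean absolute error in Hounsfield units (HU) between a synthetic
    CT and the reference CT, optionally restricted to a body mask."""
    diff = np.abs(synthetic_ct.astype(float) - reference_ct.astype(float))
    return float(diff[mask].mean() if mask is not None else diff.mean())

rng = np.random.default_rng(1)
ref = rng.integers(-1000, 1500, size=(32, 32))    # toy HU slice
synth = ref + rng.normal(0, 40, size=ref.shape)   # "generated" CT, ~40 HU noise
err = mae_hu(synth, ref)
```

Acceptable error levels vary by anatomical site and study; the numbers here carry no clinical meaning.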
Among DL-based methods, several examples were based on CBCT [79,80], MR [73,81,82], or 2D projection [83,84] images. CNN approaches were also adopted [85] to speed up the online setup verification using MR-guided RT.
Another application of AI methods involves the generation of synthetic images with increased quality, performed mainly on CBCT images (in 11/21 papers). Finally, Dai et al. [39] generated synthetic MR images to improve the auto-segmentation accuracy.

3.2.5. Treatment Planning

In the context of intensity-modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), manual planning for complex cases is challenging, depending on the planner’s experience and ability to manage the trial-and-error process. Planners, although very experienced, neither know “a priori” how much a plan can be optimized nor how to tailor all dosimetric constraints to the specific patient. During the optimization of the plan, the constraints depend on the site, stage of disease, and treatment schedule for given classes of patients. Knowledge-based treatment planning uses the ML technique to identify the patient-specific dose–volume constraints. This approach might have a limited capability to guide the new treatment schedules [86] because it requires a large dataset of collected data to train the system and provide the best patient-tailored solutions [87].
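A dose–volume constraint of the kind such systems tailor can be checked directly from a structure’s dose voxels; the V20Gy lung example below is purely illustrative (synthetic dose values and a hypothetical 30% limit):

```python
import numpy as np

def v_dose(dose_voxels, threshold_gy):
    """Fraction of a structure's volume receiving at least `threshold_gy`,
    i.e., the V_x metric used in dose-volume constraints."""
    dose_voxels = np.asarray(dose_voxels, dtype=float)
    return float((dose_voxels >= threshold_gy).mean())

# Toy check of a hypothetical lung constraint: V20Gy < 30%
rng = np.random.default_rng(2)
lung_dose = rng.gamma(shape=2.0, scale=6.0, size=10_000)  # synthetic voxel doses
v20 = v_dose(lung_dose, 20.0)
constraint_met = v20 < 0.30
```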
On the other hand, in a research study, Hrinvich et al. [88] implemented a DL method to find the optimal machine parameter solution for a VMAT plan. This method is based on reinforcement learning and a convolutional neural network, which are ML- and DL-based approaches, respectively. The paper employed a two-dimensional network design to optimize VMAT delivery. Its future extension to three-dimensional beam modeling could represent a fully DL-based time-saving solution that does not require previous plans.
Another treatment planning optimization strategy is based on fully automated IO solutions that rely on computerized rules and reasoning methods [49]. The technique templates allow the automatic generation of plans without human intervention. The superiority of auto-planning versus manual VMAT optimization was reported using a blinded side-by-side plan comparison [89]. The full implementation of auto-planning in the clinical routine can allow one to develop consistent quality plans with minimal inter-planner variation in less than 30 min [90]. This will significantly reduce planning times in busy departments or help in finding a better solution for possible planning class optimization. With the advent of auto-planning technologies, a complete understanding of the underlying methods and quality assurance procedures is necessary for the effective use of these tools.
More research is needed to explore the full potential of auto-planning and its optimal clinical application by identifying ad hoc dose–volume constraints for generating novel treatments based on the available technologies in a multicenter context [89].

3.2.6. Patient Positioning and Monitoring and Adaptive Planning

To guarantee consistency in online imaging, it is essential that, in the treatment phase, the patient is in the same position as established in the simulation phase. CBCT is the most used 3D online imaging technique to verify patient positioning, despite its much lower quality compared with the planning CT images. AI allows the improvement of CBCT image quality using DL-based methods [91,92], thereby enabling more accurate patient positioning. Similar approaches can be used for patient monitoring using onboard MRI, ultrasonography, or optical surface imaging [93]. In addition, DL-based methods were used to automatically detect gold fiducial markers before treatment [94].
During the RT delivery, patient or internal organ motion can increase the dose delivered to non-target tissues if motion management methods are not adopted. AI can be used to generate patient-specific dynamic motion management models, which can improve tumor tracking and interrupt irradiation in inadequate target positions. These algorithms could automatically adjust for complex breathing patterns in real-time to accurately track and predict the tumor position in advance [95]. AI models were used to predict geometric changes occurring in head and neck cancer patients and to identify the most appropriate treatment week for acquiring images for planning adaptations [29,96].
DL methods were also implemented to develop a complete adaptive RT strategy [29,54], reducing the time required for other tasks [97].

3.2.7. Planning Quality Assurance, Commissioning, and Machine Performance Checks

Planning QA, metrics, and chart reviews are essential components to ensure safety and high quality in RT treatments [98]. The advances made in OIS technology, including “record and verify” modules, have led to a growing collection of patient data, images, and reports. This huge quantity of information is not manageable for individuals or groups and could thus represent a barrier to error identification or lead to gaps in quality [99].
For this reason, several ongoing studies highlight the potential of AI approaches to strengthen and speed up the QA processes, including the daily metric and chart reviews collected in the RT OISs. The recently published AAPM TG-275 report [98] lists new recommendations for comprehensive and minimum initial chart checks, which require significant human resources if performed manually [100]. The development of an initial chart check automation process for radiotherapy information systems will dramatically improve the practicality and efficiency of implementing the above TG recommendations, yielding significant reductions in both manual check times (by 44–98%) and residual detectable errors (by 15–85%) [101].
Several examples are now available on the ability of ML and DL approaches [102,103,104] to distinguish introduced RT treatment delivery errors based on the collected 2D/3D data from devices (diode or chamber arrays) available for patient-specific quality assurance (PS-QA). In the future, patient-specific QA and machine performance prediction processes can hopefully be fully automated.
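Error detection in PS-QA commonly builds on the gamma analysis of measured versus planned dose. The simplified 1D global gamma sketch below (toy profiles, 3%/3 mm criteria) illustrates the underlying metric; it is not a clinical implementation:

```python
import numpy as np

def gamma_pass_rate(ref, eval_, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1D global gamma analysis: for each reference point, take
    the minimum combined dose-difference / distance-to-agreement value over
    all evaluated points; a point passes if gamma <= 1."""
    ref, eval_, positions = map(np.asarray, (ref, eval_, positions))
    dd = (eval_[None, :] - ref[:, None]) / (dose_tol * ref.max())  # dose term
    dta = (positions[None, :] - positions[:, None]) / dist_tol_mm  # distance term
    gamma = np.sqrt(dd**2 + dta**2).min(axis=1)
    return float((gamma <= 1.0).mean())

x = np.linspace(0, 100, 201)                   # detector positions in mm
ref_profile = np.exp(-((x - 50) / 20) ** 2)    # toy planned dose profile
meas_profile = ref_profile * 1.01              # measured, 1% high everywhere
rate = gamma_pass_rate(ref_profile, meas_profile, x)
```

A 1% global deviation is well inside the 3%/3 mm criteria, so every point passes; ML/DL error classifiers extend this idea to learned patterns in the 2D/3D measurement data.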

3.2.8. Outcome Prediction

Outcome prediction is one of the major applications of AI methods, being analyzed in about 24% of papers, mostly pertaining to ML approaches. Examples of the outcome prediction areas that have been investigated include overall survival [105], progression [106], recurrence [107], toxicity [23,108,109,110], biomarker identification [111], detection and classification of tumors [112,113], mutation prediction, treatment response [114,115,116], patient and risk stratification [117], and quality of life [118].
Most papers (about 60%) have focused on applying ML or DL models to clinical, genomic, and mixed (e.g., clinical and treatment) data. For example, Rosen et al. [23] predicted the risk of xerostomia in head and neck cancer patients, Lee et al. [108] evaluated the effects of multiple single nuclear polymorphisms (SNPs) on the risk of urinary symptoms in prostate cancer patients, Tian et al. [109] estimated the risk of fistula formation in patients treated with interstitial brachytherapy for advanced gynecological malignancies, and van Velzen et al. [110] assessed the heart disease risk in breast cancer patients. Other examples include predictions of survival in patients with non-small-cell lung cancer [105,119] and recurrence in salivary gland tumor patients undergoing adjuvant chemotherapy [107]. Furthermore, AI can be used to predict treatment failure [114] or response after individualized carbon ion RT [115] or neoadjuvant therapy [116], or to monitor post-RT changes in soft-tissue sarcomas [120]. As additional examples of ML-based applications, Tabl et al. [111] identified gene biomarkers guiding breast cancer treatments, while Yang et al. [118] predicted the quality of life of prostate cancer patients after RT treatment. Finally, Stenhouse et al. [121] developed an ML model to select the most critical features impacting on the choice of an optimal brachytherapy applicator.
The remaining papers (about 40%) developed predictive models using radiomic or dosiomic features. The radiomic and dosiomic approaches have gained increasing interest in recent years because they allow the extraction of quantitative information (i.e., features) from images and dose distribution information, respectively. The features are extracted from CT, MRI, or PET images collected at baseline or during follow-up and could be included in predictive models for the already mentioned purposes. As examples, Li et al. [122] predicted overall survival rates in the early stages of non-small-cell lung cancer patients, Ubaldi et al. [112] classified lung cancer stages, Osman et al. [117] stratified risk in prostate cancer patients, Kawahara et al. [113] detected the degree of differentiation in esophageal squamous cell carcinoma, and Du et al. [106] evaluated progression in nasopharyngeal carcinoma patients without metastasis using radiomic features extracted from CT images.
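The workflow shared by many of these predictive studies, fitting a classifier to a feature-versus-outcome table, can be sketched with a plain logistic model on synthetic “radiomic” features (all data, weights, and the outcome label below are made up):

```python
import numpy as np

# Synthetic cohort: 400 patients, 4 toy "radiomic" features, binary outcome
rng = np.random.default_rng(3)
n = 400
features = rng.normal(size=(n, 4))
true_w = np.array([1.5, -2.0, 0.0, 0.8])          # hidden generating weights
labels = (features @ true_w + rng.normal(0, 0.5, n)) > 0  # e.g., recurrence yes/no

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (no external ML library)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)          # gradient of the log-loss
    return w

w = fit_logistic(features, labels.astype(float))
accuracy = (((features @ w) > 0) == labels).mean()
```

Real studies add feature selection, cross-validation, and external validation on top of this core fit-and-predict loop.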
Unfortunately, the number of analyzed cases is limited compared with the number of patients treated every day, meaning a harmonization procedure is needed. The accuracy of dose–effect models is strongly related to the volume, velocity, and reliability of the collected data, as is typical for big data [99]. In addition, to build reliable models in multicentric studies, enough data must be collected, demanding a harmonization procedure to unify data or databases. In this context, Sleeman IV et al. [123] and Syed et al. [124] developed methods for automatically re-labeling structure set names according to AAPM TG-263. Another example was proposed by Haga et al. [125] to standardize imaging features before radiomic analysis.
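In its simplest form, re-labeling structure set names is a lookup toward a standard nomenclature; the alias table below is a made-up illustration, not the TG-263 mapping or the learned methods of [123,124]:

```python
# Hypothetical aliases mapping free-text structure names to a
# TG-263-like standard label (illustrative entries only).
ALIASES = {
    "lt parotid": "Parotid_L", "l parotid": "Parotid_L",
    "rt parotid": "Parotid_R", "r parotid": "Parotid_R",
    "spinal cord": "SpinalCord", "cord": "SpinalCord",
}

def normalize_structure(name):
    """Map a free-text structure name to a standard label, if known;
    unknown names are returned unchanged for manual review."""
    return ALIASES.get(name.strip().lower(), name)

standard = normalize_structure("  Lt Parotid ")  # -> "Parotid_L"
```

ML approaches replace the hand-built table with a classifier trained on labeled examples, which scales to the many naming variants found across centers.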
Moreover, mobile, portable, and wearable devices might boost the collection of harmonized real-world data to identify patient toxicity or outcomes early [126].

3.3. Skill and Concern

Although the future of AI in radiotherapy remains undecided, the adoption of these approaches in oncology is limited by the ability of RT staff to understand the current conditions and developments in the field, the accuracy of the methods, and the direction in which they are likely to unfold.
Five recent surveys focused on the overall aspects of the AI application, from education to quality assurance. Among these, Batumalai et al. [127] emphasized that in treatment planning AI improved the consistency in terms of planning optimization, productivity, and quality, allowing the staff to focus on patient care. This positive perceived impact of AI conflicts with the concern that it might cause a loss of skills or that there is a lack of training to maintain the planning expertise.
The deep separation in technological availability among the developed and developing countries is also a factor that may cause judgmental errors [128]. Data from developed countries cannot simply be extrapolated and applied to developing countries without expecting discrepancies. Hence, ensuring equity in data representation, keeping in mind the geographical variations in diseases, populations, and health services, seems to be the way forward. As with all multifactorial problems complicated by the explosion of raw input data, there is a risk of cognitive overload [129]. Indeed, the radiation oncologist’s evaluation is affected by the limited human cognitive capacity for using variables in the clinical decision-making process [129,130].
On the other hand, the amount of information that computers can analyze is unmeasurable, and in any case is far beyond the human ability to retain information for specific clinical decisions. Therefore, AI is needed to identify early biomarkers for patient stratification or outcome prediction. However, several concerns in using AI tools as a “black box” solution have been raised by RT staff. One of the proposed methods to make ML and DL results more acceptable in clinical practice is to allow RT staff to understand the inner workings of the device they are using [131].
Continued education on AI is considered a priority by RT staff members, as well as the preservation of their skills (e.g., manual segmentation ability or contour supervision), as reported in several surveys [132,133]. Thus, the role of medical professionals in RT departments is evolving thanks to the introduction of AI methods. Most of the staff’s work will focus on macro-processes, namely verifying the system’s performance quality, meaning staff competencies and education will need to change accordingly.
In this context, novel tasks and roles to pursue are already included in the core curricula for RT staff according to the individual’s professional role [134]. Also crucial for future implementation is that these AI tools might enable a more direct role in patient care responsibilities for all professionals, including medical physicists, through the online analysis of follow-up patient outcomes [135].
Caution must be observed in making decisions solely based on AI-generated information, the accuracy of which might be limited by the paucity of training data.
The future use of AI will herald unprecedented changes in the field of radiation oncology. Numerous discussions have prompted careful thought about AI’s impacts on the future landscape of RT, including how to preserve patient safety and how these devices and tools will be developed and regulated [136]. Price suggested that regulatory agencies mandate that ML developers disclose the information used in their algorithms [4].
Successful prospective validation on a large patient cohort may require academic–industrial–multicenter cooperation while fulfilling data security and intellectual property requirements. Cross-validated approaches among RT departments are also mandatory, and federated learning techniques might help rapidly implement shared AI solutions in RT [137]. With federated learning, data remain stored at local centers, while models are developed from the entire cohort in a multicenter context [10].
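The federated averaging idea behind such techniques can be sketched as follows: each center trains a local model on data that never leave the institution, and only the model weights are pooled. The simulated data, the three-center setup, and the plain logistic-regression learner are illustrative assumptions; real deployments rely on dedicated frameworks with secure aggregation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One center's local training step: full-batch logistic-regression gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)  # gradient step on log-loss
    return w

def federated_average(global_w, center_data):
    """One FedAvg round: centers train locally; only weights are shared and pooled."""
    updates, sizes = [], []
    for X, y in center_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    # weight each center's model by its cohort size
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0])  # ground-truth decision rule for the simulation
centers = []
for _ in range(3):  # three simulated RT centers; data stay inside each loop iteration
    X = rng.normal(size=(100, 2))
    y = (X @ w_true > 0).astype(float)
    centers.append((X, y))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = federated_average(w, centers)
```

After the rounds, the pooled model aligns with the shared decision rule even though no center ever transmitted patient-level data, which is the property that makes the approach attractive for multicenter RT cohorts.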

3.4. Ethics

One of the main concerns is the ethics of using AI approaches and their evolving role [138].
Many AI learning methods require large datasets, which are often unavailable, very expensive, or protected by intellectual property rights. AI algorithms also need to be extensively tested for accuracy before clinical implementation, which is costly and time-consuming. A central ethical question is: if AI fails to deliver the correct output, who takes responsibility for the mistake [139]? Human supervision is mandatory to guarantee the management and control of AI results, which might impact patients undergoing RT as AI is implemented and evolves over time.

4. Conclusions

AI-based tools are now on the roadmap for RT and have been applied across the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns have been raised, including the need for harmonization and for improvements in RT staff skills, to permit the intended use of AI tools in clinical practice and to avoid their use as a “black box” solution.
In conclusion, strong cooperation between RT clinicians and AI experts is necessary to develop and implement reliable and trustworthy AI tools.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/app12073223/s1, Table S1: List of included papers for quantitative evaluation and their classification according to the study main aims and AI subgroups (i.e., type of methods); Table S2: List of excluded papers and reasons for exclusion from quantitative evaluation.

Author Contributions

Conceptualization, L.S.; methodology, M.S., G.P., and S.S.; investigation, I.A. and A.G.M.; resources, C.G. and I.A.; data curation, A.B., C.G., and G.P.; writing—original draft preparation, M.S., A.B., and L.S.; writing—review and editing G.D.G., S.S., G.P., and L.S.; visualization, M.S.; supervision, L.S. and A.G.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hussein, M.; Heijmen, B.J.M.; Verellen, D.; Nisbet, A. Automation in intensity modulated radiotherapy treatment planning—A review of recent innovations. Br. J. Radiol. 2018, 91, 20180270. [Google Scholar] [CrossRef]
  2. Barragán-Montero, A.; Javaid, U.; Valdés, G.; Nguyen, D.; Desbordes, P.; Macq, B.; Willems, S.; Vandewinckele, L.; Holmström, M.; Löfman, F.; et al. Artificial intelligence and machine learning for medical imaging: A technology review. Phys. Med. 2021, 83, 242–256. [Google Scholar] [CrossRef]
  3. Manco, L.; Maffei, N.; Strolin, S.; Vichi, S.; Bottazzi, L.; Strigari, L. Basic of machine learning and deep learning in imaging for medical physicists. Phys. Med. 2021, 83, 194–205. [Google Scholar] [CrossRef]
  4. Price, W.N., II. Regulating Black-Box Medicine. Mich. Law Rev. 2017, 116, 421–474. [Google Scholar] [CrossRef]
  5. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  6. Lewis, P.J.; Court, L.E.; Lievens, Y.; Aggarwal, A. Structure and Processes of Existing Practice in Radiotherapy Peer Review: A Systematic Review of the Literature. Clin. Oncol. 2021, 33, 248–260. [Google Scholar] [CrossRef]
  7. Francolini, G.; Desideri, I.; Stocchi, G.; Salvestrini, V.; Ciccone, L.P.; Garlatti, P.; Loi, M.; Livi, L. Artificial Intelligence in radiotherapy: State of the art and future directions. Med. Oncol. 2020, 37, 50. [Google Scholar] [CrossRef]
  8. Kusters, J.; Bzdusek, K.; Kumar, P.; van Kollenburg, P.G.M.; Kunze-Busch, M.C.; Wendling, M.; Dijkema, T.; Kaanders, J. Automated IMRT planning in Pinnacle: A study in head-and-neck cancer. Strahlenther. Onkol. 2017, 193, 1031–1038. [Google Scholar] [CrossRef]
  9. Marazzi, F.; Tagliaferri, L.; Masiello, V.; Moschella, F.; Colloca, G.F.; Corvari, B.; Sanchez, A.M.; Capocchiano, N.D.; Pastorino, R.; Iacomini, C.; et al. GENERATOR Breast DataMart-The Novel Breast Cancer Data Discovery System for Research and Monitoring: Preliminary Results and Future Perspectives. J. Pers. Med. 2021, 11, 65. [Google Scholar] [CrossRef]
  10. Field, M.; Vinod, S.; Aherne, N.; Carolan, M.; Dekker, A.; Delaney, G.; Greenham, S.; Hau, E.; Lehmann, J.; Ludbrook, J.; et al. Implementation of the Australian Computer-Assisted Theragnostics (AusCAT) network for radiation oncology data extraction, reporting and distributed learning. J. Med. Imaging Radiat. Oncol. 2021, 65, 627–636. [Google Scholar] [CrossRef]
  11. Galofaro, E.; Malizia, C.; Ammendolia, I.; Galuppi, A.; Guido, A.; Ntreta, M.; Siepe, G.; Tolento, G.; Veraldi, A.; Scirocco, E.; et al. COVID-19 Pandemic-Adapted Radiotherapy Guidelines: Are They Really Followed? Curr. Oncol. 2021, 28, 288. [Google Scholar] [CrossRef]
  12. Haskins, G.; Kruecker, J.; Kruger, U.; Xu, S.; Pinto, P.A.; Wood, B.J.; Yan, P. Learning deep similarity metric for 3D MR-TRUS image registration. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 417–425. [Google Scholar] [CrossRef] [Green Version]
  13. Cao, X.; Yang, J.; Wang, L.; Xue, Z.; Wang, Q.; Shen, D. Deep Learning based Inter-Modality Image Registration Supervised by Intra-Modality Similarity. Mach. Learn. Med. Imaging 2018, 11046, 55–63. [Google Scholar] [CrossRef]
  14. Oh, S.; Kim, S. Deformable image registration in radiation therapy. Radiat. Oncol. J. 2017, 35, 101–111. [Google Scholar] [CrossRef]
  15. Brock, K.K.; Mutic, S.; McNutt, T.R.; Li, H.; Kessler, M.L. Use of image registration and fusion algorithms and techniques in radiotherapy: Report of the AAPM Radiation Therapy Committee Task Group No. 132. Med. Phys. 2017, 44, e43–e76. [Google Scholar] [CrossRef] [Green Version]
  16. Weppler, S.; Schinkel, C.; Kirkby, C.; Smith, W. Lasso logistic regression to derive workflow-specific algorithm performance requirements as demonstrated for head and neck cancer deformable image registration in adaptive radiation therapy. Phys. Med. Biol. 2020, 65, 195013. [Google Scholar] [CrossRef]
  17. Kumarasiri, A.; Siddiqui, F.; Liu, C.; Yechieli, R.; Shah, M.; Pradhan, D.; Zhong, H.; Chetty, I.J.; Kim, J. Deformable image registration based automatic CT-to-CT contour propagation for head and neck adaptive radiotherapy in the routine clinical setting. Med. Phys. 2014, 41, 121712. [Google Scholar] [CrossRef]
  18. Liang, X.; Bibault, J.E.; Leroy, T.; Escande, A.; Zhao, W.; Chen, Y.; Buyyounouski, M.K.; Hancock, S.L.; Bagshaw, H.; Xing, L. Automated contour propagation of the prostate from pCT to CBCT images via deep unsupervised learning. Med. Phys. 2021, 48, 1764–1770. [Google Scholar] [CrossRef]
  19. Hoffmann, C.; Krause, S.; Stoiber, E.M.; Mohr, A.; Rieken, S.; Schramm, O.; Debus, J.; Sterzing, F.; Bendl, R.; Giske, K. Accuracy quantification of a deformable image registration tool applied in a clinical setting. J. Appl. Clin. Med. Phys. 2014, 15, 4564. [Google Scholar] [CrossRef]
  20. Ramadaan, I.S.; Peick, K.; Hamilton, D.A.; Evans, J.; Iupati, D.; Nicholson, A.; Greig, L.; Louwe, R.J. Validation of Varian’s SmartAdapt® deformable image registration algorithm for clinical application. Radiat. Oncol. 2015, 10, 73. [Google Scholar] [CrossRef] [Green Version]
  21. Pukala, J.; Johnson, P.B.; Shah, A.P.; Langen, K.M.; Bova, F.J.; Staton, R.J.; Mañon, R.R.; Kelly, P.; Meeks, S.L. Benchmarking of five commercial deformable image registration algorithms for head and neck patients. J. Appl Clin. Med. Phys. 2016, 17, 25–40. [Google Scholar] [CrossRef]
  22. Loi, G.; Fusella, M.; Lanzi, E.; Cagni, E.; Garibaldi, C.; Iacoviello, G.; Lucio, F.; Menghi, E.; Miceli, R.; Orlandini, L.C.; et al. Performance of commercially available deformable image registration platforms for contour propagation using patient-based computational phantoms: A multi-institutional study. Med. Phys. 2018, 45, 748–757. [Google Scholar] [CrossRef]
  23. Rosen, B.S.; Hawkins, P.G.; Polan, D.F.; Balter, J.M.; Brock, K.K.; Kamp, J.D.; Lockhart, C.M.; Eisbruch, A.; Mierzwa, M.L.; Ten Haken, R.K.; et al. Early Changes in Serial CBCT-Measured Parotid Gland Biomarkers Predict Chronic Xerostomia After Head and Neck Radiation Therapy. Int. J. Radiat. Oncol. Biol. Phys. 2018, 102, 1319–1329. [Google Scholar] [CrossRef]
  24. Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.W.; Heng, P.A. H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674. [Google Scholar] [CrossRef] [Green Version]
  25. Chen, L.; Bentley, P.; Mori, K.; Misawa, K.; Fujiwara, M.; Rueckert, D. DRINet for Medical Image Segmentation. IEEE Trans. Med. Imaging 2018, 37, 2453–2462. [Google Scholar] [CrossRef]
  26. Tong, N.; Gou, S.; Yang, S.; Ruan, D.; Sheng, K. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Med. Phys. 2018, 45, 4558–4567. [Google Scholar] [CrossRef] [Green Version]
  27. Tong, N.; Gou, S.; Yang, S.; Cao, M.; Sheng, K. Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images. Med. Phys. 2019, 46, 2669–2682. [Google Scholar] [CrossRef]
  28. Nelms, B.E.; Tomé, W.A.; Robinson, G.; Wheeler, J. Variations in the contouring of organs at risk: Test case from a patient with oropharyngeal cancer. Int. J. Radiat. Oncol. Biol. Phys. 2012, 82, 368–378. [Google Scholar] [CrossRef]
  29. Sibolt, P.; Andersson, L.M.; Calmels, L.; Sjostrom, D.; Bjelkengren, U.; Geertsen, P.; Behrens, C.F. Clinical implementation of artificial intelligence-driven cone-beam computed tomography-guided online adaptive radiotherapy in the pelvic region. Phys. Imaging Radiat. Oncol. 2021, 17, 1–7. [Google Scholar] [CrossRef]
  30. Feng, C.H.; Cornell, M.; Moore, K.L.; Karunamuni, R.; Seibert, T.M. Automated contouring and planning pipeline for hippocampal-avoidant whole-brain radiotherapy. Radiat. Oncol. 2020, 15, 251. [Google Scholar] [CrossRef]
  31. Pan, K.; Zhao, L.; Gu, S.; Tang, Y.; Wang, J.; Yu, W.; Zhu, L.; Feng, Q.; Su, R.; Xu, Z.; et al. Deep learning-based automatic delineation of the hippocampus by MRI: Geometric and dosimetric evaluation. Radiat. Oncol. 2021, 16, 12. [Google Scholar] [CrossRef] [PubMed]
  32. Liu, Y.; Lei, Y.; Fu, Y.; Wang, T.; Zhou, J.; Jiang, X.; McDonald, M.; Beitler, J.J.; Curran, W.J.; Liu, T.; et al. Head and neck multi-organ auto-segmentation on CT images aided by synthetic MRI. Med. Phys. 2020, 47, 4294–4302. [Google Scholar] [CrossRef]
  33. Zhong, Y.; Yang, Y.; Fang, Y.; Wang, J.; Hu, W. A Preliminary Experience of Implementing Deep-Learning Based Auto-Segmentation in Head and Neck Cancer: A Study on Real-World Clinical Cases. Front. Oncol. 2021, 11, 638197. [Google Scholar] [CrossRef] [PubMed]
  34. Zhu, W.; Huang, Y.; Zeng, L.; Chen, X.; Liu, Y.; Qian, Z.; Du, N.; Fan, W.; Xie, X. AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy. Med. Phys. 2018, 46, 576–589. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Chen, W.; Li, Y.; Dyer, B.A.; Feng, X.; Rao, S.; Benedict, S.H.; Chen, Q.; Rong, Y. Deep learning vs. atlas-based models for fast auto-segmentation of the masticatory muscles on head and neck CT images. Radiat. Oncol. 2020, 15, 176. [Google Scholar] [CrossRef]
  36. Kim, N.; Chun, J.; Chang, J.; Lee, C.G.; Keum, K.C.; Kim, J.S. Feasibility of Continual Deep Learning-Based Segmentation for Personalized Adaptive Radiation Therapy in Head and Neck Area. Cancers 2021, 13, 702. [Google Scholar] [CrossRef]
  37. Van Rooij, W.; Dahele, M.; Nijhuis, H.; Slotman, B.J.; Verbakel, W.F. Strategies to improve deep learning-based salivary gland segmentation. Radiat. Oncol. 2020, 15, 272. [Google Scholar] [CrossRef]
  38. Nikolov, S.; Blackwell, S.; Zverovitch, A.; Mendes, R.; Livne, M.; De Fauw, J.; Patel, Y.; Meyer, C.; Askham, H.; Romera-Paredes, B.; et al. Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study. J. Med. Internet Res. 2021, 23, e26151. [Google Scholar] [CrossRef]
  39. Dai, X.; Lei, Y.; Wang, T.; Zhou, J.; Roper, J.; McDonald, M.; Beitler, J.J.; Curran, W.J.; Liu, T.; Yang, X. Automated delineation of head and neck organs at risk using synthetic MRI-aided mask scoring regional convolutional neural network. Med. Phys. 2021, 48, 5862–5873. [Google Scholar] [CrossRef]
  40. Nemoto, T.; Futakami, N.; Yagi, M.; Kumabe, A.; Takeda, A.; Kunieda, E.; Shigematsu, N. Efficacy evaluation of 2D, 3D U-Net semantic segmentation and atlas-based segmentation of normal lungs excluding the trachea and main bronchi. J. Radiat. Res. 2020, 61, 257–264. [Google Scholar] [CrossRef] [Green Version]
  41. Wong, J.; Huang, V.; Giambattista, J.A.; Teke, T.; Kolbeck, C.; Giambattista, J.; Atrchian, S. Training and Validation of Deep Learning-Based Auto-Segmentation Models for Lung Stereotactic Ablative Radiotherapy Using Retrospective Radiotherapy Planning Contours. Front. Oncol. 2021, 11, 626499. [Google Scholar] [CrossRef] [PubMed]
  42. Men, K.; Geng, H.; Biswas, T.; Liao, Z.; Xiao, Y. Automated Quality Assurance of OAR Contouring for Lung Cancer Based on Segmentation With Deep Active Learning. Front. Oncol. 2020, 10, 986. [Google Scholar] [CrossRef] [PubMed]
  43. Gu, H.; Gan, W.; Zhang, C.; Feng, A.; Wang, H.; Huang, Y.; Chen, H.; Shao, Y.; Duan, Y.; Xu, Z. A 2D-3D hybrid convolutional neural network for lung lobe auto-segmentation on standard slice thickness computed tomography of patients receiving radiotherapy. Biomed. Eng. Online 2021, 20, 94. [Google Scholar] [CrossRef] [PubMed]
  44. Lappas, G.; Wolfs, C.J.A.; Staut, N.; Lieuwes, N.G.; Biemans, R.; van Hoof, S.J.; Dubois, L.J.; Verhaegen, F. Automatic contouring of normal tissues with deep learning for preclinical radiation studies. Phys. Med. Biol. 2022, 67, 044001. [Google Scholar] [CrossRef]
  45. Schreier, J.; Attanasi, F.; Laaksonen, H. Generalization vs. Specificity: In Which Cases Should a Clinic Train its Own Segmentation Models? Front. Oncol. 2020, 10, 675. [Google Scholar] [CrossRef]
  46. Liu, Z.; Liu, F.; Chen, W.; Liu, X.; Hou, X.; Shen, J.; Guan, H.; Zhen, H.; Wang, S.; Chen, Q.; et al. Automatic Segmentation of Clinical Target Volumes for Post-Modified Radical Mastectomy Radiotherapy Using Convolutional Neural Networks. Front. Oncol. 2020, 10, 581347. [Google Scholar] [CrossRef]
  47. Choi, M.S.; Choi, B.S.; Chung, S.Y.; Kim, N.; Chun, J.; Kim, Y.B.; Chang, J.S.; Kim, J.S. Clinical evaluation of atlas- and deep learning-based automatic segmentation of multiple organs and clinical target volumes for breast cancer. Radiother. Oncol. 2020, 153, 139–145. [Google Scholar] [CrossRef]
  48. Liang, F.; Qian, P.; Su, K.H.; Baydoun, A.; Leisser, A.; Van Hedent, S.; Kuo, J.W.; Zhao, K.; Parikh, P.; Lu, Y.; et al. Abdominal, multi-organ, auto-contouring method for online adaptive magnetic resonance guided radiotherapy: An intelligent, multi-level fusion approach. Artif. Intell. Med. 2018, 90, 34–41. [Google Scholar] [CrossRef]
  49. Xia, X.; Wang, J.; Li, Y.; Peng, J.; Fan, J.; Zhang, J.; Wan, J.; Fang, Y.; Zhang, Z.; Hu, W. An Artificial Intelligence-Based Full-Process Solution for Radiotherapy: A Proof of Concept Study on Rectal Cancer. Front. Oncol. 2020, 10, 616721. [Google Scholar] [CrossRef]
  50. Savenije, M.H.F.; Maspero, M.; Sikkes, G.G.; van der Voort van Zyp, J.R.N.; Kotte, T.J.; Alexis, N.; Bol, G.H.; van den Berg, T.; Cornelis, A. Clinical implementation of MRI-based organs-at-risk auto-segmentation with convolutional networks for prostate radiotherapy. Radiat. Oncol. 2020, 15, 104. [Google Scholar] [CrossRef]
  51. Sartor, H.; Minarik, D.; Enqvist, O.; Ulén, J.; Wittrup, A.; Bjurberg, M.; Trägårdh, E. Auto-segmentations by convolutional neural network in cervical and anorectal cancer with clinical structure sets as the ground truth. Clin. Transl. Radiat. Oncol. 2020, 25, 37–45. [Google Scholar] [CrossRef] [PubMed]
  52. Cha, E.; Elguindi, S.; Onochie, I.; Gorovets, D.; Deasy, J.O.; Zelefsky, M.; Gillespie, E.F. Clinical implementation of deep learning contour autosegmentation for prostate radiotherapy. Radiother. Oncol. 2021, 159, 1–7. [Google Scholar] [CrossRef] [PubMed]
  53. Ma, C.Y.; Zhou, J.Y.; Xu, X.T.; Guo, J.; Han, M.F.; Gao, Y.Z.; Du, H.; Stahl, J.N.; Maltz, J.S. Deep learning-based auto-segmentation of clinical target volumes for radiotherapy treatment of cervical cancer. J. Appl. Clin. Med. Phys. 2021, 23, e13470. [Google Scholar] [CrossRef] [PubMed]
  54. Byrne, M.; Archibald-Heeren, B.; Hu, Y.; Teh, A.; Beserminji, R.; Cai, E.; Liu, G.; Yates, A.; Rijken, J.; Collett, N.; et al. Varian ethos online adaptive radiotherapy for prostate cancer: Early results of contouring accuracy, treatment plan quality, and treatment time. J. Appl. Clin. Med. Phys. 2022, 23, e13479. [Google Scholar] [CrossRef]
  55. Brouwer, C.L.; Steenbakkers, R.J.H.M.; Bourhis, J.; Budach, W.; Grau, C.; Grégoire, V.; van Herk, M.; Lee, A.; Maingon, P.; Nutting, C.; et al. CT-based delineation of organs at risk in the head and neck region: DAHANCA, EORTC, GORTEC, HKNPCSG, NCIC CTG, NCRI, NRG Oncology and TROG consensus guidelines. Radiother. Oncol. 2015, 117, 83–90. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Scoccianti, S.; Detti, B.; Gadda, D.; Greto, D.; Furfaro, I.; Meacci, F.; Simontacchi, G.; Di Brina, L.; Bonomo, P.; Giacomelli, I.; et al. Organs at risk in the brain and their dose-constraints in adults and in children: A radiation oncologist’s guide for delineation in everyday practice. Radiother. Oncol. 2015, 114, 230–238. [Google Scholar] [CrossRef]
  57. Offersen, B.V.; Boersma, L.J.; Kirkove, C.; Hol, S.; Aznar, M.C.; Biete Sola, A.; Kirova, Y.M.; Pignol, J.-P.; Remouchamps, V.; Verhoeven, K.; et al. ESTRO consensus guideline on target volume delineation for elective radiation therapy of early stage breast cancer. Radiother. Oncol. 2015, 114, 3–10. [Google Scholar] [CrossRef]
  58. Kong, F.M.; Ritter, T.; Quint, D.J.; Senan, S.; Gaspar, L.E.; Komaki, R.U.; Hurkmans, C.W.; Timmerman, R.; Bezjak, A.; Bradley, J.D.; et al. Consideration of dose limits for organs at risk of thoracic radiotherapy: Atlas for lung, proximal bronchial tree, esophagus, spinal cord, ribs, and brachial plexus. Int. J. Radiat. Oncol. Biol. Phys. 2011, 81, 1442–1457. [Google Scholar] [CrossRef] [Green Version]
  59. Jabbour, S.K.; Hashem, S.A.; Bosch, W.; Kim, T.K.; Finkelstein, S.E.; Anderson, B.M.; Ben-Josef, E.; Crane, C.H.; Goodman, K.A.; Haddock, M.G.; et al. Upper abdominal normal organ contouring guidelines and atlas: A Radiation Therapy Oncology Group consensus. Pract. Radiat. Oncol. 2014, 4, 82–89. [Google Scholar] [CrossRef] [Green Version]
  60. Gay, H.A.; Barthold, H.J.; O’Meara, E.; Bosch, W.R.; El Naqa, I.; Al-Lozi, R.; Rosenthal, S.A.; Lawton, C.; Lee, W.R.; Sandler, H.; et al. Pelvic normal tissue contouring guidelines for radiation therapy: A Radiation Therapy Oncology Group consensus panel atlas. Int. J. Radiat. Oncol. Biol. Phys. 2012, 83, e353–e362. [Google Scholar] [CrossRef] [Green Version]
  61. Salembier, C.; Villeirs, G.; De Bari, B.; Hoskin, P.; Pieters, B.R.; Van Vulpen, M.; Khoo, V.; Henry, A.; Bossi, A.; De Meerleer, G.; et al. ESTRO ACROP consensus guideline on CT- and MRI-based target volume delineation for primary radiation therapy of localized prostate cancer. Radiother. Oncol. 2018, 127, 49–61. [Google Scholar] [CrossRef] [PubMed]
  62. Grégoire, V.; Ang, K.; Budach, W.; Grau, C.; Hamoir, M.; Langendijk, J.A.; Lee, A.; Le, Q.-T.; Maingon, P.; Nutting, C.; et al. Delineation of the neck node levels for head and neck tumors: A 2013 update. DAHANCA, EORTC, HKNPCSG, NCIC CTG, NCRI, RTOG, TROG consensus guidelines. Radiother. Oncol. 2014, 110, 172–181. [Google Scholar] [CrossRef] [PubMed]
  63. Offersen, B.V.; Boersma, L.J.; Kirkove, C.; Hol, S.; Aznar, M.C.; Sola, A.B.; Kirova, Y.M.; Pignol, J.P.; Remouchamps, V.; Verhoeven, K.; et al. ESTRO consensus guideline on target volume delineation for elective radiation therapy of early stage breast cancer, version 1.1. Radiother. Oncol. 2016, 118, 205–208. [Google Scholar] [CrossRef] [PubMed]
  64. Harris, V.A.; Staffurth, J.; Naismith, O.; Esmail, A.; Gulliford, S.; Khoo, V.; Lewis, R.; Littler, J.; McNair, H.; Sadoyze, A.; et al. Consensus Guidelines and Contouring Atlas for Pelvic Node Delineation in Prostate and Pelvic Node Intensity Modulated Radiation Therapy. Int. J. Radiat. Oncol. Biol. Phys. 2015, 92, 874–883. [Google Scholar] [CrossRef]
  65. Lawton, C.A.F.; Michalski, J.; El-Naqa, I.; Buyyounouski, M.K.; Lee, W.R.; Menard, C.; O’Meara, E.; Rosenthal, S.A.; Ritter, M.; Seider, M. RTOG GU Radiation Oncology Specialists Reach Consensus on Pelvic Lymph Node Volumes for High-Risk Prostate Cancer. Int. J. Radiat. Oncol. Biol. Phys. 2009, 74, 383–387. [Google Scholar] [CrossRef] [Green Version]
  66. Jeong, H.; Ntolkeras, G.; Alhilani, M.; Atefi, S.R.; Zöllei, L.; Fujimoto, K.; Pourvaziri, A.; Lev, M.H.; Grant, P.E.; Bonmassar, G. Development, validation, and pilot MRI safety study of a high-resolution, open source, whole body pediatric numerical simulation model. PLoS ONE 2021, 16, e0241682. [Google Scholar] [CrossRef]
  67. Cardenas, C.E.; Mohamed, A.S.R.; Yang, J.; Gooding, M.; Veeraraghavan, H.; Kalpathy-Cramer, J.; Ng, S.P.; Ding, Y.; Wang, J.; Lai, S.Y.; et al. Head and neck cancer patient images for determining auto-segmentation accuracy in T2-weighted magnetic resonance imaging through expert manual segmentations. Med. Phys. 2020, 47, 2317–2322. [Google Scholar] [CrossRef]
  68. Cardenas, C.E.; Yang, J.; Anderson, B.M.; Court, L.E.; Brock, K.B. Advances in Auto-Segmentation. Semin. Radiat. Oncol. 2019, 29, 185–197. [Google Scholar] [CrossRef]
  69. Liu, X.; Li, K.W.; Yang, R.; Geng, L.S. Review of Deep Learning Based Automatic Segmentation for Lung Cancer Radiotherapy. Front. Oncol. 2021, 11, 717039. [Google Scholar] [CrossRef]
  70. Maffei, N.; Manco, L.; Aluisio, G.; D’Angelo, E.; Ferrazza, P.; Vanoni, V.; Meduri, B.; Lohr, F.; Guidi, G. Radiomics classifier to quantify automatic segmentation quality of cardiac sub-structures for radiotherapy treatment planning. Phys. Med. 2021, 83, 278–286. [Google Scholar] [CrossRef]
  71. van Rooij, W.; Verbakel, W.F.; Slotman, B.J.; Dahele, M. Using Spatial Probability Maps to Highlight Potential Inaccuracies in Deep Learning-Based Contours: Facilitating Online Adaptive Radiation Therapy. Adv. Radiat. Oncol. 2021, 6, 100658. [Google Scholar] [CrossRef]
  72. Nijhuis, H.; van Rooij, W.; Gregoire, V.; Overgaard, J.; Slotman, B.J.; Verbakel, W.F.; Dahele, M. Investigating the potential of deep learning for patient-specific quality assurance of salivary gland contours using EORTC-1219-DAHANCA-29 clinical trial data. Acta Oncol. 2021, 60, 575–581. [Google Scholar] [CrossRef] [PubMed]
  73. Maspero, M.; Savenije, M.H.F.; Dinkla, A.M.; Seevinck, P.R.; Intven, M.P.W.; Jurgenliemk-Schulz, I.M.; Kerkmeijer, L.G.W.; van den Berg, C.A.T. Dose evaluation of fast synthetic-CT generation using a generative adversarial network for general pelvis MR-only radiotherapy. Phys. Med. Biol. 2018, 63, 185001. [Google Scholar] [CrossRef] [PubMed]
  74. Barateau, A.; De Crevoisier, R.; Largent, A.; Mylona, E.; Perichon, N.; Castelli, J.; Chajon, E.; Acosta, O.; Simon, A.; Nunes, J.C.; et al. Comparison of CBCT-based dose calculation methods in head and neck cancer radiotherapy: From Hounsfield unit to density calibration curve to deep learning. Med. Phys. 2020, 47, 4683–4693. [Google Scholar] [CrossRef] [PubMed]
  75. Schmidt, M.A.; Payne, G.S. Radiotherapy planning using MRI. Phys. Med. Biol. 2015, 60, R323–R361. [Google Scholar] [CrossRef] [PubMed]
  76. Devic, S. MRI simulation for radiotherapy treatment planning. Med. Phys. 2012, 39, 6701–6711. [Google Scholar] [CrossRef]
  77. Le, A.H.; Stojadinovic, S.; Timmerman, R.; Choy, H.; Duncan, R.L.; Jiang, S.B.; Pompos, A. Real-Time Whole-Brain Radiation Therapy: A Single-Institution Experience. Int. J. Radiat. Oncol. Biol. Phys. 2018, 100, 1280–1288. [Google Scholar] [CrossRef]
  78. Han, X. MR-based synthetic CT generation using a deep convolutional neural network method. Med. Phys. 2017, 44, 1408–1419. [Google Scholar] [CrossRef] [Green Version]
  79. Li, Y.; Zhu, J.; Liu, Z.; Teng, J.; Xie, Q.; Zhang, L.; Liu, X.; Shi, J.; Chen, L. A preliminary study of using a deep convolution neural network to generate synthesized CT images based on CBCT for adaptive radiotherapy of nasopharyngeal carcinoma. Phys. Med. Biol. 2019, 64, 145010. [Google Scholar] [CrossRef]
  80. Dhont, J.; Verellen, D.; Mollaert, I.; Vanreusel, V.; Vandemeulebroucke, J. RealDRR—Rendering of realistic digitally reconstructed radiographs using locally trained image-to-image translation. Radiother. Oncol. 2020, 153, 213–219. [Google Scholar] [CrossRef]
  81. Bahrami, A.; Karimian, A.; Fatemizadeh, E.; Arabi, H.; Zaidi, H. A new deep convolutional neural network design with efficient learning capability: Application to CT image synthesis from MRI. Med. Phys. 2020, 47, 5158–5171. [Google Scholar] [CrossRef] [PubMed]
  82. Maspero, M.; Bentvelzen, L.G.; Savenije, M.H.F.; Guerreiro, F.; Seravalli, E.; Janssens, G.O.; van den Berg, C.A.T.; Philippens, M.E.P. Deep learning-based synthetic CT generation for paediatric brain MR-only photon and proton radiotherapy. Radiother. Oncol. 2020, 153, 197–204. [Google Scholar] [CrossRef] [PubMed]
  83. Dai, X.; Lei, Y.; Tian, Z.; Wang, T.; Liu, T.; Curran, W.J.; Yang, X. Deep learning-based volumetric image generation from projection imaging for prostate radiotherapy. In Proceedings of the Medical Imaging 2021: Image-Guided Procedures, Robotic Interventions, and Modeling, Online, 15–19 February 2021. [Google Scholar]
  84. Tong, F.; Nakao, M.; Wu, S.; Nakamura, M.; Matsuda, T. X-ray2Shape: Reconstruction of 3D Liver Shape from a Single 2D Projection Image. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montréal, QC, Canada, 20–24 July 2020; pp. 1608–1611. [Google Scholar]
  85. Cusumano, D.; Lenkowicz, J.; Votta, C.; Boldrini, L.; Placidi, L.; Catucci, F.; Dinapoli, N.; Antonelli, M.V.; Romano, A.; De Luca, V.; et al. A deep learning approach to generate synthetic CT in low field MR-guided adaptive radiotherapy for abdominal and pelvic cases. Radiother. Oncol. 2020, 153, 205–212. [Google Scholar] [CrossRef]
  86. Wang, M.; Zhang, Q.; Lam, S.; Cai, J.; Yang, R. A Review on Application of Deep Learning Algorithms in External Beam Radiotherapy Automated Treatment Planning. Front. Oncol. 2020, 10, 580919. [Google Scholar] [CrossRef]
  87. Cilla, S.; Deodato, F.; Romano, C.; Ianiro, A.; Macchia, G.; Re, A.; Buwenge, M.; Boldrini, L.; Indovina, L.; Valentini, V.; et al. Personalized automation of treatment planning in head-neck cancer: A step forward for quality in radiation therapy? Phys. Med. 2021, 82, 7–16. [Google Scholar] [CrossRef] [PubMed]
  88. Hrinivich, W.T.; Lee, J. Artificial intelligence-based radiotherapy machine parameter optimization using reinforcement learning. Med. Phys. 2020, 47, 6140–6150. [Google Scholar] [CrossRef]
  89. Cilla, S.; Romano, C.; Morabito, V.E.; Macchia, G.; Buwenge, M.; Dinapoli, N.; Indovina, L.; Strigari, L.; Morganti, A.G.; Valentini, V.; et al. Personalized Treatment Planning Automation in Prostate Cancer Radiation Oncology: A Comprehensive Dosimetric Study. Front. Oncol. 2021, 11, 636529. [Google Scholar] [CrossRef]
  90. Cilla, S.; Macchia, G.; Romano, C.; Morabito, V.E.; Boccardi, M.; Picardi, V.; Valentini, V.; Morganti, A.G.; Deodato, F. Challenges in lung and heart avoidance for postmastectomy breast cancer radiotherapy: Is automated planning the answer? Med. Dosim. 2021, 46, 295–303. [Google Scholar] [CrossRef]
  91. Kida, S.; Nakamoto, T.; Nakano, M.; Nawa, K.; Haga, A.; Kotoku, J.; Yamashita, H.; Nakagawa, K. Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network. Cureus 2018, 10, e2548. [Google Scholar] [CrossRef] [Green Version]
  92. Kurosawa, T.; Nishio, T.; Moriya, S.; Tsuneda, M.; Karasawa, K. Feasibility of image quality improvement for high-speed CBCT imaging using deep convolutional neural network for image-guided radiotherapy in prostate cancer. Phys. Med. 2020, 80, 84–91. [Google Scholar] [CrossRef]
  93. Rostampour, N.; Jabbari, K.; Esmaeili, M.; Mohammadi, M.; Nabavi, S. Markerless Respiratory Tumor Motion Prediction Using an Adaptive Neuro-fuzzy Approach. J. Med. Signals Sens. 2018, 8, 25–30. [Google Scholar] [PubMed]
  94. Gustafsson, C.J.; Sward, J.; Adalbjornsson, S.I.; Jakobsson, A.; Olsson, L.E. Development and evaluation of a deep learning based artificial intelligence for automatic identification of gold fiducial markers in an MRI-only prostate radiotherapy workflow. Phys. Med. Biol. 2020, 65, 225011. [Google Scholar] [CrossRef] [PubMed]
  95. Takahashi, W.; Oshikawa, S.; Mori, S. Real-time markerless tumour tracking with patient-specific deep learning using a personalised data generation strategy: Proof of concept by phantom study. Br. J. Radiol. 2020, 93, 20190420. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  96. Maspero, M.; Houweling, A.C.; Savenije, M.H.F.; van Heijst, T.C.F.; Verhoeff, J.J.C.; Kotte, A.; van den Berg, C.A.T. A single neural network for cone-beam computed tomography-based radiotherapy of head-and-neck, lung and breast cancer. Phys. Imaging Radiat. Oncol. 2020, 14, 24–31. [Google Scholar] [CrossRef] [PubMed]
  97. Wang, C.; Zhu, X.; Hong, J.C.; Zheng, D. Artificial Intelligence in Radiotherapy Treatment Planning: Present and Future. Technol. Cancer Res. Treat. 2019, 18, 1533033819873922. [Google Scholar] [CrossRef]
  98. Ford, E.; Conroy, L.; Dong, L.; de Los Santos, L.F.; Greener, A.; Gwe-Ya Kim, G.; Johnson, J.; Johnson, P.; Mechalakos, J.G.; Napolitano, B.; et al. Strategies for effective physics plan and chart review in radiation therapy: Report of AAPM Task Group 275. Med. Phys. 2020, 47, e236–e272. [Google Scholar] [CrossRef] [Green Version]
  99. El Naqa, I.; Ruan, D.; Valdes, G.; Dekker, A.; McNutt, T.; Ge, Y.; Wu, Q.J.; Oh, J.H.; Thor, M.; Smith, W.; et al. Machine learning and modeling: Data, validation, communication challenges. Med. Phys. 2018, 45, e834–e840. [Google Scholar] [CrossRef] [Green Version]
  100. Xia, P.; Sintay, B.J.; Colussi, V.C.; Chuang, C.; Lo, Y.-C.; Schofield, D.; Wells, M.; Zhou, S. Medical Physics Practice Guideline (MPPG) 11.a: Plan and chart review in external beam radiotherapy and brachytherapy. J. Appl. Clin. Med. Phys. 2021, 22, 4–19. [Google Scholar] [CrossRef]
  101. Xu, H.; Zhang, B.; Guerrero, M.; Lee, S.W.; Lamichhane, N.; Chen, S.; Yi, B. Toward automation of initial chart check for photon/electron EBRT: The clinical implementation of new AAPM task group reports and automation techniques. J. Appl. Clin. Med. Phys. 2021, 22, 234–245. [Google Scholar] [CrossRef]
  102. Osman, A.F.I.; Maalej, N.M. Applications of machine and deep learning to patient-specific IMRT/VMAT quality assurance. J. Appl. Clin. Med. Phys. 2021, 22, 20–36. [Google Scholar] [CrossRef]
  103. Cho, Y.B.; Farrokhkish, M.; Norrlinger, B.; Heaton, R.; Jaffray, D.; Islam, M. An artificial neural network to model response of a radiotherapy beam monitoring system. Med. Phys. 2020, 47, 1983–1994. [Google Scholar] [CrossRef] [PubMed]
  104. Luk, S.M.H.; Meyer, J.; Young, L.A.; Cao, N.; Ford, E.C.; Phillips, M.H.; Kalet, A.M. Characterization of a Bayesian network-based radiotherapy plan verification model. Med. Phys. 2019, 46, 2006–2014. [Google Scholar] [CrossRef] [PubMed]
  105. Huang, Z.; Hu, C.; Chi, C.; Jiang, Z.; Tong, Y.; Zhao, C. An Artificial Intelligence Model for Predicting 1-Year Survival of Bone Metastases in Non-Small-Cell Lung Cancer Patients Based on XGBoost Algorithm. Biomed. Res. Int. 2020, 2020, 3462363. [Google Scholar] [CrossRef]
  106. Du, R.; Lee, V.H.; Yuan, H.; Lam, K.O.; Pang, H.H.; Chen, Y.; Lam, E.Y.; Khong, P.L.; Lee, A.W.; Kwong, D.L.; et al. Radiomics Model to Predict Early Progression of Nonmetastatic Nasopharyngeal Carcinoma after Intensity Modulation Radiation Therapy: A Multicenter Study. Radiol. Artif. Intell. 2019, 1, e180075. [Google Scholar] [CrossRef] [PubMed]
  107. De Felice, F.; Valentini, V.; De Vincentiis, M.; Di Gioia, C.R.T.; Musio, D.; Tummulo, A.A.; Ricci, L.I.; Converti, V.; Mezi, S.; Messineo, D.; et al. Prediction of Recurrence by Machine Learning in Salivary Gland Cancer Patients After Adjuvant (Chemo)Radiotherapy. In Vivo 2021, 35, 3355–3360. [Google Scholar] [CrossRef]
  108. Lee, S.; Kerns, S.; Ostrer, H.; Rosenstein, B.; Deasy, J.O.; Oh, J.H. Machine Learning on a Genome-wide Association Study to Predict Late Genitourinary Toxicity After Prostate Radiation Therapy. Int. J. Radiat. Oncol. Biol. Phys. 2018, 101, 128–135. [Google Scholar] [CrossRef]
  109. Tian, Z.; Yen, A.; Zhou, Z.; Shen, C.; Albuquerque, K.; Hrycushko, B. A machine-learning-based prediction model of fistula formation after interstitial brachytherapy for locally advanced gynecological malignancies. Brachytherapy 2019, 18, 530–538. [Google Scholar] [CrossRef]
  110. van Velzen, S.G.M.; Gal, R.; Teske, A.J.; van der Leij, F.; van den Bongard, D.; Viergever, M.A.; Verkooijen, H.M.; Išgum, I. AI-Based Radiation Dose Quantification for Estimation of Heart Disease Risk in Breast Cancer Survivors After Radiation Therapy. Int. J. Radiat. Oncol. Biol. Phys. 2021, 112, 621–632. [Google Scholar] [CrossRef]
  111. Tabl, A.A.; Alkhateeb, A.; ElMaraghy, W.; Rueda, L.; Ngom, A. A machine learning approach for identifying gene biomarkers guiding the treatment of breast cancer. Front. Genet. 2019, 10, 256. [Google Scholar] [CrossRef] [Green Version]
  112. Ubaldi, L.; Valenti, V.; Borgese, R.F.; Collura, G.; Fantacci, M.E.; Ferrera, G.; Iacoviello, G.; Abbate, B.F.; Laruina, F.; Tripoli, A.; et al. Strategies to develop radiomics and machine learning models for lung cancer stage and histology prediction using small data samples. Phys. Med. 2021, 90, 13–22. [Google Scholar] [CrossRef]
  113. Kawahara, D.; Murakami, Y.; Tani, S.; Nagata, Y. A prediction model for degree of differentiation for resectable locally advanced esophageal squamous cell carcinoma based on CT images using radiomics and machine-learning. Br. J. Radiol. 2021, 94, 20210525. [Google Scholar] [CrossRef] [PubMed]
  114. Lou, B.; Doken, S.; Zhuang, T.; Wingerter, D.; Gidwani, M.; Mistry, N.; Ladic, L.; Kamen, A.; Abazeed, M.E. An image-based deep learning framework for individualizing radiotherapy dose. Lancet Digit. Health 2019, 1, e136–e147. [Google Scholar] [CrossRef] [Green Version]
  115. Wu, S.; Jiao, Y.; Zhang, Y.; Ren, X.; Li, P.; Yu, Q.; Zhang, Q.; Wang, Q.; Fu, S. Imaging-Based Individualized Response Prediction Of Carbon Ion Radiotherapy For Prostate Cancer Patients. Cancer Manag. Res. 2019, 11, 9121–9131. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  116. Haak, H.E.; Gao, X.; Maas, M.; Waktola, S.; Benson, S.; Beets-Tan, R.G.H.; Beets, G.L.; van Leerdam, M.; Melenhorst, J. The use of deep learning on endoscopic images to assess the response of rectal cancer after chemoradiation. Surg. Endosc. 2021, 1–9. [Google Scholar] [CrossRef] [PubMed]
  117. Osman, S.O.S.; Leijenaar, R.T.H.; Cole, A.J.; Lyons, C.A.; Hounsell, A.R.; Prise, K.M.; O’Sullivan, J.M.; Lambin, P.; McGarry, C.K.; Jain, S. Computed Tomography-based Radiomics for Risk Stratification in Prostate Cancer. Int. J. Radiat. Oncol. Biol. Phys. 2019, 105, 448–456. [Google Scholar] [CrossRef] [PubMed]
  118. Yang, Z.; Olszewski, D.; He, C.; Pintea, G.; Lian, J.; Chou, T.; Chen, R.C.; Shtylla, B. Machine learning and statistical prediction of patient quality-of-life after prostate radiation therapy. Comput. Biol. Med. 2021, 129, 104127. [Google Scholar] [CrossRef]
  119. Jochems, A.; El-Naqa, I.; Kessler, M.; Mayo, C.S.; Jolly, S.; Matuszak, M.; Faivre-Finn, C.; Price, G.; Holloway, L.; Vinod, S.; et al. A prediction model for early death in non-small cell lung cancer patients following curative-intent chemoradiotherapy. Acta Oncol. 2018, 57, 226–230. [Google Scholar] [CrossRef] [Green Version]
  120. Blackledge, M.D.; Winfield, J.M.; Miah, A.; Strauss, D.; Thway, K.; Morgan, V.A.; Collins, D.J.; Koh, D.M.; Leach, M.O.; Messiou, C. Supervised Machine-Learning Enables Segmentation and Evaluation of Heterogeneous Post-treatment Changes in Multi-Parametric MRI of Soft-Tissue Sarcoma. Front. Oncol. 2019, 9, 941. [Google Scholar] [CrossRef] [Green Version]
  121. Stenhouse, K.; Roumeliotis, M.; Ciunkiewicz, P.; Banerjee, R.; Yanushkevich, S.; McGeachy, P. Development of a Machine Learning Model for Optimal Applicator Selection in High-Dose-Rate Cervical Brachytherapy. Front. Oncol. 2021, 11, 611437. [Google Scholar] [CrossRef]
  122. Li, H.; Galperin-Aizenberg, M.; Pryma, D.; Simone, C.B., II; Fan, Y. Unsupervised machine learning of radiomic features for predicting treatment response and overall survival of early stage non-small cell lung cancer patients treated with stereotactic body radiation therapy. Radiother. Oncol. 2018, 129, 218–226. [Google Scholar] [CrossRef]
  123. Sleeman, W.C., IV; Nalluri, J.; Syed, K.; Ghosh, P.; Krawczyk, B.; Hagan, M.; Palta, J.; Kapoor, R. A machine learning method for relabeling arbitrary DICOM structure sets to TG-263 defined labels. J. Biomed. Inform. 2020, 109, 103527. [Google Scholar] [CrossRef] [PubMed]
  124. Syed, K.; Sleeman, W., IV; Ivey, K.; Hagan, M.; Palta, J.; Kapoor, R.; Ghosh, P. Integrated Natural Language Processing and Machine Learning Models for Standardizing Radiotherapy Structure Names. Healthcare 2020, 8, 120. [Google Scholar] [CrossRef]
  125. Haga, A.; Takahashi, W.; Aoki, S.; Nawa, K.; Yamashita, H.; Abe, O.; Nakagawa, K. Standardization of imaging features for radiomics analysis. J. Med. Investig. 2019, 66, 35–37. [Google Scholar] [CrossRef] [PubMed]
  126. Peltola, M.K.; Lehikoinen, J.S.; Sippola, L.T.; Saarilahti, K.; Mäkitie, A.A. A Novel Digital Patient-Reported Outcome Platform for Head and Neck Oncology Patients—A Pilot Study. Clin. Med. Insights Ear Nose Throat 2016, 9, 1–6. [Google Scholar] [CrossRef]
  127. Batumalai, V.; Jameson, M.G.; King, O.; Walker, R.; Slater, C.; Dundas, K.; Dinsdale, G.; Wallis, A.; Ochoa, C.; Gray, R.; et al. Cautiously optimistic: A survey of radiation oncology professionals’ perceptions of automation in radiotherapy planning. Tech. Innov. Patient Support Radiat. Oncol. 2020, 16, 58–64. [Google Scholar] [CrossRef] [PubMed]
  128. Luna, D.; Almerares, A.; Mayan, J.C., III; González Bernaldo de Quirós, F.; Otero, C. Health Informatics in Developing Countries: Going beyond Pilot Practices to Sustainable Implementations: A Review of the Current Challenges. Healthc. Inform. Res. 2014, 20, 3–10. [Google Scholar] [CrossRef]
  129. Abernethy, A.P.; Etheredge, L.M.; Ganz, P.A.; Wallace, P.; German, R.R.; Neti, C.; Bach, P.B.; Murphy, S.B. Rapid-learning system for cancer care. J. Clin. Oncol. 2010, 28, 4268–4274. [Google Scholar] [CrossRef] [Green Version]
  130. Halford, G.S.; Baker, R.; McCredden, J.E.; Bain, J.D. How many variables can humans process? Psychol. Sci. 2005, 16, 70–76. [Google Scholar] [CrossRef]
  131. Vayena, E.; Blasimme, A.; Cohen, I.G. Machine learning in medicine: Addressing ethical challenges. PLoS Med. 2018, 15, e1002689. [Google Scholar] [CrossRef]
  132. Scheetz, J.; Rothschild, P.; McGuinness, M.; Hadoux, X.; Soyer, H.P.; Janda, M.; Condon, J.J.J.; Oakden-Rayner, L.; Palmer, L.J.; Keel, S.; et al. A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology. Sci. Rep. 2021, 11, 5193. [Google Scholar] [CrossRef]
  133. Victor Mugabe, K. Barriers and facilitators to the adoption of artificial intelligence in radiation oncology: A New Zealand study. Tech. Innov. Patient Support Radiat. Oncol. 2021, 18, 16–21. [Google Scholar] [CrossRef] [PubMed]
  134. Zanca, F.; Hernandez-Giron, I.; Avanzo, M.; Guidi, G.; Crijns, W.; Diaz, O.; Kagadis, G.C.; Rampado, O.; Lønne, P.I.; Ken, S.; et al. Expanding the medical physicist curricular and professional programme to include Artificial Intelligence. Phys. Med. 2021, 83, 174–183. [Google Scholar] [CrossRef] [PubMed]
  135. Atwood, T.F.; Brown, D.W.; Murphy, J.D.; Moore, K.L.; Mundt, A.J.; Pawlicki, T. Establishing a New Clinical Role for Medical Physicists: A Prospective Phase II Trial. Int. J. Radiat. Oncol. Biol. Phys. 2018, 102, 635–641. [Google Scholar] [CrossRef] [PubMed]
  136. Netherton, T.J.; Cardenas, C.E.; Rhee, D.J.; Court, L.E.; Beadle, B.M. The Emergence of Artificial Intelligence within Radiation Oncology Treatment Planning. Oncology 2020, 99, 124–134. [Google Scholar] [CrossRef] [PubMed]
  137. Kirienko, M.; Sollini, M.; Ninatti, G.; Loiacono, D.; Giacomello, E.; Gozzi, N.; Amigoni, F.; Mainardi, L.; Lanzi, P.; Chiti, A. Distributed learning: A reliable privacy-preserving strategy to change multicenter collaborations using AI. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 3791–3804. [Google Scholar] [CrossRef]
  138. Korreman, S.; Eriksen, J.G.; Grau, C. The changing role of radiation oncology professionals in a world of AI—Just jobs lost—Or a solution to the under-provision of radiotherapy? Clin. Transl. Radiat. Oncol. 2021, 26, 104–107. [Google Scholar] [CrossRef]
  139. McBee, M.P.; Awan, O.A.; Colucci, A.T.; Ghobadi, C.W.; Kadom, N.; Kansagra, A.P.; Tridandapani, S.; Auffermann, W.F. Deep Learning in Radiology. Acad. Radiol. 2018, 25, 1472–1480. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Steps of the RT patient-based workflow, in which AI-based techniques can be applied.
Figure 2. PRISMA flowchart diagram summarizing the study selection. (§) The list of included papers for quantitative synthesis and their classifications according to the AI subgroups (i.e., type of methods) and main aims are reported in the Supplementary Materials (Table S1). (*) The list of excluded papers from quantitative synthesis and reasons for exclusion are reported in the Supplementary Materials (Table S2).
Figure 3. Papers per year according to the type of AI approach. The blue, cyan, and dark blue bars represent DL, ML, and IO methods, respectively. The grey line represents the total count (i.e., the sum of DL, ML, and IO methods) per year.
Figure 4. The numbers of identified papers according to the AI subgroups (IO, ML, and DL) and RT workflow steps (highlighted in Figure 1).
Figure 5. Numbers of identified papers in image segmentation according to (a) the contoured district using ML or DL methods, and (b) the imaging modality used for segmentation by adopting the convolutional neural network (CNN) approach.
Figure 6. The numbers of identified papers that aimed to generate synthetic images according to the input imaging modality for (a) DL- and (b) ML-based approaches.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.