Review

Technological Advancements in Interventional Oncology

by Alessandro Posa, Pierluigi Barbieri, Giulia Mazza, Alessandro Tanzilli, Luigi Natale, Evis Sala and Roberto Iezzi

1 Department of Diagnostic Imaging, Oncologic Radiotherapy and Hematology—A. Gemelli University Hospital Foundation IRCCS, L.go A. Gemelli 8, 00168 Rome, Italy
2 Istituto di Radiodiagnostica, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(2), 228; https://doi.org/10.3390/diagnostics13020228
Submission received: 5 December 2022 / Revised: 31 December 2022 / Accepted: 2 January 2023 / Published: 7 January 2023
(This article belongs to the Special Issue Imaging-Guided Techniques in Interventional Oncology)

Abstract

Interventional radiology, and particularly interventional oncology, is one of the medical subspecialties in which technological advancement and innovation play a fundamental role. Artificial intelligence, which analyzes big data and extracts features through computational algorithms for disease diagnosis and treatment response evaluation, is playing an increasingly important role across healthcare, and interventional oncology is among the fields that benefit most from it. In addition, digital health, consisting of practical technological applications, can assist healthcare practitioners in their daily activities. This review covers the most useful, established, and promising artificial intelligence and digital health innovations and updates, to help physicians, particularly in interventional oncology, incorporate them into clinical practice.

1. Introduction

Nowadays, physicians face an overwhelming amount of complex data, which may impede effective and prompt clinical practice. Artificial intelligence (AI) promises to turn this apparent burden of information into value. AI is a very wide field of current technology that also encompasses healthcare. Since the 2000s, there has been exponential growth in AI publications from all over the world. Within radiology, the main areas of AI interest are neuroradiology, chest, and abdominal imaging, with recent emerging studies on interventional radiology (IR) and, specifically, interventional oncology (IO) [1]. AI is currently employed in many tasks that might help physicians, and radiologists in particular, in near-future clinical practice: automated detection and interpretation of diagnostic imaging findings, comparison and longitudinal analysis of data, patient stratification and prognosis evaluation, support for clinical decision-making, and post-processing of images (segmentation, registration, and quantification). Interventional radiologists could take advantage of AI to guide their percutaneous locoregional treatments (e.g., thermal ablation or transarterial radioembolization, among others), and even to monitor treatment outcomes (i.e., automatically detecting and segmenting neoplastic liver lesions to analyze treatment response on post-procedural imaging).
It is important to distinguish AI from digital health (DH). AI is based on the construction of complex algorithms driven by a large amount of high-quality, carefully curated data, ideally pooled from several centers. The future of AI in clinical practice will involve multimodal biomarkers (pathology, genomics, radiomics, and clinical data) combined into complex algorithms and software that will provide more efficient and precise applications in everyday clinical practice. Two key branches of AI are machine learning (ML) and deep learning (DL). ML is a subset of AI concerned with the creation of algorithms that can modify themselves, without human intervention, to produce the desired output by learning from structured data. ML therefore requires structured or labeled data to understand the differences between pathologic and/or physiological conditions, to perform pattern recognition (e.g., on images of benign and malignant tumors), to learn the classification, and then to produce the output. ML can provide fast, reproducible, and reliable tissue characterization (i.e., distinguishing viable from necrotic tumor, tumor vasculature, and healthy liver parenchyma) that is resistant to artifacts and biases. ML algorithms in radiology go through two phases: the first, called “training”, consists of iterative learning to find the best model to classify images (e.g., benign or malignant tumors); the second, called “prediction”, consists of applying that model to classify a new image [2]. DL is a specialized subset of ML that relies on a layered structure of algorithms called an “artificial neural network”. Through DL, the machine automatically learns the relevant features or patterns without the need to define them in advance, but this requires a large amount of data (which is currently mostly lacking in IO).
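As an illustration of the two-phase ML workflow just described, the following is a minimal sketch (Python with scikit-learn assumed; the features and labels are synthetic stand-ins for imaging-derived data, not a validated clinical model):

```python
# A minimal, hypothetical sketch of the two-phase ML workflow described above
# ("training" then "prediction"), using synthetic data in place of real
# imaging-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=0)

# Stand-in for labeled, structured data: rows are lesions, columns are features
# (e.g., intensity statistics); labels are 0 = benign, 1 = malignant.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Phase 1 - "training": iterative learning of a classification model.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Phase 2 - "prediction": applying the trained model to unseen cases.
y_pred = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, y_pred):.2f}")
```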
DL is a natural partner for oncology practitioners who want to practice precision medicine, handling large and heterogeneous datasets for cancer diagnosis (digital pathology and diagnostic imaging), tumor burden assessment, prediction of patient outcomes (e.g., survival, life expectancy, treatment outcome, tumor recurrence), and tailored management. DL can also develop “digital biomarkers” that explain, influence, and predict clinical outcomes. Confirming the importance of DL, the number of research articles involving DL in medicine has skyrocketed over the past few years [3]. AI is thus very different from DH, which includes robotics, augmented reality (AR) and virtual reality (VR), telehealth and telemedicine, as well as practical applications that help physicians in their daily medical activities. Nonetheless, AI and DH are deeply intertwined, and both can help improve the quality of care in disease diagnosis and patient treatment.

2. Radiomics

Radiomics is a science that combines radiology, mathematics, and artificial intelligence techniques, using data-characterization algorithms and mathematical analysis to extract a multitude of traits from radiological imaging; it applies a quantitative approach to diagnostic imaging acquisitions, as opposed to the classical qualitative approach performed by physicians [4]. Radiomics evaluates so-called image biomarkers such as digital image texture, consisting of single pixels and their relationship to the other pixels, as well as intensity and the spatial distribution of tissue density; these traits, known as radiomic features, may reveal tumoral patterns and characteristics that are not visible to the human eye. The phases required to obtain radiomic features consist of data collection, target lesion segmentation, image biomarker detection and extraction from image texture, modeling, processing, and validation [5]. The unique imaging characteristics of different disease forms may help predict the prognosis and therapeutic response of different types of cancer, thereby offering important information for individualized treatment. The most cutting-edge uses of radiomics are found in radiology and oncology, and therefore in IO.
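As a toy illustration of the kind of texture features radiomics builds on, the sketch below computes gray-level co-occurrence statistics over a synthetic patch (scikit-image 0.19+ assumed; a real pipeline would operate on a segmented lesion and extract hundreds of such features):

```python
# A minimal sketch of texture-feature extraction of the kind radiomics builds
# on; the "image" here is a synthetic 2D array standing in for a segmented
# lesion region on CT or MRI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(seed=0)
lesion_patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Gray-level co-occurrence matrix: how often pairs of pixel intensities occur
# at a given offset/angle - one way to encode "the single pixels and their
# relationship to the other pixels".
glcm = graycomatrix(lesion_patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

# Scalar radiomic-style features summarizing the texture.
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```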

2.1. Diagnosis

DL systems based on convolutional neural networks (CNNs) have shown the potential to revolutionize the process of radiological diagnosis, increasing sensitivity in classifying neoplastic lesions and giving the radiologist the ability to interpret, check, and validate the results [6,7,8]. AI plays an important role in IO, helping physicians achieve higher accuracy in the diagnosis of lesions and, thus, choose the best treatment approach, personalized for every patient and every neoplastic lesion.
In their study, Hamm et al. developed and validated a DL system based on CNNs that classifies common hepatic lesions on multi-phasic magnetic resonance imaging (MRI), and compared its performance with that of diagnostic radiologists; test-set performance in a single run of random unseen cases showed an average 90% sensitivity and 98% specificity, while the radiologists’ average sensitivity and specificity on the same cases were 82.5% and 96.5%, respectively. Results showed a 90% sensitivity for classifying hepatocellular carcinoma (HCC), compared to 60–70% for radiologists [6]. Moreover, the authors attempted to develop a proof-of-concept “interpretable” DL prototype that justifies its predictions by identifying and scoring radiological features from the pre-trained hepatic lesion classifier. This method enables radiologists to interpret elements of the decision-making behind classification decisions; clinicians can validate these features by using feature maps or similar interpretability techniques and can check whether the system has accurately identified the lesion’s features in the correct locations [7]. In their retrospective clinical study, Yasaka et al. investigated whether different types of liver masses could be differentiated at dynamic contrast-enhanced CT using models based on DL with a CNN. Masses were diagnosed according to five categories (category A, classic HCC; category B, malignant liver tumors other than classic and early HCCs; category C, indeterminate masses or mass-like lesions, including early HCCs and dysplastic nodules, and rare benign liver masses other than hemangiomas and cysts; category D, hemangiomas; and category E, cysts). DL with a CNN showed high diagnostic performance: the median accuracy of differential diagnosis of liver masses on test data was 0.84, and the median area under the receiver operating characteristic (ROC) curve for differentiating categories A–B from C–E was 0.92 [8]. The great promise of AI in interventional oncology is to bring precision medicine to its finest level, that of the individual patient, through a better understanding and definition of the target lesion. Budai et al. retrospectively constructed a radiomics-based model to diagnose renal cell carcinoma histotypes (clear-cell versus other histotypes) by evaluating the CT scans of 209 patients with renal cell carcinoma, obtaining an accuracy of 78%, sensitivity of 80%, and specificity of 74%; these results were compared with those achieved by an expert radiologist (accuracy of 79%, sensitivity of 84%, and specificity of 69%) [9].
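To make the architecture concrete, here is a minimal, hypothetical CNN classifier of the general kind used in these studies (PyTorch assumed; the layer sizes, input shape, and five-category output are illustrative, not the published models):

```python
# A toy CNN lesion classifier - a sketch only, run here on random tensors
# rather than multi-phasic MRI or contrast-enhanced CT.
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self, num_classes: int = 5):  # e.g., five categories, as in Yasaka et al.
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # (N, 32, 16, 16) for 64x64 input
        return self.classifier(x.flatten(1))

model = LesionCNN()
dummy_batch = torch.randn(4, 3, 64, 64)   # 4 images, 3 phases stacked as channels
logits = model(dummy_batch)
print(logits.shape)                        # torch.Size([4, 5])
```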

2.2. Staging and Outcome Prediction

Staging and outcome prediction are essential to select the best treatment for every patient. This also applies to IO, in which locoregional treatments vary greatly based on tumor staging and patient outcome, particularly for HCC patients [10].
Most current staging systems (e.g., the Tumor, Node, Metastasis (TNM) classification) have limitations in terms of patient prognosis. Nomograms, on the other hand, represent a useful tool in the personalized care of oncologic patients and are used by an increasing number of cancer centers to improve clinical practice. Nomograms are increasingly useful in oncological settings, as they can estimate personalized patient risks and prognosis based on disease and subjective characteristics. Although the diffusion of nomograms predates AI, artificial intelligence plays a great role here, integrating prognostic and determinant variables to generate a personalized, individualized probability of a patient’s clinical event. Nomograms can be built for virtually all types of cancer and can predict various outcomes (prognosis at the time of diagnosis, post-treatment recurrence risk, procedure-specific survival outcomes).
AI-based nomograms with easy-to-use digital interfaces allow fast and accurate data computation, yielding clearer and easier-to-understand prognoses than other staging systems and improving the decision-making process. Gupta et al. retrospectively investigated CT image texture to predict grading and survival in 38 patients with suspected gallbladder neoplasm [11]. Multiple authors have used radiomics-based models to predict lymph node metastases in various cancer types (e.g., gastric, breast, bladder, colorectal) [12,13,14,15]. AI-based tools can also be used for staging of the primary tumor, as well as for predicting aggressive disease progression [16,17,18,19]. These studies demonstrate how AI outperforms “classical” staging systems and better determines tumor stage and the presence of metastasis, enabling a more accurate, personalized, and faster treatment choice.
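As a hedged sketch of how an AI-based nomogram turns fitted model coefficients into an individualized risk probability (scikit-learn assumed; the variables, coefficients, and data are illustrative, not taken from the cited studies):

```python
# A minimal sketch of a nomogram-style risk model: a fitted logistic model
# whose coefficients play the role of nomogram axes, producing a personalized
# event probability (e.g., lymph node metastasis) for a new patient.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)

# Hypothetical predictors: tumor size (cm), radiomic score, age (decades).
X = rng.normal(loc=[3.0, 0.0, 6.5], scale=[1.0, 1.0, 1.0], size=(300, 3))
y = (0.8 * X[:, 0] + 1.2 * X[:, 1] - 2.5 + rng.normal(size=300) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The predicted probability is the individualized risk the nomogram displays.
new_patient = np.array([[4.2, 1.1, 7.0]])
print(f"Predicted risk: {model.predict_proba(new_patient)[0, 1]:.1%}")
```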

2.3. Treatment Response Prediction

ML and DL models can be used in IO practice to predict the response to treatment of cancer patients undergoing locoregional (percutaneous or intra-arterial) therapies. Artificial neural networks that are multilayered, or “deep”, are the basis of DL. The various neural layers between input and output give DL its plasticity and its ability to define novel patterns of intelligent classification, simulating the workings of the human brain. Compared to a human reader, who can only detect and use a portion of the total information content of digital images, DL can automatically distinguish the pertinent features from data, allowing it to learn new patterns and determine more complex relationships. This is particularly true in IO, where DL models are capable of integrating multiple patient and tumoral variables, invisible to the human eye, to guide clinical choices and to predict the outcomes of locoregional treatments such as transarterial chemoembolization (TACE), radioembolization (TARE), and percutaneous ablation.
Abajian et al. used supervised ML models (logistic regression and random forest) to predict the treatment response of patients with liver tumors undergoing TACE. The input multiparametric dataset (age, gender, tumor type, comorbidities, liver function, disease stage, baseline enhancement, wash-out) is processed by the ML models to produce an output classification, dividing the patients into treatment responders and non-responders. The authors obtained an overall accuracy greater than 78% in predicting treatment response to TACE using baseline clinical (e.g., cirrhosis) and diagnostic imaging (e.g., baseline enhancing tumor burden) variables [20]. Morshid et al. evaluated the predictive capabilities of an ML algorithm using clinical parameters and pre-treatment computed tomography imaging features in HCC patients undergoing first-line TACE, in terms of time to progression (TTP). The authors classified the patients as TACE-susceptible (high TTP) or TACE-refractory, using diagnostic image features and Barcelona Clinic Liver Cancer (BCLC) stage, obtaining a prediction accuracy of 74.2% (versus 62.9% for BCLC stage alone) and potentially aiding HCC patient selection for TACE [21]. Liu et al. built and validated DL radiomics-based algorithms to predict the response to a first TACE in patients with HCC by evaluating pretreatment contrast-enhanced ultrasound (CEUS) acquisitions, achieving effective and accurate prediction of treatment results while also being time- and labor-saving in clinical practice [22]. Peng et al. successfully trained and validated a DL-based model on CT images to predict the response of patients with intermediate-stage HCC undergoing TACE, with an accuracy of 84.3% [23]. In a retrospective study by Mähringer-Kunz et al., a survival prediction model was developed and validated to decide whether to repeat or avoid TACE in HCC patients. The AI-based prediction model was compared to conventional prediction scores and showed promising performance in predicting 1-year survival (positive predictive value 87.5%, negative predictive value 68%, sensitivity 77.8%, specificity 81%), significantly outperforming the conventional scoring systems (p < 0.001) [24].
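A minimal sketch of this supervised set-up, in the spirit of, but not reproducing, the Abajian et al. models: baseline clinical and imaging variables are mapped to responder labels (scikit-learn and pandas assumed; the data and the "response rule" are synthetic):

```python
# Hypothetical responder-vs-non-responder classification from baseline
# multiparametric variables, comparing the two model families named above.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=2)
n = 120

# Synthetic baseline dataset mirroring the kind of input variables listed above.
df = pd.DataFrame({
    "age": rng.integers(40, 85, n),
    "cirrhosis": rng.integers(0, 2, n),
    "disease_stage": rng.integers(0, 4, n),
    "baseline_enhancing_tumor_volume": rng.gamma(2.0, 20.0, n),
    "albumin": rng.normal(3.8, 0.5, n),
})
responder = (df["baseline_enhancing_tumor_volume"] < 40).astype(int)  # toy rule

for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=0)):
    scores = cross_val_score(clf, df.values, responder, cv=5)
    print(type(clf).__name__, f"CV accuracy: {scores.mean():.2f}")
```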
Kobe et al. successfully created an ML model based on texture analysis of pre-treatment cone-beam computed tomography (CBCT) in patients undergoing TARE for liver metastases, with high accuracy in treatment response prediction [25]. Ince et al. validated ML-based models combining clinical and radiomic features to predict response after TARE. Radiomic features were extracted from pretreatment T1-weighted post-contrast MRI acquisitions obtained within 3 months before the procedure. A total of 1128 features were retrieved from 82 patients (65 responders and 17 non-responders). In total, 12 features (8 radiomic and 4 clinical) were chosen through the selection procedure and used in the study [26].
Sato et al. developed several ML models to predict tumor recurrence after radiofrequency ablation (RFA) treatment. A total of 1778 patients undergoing RFA for HCC lesions for the first time were included. Tumor number, serum albumin level, and des-gamma-carboxyprothrombin level were the most important variables for the prediction of HCC recurrence [27]. Iezzi et al. retrospectively evaluated contrast-enhanced CT scans in 42 HCC patients (56 lesions) treated with percutaneous ablative techniques (RFA and microwave ablation) to construct a radiomics-based model for the early prediction of poor treatment response, obtaining an accuracy of 66%, sensitivity of 85%, specificity of 50%, positive predictive value of 59%, and negative predictive value of 79% [28]. When treating a patient with ablative techniques, it is mandatory to ensure the best possible lesion coverage, including a safety ablation margin. Accurate registration of pre-operative diagnostic images to intra- or post-procedural imaging can help improve needle insertion and treatment results [29]. However, this is not always feasible, particularly due to differences in patient positioning, variable breathing phases and apnea, and differences in image quality and pre-/post-ablation tissue texture. Even though image registration and fusion imaging techniques mostly pertain to digital health, artificial intelligence can benefit this field by overcoming the difficulties related to rigid and non-rigid registration. Wei et al. proposed a DL-based approach to address the registration issues of fusion imaging in US-guided ablations, using a classification network to estimate the US probe angle and then estimating the US plane in the pre-operative CT or MR images through segmentation, with effective results, improving the accuracy of intra-procedural image registration [30].
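Several of the studies above summarize performance as accuracy, sensitivity, specificity, PPV, and NPV; the small helper below shows how these follow from a 2x2 confusion matrix (plain Python; the counts are illustrative, chosen only to roughly match the figures reported by Iezzi et al., not their raw data):

```python
# Diagnostic performance metrics from true/false positive/negative counts.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Illustrative counts only.
print(diagnostic_metrics(tp=17, fp=12, fn=3, tn=12))
```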

3. AI-Assisted Detection and Segmentation

Computer-aided detection (CAD) is an increasingly utilized tool to perform an adjunctive, second read of diagnostic imaging acquisitions (X-ray, CT, MRI), assisting the radiologist in the detection of pathologic lesions and improving accuracy. It is mostly used on chest X-rays and lung CT scans for pulmonary nodules [31,32]. Lee et al. evaluated the efficacy and clinical usefulness of CAD for lung nodules in patients with colorectal cancer oligometastases to the lung, obtaining good sensitivity and specificity values [31]. Li et al. demonstrated how recent technological advancements in CAD recognition algorithms increased the accuracy of lung nodule detection up to 99.56%, with a sensitivity of 99.3%, greatly reducing false negatives and missed detections [32]. Ahn et al. determined the usefulness of evaluating breast MRI with CAD software in the prediction of invasive neoplasm in patients with ductal carcinoma “in situ”, to select patients for sentinel lymph node biopsy [33]. Takamoto et al. validated recently developed AI-assisted software for CT-based virtual hepatectomy in patients affected by liver cancer, with a focus on processing time, obtaining reliable and accurate volumetries with a significantly (p < 0.001) shorter processing time for AI-assisted reconstructions [34]. AI-assisted CNN-based virtual segmentation can also be useful and time-sparing for volumetries prior to transarterial radioembolization procedures, as demonstrated by Chaichana et al. [35]. The authors developed an automated CNN-based method for target lesion and organ segmentation on Single-Photon Emission Computed Tomography (SPECT) images obtained after 99mTc-labeled macroaggregated albumin (MAA) administration, obtaining an accurate and time-sparing (about 1 min per patient) segmentation method. CAD can be an extremely valuable tool for interventional radiologists, as it provides a faster and more efficient way to detect small or barely visible lesions, leading not only to a correct and early diagnosis but also to a shorter time-to-treatment, which can eventually lead to a better prognosis. As previously stated, CAD may also help IO practitioners reduce planning time, as in the case of pre-procedural hepatic volumetric assessment.
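Volumetry itself reduces to counting segmented voxels and scaling by voxel size; the sketch below illustrates the quantity these AI segmentation tools automate (NumPy assumed; the mask and voxel spacing are hypothetical):

```python
# Volume of a segmented structure = number of labeled voxels x voxel volume.
import numpy as np

rng = np.random.default_rng(seed=3)
mask = rng.random((64, 256, 256)) > 0.995   # hypothetical binary lesion mask
voxel_spacing_mm = (2.5, 0.8, 0.8)          # slice thickness, row, column (assumed)

voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0   # mm^3 -> mL
lesion_volume_ml = mask.sum() * voxel_volume_ml
print(f"Segmented volume: {lesion_volume_ml:.1f} mL")
```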

4. Big Data Issues

The need for AI to access large amounts of data raises crucial questions about data handling and privacy. Data collection systems differ between the USA and Europe. While a data economy already exists in the USA, data regulation in European countries is based on the General Data Protection Regulation (GDPR) [36]. In Europe, this data privacy and security law sets standards for all sensitive personal data, including healthcare data. In the USA, the Health Insurance Portability and Accountability Act (HIPAA), created in 1996, deals with protected health information and includes standards that regulate the exchange of protected health information between covered entities (healthcare providers, insurance companies, third-party business associates). US privacy laws treat big health data differently depending on how the data are created and who is responsible for their custody (physicians, healthcare providers, and their associates). However, while on one hand HIPAA strongly protects privacy by forbidding the use of personal health information for research without a review board waiver or patient authorization, on the other hand a patient’s anonymized health data can easily become identifiable again; moreover, as HIPAA bases its regulations on particular entities and not on the data itself, if a patient gives their data to a non-covered entity, HIPAA does not restrict its usage [37]. In Europe, the GDPR defines a broad health data regulation covering every piece of personal data related to a patient’s health, regardless of its format and of how it was created and collected. The circulation of anonymized datasets between centers in Europe is regulated neither by the GDPR nor by any other central law, leading to unclear provisions and impairments in data circulation between centers. These issues need to be addressed, as AI is becoming increasingly involved in daily routines, both in healthcare and in other sectors of life.

5. Digital Health

5.1. Virtual Reality and Augmented Reality

Digital reality, or extended reality, is an umbrella term covering the various technologies that enhance human senses, including platforms that represent cutting-edge technology and that will transform medical training and clinical practice, from its most routine levels to its highest technological point, driving adoption, quality, and confidence in performing new procedures with new devices [38]. Nowadays, various forms of digital reality are available and continuously improved:
  • Augmented Reality (AR) consists of the addition of digital elements to a live view, basically creating a hybrid of our own reality view and computer-generated objects;
  • Virtual Reality (VR) is a completely digital view, where objects and the environment are being replaced by fully digital elements;
  • Mixed Reality combines elements of both AR and VR, bringing a technology in which real-world and digital objects simultaneously interact with each other.
One of the most promising digital reality applications is navigation, which makes possible many tasks, such as layering medical 2D or 3D images, establishing the skin entry point, displaying the target lesion, visualizing the needle path, identifying structures in the needle path that are vulnerable or should be preserved, and tracking the distance and angle to target lesions. A traditional CT setup has a monitor directly in front of the scanner, allowing the physician to analyze images and process data at the scanner table with the assistance of a positioning laser or laser guide and the sporadic use of in-and-out CT imaging acquisitions to determine the instrument’s position. However, this traditional CT setup lacks real-time feedback on images, needle position, and anatomy, which is where AR may be helpful. MRI can be a challenging modality for interventional procedures, since it has longer acquisition times and gantry- as well as magnetic-field-related issues; however, these interventions can be particularly facilitated by AR navigation. A prototypical setup for a navigation system, described in the literature by Fritz et al., uses AR to perform navigation and to identify the anatomic site and the needle path from the outside; it includes an MRI scanner (or a CT or any hybrid scanner), a dedicated workstation, and a navigation unit. The navigation unit can consist of an MRI-compatible monitor and a semi-transparent mirror that reflects the monitor. Imaging data are acquired and an overlay system projects them into the mirror and, from the mirror, into the line of sight of the interventional site; the operator sees both the patient and the MRI (or CT) images, or other superimposed information, through the semi-transparent mirror. Regardless of where the operator stands, the images follow, with a laser projected onto the skin to determine the entry point. The display can be mounted on a frame, attached to the scanner, or freely standing on the other side of the scanner [39].
Another promising application of augmented reality is the teaching of interventional skills: a project has been carried out in which trainees used augmented reality to build up their interventional skills. Immersing the learner in a virtual world leads to a higher level of active participation than textbooks or online learning modules, because of the student’s increased social, environmental, and personal presence within the learning activity [40]. In an animal model investigation by Zhang and colleagues using a projectional AR system, operator time for guiding a needle to the target was significantly reduced, and patient-to-image registration error was low [41]. AR with stereotactic navigation includes systems in which multiple electrodes, needles, or antennas are inserted into the same patient; insertion still depends on a human operator, but the system can plan and track needle paths. One paper, reporting on 301 tumors, shows very precise needle insertion into particularly difficult-to-reach target lesions, with a median lateral and longitudinal needle placement error of 3 mm [42]. Another paper shows the advantages of stereotactic navigation over manual insertion, demonstrating a primary efficacy rate of 84% in the stereotactic-guidance group versus 75% in the manual-guidance group [43]. Both studies involve complex target lesion positions and large tumors.
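As a geometric aside, the “distance and angle to target” that these navigation systems track can be computed from a few points in scanner coordinates; here is a minimal sketch (NumPy assumed; all coordinates are hypothetical):

```python
# Distance to target and angular deviation of a straight needle path,
# given skin entry point, lesion centroid, and current tip position (mm).
import numpy as np

entry = np.array([120.0, 85.0, 40.0])      # hypothetical skin entry point
target = np.array([150.0, 110.0, 95.0])    # hypothetical lesion centroid
needle_tip = np.array([135.0, 97.0, 66.0]) # hypothetical current tip position

planned = target - entry
distance_to_target = np.linalg.norm(target - needle_tip)

# Angle between the planned trajectory and the current tip direction.
current = needle_tip - entry
cos_angle = np.dot(planned, current) / (
    np.linalg.norm(planned) * np.linalg.norm(current)
)
angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(f"Distance to target: {distance_to_target:.1f} mm, "
      f"deviation from plan: {angle_deg:.1f} deg")
```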
A more advanced application of AR involves smart glasses. In this case, images from different modalities are loaded in advance into a visor that is synchronized with electromagnetic sensors applied to the patient’s body. This allows a superimposed 3D visualization of the target lesion, of the trajectory line connecting the interventional device to the target, of the organ’s major blood vessels and structures, and of the interventional device itself (needle, electrode), and may be supplemented by a target-touching phase (i.e., the target changes color when reached by the tip of the needle). In addition, this system has the advantage of allowing a change in guidance modality (e.g., from the CT or MRI room to the ultrasound room) without losing registration data. It must also take moving organs into account, for example by pairing with respiratory motion [44]. In a recent study, Long et al. compared smartphone AR, smart glasses AR, and 3D CBCT-guided fluoroscopy for percutaneous needle insertion in a phantom model. The placement error of the three systems was similar, but placement time was significantly reduced [45].
Some limitations of AR were described in the recent past, but most of them are nowadays partially or completely resolved, such as:
  • The limited field of view (human eyes have a field of view of 200 degrees in the horizontal plane and 135 degrees in the vertical plane, while head-mounted displays (HMDs) have a field of view of less than 90 degrees);
  • Hardware efficacy (nowadays, however, even cheaper smartphones meet the minimum requirements);
  • Registration mismatch between the real target and the visualized target, or between target and interventional device (nowadays less than 5 mm);
  • Cybersickness (nausea, headache, dizziness, and vestibular mismatch that can arise while using HMDs; these symptoms are nowadays dramatically reduced, although subjective to the individual physician);
  • Time-consuming, user-dependent calibration and adjustment with HMDs (no longer needed or greatly reduced nowadays);
  • The weight of HMDs.
The main advantages of guidance with AR consist of a significant reduction in radiation dose for procedures performed in the CT room (thanks to the pre-procedural integration of data and reduced use of radiographic guidance), system usability in both the ultrasound and CT rooms, for every organ and with any device, high speed, and great precision. AR has the potential to change the interaction between image formation and clinical practice.

5.2. Robotics-Assisted Ablation

Robotics in IO can be of great help, as it could make treatments available in countries that lack adequate access to IR specialists. Routine use of robotics in IO can reduce radiation exposure to operators and increase procedure accuracy in the near future. Robotic systems offering off-plane and multiplanar percutaneous intervention planning, targeting, and needle positioning, also using three-dimensional target views, are available for clinical use and can easily support practitioners [46]. Image-guided systems can be useful for the spatial positioning and orientation of one or more ablative needle-probes, assisting in the manual advancement of the probe (electrode, antenna, etc.) and serving as intraoperative guidance and post-treatment verification [47]. Robotic systems with remote micro- and macro-positioning of the needles can be used for interventions with CBCT, fluoroscopy, and CT in which needle placement is operated by the physician, but from a distance [48]. A recent study showed that a table-mounted CT robot succeeded in reducing microwave ablation needle repositioning attempts and increased accuracy for out-of-plane targets (5.9 mm versus 10.1 mm), but at the cost of a longer targeting time compared to freehand targeting (36 min versus 19 min) [49]. CT-guided steerable mini-robots probably represent the most advanced and useful technology in IO nowadays: these robotic systems are capable of intraprocedural correction of trajectory misalignments during percutaneous procedures [50]. However, many unsolved issues are still under investigation, such as cost-effectiveness, standardization, steep learning curves, and impact on workflow.

5.3. Virtual Multidisciplinary Tumor Board

Integrated multidisciplinary assessment of every oncological patient undoubtedly leads to better clinical decisions. Virtual multidisciplinary tumor board (v-MDTB) platforms can offer the power to visualize, support, diagnose, and communicate, integrating data from hospital information systems across different clinical domains (such as radiology, pathology, and genomics). This enables a consistent, comprehensive, and intuitive view of the patient’s relevant information and care path, facilitates cross-disciplinary collaboration and communication, and gives physicians evidence-based decision tools to promote guideline adherence and evidence-driven care in personalized medicine. Interactive conferencing, in which a three-dimensional tool can be created to virtually discuss how to approach certain percutaneous interventions, can take place with all the physicians together in one room or through teleconferencing [51].

6. Conclusions

The aim of every physician should be to use AI to obtain the best results in clinical practice while avoiding redundancies. Given the great help AI can provide to every physician in working better and faster, the implementation of AI techniques and methods in present and future clinical practice should not be feared, as the physician’s role will always be fundamental, particularly where ethical issues arise. In the clinical setting, the use of AI promises to lead to a patient-specific IO, with personalized screening and diagnosis, staging and risk assessment, segmentation, fully automated neoplastic tissue compartmentalization, multi-parametric characterization and classification using ML algorithms, treatment choice, and tailored post-treatment surveillance and follow-up protocols. AI can provide physicians with “virtual” tissue biopsy, digital pathology, and molecular imaging of the tissue microenvironment (e.g., hypoxia, tissue metabolism, and immune response). AI can work with integrated multi-modality imaging (e.g., cone-beam CT, positron emission tomography (PET), and MRI) and fusion imaging, adding powerful and accurate instruments to the interventional radiologist’s toolbox. Treatment prognosis plays an important role in decision-making during routine clinical practice, and AI-based models can greatly help in reaching the best personalized decision for every patient, for experienced and inexperienced physicians alike. Nonetheless, the physician’s decision should always be taken into account, even in case of disagreement with AI suggestions: the physician’s knowledge and supervision of AI tasks remain important to guarantee optimal results. IO can greatly benefit from DH as well as from AI applications, although there is currently a lack of meaningful and sufficient data for training and validation; clinical trials are therefore welcome to generate more data. Governments and hospitals should increasingly facilitate the use of AI in common clinical pathways, and interventional radiology can be the stepping stone for this change in healthcare practice.

Author Contributions

Conceptualization, A.P. and P.B.; methodology, A.P. and P.B.; investigation, G.M. and A.T.; writing—original draft preparation, G.M., P.B. and A.P.; writing—review and editing, A.P. and P.B.; supervision, R.I., L.N. and E.S.; project administration, A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Iezzi, R.; Goldberg, S.N.; Merlino, B.; Posa, A.; Valentini, V.; Manfredi, R. Artificial Intelligence in Interventional Radiology: A Literature Review and Future Perspectives. J. Oncol. 2019, 2019, 6153041.
  2. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine Learning for Medical Imaging. Radiographics 2017, 37, 505–515.
  3. Mazurowski, M.A.; Buda, M.; Saha, A.; Bashir, M.R. Deep learning in radiology: An overview of the concepts and a survey of the state of the art with focus on MRI. J. Magn. Reson. Imaging 2019, 49, 939–954.
  4. van Timmeren, J.E.; Cester, D.; Tanadini-Lang, S.; Alkadhi, H.; Baessler, B. Radiomics in medical imaging-“how-to” guide and critical reflection. Insights Imaging 2020, 11, 91.
  5. Litvin, A.A.; Burkin, D.A.; Kropinov, A.A.; Paramzin, F.N. Radiomics and Digital Image Texture Analysis in Oncology (Review). Sovrem. Tekhnologii Med. 2021, 13, 97–104.
  6. Hamm, C.A.; Wang, C.J.; Savic, L.J.; Ferrante, M.; Schobert, I.; Schlachter, T.; Lin, M.; Duncan, J.S.; Weinreb, J.C.; Chapiro, J.; et al. Deep learning for liver tumor diagnosis part I: Development of a convolutional neural network classifier for multi-phasic MRI. Eur. Radiol. 2019, 29, 3338–3347.
  7. Wang, C.J.; Hamm, C.A.; Savic, L.J.; Ferrante, M.; Schobert, I.; Schlachter, T.; Lin, M.; Weinreb, J.C.; Duncan, J.S.; Chapiro, J.; et al. Deep learning for liver tumor diagnosis part II: Convolutional neural network interpretation using radiologic imaging features. Eur. Radiol. 2019, 29, 3348–3357.
  8. Yasaka, K.; Akai, H.; Abe, O.; Kiryu, S. Deep Learning with Convolutional Neural Network for Differentiation of Liver Masses at Dynamic Contrast-enhanced CT: A Preliminary Study. Radiology 2018, 286, 887–896.
  9. Budai, B.K.; Stollmayer, R.; Rónaszéki, A.D.; Körmendy, B.; Zsombor, Z.; Palotás, L.; Fejér, B.; Szendrõi, A.; Székely, E.; Maurovich-Horvat, P.; et al. Radiomics analysis of contrast-enhanced CT scans can distinguish between clear cell and non-clear cell renal cell carcinoma in different imaging protocols. Front. Med. (Lausanne) 2022, 9, 974485.
  10. Reig, M.; Forner, A.; Rimola, J.; Ferrer-Fàbrega, J.; Burrel, M.; Garcia-Criado, Á.; Kelley, R.K.; Galle, P.R.; Mazzaferro, V.; Salem, R.; et al. BCLC strategy for prognosis prediction and treatment recommendation: The 2022 update. J. Hepatol. 2022, 76, 681–693.
  11. Gupta, P.; Rana, P.; Ganeshan, B.; Kalage, D.; Irrinki, S.; Gupta, V.; Yadav, T.D.; Kumar, R.; Das, C.K.; Gupta, P.; et al. Computed tomography texture-based radiomics analysis in gallbladder cancer: Initial experience. Clin. Exp. Hepatol. 2021, 7, 406–414.
  12. Wang, Y.; Liu, W.; Yu, Y.; Liu, J.J.; Xue, H.D.; Qi, Y.F.; Lei, J.; Yu, J.C.; Jin, Z.Y. CT radiomics nomogram for the preoperative prediction of lymph node metastasis in gastric cancer. Eur. Radiol. 2020, 30, 976–986.
  13. Yu, Y.; Tan, Y.; Xie, C.; Hu, Q.; Ouyang, J.; Chen, Y.; Gu, Y.; Li, A.; Lu, N.; He, Z.; et al. Development and Validation of a Preoperative Magnetic Resonance Imaging Radiomics-Based Signature to Predict Axillary Lymph Node Metastasis and Disease-Free Survival in Patients With Early-Stage Breast Cancer. JAMA Netw. Open 2020, 3, e2028086.
  14. Wu, S.; Zheng, J.; Li, Y.; Yu, H.; Shi, S.; Xie, W.; Liu, H.; Su, Y.; Huang, J.; Lin, T. A Radiomics Nomogram for the Preoperative Prediction of Lymph Node Metastasis in Bladder Cancer. Clin. Cancer Res. 2017, 23, 6904–6911.
  15. Li, M.; Zhang, J.; Dan, Y.; Yao, Y.; Dai, W.; Cai, G.; Yang, G.; Tong, T. A clinical-radiomics nomogram for the preoperative prediction of lymph node metastasis in colorectal cancer. J. Transl. Med. 2020, 18, 46.
  16. Li, Y.; Zhang, Y.; Fang, Q.; Zhang, X.; Hou, P.; Wu, H.; Wang, X. Radiomics analysis of [18F]FDG PET/CT for microvascular invasion and prognosis prediction in very-early- and early-stage hepatocellular carcinoma. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 2599–2614.
  17. Lin, X.; Zhao, S.; Jiang, H.; Jia, F.; Wang, G.; He, B.; Jiang, H.; Ma, X.; Li, J.; Shi, Z. A radiomics-based nomogram for preoperative T staging prediction of rectal cancer. Abdom. Radiol. (NY) 2021, 46, 4525–4535.
  18. Fu, S.; Pan, M.; Zhang, J.; Zhang, H.; Tang, Z.; Li, Y.; Mu, W.; Huang, J.; Dong, D.; Duan, C.; et al. Deep Learning-Based Prediction of Future Extrahepatic Metastasis and Macrovascular Invasion in Hepatocellular Carcinoma. J. Hepatocell. Carcinoma 2021, 8, 1065–1076.
  19. Fu, S.; Lai, H.; Huang, M.; Li, Q.; Liu, Y.; Zhang, J.; Huang, J.; Chen, X.; Duan, C.; Li, X.; et al. Multi-task deep learning network to predict future macrovascular invasion in hepatocellular carcinoma. EClinicalMedicine 2021, 42, 101201.
  20. Abajian, A.; Murali, N.; Savic, L.J.; Laage-Gaupp, F.M.; Nezami, N.; Duncan, J.S.; Schlachter, T.; Lin, M.; Geschwind, J.F.; Chapiro, J. Predicting Treatment Response to Intra-arterial Therapies for Hepatocellular Carcinoma with the Use of Supervised Machine Learning-An Artificial Intelligence Concept. J. Vasc. Interv. Radiol. 2018, 29, 850–857.e1.
  21. Morshid, A.; Elsayes, K.M.; Khalaf, A.M.; Elmohr, M.M.; Yu, J.; Kaseb, A.O.; Hassan, M.; Mahvash, A.; Wang, Z.; Hazle, J.D.; et al. A machine learning model to predict hepatocellular carcinoma response to transcatheter arterial chemoembolization. Radiol. Artif. Intell. 2019, 1, e180021.
  22. Liu, D.; Liu, F.; Xie, X.; Su, L.; Liu, M.; Xie, X.; Kuang, M.; Huang, G.; Wang, Y.; Zhou, H.; et al. Accurate prediction of responses to transarterial chemoembolization for patients with hepatocellular carcinoma by using artificial intelligence in contrast-enhanced ultrasound. Eur. Radiol. 2020, 30, 2365–2376.
  23. Peng, J.; Kang, S.; Ning, Z.; Deng, H.; Shen, J.; Xu, Y.; Zhang, J.; Zhao, W.; Li, X.; Gong, W.; et al. Residual convolutional neural network for predicting response of transarterial chemoembolization in hepatocellular carcinoma from CT imaging. Eur. Radiol. 2020, 30, 413–424.
  24. Mähringer-Kunz, A.; Wagner, F.; Hahn, F.; Weinmann, A.; Brodehl, S.; Schotten, S.; Hinrichs, J.B.; Düber, C.; Galle, P.R.; Pinto Dos Santos, D.; et al. Predicting survival after transarterial chemoembolization for hepatocellular carcinoma using a neural network: A Pilot Study. Liver Int. 2020, 40, 694–703.
  25. Kobe, A.; Zgraggen, J.; Messmer, F.; Puippe, G.; Sartoretti, T.; Alkadhi, H.; Pfammatter, T.; Mannil, M. Prediction of treatment response to transarterial radioembolization of liver metastases: Radiomics analysis of pre-treatment cone-beam CT: A proof of concept study. Eur. J. Radiol. Open 2021, 8, 100375.
  26. İnce, O.; Önder, H.; Gençtürk, M.; Cebeci, H.; Golzarian, J.; Young, S. Prediction of Response of Hepatocellular Carcinoma to Radioembolization: Machine Learning Using Preprocedural Clinical Factors and MR Imaging Radiomics. J. Vasc. Interv. Radiol. 2022, 13, S1051–S0443.
  27. Sato, M.; Tateishi, R.; Moriyama, M.; Fukumoto, T.; Yamada, T.; Nakagomi, R.; Kinoshita, M.N.; Nakatsuka, T.; Minami, T.; Uchino, K.; et al. Machine Learning–Based Personalized Prediction of Hepatocellular Carcinoma Recurrence After Radiofrequency Ablation. Gastro Hep Adv. 2022, 1, 29–37.
  28. Iezzi, R.; Casà, C.; Posa, A.; Cornacchione, P.; Carchesio, F.; Boldrini, L.; Tanzilli, A.; Cerrito, L.; Fionda, B.; Longo, V.; et al. Project for interventional Oncology LArge-database in liveR Hepatocellular carcinoma—Preliminary CT-based radiomic analysis (POLAR Liver 1.1). Eur. Rev. Med. Pharmacol. Sci. 2022, 26, 2891–2899.
  29. Gunay, G.; Luu, M.H.; Moelker, A.; van Walsum, T.; Klein, S. Semiautomated registration of pre- and intraoperative CT for image-guided percutaneous liver tumor ablation interventions. Med. Phys. 2017, 44, 3718–3725.
  30. Wei, W.; Haishan, X.; Alpers, J.; Rak, M.; Hansen, C. A deep learning approach for 2D ultrasound and 3D CT/MR image registration in liver tumor ablation. Comput. Methods Programs Biomed. 2021, 206, 106117.
  31. Lee, J.J.B.; Suh, Y.J.; Oh, C.; Lee, B.M.; Kim, J.S.; Chang, Y.; Jeon, Y.J.; Kim, J.Y.; Park, S.Y.; Chang, J.S. Automated Computer-Aided Detection of Lung Nodules in Metastatic Colorectal Cancer Patients for the Identification of Pulmonary Oligometastatic Disease. Int. J. Radiat. Oncol. Biol. Phys. 2022, 114, 1045–1052.
  32. Li, Y.; Zheng, H.; Huang, X.; Chang, J.; Hou, D.; Lu, H. Research on lung nodule recognition algorithm based on deep feature fusion and MKL-SVM-IPSO. Sci. Rep. 2022, 12, 17403.
  33. Ahn, H.S.; Kim, S.M.; Kim, M.S.; Jang, M.; Yun, B.; Kang, E.; Kim, E.K.; Park, S.Y.; Kim, B. Application of magnetic resonance computer-aided diagnosis for preoperatively determining invasive disease in ultrasonography-guided core needle biopsy-proven ductal carcinoma in situ. Medicine (Baltim.) 2020, 99, e21257.
  34. Takamoto, T.; Ban, D.; Nara, S.; Mizui, T.; Nagashima, D.; Esaki, M.; Shimada, K. Automated Three-Dimensional Liver Reconstruction with Artificial Intelligence for Virtual Hepatectomy. J. Gastrointest. Surg. 2022, 26, 2119–2127.
  35. Chaichana, A.; Frey, E.C.; Teyateeti, A.; Rhoongsittichai, K.; Tocharoenchai, C.; Pusuwan, P.; Jangpatarapongsa, K. Automated segmentation of lung, liver, and liver tumors from Tc-99m MAA SPECT/CT images for Y-90 radioembolization using convolutional neural networks. Med. Phys. 2021, 48, 7877–7890.
  36. General Data Protection Regulation. Available online: https://gdpr-info.eu/ (accessed on 30 October 2022).
  37. Terry, N.P. Regulatory Disruption and Arbitrage in Health-Care Data Protection. Yale J. Health Policy Law Ethics 2017, 17, 143–208.
  38. Solbiati, L.; Gennaro, N.; Muglia, R. Augmented Reality: From Video Games to Medical Clinical Practice. Cardiovasc. Intervent. Radiol. 2020, 43, 1427–1429.
  39. Fritz, J.; U-Thainual, P.; Ungi, T.; Flammang, A.J.; Kathuria, S.; Fichtinger, G.; Iordachita, I.I.; Carrino, J.A. MR-guided vertebroplasty with augmented reality image overlay navigation. Cardiovasc. Intervent. Radiol. 2014, 37, 1589–1596.
  40. U-Thainual, P.; Fritz, J.; Moonjaita, C.; Ungi, T.; Flammang, A.; Carrino, J.A.; Fichtinger, G.; Iordachita, I. MR image overlay guidance: System evaluation for preclinical use. Int. J. Comput. Assist. Radiol. Surg. 2013, 8, 365–378.
  41. Zhang, X.; Chen, G.; Liao, H. High-quality see-through surgical guidance system using enhanced 3-D autostereoscopic augmented reality. IEEE Trans. Biomed. Eng. 2017, 64, 1815–1825.
  42. Lachenmayer, A.; Tinguely, P.; Maurer, M.H.; Frehner, L.; Knöpfli, M.; Peterhans, M.; Weber, S.; Dufour, J.F.; Candinas, D.; Banz, V. Stereotactic image-guided microwave ablation of hepatocellular carcinoma using a computer-assisted navigation system. Liver Int. 2019, 39, 1975–1985.
  43. Schaible, J.; Lürken, L.; Wiggermann, P.; Verloh, N.; Einspieler, I.; Zeman, F.; Schreyer, A.G.; Bale, R.; Stroszczynski, C.; Beyer, L. Primary efficacy of percutaneous microwave ablation of malignant liver tumors: Comparison of stereotactic and conventional manual guidance. Sci. Rep. 2020, 10, 18835.
  44. Solbiati, M.; Ierace, T.; Muglia, R.; Pedicini, V.; Iezzi, R.; Passera, K.M.; Rotilio, A.C.; Goldberg, S.N.; Solbiati, L.A. Thermal Ablation of Liver Tumors Guided by Augmented Reality: An Initial Clinical Experience. Cancers 2022, 14, 1312.
  45. Long, D.J.; Li, M.; De Ruiter, Q.M.B.; Hecht, R.; Li, X.; Varble, N.; Blain, M.; Kassin, M.T.; Sharma, K.V.; Sarin, S.; et al. Comparison of Smartphone Augmented Reality, Smartglasses Augmented Reality, and 3D CBCT-guided Fluoroscopy Navigation for Percutaneous Needle Insertion: A Phantom Study. Cardiovasc. Intervent. Radiol. 2021, 44, 774–781.
  46. Fischer, T.; Lachenmayer, A.; Maurer, M.H. CT-guided navigated microwave ablation (MWA) of an unfavorable located breast cancer metastasis in liver segment I. Radiol. Case Rep. 2018, 14, 146–150.
  47. Fong, A.J.; Stewart, C.L.; Lafaro, K.; LaRocca, C.J.; Fong, Y.; Femino, J.D.; Crawford, B. Robotic assistance for quick and accurate image-guided needle placement. Updates Surg. 2021, 73, 1197–1201.
  48. Interventional Systems: Micromate. Available online: https://www.interventional-systems.com/micromate (accessed on 30 October 2022).
  49. Heerink, W.J.; Ruiter, S.J.S.; Pennings, J.P.; Lansdorp, B.; Vliegenthart, R.; Oudkerk, M.; de Jong, K.P. Robotic versus Freehand Needle Positioning in CT-guided Ablation of Liver Tumors: A Randomized Controlled Trial. Radiology 2019, 290, 826–832.
  50. XACT Robotics. Available online: https://xactrobotics.com (accessed on 30 October 2022).
  51. Uppot, R.N.; Laguna, B.; McCarthy, C.J.; De Novi, G.; Phelps, A.; Siegel, E.; Courtier, J. Implementing Virtual and Augmented Reality Tools for Radiology Education and Training, Communication, and Clinical Care. Radiology 2019, 291, 570–580.