Abstract
This review provides a brief overview of the current status and potential role of artificial intelligence (AI) in interventional radiology (IR). The literature published in recent decades was reviewed, and technical developments in radiomics, virtual reality, robotics, fusion imaging, cone-beam computed tomography (CBCT), and imaging guidance software were analyzed. The evidence shows that AI significantly improves pre-procedural planning, intra-procedural navigation, and post-procedural assessment. Radiomics extracts quantitative features from medical images to support personalized treatment strategies. Virtual reality offers innovative tools, especially for training and procedural simulation. Robotic systems, combined with AI, could enhance the precision and reproducibility of IR procedures while reducing operator exposure to X-rays. Fusion imaging and CBCT, augmented by AI software, improve real-time guidance and procedural outcomes.
1. Introduction
The term “artificial intelligence” (AI) refers to computational algorithms capable of performing tasks that typically require human intelligence, with partial or complete autonomy, to generate useful outputs from specific inputs [,].
AI represents an umbrella concept encompassing machine learning (ML) and deep learning (DL). Machine learning relies on so-called “backward training” methods, in which computer systems iteratively identify and learn specific pathological features from training datasets. A common implementation of ML involves the use of artificial neural networks (ANNs) [,].
A convolutional neural network (CNN) is a specialized type of deep ANN particularly suited for image analysis. CNNs are inspired by the connectivity patterns of neurons in the visual cortex, which process visual information through receptive fields and transmit these data to deeper network layers. Similar to ANNs, CNNs consist of input, output, and multiple hidden layers [].
Despite its inherently technology-driven nature and reliance on image guidance, the application of AI in interventional radiology (IR) remains relatively underdeveloped compared to diagnostic radiology. A major contributing factor is the limited availability and sharing of large-scale datasets. Furthermore, IR procedures typically generate a lower volume of radiological images per patient and per unit time than those in diagnostic radiology. Nevertheless, the imaging data produced in IR are often smaller in size and primarily two-dimensional, which, in principle, may facilitate their processing by current AI algorithms []. Another major limitation arises from the intrinsic heterogeneity of interventional procedures: no two patients or interventions are identical. Variations in anatomy, body habitus, lesion characteristics, and procedural techniques often necessitate real-time adjustments, even when standardized protocols are available. This high degree of procedural individuality complicates the development of reproducible datasets and hinders the standardization required for robust AI model training. A further key challenge in the development of AI algorithms for IR is the widespread use of highly variable image acquisition protocols, operator-dependent device preferences, and the essential need for tactile feedback during procedures [].
The application of AI in IR can be categorized into pre-, peri- and post-procedural uses [,]. Several tools are currently available for the application of AI, including, among others, radiomics, virtual reality (VR) [], and three-dimensional (3D) simulators [,].
In pre-procedural assessment, AI can assist in the prescreening of patient records, enabling selection through ML- and DL-based predictive models that stratify patients into likely responders and non-responders []. Furthermore, the integration of AI algorithms with molecular data holds promise for enhancing diagnostic accuracy and improving prognostic evaluations [,]. Additionally, pre-procedural virtual simulations can provide patients with an immersive preview of their upcoming intervention and enhance visualization of complex anatomical structures []. These technologies also offer novel approaches for education and procedural training [,,]. A summary of these pre-procedural AI applications is provided in Table 1.
Table 1.
Summary of pre-procedural AI applications in interventional radiology.
Intra-procedural applications of AI include the reduction in radiation exposure, enhancement of procedural precision through the use of navigation software, and the integration of imaging fusion techniques [,,,,]. Table 2 outlines the main intra-procedural applications of AI in interventional radiology.
Table 2.
Summary of intra-procedural AI applications in interventional radiology.
In the post-procedural setting, AI algorithms are expected to play a predictive role by quantifying residual disease and providing prognostic information, thereby supporting the development of personalized follow-up strategies [,,]. Furthermore, AI can contribute to the evaluation of technical success and facilitate the early detection of complications [,].
This narrative review aims to provide a comprehensive overview of the current applications of AI in IR, with a particular focus on its role across the pre-, intra-, and post-procedural phases. In addition, it critically examines the existing limitations, technical challenges, and future directions necessary for the safe and effective integration of AI into routine clinical practice.
2. Pre-Procedural Applications of AI in IR
2.1. Radiomics: Principles and Clinical Applications in IR
Radiomics is defined as the process of converting medical images into high-dimensional quantitative data, enabling the extraction of imaging features that are imperceptible to the human eye and reflect the underlying ultrastructural characteristics of tissues. This noninvasive approach allows the derivation of detailed information from imaging data, potentially improving diagnostic accuracy and supporting personalized treatment planning [,,,].
The radiomics workflow begins with the acquisition of medical images using modalities such as CT, MRI, or PET, where the standardization of imaging protocols is essential to ensure reproducibility and comparability across institutions [,,,]. Following acquisition, the region of interest (ROI) is defined, allowing quantitative descriptors to be extracted from the selected area [,]. These descriptors often include texture and intensity features that capture the ultrastructural architecture of tissues []. The extracted features are subsequently represented in high-dimensional datasets that can be correlated with underlying biological processes and clinical outcomes [,,], (Figure 1).
Figure 1.
The radiomics workflow including the image acquisition, the definition of ROI, and feature extraction.
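As an illustration of the feature-extraction step of this workflow, the following minimal Python sketch computes a few first-order features (mean, standard deviation, histogram entropy, skewness) from a toy ROI. The pixel values, bin width, and function name are hypothetical; this is a conceptual sketch, not a validated radiomics pipeline such as would be used clinically.

```python
import math

def first_order_features(image, mask):
    """Compute simple first-order radiomics features (illustrative only).

    image: 2D list of pixel intensities; mask: 2D list of 0/1 ROI labels.
    """
    voxels = [v for row_img, row_mask in zip(image, mask)
              for v, m in zip(row_img, row_mask) if m]
    n = len(voxels)
    mean = sum(voxels) / n
    var = sum((v - mean) ** 2 for v in voxels) / n
    std = math.sqrt(var)
    # Shannon entropy over a coarse intensity histogram (bin width 8, arbitrary)
    bins = {}
    for v in voxels:
        bins[v // 8] = bins.get(v // 8, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in bins.values())
    skew = (sum((v - mean) ** 3 for v in voxels) / n) / (std ** 3) if std else 0.0
    return {"mean": mean, "std": std, "entropy": entropy, "skewness": skew}

# Hypothetical 3x4 image with a 2-column ROI mask
image = [[10, 12, 50, 52],
         [11, 13, 51, 53],
         [ 9, 14, 49, 55]]
mask  = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0]]
feats = first_order_features(image, mask)
```

In practice such descriptors are computed by dedicated libraries over 3D volumes with standardized binning, which is precisely why protocol standardization is emphasized above.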
As imaging biomarkers, these extracted features—including shape, size, and intensity metrics—can assist in diagnosis, prognosis, and treatment planning. Originally developed within the field of oncology, radiomics has since expanded to encompass multiple clinical domains.
Recent evidence indicates that radiomics holds considerable promise in the pre-procedural phase of IR, particularly for patient stratification and treatment planning []. For example, radiomics applied to pre-procedural CT imaging has been employed to predict survival, hepatic encephalopathy, and clinical response following transjugular intrahepatic portosystemic shunt (TIPS) creation []. Similarly, MRI-based radiomics prior to ablation has demonstrated predictive value for pathological response in hepatocellular carcinoma (HCC) patients undergoing transplantation, particularly when combined with clinical variables []. A meta-analysis further reinforced the role of radiomics in forecasting microvascular invasion (MVI) in HCC [].
Different studies have assessed the role of radiomics in predicting the success of locoregional therapies for liver diseases, including both primary and secondary lesions. He et al. [] developed and validated a novel prognostic nomogram to evaluate the survival benefit of HCC patients receiving postoperative adjuvant transarterial chemoembolization (PA-TACE).
Yang et al. [] developed a model based on whole-liver radiomics features of pre-treatment enhanced MRI for predicting the prognosis of HCC patients undergoing continued transarterial chemoembolization (TACE) after TACE resistance.
Bernatz et al. [] aimed to identify HCC patients who will respond to repetitive TACE to improve the treatment algorithm, extracting radiomics features from the 24 h post-embolization CT.
Zhang et al. [] developed an interpretable machine learning model to predict the treatment response to initial conventional TACE (cTACE) in intermediate-stage HCC.
Wang et al. [] established a transcriptomic biomarker for predicting the efficacy of TACE that correlates with radiomics features on pre-treatment imaging, tumor immune microenvironment characteristics, and the efficacy of immunotherapy and targeted therapy in HCC patients.
Recent studies have investigated the role of radiomics in the prediction of postoperative liver metastasis in pancreatic neuroendocrine tumors (panNETs) after R0 resection [].
Collectively, these studies underscore the potential of radiomics to inform pre-procedural decision-making in IR, while preclinical models provide a critical platform for biomarker validation and translational advancement.
Robust statistical or ML models are then developed and require independent validation to confirm their reliability and generalizability [,]. To guarantee methodological rigor, the quality of radiomics research should always be assessed using structured evaluation tools, such as the Radiomics Quality Score (RQS), which incorporates sixteen key components, or the more recent CheckList for EvaluAtion of Radiomics research (CLEAR) [,].
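To make the idea of independent validation concrete, the following toy sketch "trains" a single-feature threshold model on one cohort and then evaluates it on a held-out validation cohort. The feature values, labels, and cutoff rule are purely illustrative assumptions; real radiomics models involve many features, multivariable learners, and external cohorts.

```python
def best_threshold(features, labels):
    """Pick the cutoff on a single feature that maximizes training accuracy."""
    candidates = sorted(set(features))
    best = (0.0, candidates[0])
    for t in candidates:
        acc = sum((f >= t) == bool(y) for f, y in zip(features, labels)) / len(labels)
        if acc > best[0]:
            best = (acc, t)
    return best[1]

def accuracy(features, labels, t):
    """Fraction of cases classified correctly by the threshold rule."""
    return sum((f >= t) == bool(y) for f, y in zip(features, labels)) / len(labels)

# Hypothetical single-feature cohorts: training vs. independent validation
train_x, train_y = [0.2, 0.3, 0.7, 0.9, 0.1, 0.8], [0, 0, 1, 1, 0, 1]
valid_x, valid_y = [0.25, 0.85, 0.15, 0.75], [0, 1, 0, 1]

t = best_threshold(train_x, train_y)      # fitted only on the training cohort
val_acc = accuracy(valid_x, valid_y, t)   # reported only on the held-out cohort
```

The key point, mirrored in tools such as the RQS, is that the cutoff is chosen on the training cohort alone and the reported performance comes exclusively from data the model never saw.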
2.2. Virtual Reality in Learning
Virtual Reality, defined as a computer-generated immersive environment, has the potential to serve as a supplementary educational tool in medical training by enhancing the understanding of both preclinical and clinical concepts, as demonstrated by improvements in academic performance metrics [].
VR offers significant advantages, including improved skill acquisition, cost-effectiveness, and the potential to reduce patient morbidity and mortality. Incorporating VR into medical education is anticipated to enhance theoretical understanding and foster the development of technical competencies essential for interventional radiology practice [].
The spectrum of reality includes VR, mixed reality [], and augmented reality (AR), and is summarized in Table 3.
Table 3.
The table summarizes the characteristics of the spectrum of reality including virtual reality (VR), mixed reality (MR) and augmented reality (AR).
Virtual reality fully immerses users in an entirely virtual environment using headsets that cover the entire field of vision. To interact with the environment, including virtual objects, users can employ headsets, gloves, and earphones. Mixed reality is a hybrid approach that seamlessly blends virtual and real-world elements, allowing interaction between both in real time. MR uses advanced headsets to integrate and anchor digital objects in the real world, allowing dynamic interaction.
Augmented reality overlays digital content onto the real world, enhancing but not replacing the physical environment. AR uses devices like smartphones, tablets, or AR glasses to superimpose digital elements onto the real environment [].
Simulations have already been adopted in orthopedic surgery training, typically using manual image segmentation and patient-specific anatomical models derived from cross-sectional imaging []. Similar VR-based simulation tools are being developed for IR training. Unique to IR is the need to cultivate tactile feedback, spatial reasoning, cognitive awareness, and motor skills essential for safe and effective equipment use []. In interventional radiology, VR can be used for training in angiography, angioplasty, vascular catheterization, catheter placement under fluoroscopic guidance, stent placement, and other procedures [].
The traditional apprenticeship model of “see one, do one, teach one” is increasingly being replaced by a “see many before doing many” approach, which reduces opportunities for direct procedural practice. Given the estimated 10,000 h required to attain expert-level proficiency [,], current training frameworks are often insufficient. VR simulation systems offer a potential solution by providing ample opportunities for deliberate practice within controlled environments.
Furthermore, variability in case mixes across institutions can lead to inconsistent training experiences. Simulation databases can expose trainees to a broader spectrum of cases, enhancing procedural preparedness. When combined with conventional teaching methods, VR systems can shorten procedure times, reduce operator error, and improve overall training outcomes in a safe and efficient manner [,].
Chaer et al. [] conducted a randomized, controlled study with pre- and post-task questionnaires to evaluate the acquisition of catheter skills by surgical residents trained with surgical simulation, compared with specific instruction and didactic lectures alone, without simulation. The study revealed that simulator training improved the performance of residents in the operating room.
Knudsen et al. [] designed a randomized, controlled, prospective study to validate the acquisition of percutaneous renal collecting system access skills using a computer-based hybrid virtual reality surgical (VRS) simulator compared to traditional resident training. The study revealed that participants who received VRS training showed significant improvement in almost 80% of the parameters measured, whereas those in the control arm showed no significant improvement in any of the parameters [].
VR also offers the potential to enhance IR training globally. In regions with limited access to specialized educators or training infrastructure, VR can provide high-quality, standardized education. Datasets used for training AI models could also serve as educational tools, complete with standardized reporting as answer keys. This approach would enrich educational resources and promote exposure to diverse anatomical and pathological cases, reducing the risk of bias in AI systems [].
2.3. Advancing Medical Education Through Three-Dimensional (3D) Modeling
In IR, the incorporation of 3D modeling into AI-based learning workflows has become a game-changing strategy.
Using innovative biological tissue-mimicking resins, patient-specific vascular 3D printing makes it possible to reproduce intricate anatomy with extreme accuracy, producing lifelike physical models for pre-procedural planning and simulation.
By connecting imaging-derived quantitative features with clinical decision-making, these models improve predictive power in treatment response, recurrence, and survival outcomes, especially for liver cancers, when paired with AI-driven radiomics and texture analysis.
The learning process can offer not only pre-procedural planning and device testing but also potential ground-truth datasets for training AI algorithms to recognize vascular variations and predict treatment outcomes. These models can be combined with AI-driven radiomics and texture analysis, which extract quantitative features from imaging to enhance prognostic accuracy [,].
Furthermore, integrating 3D-printed models into advanced VR simulation platforms enables the creation of hybrid training environments, where AI systems adaptively tailor procedural difficulty and provide performance feedback. This synergy has already shown benefits in education, as VR-enhanced endovascular simulators significantly reduced fluoroscopy time and improved procedural proficiency among trainees. Collectively, 3D modeling strengthens the AI learning pipeline in IR by supplying anatomically precise data for algorithm development, validating predictive models, and fostering skill acquisition in a safe, iterative training environment.
Together, these advances highlight how 3D modeling not only strengthens the AI learning process by providing multimodal, data-rich environments but also bridges the gap between theoretical algorithms and practical, patient-centered interventions in IR [].
The studies addressing pre-procedural applications of AI in interventional radiology, as discussed in this section, are summarized in Table 4, which provides an overview of the methodologies, imaging modalities, and key findings.
Table 4.
Pre-procedural applications of AI in IR.
3. Intra-Procedural Applications of AI in IR
3.1. Cone-Beam Computed Tomography (CBCT) and Imaging Guidance Software
Cone-beam computed tomography has profoundly transformed interventional radiology by enabling high-resolution 3D imaging directly in the operating room [,]. When combined with image fusion technologies, CBCT facilitates multimodal visualization, significantly enhancing accuracy in both oncologic and vascular procedures [,].
Among the main advantages of CBCT is its capacity for intraoperative volumetric imaging. By providing real-time 3D reconstructions, CBCT improves target recognition and allows immediate post-procedural evaluation, as demonstrated in hepatic ablations or complex drainage procedures [,]. This capability is critical for verifying treatment completeness, such as tumor ablation or embolization [,,].
In vascular procedures, CBCT can be combined with automated tumor feeder detection (AFD) [] to improve both the safety and precision of embolization therapies. AFD systems achieve real-time 3D visualization of blood vessels by coupling advanced imaging techniques with intelligent algorithms [,]. The workflow consists of three steps: first, the manual identification and segmentation of a ROI; second, the manual identification of the catheter tip; and third, the automated identification of the feeding arteries. The final 3D roadmap, containing the segmented ROI, the feeding arteries, and the paths from the catheter to the vessels, is then overlaid onto the live fluoroscopy images [] (Figure 2).

Figure 2.
Multifocal HCC in a patient with HBV/HDV/alcohol-related cirrhosis (BCLC stage B) treated with DEB-TACE using BioPearl (Tokyo, Japan). (A,B) Pre-treatment CT shows two subcapsular lesions (~3 cm) in segments VI and IV with (A) arterial hyperenhancement and (B) venous washout (red circles), consistent with HCC. (C,D) Emboguide (Version 1.2.1, Philips) software and CBCT mapping of the arterial pathway from the right hepatic artery to the target lesion in segment IV. (E) Angiographic image showing the lesion in segment IV supplied by the left hepatic artery. (F,G) Emboguide (Philips) software and CBCT mapping of the right hepatic artery pathway to the lesion in segment VI before chemoembolization, showing enhancement of the target area. (H,I) Post-treatment CT in (H) arterial and (I) venous phases demonstrates hypodense areas (~3.5 cm) in segments VI and IV, consistent with treated lesions (red arrows).
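The three-step AFD workflow outlined above can be sketched in code. In the following toy example the vessel graph, node names, and the use of a breadth-first search to stand in for automated feeder tracing are all hypothetical; commercial AFD systems rely on far more sophisticated vessel segmentation and path extraction.

```python
from collections import deque

# Toy vessel tree (hypothetical anatomy): node -> downstream branches
vessel_graph = {
    "catheter_tip": ["right_hepatic", "left_hepatic"],
    "right_hepatic": ["segment_VI_feeder", "segment_VII_branch"],
    "left_hepatic": ["segment_IV_feeder"],
    "segment_VI_feeder": [], "segment_VII_branch": [], "segment_IV_feeder": [],
}

def feeder_paths(graph, tip, roi_feeders):
    """Step 3 of the AFD workflow: trace a path from the catheter tip
    (step 2, identified manually) to each vessel entering the segmented
    ROI (step 1), here via breadth-first search over the vessel tree."""
    paths = {}
    queue = deque([[tip]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in roi_feeders:
            paths.setdefault(node, path)  # keep the shortest path found
        for nxt in graph.get(node, []):
            queue.append(path + [nxt])
    return paths

# The resulting roadmap is what would be overlaid onto live fluoroscopy
roadmap = feeder_paths(vessel_graph, "catheter_tip",
                       {"segment_VI_feeder", "segment_IV_feeder"})
```

Each entry of `roadmap` corresponds to one catheter-to-feeder path of the final 3D overlay described above.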
These software tools utilize dynamic contrast-enhanced imaging at peak vessel opacification, which enhances the definition of vascular structures, especially in tumors with an irregular or abnormal blood supply. Real-time integration of these images enables precise catheter navigation, supporting embolization procedures while reducing the need for extensive fluoroscopy exposure [,].
In liver and lung tumor embolization, the AFD software enables accurate delivery of embolic agents to complex and variable tumor-feeding vessels, increasing treatment efficacy while minimizing non-target embolization and complications [,]. In neurovascular interventions, its ability to guide microcatheter navigation through intricate vascular networks supports safer and more effective treatment of aneurysms and arteriovenous malformations.
One of the key benefits of AFD systems is their ability to enhance the detection of small or tortuous feeder vessels, which are often difficult to identify using conventional digital subtraction angiography (DSA). The improved visualization and segmentation of the vascular anatomy—achieved through the use of contrast-enhanced CBCT—lead to more precise navigation and catheterization. This not only reduces the risk of missing target vessels but also limits the potential for non-target embolization [,,].
Another significant advantage lies in the system’s capacity to shorten procedure times. By facilitating rapid and accurate feeder identification, AFD software reduces the need for repeated angiographic runs, minimizing radiation exposure for both the operator and the patient [,]. This improved workflow translates into greater procedural efficiency, particularly in high-volume interventional settings.
Despite these benefits, AFD software has some limitations. The accuracy of vessel segmentation and feeder identification is highly dependent on image quality. Suboptimal CBCT acquisitions—due to respiratory motion, patient obesity, or poor contrast opacification—can impair the system’s performance. Moreover, while the software provides semi-automated vessel mapping, operator input is often required to correct or refine suggested feeder paths, especially in complex anatomies [].
Equally important is its precision in positioning. CBCT enables millimetric control of needle and catheter trajectories, minimizing the risk of technical errors or complications [,,]. The technology also supports treatment verification; for example, contrast-enhanced CBCT after hepatic tumor ablation allows early assessment of the ablation zone [].
AI-driven image guidance systems such as XperGuide (Version 3.5.1, Philips Allura Xper FD20, Philips Healthcare) improve the precision of percutaneous interventions by delivering real-time three-dimensional needle guidance [,] (Figure 3). XperCT software (Version 3.5.1, Philips Allura Xper FD20, Philips Healthcare) can be used to predict ablation volume [].

Figure 3.
A 61-year-old man with rectal adenocarcinoma previously treated with surgery, adjuvant chemotherapy, and radiotherapy. Pre-procedural MRI, including axial (A) and sagittal (B) T2-weighted images, (C) T1-weighted post-contrast image, and (D) diffusion-weighted imaging (DWI) with (E) apparent diffusion coefficient (ADC) map, together with (F) 18F-FDG PET/CT, demonstrate a heterogeneous mass in the presacral region consistent with local disease recurrence (red arrows). (G–I) Intra-procedural cone-beam CT (CBCT) in axial (G), coronal (H), and sagittal (I) planes obtained using XperGuide software (Philips Healthcare) for cryoablation probe placement (purple arrows), and XperCT (Philips Healthcare) for ablation zone prediction (purple and yellow circles).
XperGuide and XperCT are utilized in a wide variety of clinical settings, particularly in liver, lung, and renal tumor interventions. In liver tumors, particularly those located in the subphrenic region or near vital structures, their precise needle placement capabilities reduce the need for intraoperative patient repositioning and significantly lower the risk of complications such as infection and misplacement []. Similarly, in lung interventions, XperGuide enhances the accuracy of needle navigation in complex thoracic anatomies, thereby decreasing the number of passes required, reducing the risk of pneumothorax, and minimizing procedural trauma and subsequent interventions [,]. For renal tumors and spinal metastases, XperCT improves lesion visibility in areas where MRI may be limited, enabling more precise treatment planning and needle placement [,].
Both XperGuide and XperCT use intelligent algorithms for real-time trajectory optimization, motion compensation, and collision detection, but are not fully dependent on deep learning. These features enhance procedural safety by accounting for patient movements, such as breathing, and by avoiding critical structures. Moreover, ongoing research seeks to integrate ML models that predict lesion response based on ablation geometry, suggesting optimal energy settings tailored to specific tissue characteristics. This would further personalize interventions, improving the likelihood of complete tumor ablation while minimizing damage to surrounding healthy tissues [,].
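A minimal sketch of the kind of geometric collision check such trajectory-planning software performs is shown below. The obstacle model (spheres standing in for vessels or organs), the 5 mm safety margin, and the function names are illustrative assumptions, not the vendors' actual algorithms.

```python
import math

def segment_point_distance(a, b, p):
    """Shortest distance from point p to the needle path segment a-b (3D tuples, mm)."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    # Clamp the projection so the closest point stays on the segment
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def trajectory_is_safe(entry, target, obstacles, margin=5.0):
    """Reject a needle path passing within `margin` mm of any obstacle sphere."""
    return all(segment_point_distance(entry, target, center) > radius + margin
               for center, radius in obstacles)

# Hypothetical geometry in mm: one critical vessel modelled as a 3 mm-radius sphere
obstacles = [((20.0, 0.0, 0.0), 3.0)]
safe = trajectory_is_safe((0.0, 0.0, 0.0), (40.0, 30.0, 0.0), obstacles)
```

In a real system the obstacle map would come from segmented CBCT volumes and would be updated continuously for respiratory motion; the sketch only illustrates the underlying geometric test.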
3.2. Intraoperative Applications of VR
The intraoperative applications of VR are numerous and increasingly sophisticated. One of the most promising is real-time navigation and guidance, where VR is integrated with live imaging modalities such as fluoroscopy, ultrasound, or CBCT. This fusion enables dynamic navigation during a procedure, improving spatial orientation and precision, and allowing procedural strategies to be adapted instantly to patient-specific anatomical variations.
Closely related is the use of AR overlays, where hybrid AR–VR solutions project virtual elements directly onto the sterile field. This approach improves ergonomics and safety during critical steps, such as needle placement in percutaneous ablations, while reducing the cognitive load on clinicians by simplifying complex decision-making processes [].
Another frontier is the integration of AI into VR systems. AI algorithms can automatically evaluate anatomical variants, suggest optimal trajectories for vascular access or needle insertion, and provide real-time predictive analytics to anticipate potential complications or deviations from the ideal path [].
VR also facilitates remote collaboration. By creating shared virtual environments, multidisciplinary teams can interact in real time from different locations, supporting complex interventions and enabling expert consultation even in remote or resource-limited settings [].
Some platforms now offer dynamic personalized simulation, where intraoperative images continuously update the virtual model, allowing real-time adjustments to the planned strategy and enabling a more adaptive, patient-specific approach to treatment [,].
Advanced anatomical visualization represents another strength of VR. Through stereoscopic interaction with reconstructed organs and vessels, operators achieve more precise targeting in procedures such as TACE, radioembolization, and tumor ablations. This three-dimensional immersion enhances depth perception and targeting accuracy compared to traditional 2D imaging [].
A particularly innovative intraoperative use is procedural rehearsal, where clinicians can simulate key procedural steps on a patient’s anatomical model shortly before execution. This allows them to anticipate potential challenges and fine-tune their approach in real time, improving overall procedural efficiency [].
Similar to hypnosis, which has already proven its effectiveness in analgesic treatment as a distraction technique, VR has recently emerged as a new therapeutic weapon, also during IR procedures. Grange et al. [] evaluated the tolerance and feasibility of using VR headsets with patients during interventional radiology procedures, demonstrating that it can be beneficial for pain and anxiety management.
The introduction of VR into clinical practice also raises ethical and practical considerations. Patient privacy and data security are paramount, as VR platforms rely on highly sensitive, patient-specific data. Compliance with regulations such as the GDPR in Europe and HIPAA in the United States is mandatory [], and patients must be fully informed about how their data will be used, particularly when VR is employed for training or planning []. Ensuring adequate clinician training is also critical to avoid misuse or over-reliance on the technology, which could lead to errors. While VR holds great promise for improving procedural precision and outcomes, the long-term impact of VR-based training on real-world clinical performance remains to be fully evaluated. There are also potential psychological considerations, such as disorientation or dependency on immersive environments [].
Despite its potential, VR adoption faces technical and practical challenges. Rendering latency, headset ergonomics, compatibility with existing medical infrastructure, and high costs still limit widespread use []. Additionally, specialized staff training is essential, and concerns persist regarding data storage, privacy, and regulatory compliance. Nevertheless, with ongoing technological advances and increasing clinical demand, costs are expected to decrease, facilitating broader clinical integration.
3.3. Robotics
Various robotic devices (table-mounted, floor-mounted, gantry-mounted, and patient-mounted) have been developed to enhance precision and standardization in procedures. These systems can provide real-time visualization and tracking, allowing for better trajectory planning and needle placement; robotic systems have evolved to allow for multiple degrees of freedom, enhancing their flexibility and precision.
- Table-Mounted Systems: These systems manipulate the needle under imaging guidance and have shown high accuracy in clinical settings [];
- Floor-Mounted Systems: These devices can hold and orient needles and have demonstrated improved accuracy in phantom and animal studies [];
- Patient-Mounted Systems: These systems offer ergonomic advantages and have shown promising results in clinical trials [].
Interventional radiology robots can support both percutaneous and endovascular procedures under various imaging modalities, including CT, MRI, US, and fluoroscopy, as well as through fused multimodal image datasets [,,].
3.3.1. Percutaneous Applications
Accurate visualization of the target via CT, CBCT, MRI, fluoroscopy, and US is critical for percutaneous procedures—biopsy, tumor ablation, and infiltration.
CT/CBCT guidance enables precise image-guided interventions, while navigation software and robotic assistance further enhance targeting accuracy, shorten procedure time, and lower radiation exposure. Commercial platforms such as Maxio and iSYS report improved needle-insertion times and sub-2 mm targeting errors; XACT Robotics demonstrated <8.5 min skin-to-target times and robust compensation for respiratory motion []. MRI-compatible robots leverage high soft-tissue contrast and eliminate ionizing radiation, although spatial constraints and material compatibility remain challenges. US-guided systems (e.g., B-Rob I) automate probe stabilization and needle placement to overcome operator dependence [,].
Although less established than CT, needle-based techniques can be integrated with MRI to accurately locate anatomical structures. MRI-guided robotic systems are primarily utilized for biopsies and ablations in prostate, brain, and breast cancer cases. A drawback of this approach is the challenge of positioning and maneuvering the needle within a closed-bore MRI scanner, necessitating that the patient be moved in and out of the scanner for these procedures. Additionally, transrectal MRI-guided prostate biopsies may offer a quicker alternative compared to manually adjusting the needle; robotic-assisted MRI-guided biopsy yields a 100% technical success rate with a short MRI room occupation time [,].
Zheng et al. demonstrated excellent results in assisted percutaneous discectomy with the Mazor X robotic system, which utilizes three-dimensional CT imaging for surgical trajectory design, allowing precise planning of access angles and diameters for the procedure. Under fluoroscopic visualization, the system guides puncture, dilation, and instrument insertion along the defined surgical path [].
3.3.2. Endovascular Applications
Endovascular robotic systems, first introduced in the mid-2000s, were developed to enhance catheter stability, improve procedural precision, and reduce radiation exposure for the clinician []. Early platforms—such as the Sensei X (Hansen Medical), Niobe (Stereotaxis), and Amigo RCS (Catheter Robotics)—demonstrated feasibility for robot-assisted navigation of large-bore catheters in cardiac and aortic procedures. However, these systems were limited by their bulky design, lack of tactile feedback, and prolonged setup times, making them unsuitable for navigating smaller, tortuous vessels or for use in urgent clinical scenarios [,,].
Second-generation systems, such as the Magellan (Hansen) and CorPath 200/GRX (Corindus), have since introduced slimmer, more flexible robotic catheters offering multiple degrees of freedom, thereby expanding the scope of endovascular applications []. These platforms have been successfully applied in procedures such as uterine artery embolization, hepatic chemoembolization, and peripheral revascularization, demonstrating high technical success and significant reductions in radiation exposure for operators [].
Despite these advances, key limitations persist. Most systems still lack true haptic or force feedback, depend on 2D fluoroscopic guidance, and perform suboptimally in thrombosed or highly tortuous vasculature. Current research is therefore shifting toward next-generation technologies, including untethered microrobots and soft-body actuators—self-propelled, wireless devices capable of autonomous navigation—which may enable fully minimally invasive, radiation-free interventions in the future [].
Robotic platforms also show promise as training tools when integrated with surgical simulators, offering realistic procedural practice while minimizing radiation exposure. Additionally, robotic assistance may help reduce inter-operator variability. However, widespread clinical adoption remains hindered by high system costs, limited compatibility with existing tools, workflow disruption in interventional suites, and the continued absence of haptic feedback [].
3.4. Imaging Fusion
Fusion imaging is a technique based on the integration of different imaging modalities, with the aim of combining the strengths of each while minimizing the weaknesses of any individual mode. The process consists of several steps. The first is the importation of data from a previous CT/MR/PET examination, followed by spatial alignment of the imaging datasets; both anatomical landmarks and external markers can be used for this purpose. Image registration can be carried out manually by the operator, automatically based on the matching of common anatomical landmarks, or semi-automatically using a combination of both techniques. Once appropriate alignment is achieved, real-time US and CT/MR/PET images are overlaid on the US monitor, displaying the same plane and moving synchronously [] (Figure 4 and Figure 5).
Figure 4.
A 72-year-old man with HCC treated with MWTA under fusion imaging guidance. (A) Pre-procedural planning using fusion of CT and ultrasound images to identify the target lesion. (B) Intra-procedural monitoring during insertion of the MWTA antenna into the target lesion. (C,D) Prediction of the ablation zone using XperCT software (EPIQ PercuNav, Philips fusion system) (purple circles).

Figure 5.
A 71-year-old man with HCC in the setting of NASH-related liver disease, previously treated with MWTA under fusion imaging guidance. (A,B) Pre-treatment CT images show a 2 cm lesion in segment VII of the liver, demonstrating arterial phase hyperenhancement (red circle) (A) and venous phase washout (red arrow) (B). (C) Fusion imaging (EPIQ PercuNav, Philips) showing selection of the target lesion (blue circles) in segment VII using ultrasound–CT co-registration. (D) Introduction of the MWTA antenna (20 cm length) into the target lesion; ablation performed for 2 min 30 s at 150 W. (E) Intra-procedural monitoring demonstrates the typical “popcorn effect” of the hepatic parenchyma caused by tissue heating and vaporization. (F,G) Post-treatment CT images in the arterial (F) and venous (G) phases demonstrate a 3 cm hypodense area in segment VII, consistent with complete response at 1-month follow-up.
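The alignment step described above can be illustrated with landmark-based rigid registration, of which the Kabsch algorithm is one common formulation: given corresponding anatomical landmarks identified in the CT and US coordinate frames, it computes the least-squares rotation and translation mapping one frame onto the other. The sketch below is illustrative only (synthetic landmarks, simplified to the rigid case); commercial fusion systems use proprietary, often deformable, registration pipelines.

```python
# Illustrative sketch: landmark-based rigid registration via the Kabsch algorithm.
# Given corresponding landmarks in the CT frame and the US frame, compute the
# rotation R and translation t that best align them in a least-squares sense.
import numpy as np

def rigid_register(ct_pts: np.ndarray, us_pts: np.ndarray):
    """Return (R, t) mapping CT landmark coordinates onto US coordinates."""
    ct_c, us_c = ct_pts.mean(axis=0), us_pts.mean(axis=0)
    H = (ct_pts - ct_c).T @ (us_pts - us_c)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = us_c - R @ ct_c
    return R, t

# Synthetic check: four landmarks rotated 90 degrees about z and translated.
ct = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
us = ct @ R_true.T + np.array([5., -2., 1.])
R, t = rigid_register(ct, us)
residual = np.abs(ct @ R.T + t - us).max()             # near machine precision
```

With at least three non-collinear landmarks, the recovered transform reproduces the target coordinates essentially exactly; in practice, registration error from imperfect landmark identification is what limits fusion accuracy.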
A major enhancement to CBCT is its integration with image fusion. The alignment between preoperative CT and intraoperative CBCT improves visualization of hepatic lesions and facilitates accurate hepatic segmentation []. Multimodal fusion-guided navigation merges preoperative images with fluoroscopy or CBCT for advanced 3D orientation, aiding in selective chemoembolization, complex biopsies, and vascular malformation treatments [,,]. By improving targeting, fusion technologies can also reduce procedural time, lowering radiation exposure for both patients and operators [].
Modern systems now offer automatic, dynamic registration, aligning pre- and intraoperative datasets without manual intervention and ensuring smooth transitions between modalities []. The clinical applications of CBCT and image fusion are extensive. In liver oncology, this combination is becoming the emerging standard for TACE and ablation procedures, allowing precise targeting of lesions that are difficult to visualize with a single modality [,,]. In vascular interventions, CBCT with vascular overlay improves embolization planning and stent placement in complex anatomies []. Nonetheless, there are limitations and considerations. CBCT is susceptible to motion artifacts, particularly in non-cooperative patients, and involves higher radiation doses than ultrasound—though lower than conventional CT []. Image fusion accuracy heavily depends on precise registration; even small errors can affect targeting. Finally, proper use of CBCT and fusion systems requires specialized training and significant financial investment, which may be challenging for smaller healthcare facilities.
The main studies exploring intra-procedural applications of AI, including CBCT, VR, robotics, and fusion imaging, are summarized in Table 5, highlighting their methodological approaches and clinical impact.
Table 5.
Intra-procedural applications of AI in IR.
4. Post-Procedural Applications of AI in IR
Artificial intelligence methodologies, encompassing machine learning and deep learning, are increasingly utilized in the analysis of post-procedural imaging to assess treatment efficacy and predict clinical outcomes. These technologies contribute to enhanced workflow efficiency, reduced inter-observer variability in image interpretation, and improved accuracy in post-procedural assessment. Moreover, AI plays a pivotal role in the quantification of treatment response, supports prognostic evaluation, and informs subsequent management strategies []. Abajian et al. demonstrated the use of random forest models incorporating MRI signal intensity, contrast enhancement, and clinical parameters (e.g., cirrhosis) to classify responses following TACE, achieving promising predictive performance [].
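The modelling approach described by Abajian et al. can be sketched as follows. This is a minimal illustration on synthetic data with hypothetical feature definitions (MRI signal intensity, relative enhancement, cirrhosis status), not a reproduction of the published model.

```python
# Hedged sketch: a random forest classifying TACE response from imaging and
# clinical inputs, in the spirit of Abajian et al. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(100, 20, n),   # pre-treatment MRI signal intensity (a.u.)
    rng.uniform(0, 1, n),     # relative contrast enhancement
    rng.integers(0, 2, n),    # cirrhosis present (0/1)
])
# Synthetic label: responders loosely tied to higher enhancement.
y = (X[:, 1] + rng.normal(0, 0.2, n) > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
mean_auc = scores.mean()   # discrimination well above chance on this toy signal
```

Cross-validated AUC, rather than raw accuracy, is the usual headline metric in these studies because response classes are often imbalanced.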
Moon et al. [] evaluated four-dimensional (4D) flow MRI for predicting treatment response after TACE in cirrhotic patients with HCC, finding that the quantitative flow data it provides may be useful for predicting complete response (CR).
Similarly, support vector machine models based on radiomic features from post-EVAR CT angiography effectively identified aggressive type II endoleaks associated with aneurysmal sac expansion, reinforcing the role of AI in personalized surveillance strategies [].
In a pilot study, Daye et al. integrated clinical and radiomic data from pre-treatment CT scans to predict local tumor progression and survival in patients undergoing percutaneous thermal ablation for adrenal metastases, demonstrating high predictive accuracy []. Additional work by Sinha et al. showed the potential of machine learning to forecast procedure-specific outcomes such as pneumothorax after CT-guided transthoracic biopsy, in-hospital mortality post-TIPS, and prolonged hospital stay following uterine artery embolization [].
Machine learning algorithms have also been used to predict long-term complications after inferior vena cava (IVC) filter placement by integrating a wide array of clinical, anatomical, and device-related variables. These models may enhance patient selection, perioperative planning, and post-procedural follow-up, ultimately reducing complication rates [].
Beyond predictive modeling, AI holds potential to streamline post-treatment image analysis and improve inter-observer consistency. The generation of quantitative imaging biomarkers supports tailored treatment strategies and facilitates more nuanced risk stratification. In clinical practice, interventional radiologists frequently engage in multidisciplinary discussions to integrate AI-derived insights with comprehensive patient data, optimizing both immediate therapeutic decisions and long-term follow-up plans [].
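The quantitative imaging biomarkers mentioned above are typically built from radiomic features of a segmented lesion; the simplest are first-order intensity statistics. The sketch below is an assumed, simplified illustration (synthetic ROI, textbook feature definitions), not the pipeline of any cited study.

```python
# Minimal sketch of first-order radiomic feature extraction from a lesion ROI:
# intensity statistics of the kind fed into response-prediction models.
import numpy as np

def first_order_features(roi: np.ndarray) -> dict:
    """Compute simple first-order radiomic features from ROI voxel intensities."""
    v = roi.ravel().astype(float)
    hist, _ = np.histogram(v, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]                                     # drop empty bins for entropy
    return {
        "mean": float(v.mean()),
        "std": float(v.std()),
        "skewness": float(((v - v.mean()) ** 3).mean() / (v.std() ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()),   # histogram entropy (bits)
    }

# Synthetic 16x16x8 ROI mimicking CT attenuation values around 60 HU.
roi = np.random.default_rng(1).normal(60, 10, size=(16, 16, 8))
feats = first_order_features(roi)
```

In real pipelines these first-order features are combined with texture and shape descriptors, harmonized across scanners, and only then passed to the predictive models discussed above.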
For clarity, the studies described in this section are summarized in Table 6, which provides a structured overview of their methodologies, imaging modalities, and principal contributions to AI in IR.
Table 6.
Post-procedural applications of AI in IR.
5. Discussion
The integration of AI into IR is progressing rapidly and has the potential to reshape the field by improving the efficiency, accuracy, and personalization of treatments. Through tools applied in the pre-, intra-, and post-procedural phases, AI can improve decision-making, procedural guidance, and long-term patient management.
Artificial intelligence was first adopted in diagnostic radiology, mainly for the automated detection of findings and features, automated interpretation, and image post-processing tools such as the reduction in image noise and artifacts [,].
Artificial intelligence tools for diagnostic radiology are supported by large, annotated datasets and standardized imaging protocols. Several studies have assessed the role of radiomics in the extraction of features to predict a response to a specific treatment [,,,,,].
In contrast, AI adoption in IR is still in its early stages. The interventional environment is considerably more complex owing to the variability of procedures, techniques, and devices, and the real-time nature of intraoperative imaging. Unlike diagnostic radiology, which is based on static images, IR requires continuous integration of multimodal data: cross-sectional imaging, fluoroscopy, ultrasound, and CBCT. This complexity makes the development of algorithms specific to IR settings significantly more challenging.
Artificial intelligence in IR can build on advances made in diagnostic radiology. Radiomics, already used in diagnostic imaging for tumor characterization and prognosis, can be applied to procedural planning and patient selection for IR procedures in order to personalize therapies [,,]. Workflow optimization algorithms, widely used in diagnostic radiology for planning and reporting, could be tailored to facilitate procedure scheduling, reduce delays, and improve resource allocation in IR [].
The intraoperative environment of IR offers distinct opportunities to validate the usefulness of AI. Image fusion and CBCT navigation systems could reduce procedure times and radiation dose while enhancing procedural efficacy. Robotics, supported by machine learning, can improve the precision and reproducibility of procedures. After the procedure, predictive models can aid in monitoring outcomes, anticipating complications, and guiding follow-up strategies, paralleling the predictive analytics already established in diagnostic radiology [,,].
Virtual reality, combined with 3D modelling, is revolutionizing medical education and simulation-based training, surpassing the limitations of traditional educational models [].
However, several barriers still hinder the full implementation of AI in daily interventional radiology practice. Data heterogeneity remains a major limitation, as inter-institutional differences in imaging protocols, device selection, and procedural techniques make it challenging to develop generalizable algorithms. Moreover, most studies in IR are constrained by small sample sizes and retrospective designs, limiting the robustness and reproducibility of current evidence. AI integration is nevertheless expected to gradually overcome these challenges through multicenter data sharing, federated learning, and harmonization algorithms capable of standardizing imaging acquisition and analysis across platforms. As automation increases and technology matures, implementation costs are anticipated to decline, facilitating broader clinical adoption [,]. In addition, the current level of algorithmic sophistication may not yet allow AI systems to reliably identify the optimal procedural approach, particularly in complex or variable interventional settings. In this regard, it may be more appropriate in the near future to refer to the concept of “hybrid intelligence,” in which AI supports and accelerates specific procedural steps, while human expertise remains essential for final decision-making. Robust multicenter studies and standardized datasets are required to determine the clinical utility and reproducibility of AI software []. Ethical and legal concerns further complicate the implementation of AI in medicine, particularly regarding data privacy, accountability in cases of diagnostic or therapeutic errors, and the need for transparent decision-making processes [,,]. The ongoing debate centers on how AI will transform clinical roles and responsibilities, raising crucial questions of liability when AI-assisted decisions lead to adverse outcomes.
Current legal frameworks hold supervising physicians strictly liable, emphasizing the necessity for clear regulations defining accountability in AI-supported practice. To minimize errors, AI algorithms and their underlying datasets should undergo regular validation and updates, while patients must be adequately informed about the use of AI systems in their care []. Together, these challenges highlight the need for more standardized data collection, rigorous clinical research, and clear regulatory frameworks before AI can be fully integrated into interventional radiology practice.
6. Future Perspectives
The integration of AI into IR offers transformative potential while raising important ethical challenges related to data governance, algorithm and model development, and clinical practice. Establishing standardized research and implementation practices is essential to ensure consistency, transparency, and reliability in AI-driven applications. As virtual healthcare expands, patient–clinician interactions have intensified, increasing professional workload; AI-based assistants could help address this demand by generating structured responses to patient queries and contributing to patient education and management. In parallel, the combination of robotics and AI promises to enhance the precision, efficiency, and clinical outcomes of IR procedures. Furthermore, AI-driven innovations are transforming clinical research by enabling the generation of high-quality synthetic datasets, thereby accelerating trial design and fostering faster translation of novel therapies. Looking ahead, the establishment of international networks and collaborative task forces will be essential in disseminating expertise, supporting institutions in building capabilities, and ensuring that the benefits of AI are equitably shared across the global IR community.
7. Conclusions
Artificial intelligence is rapidly transforming the landscape of IR by enhancing precision, efficiency, and personalization across pre-, intra-, and post-procedural phases. Radiomics enables noninvasive extraction of imaging biomarkers that can inform patient selection and optimize treatment strategies. Virtual and augmented reality, combined with 3D modeling, are reshaping medical education and procedural simulation, providing safe and reproducible training environments. Intra-procedural applications, including CBCT, image fusion, and robotic-assisted systems, demonstrate the potential to improve navigation accuracy, reduce radiation exposure, and increase procedural reproducibility. Post-procedural AI tools support objective treatment assessment, predictive outcome modeling, and personalized follow-up strategies.
Despite this promise, widespread clinical adoption of AI in interventional radiology remains limited by data heterogeneity, small study populations, and ethical and regulatory challenges. Multicenter collaborations, standardized datasets, and rigorous prospective trials will be essential to validate existing applications and ensure reproducibility. Furthermore, clear legal frameworks and transparency in algorithmic decision-making are required to address patient safety and accountability.
Looking ahead, the integration of AI with robotics, extended reality, and advanced imaging modalities has the potential to redefine the practice of interventional radiology. By bridging technical innovation with clinical needs, AI may ultimately contribute to safer procedures, more consistent outcomes, and a higher degree of personalization in patient care.
Author Contributions
Conceptualization, C.L., S.A.A. and G.C.; methodology, C.L., S.A.A. and S.T.; validation, S.C., P.B., P.T., V.A. and A.M.I.; writing—original draft preparation S.T., S.R.M., M.G., A.L. and F.A.; writing—review and editing, C.L., S.A.A., S.C., S.T., V.A., P.T., P.B. and A.M.I.; visualization, C.L. and S.A.A.; supervision, G.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data presented in this study are available on request from the corresponding author due to privacy restrictions.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Iezzi, R.; Goldberg, S.N.; Merlino, B.; Posa, A.; Valentini, V.; Manfredi, R. Artificial Intelligence in Interventional Radiology: A Literature Review and Future Perspectives. J. Oncol. 2019, 2019, 6153041. [Google Scholar] [CrossRef] [PubMed]
- Waller, J.; O’Connor, A.; Rafaat, E.; Amireh, A.; Dempsey, J.; Martin, C.; Umair, M. Applications and challenges of artificial intelligence in diagnostic and interventional radiology. Pol. J. Radiol. 2022, 87, e113–e117. [Google Scholar] [CrossRef]
- Abajian, A.; Murali, N.; Savic, L.J.; Laage-Gaupp, F.M.; Nezami, N.; Duncan, J.S.; Lin, M.; Geschwind, J.F.; Chapiro, J. Predicting Treatment Response to Intra-arterial Therapies for Hepatocellular Carcinoma with the Use of Supervised Machine Learning-An Artificial Intelligence Concept. J. Vasc. Interv. Radiol. 2018, 29, 850–857.e1. [Google Scholar] [CrossRef]
- Letzen, B.; Wang, C.J.; Chapiro, J. The Role of Artificial Intelligence in Interventional Oncology: A Primer. J. Vasc. Interv. Radiol. 2019, 30, 38–41.e1. [Google Scholar] [CrossRef]
- Chartrand, G.; Cheng, P.M.; Vorontsov, E.; Drozdzal, M.; Turcotte, S.; Pal, C.J.; Kadoury, S.; Tang, A. Deep Learning: A Primer for Radiologists. Radiographics 2017, 37, 2113–2131. [Google Scholar] [CrossRef] [PubMed]
- Najafi, A.; Cazzato, R.L.; Meyer, B.C.; Pereira, P.L.; Alberich, A.; Lopez, A.; Ronot, M.; Fritz, J.; Maas, M.; Benson, S.; et al. CIRSE Position Paper on Artificial Intelligence in Interventional Radiology. Cardiovasc. Intervent. Radiol. 2023, 46, 1303–1307. [Google Scholar] [CrossRef]
- Lesaunier, A.; Khlaut, J.; Dancette, C.; Tselikas, L.; Bonnet, B.; Boeken, T. Artificial intelligence in interventional radiology: Current concepts and future trends. Diagn. Interv. Imaging 2025, 106, 5–10. [Google Scholar] [CrossRef]
- von Ende, E.; Ryan, S.; Crain, M.A.; Makary, M.S. Artificial Intelligence, Augmented Reality, and Virtual Reality Advances and Applications in Interventional Radiology. Diagnostics 2023, 13, 892. [Google Scholar] [CrossRef]
- Nardone, V.; Reginelli, A.; Rubini, D.; Gagliardi, F.; Del Tufo, S.; Belfiore, M.P.; Boldrini, L.; Desideri, I.; Cappabianca, S. Delta radiomics: An updated systematic review. Radiol. Med. 2024, 129, 1197–1214. [Google Scholar] [CrossRef]
- Barral, M.; Lefevre, A.; Camparo, P.; Hoogenboom, M.; Pierre, T.; Soyer, P.; Cornud, F. In-Bore Transrectal MRI-Guided Biopsy With Robotic Assistance in the Diagnosis of Prostate Cancer: An Analysis of 57 Patients. AJR Am. J. Roentgenol. 2019, 213, W171–W179. [Google Scholar] [CrossRef] [PubMed]
- Zhou, G.; Liu, W.; Zhang, Y.; Gu, W.; Li, M.; Lu, C.; Zhou, R.; Che, Y.; Lu, H.; Zhu, Y.; et al. Application of three-dimensional printing in interventional medicine. J. Interv. Med. 2020, 3, 1–16. [Google Scholar] [CrossRef]
- Sequeira, C.; Oliveira-Santos, M.; Borges Rosa, J.; Silva Marques, J.; Oliveira Santos, E.; Norte, G.; Gonçalves, L. Three-dimensional simulation for interventional cardiology procedures: Face and content validity. Rev. Port. Cardiol. 2024, 43, 389–396. [Google Scholar] [CrossRef]
- Mosconi, C.; Cucchetti, A.; Bruno, A.; Cappelli, A.; Bargellini, I.; De Benedittis, C.; Lorenzoni, G.; Gramenzi, A.; Tarantino, F.P.; Parini, L.; et al. Radiomics of cholangiocarcinoma on pretreatment CT can identify patients who would best respond to radioembolisation. Eur. Radiol. 2020, 30, 4534–4544. [Google Scholar] [CrossRef] [PubMed]
- Li, H.; Xu, H.; Li, Y.; Li, X. Application of artificial intelligence (AI)-enhanced biochemical sensing in molecular diagnosis and imaging analysis: Advancing and challenges. TrAC Trends Anal. Chem. 2024, 174, 117700. [Google Scholar] [CrossRef]
- Ferrari, R.; Trinci, M.; Casinelli, A.; Treballi, F.; Leone, E.; Caruso, D.; Polici, M.; Faggioni, L.; Neri, E.; Galluzzo, M. Radiomics in radiology: What the radiologist needs to know about technical aspects and clinical impact. Radiol. Med. 2024, 129, 1751–1765. [Google Scholar] [CrossRef] [PubMed]
- Seong, H.; Yun, D.; Yoon, K.S.; Kwak, J.S.; Koh, J.C. Development of pre-procedure virtual simulation for challenging interventional procedures: An experimental study with clinical application. Korean J. Pain 2022, 35, 403–412. [Google Scholar] [CrossRef]
- Gent, D.; Kainth, R. Simulation-based procedure training (SBPT) in rarely performed procedures: A blueprint for theory-informed design considerations. Adv. Simul. 2022, 7, 13. [Google Scholar] [CrossRef]
- Gurgitano, M.; Angileri, S.A.; Roda, G.M.; Liguori, A.; Pandolfi, M.; Ierardi, A.M.; Wood, B.J.; Carrafiello, G. Interventional Radiology ex-machina: Impact of Artificial Intelligence on practice. Radiol. Med. 2021, 126, 998–1006. [Google Scholar] [CrossRef]
- D’Amore, B.; Smolinski-Zhao, S.; Daye, D.; Uppot, R.N. Role of Machine Learning and Artificial Intelligence in Interventional Oncology. Curr. Oncol. Rep. 2021, 23, 70. [Google Scholar] [CrossRef]
- Bang, J.Y.; Hough, M.; Hawes, R.H.; Varadarajulu, S. Use of Artificial Intelligence to Reduce Radiation Exposure at Fluoroscopy-Guided Endoscopic Procedures. Am. J. Gastroenterol. 2020, 115, 555–561. [Google Scholar] [CrossRef]
- Zimmermann, J.M.; Vicentini, L.; Van Story, D.; Pozzoli, A.; Taramasso, M.; Lohmeyer, Q.; Maisano, F.; Meboldt, M. Quantification of Avoidable Radiation Exposure in Interventional Fluoroscopy With Eye Tracking Technology. Investig. Radiol. 2020, 55, 457–462. [Google Scholar] [CrossRef]
- Yin, Y.; de Haas, R.J.; Alves, N.; Pennings, J.P.; Ruiter, S.J.S.; Kwee, T.C.; Yakar, D. Machine learning-based radiomic analysis and growth visualization for ablation site recurrence diagnosis in follow-up CT. Abdom. Radiol. 2024, 49, 1122–1131. [Google Scholar] [CrossRef]
- Lim, S.; Shin, Y.; Lee, Y.H. Arterial enhancing local tumor progression detection on CT images using convolutional neural network after hepatocellular carcinoma ablation: A preliminary study. Sci. Rep. 2022, 12, 1754. [Google Scholar] [CrossRef] [PubMed]
- Moon, C.M.; Lee, Y.Y.; Kim, S.K.; Jeong, Y.Y.; Heo, S.H.; Shin, S.S. Four-dimensional flow MR imaging for evaluating treatment response after transcatheter arterial chemoembolization in cirrhotic patients with hepatocellular carcinoma. Radiol. Med. 2023, 128, 1163–1173. [Google Scholar] [CrossRef] [PubMed]
- Gillies, R.J.; Kinahan, P.E.; Hricak, H. Radiomics: Images Are More than Pictures, They Are Data. Radiology 2016, 278, 563–577. [Google Scholar] [CrossRef]
- Avery, E.; Sanelli, P.C.; Aboian, M.; Payabvash, S. Radiomics: A Primer on Processing Workflow and Analysis. Semin. Ultrasound CT MRI 2022, 43, 142–146. [Google Scholar] [CrossRef] [PubMed]
- Deng, K.; Chen, T.; Leng, Z.; Yang, F.; Lu, T.; Cao, J.; Pan, W.; Zheng, Y. Radiomics as a tool for prognostic prediction in transarterial chemoembolization for hepatocellular carcinoma: A systematic review and meta-analysis. Radiol. Med. 2024, 129, 1099–1117. [Google Scholar] [CrossRef]
- Orlhac, F.; Nioche, C.; Klyuzhin, I.; Rahmim, A.; Buvat, I. Radiomics in PET Imaging: A Practical Guide for Newcomers. PET Clin. 2021, 16, 597–612. [Google Scholar] [CrossRef]
- Tomaszewski, M.R.; Gillies, R.J. The Biological Meaning of Radiomic Features. Radiology 2021, 298, 505–516. [Google Scholar] [CrossRef]
- Varghese, B.A.; Cen, S.Y.; Hwang, D.H.; Duddalwar, V.A. Texture Analysis of Imaging: What Radiologists Need to Know. AJR Am. J. Roentgenol. 2019, 212, 520–528. [Google Scholar] [CrossRef]
- Yamada, A.; Kamagata, K.; Hirata, K.; Ito, R.; Nakaura, T.; Ueda, D.; Fujita, S.; Fushimi, Y.; Fujima, N.; Matsui, Y.; et al. Clinical applications of artificial intelligence in liver imaging. Radiol. Med. 2023, 128, 655–667. [Google Scholar] [CrossRef]
- Triggiani, S.; Contaldo, M.T.; Mastellone, G.; Ce, M.; Ierardi, A.M.; Carrafiello, G.; Cellina, M. The Role of Artificial Intelligence and Texture Analysis in Interventional Radiological Treatments of Liver Masses: A Narrative Review. Crit. Rev. Oncog. 2024, 29, 37–52. [Google Scholar] [CrossRef] [PubMed]
- Mamone, G.; Comelli, A.; Porrello, G.; Milazzo, M.; Di Piazza, A.; Stefano, A.; Benfante, V.; Tuttolomondo, A.; Sparacia, G.; Maruzzelli, L.; et al. Radiomics Analysis of Preprocedural CT Imaging for Outcome Prediction after Transjugular Intrahepatic Portosystemic Shunt Creation. Life 2024, 14, 726. [Google Scholar] [CrossRef]
- Tabari, A.; D’Amore, B.; Cox, M.; Brito, S.; Gee, M.S.; Wehrenberg-Klee, E.; Uppot, R.N.; Daye, D. Machine Learning-Based Radiomic Features on Pre-Ablation MRI as Predictors of Pathologic Response in Patients with Hepatocellular Carcinoma Who Underwent Hepatic Transplant. Cancers 2023, 15, 2058. [Google Scholar] [CrossRef] [PubMed]
- Li, L.; Wu, C.; Huang, Y.; Chen, J.; Ye, D.; Su, Z. Radiomics for the Preoperative Evaluation of Microvascular Invasion in Hepatocellular Carcinoma: A Meta-Analysis. Front. Oncol. 2022, 12, 831996. [Google Scholar] [CrossRef] [PubMed]
- He, Y.; Qian, J.; Zhu, G.; Wu, Z.; Cui, L.; Tu, S.; Luo, L.; Shan, R.; Liu, L.; Shen, W.; et al. Development and validation of nomograms to evaluate the survival outcome of HCC patients undergoing selective postoperative adjuvant TACE. Radiol. Med. 2024, 129, 653–664. [Google Scholar] [CrossRef]
- Yang, C.; Yang, H.C.; Luo, Y.G.; Li, F.T.; Cong, T.H.; Li, Y.J.; Ye, F.; Li, X. Predicting Survival Using Whole-Liver MRI Radiomics in Patients with Hepatocellular Carcinoma After TACE Refractoriness. Cardiovasc. Intervent. Radiol. 2024, 47, 964–977. [Google Scholar] [CrossRef]
- Bernatz, S.; Elenberger, O.; Ackermann, J.; Lenga, L.; Martin, S.S.; Scholtz, J.E.; Koch, V.; Grünewald, L.D.; Herrmann, Y.; Kinzler, M.N.; et al. CT-radiomics and clinical risk scores for response and overall survival prognostication in TACE HCC patients. Sci. Rep. 2023, 13, 533. [Google Scholar] [CrossRef]
- Zhang, L.; Jin, Z.; Li, C.; He, Z.; Zhang, B.; Chen, Q.; You, J.; Ma, X.; Shen, H.; Wang, F.; et al. An interpretable machine learning model based on contrast-enhanced CT parameters for predicting treatment response to conventional transarterial chemoembolization in patients with hepatocellular carcinoma. Radiol. Med. 2024, 129, 353–367. [Google Scholar] [CrossRef]
- Wang, C.; Leng, B.; You, R.; Yu, Z.; Lu, Y.; Diao, L.; Jiang, H.; Cheng, Y.; Yin, G.; Xu, Q. A Transcriptomic Biomarker for Predicting the Response to TACE Correlates with the Tumor Microenvironment and Radiomics Features in Hepatocellular Carcinoma. J. Hepatocell. Carcinoma 2024, 11, 2321–2337. [Google Scholar] [CrossRef]
- Ma, M.; Gu, W.; Liang, Y.; Han, X.; Zhang, M.; Xu, M.; Gao, H.; Tang, W.; Huang, D. A novel model for predicting postoperative liver metastasis in R0 resected pancreatic neuroendocrine tumors: Integrating computational pathology and deep learning-radiomics. J. Transl. Med. 2024, 22, 768. [Google Scholar] [CrossRef]
- Gelmini, A.Y.P.; Duarte, M.L.; de Assis, A.M.; Guimaraes Junior, J.B.; Carnevale, F.C. Virtual reality in interventional radiology education: A systematic review. Radiol. Bras. 2021, 54, 254–260. [Google Scholar] [CrossRef]
- Li, B.; Eisenberg, N.; Beaton, D.; Lee, D.S.; Al-Omran, L.; Wijeysundera, D.N.; Hussain, M.A.; Rotstein, O.D.; de Mestral, C.; Mamdani, M.; et al. Predicting inferior vena cava filter complications using machine learning. J. Vasc. Surg. Venous. Lymphat. Disord. 2024, 12, 101943. [Google Scholar] [CrossRef]
- Tortora, M.; Luppi, A.; Pacchiano, F.; Marisei, M.; Grassi, F.; Werner, H.; Kitamura, F.C.; Tortora, F.; Caranci, F.; Ferraciolli, S.F. Current applications and future perspectives of extended reality in radiology. Radiol. Med. 2025, 130, 905–920. [Google Scholar] [CrossRef] [PubMed]
- Gould, D. Using simulation for interventional radiology training. Br. J. Radiol. 2010, 83, 546–553. [Google Scholar] [CrossRef] [PubMed]
- Chaer, R.A.; Derubertis, B.G.; Lin, S.C.; Bush, H.L.; Karwowski, J.K.; Birk, D.; Morrissey, N.J.; Faries, P.L.; McKinsey, J.F.; Kent, K.C. Simulation improves resident performance in catheter-based intervention: Results of a randomized, controlled study. Ann. Surg. 2006, 244, 343–352. [Google Scholar] [CrossRef]
- Knudsen, B.E.; Matsumoto, E.D.; Chew, B.H.; Johnson, B.; Margulis, V.; Cadeddu, J.A.; Pearle, M.S.; Pautler, S.E.; Denstedt, J.D. A randomized, controlled, prospective study validating the acquisition of percutaneous renal collecting system access skills using a computer based hybrid virtual reality surgical simulator: Phase I. J. Urol. 2006, 176, 2173–2178. [Google Scholar] [CrossRef]
- Kaufmann, R.; Zech, C.J.; Takes, M.; Brantner, P.; Thieringer, F.; Deutschmann, M.; Hergan, K.; Scharinger, B.; Hecht, S.; Rezar, R.; et al. Vascular 3D Printing with a Novel Biological Tissue Mimicking Resin for Patient-Specific Procedure Simulations in Interventional Radiology: A Feasibility Study. J. Digit. Imaging 2022, 35, 9–20. [Google Scholar] [CrossRef] [PubMed]
- Tenewitz, C.; Le, R.T.; Hernandez, M.; Baig, S.; Meyer, T.E. Systematic review of three-dimensional printing for simulation training of interventional radiology trainees. 3D Print Med. 2021, 7, 10. [Google Scholar] [CrossRef]
- Bini, F.; Missori, E.; Pucci, G.; Pasini, G.; Marinozzi, F.; Forte, G.I.; Russo, G.; Stefano, A. Preclinical Implementation of matRadiomics: A Case Study for Early Malformation Prediction in Zebrafish Model. J. Imaging 2024, 10, 290. [Google Scholar] [CrossRef]
- Barral, M.; Chevallier, O.; Cornelis, F.H. Perspectives of Cone-beam Computed Tomography in Interventional Radiology: Techniques for Planning, Guidance, and Monitoring. Tech. Vasc. Interv. Radiol. 2023, 26, 100912. [Google Scholar] [CrossRef]
- Racadio, J.M.; Babic, D.; Homan, R.; Rampton, J.W.; Patel, M.N.; Racadio, J.M.; Johnson, N.D. Live 3D guidance in the interventional radiology suite. AJR Am. J. Roentgenol. 2007, 189, W357–W364. [Google Scholar] [CrossRef]
- Monfardini, L.; Orsi, F.; Caserta, R.; Sallemi, C.; Della Vigna, P.; Bonomo, G.; Varano, G.; Solbiati, L.; Mauri, G. Ultrasound and cone beam CT fusion for liver ablation: Technical note. Int. J. Hyperth. 2018, 35, 500–504. [Google Scholar] [CrossRef]
- Key, B.M.; Tutton, S.M.; Scheidt, M.J. Cone-Beam CT With Enhanced Needle Guidance and Augmented Fluoroscopy Overlay: Applications in Interventional Radiology. AJR Am. J. Roentgenol. 2023, 221, 92–101. [Google Scholar] [CrossRef]
- Morimoto, M.; Numata, K.; Kondo, M.; Nozaki, A.; Hamaguchi, S.; Takebayashi, S.; Tanaka, K. C-arm cone beam CT for hepatic tumor ablation under real-time 3D imaging. AJR Am. J. Roentgenol. 2010, 194, W452–W454. [Google Scholar] [CrossRef] [PubMed]
- Serrano, E.; Valcarcel Jose, J.; Paez-Carpio, A.; Matute-Gonzalez, M.; Werner, M.F.; Lopez-Rueda, A. Cone Beam computed tomography (CBCT) applications in image-guided minimally invasive procedures. Radiologia 2025, 67, 38–53. [Google Scholar] [CrossRef] [PubMed]
- Tacher, V.; Radaelli, A.; Lin, M.; Geschwind, J.F. How I do it: Cone-beam CT during transarterial chemoembolization for liver cancer. Radiology 2015, 274, 320–334. [Google Scholar] [CrossRef] [PubMed]
- Geis, J.R.; Brady, A.P.; Wu, C.C.; Spencer, J.; Ranschaert, E.; Jaremko, J.L.; Langer, S.G.; Borondy Kitts, A.; Birch, J.; Shields, W.F.; et al. Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement. Radiology 2019, 293, 436–440. [Google Scholar] [CrossRef]
- Chiaradia, M.; Izamis, M.L.; Radaelli, A.; Prevoo, W.; Maleux, G.; Schlachter, T.; Mayer, J.; Luciani, A.; Kobeiter, H.; Tacher, V. Sensitivity and Reproducibility of Automated Feeding Artery Detection Software during Transarterial Chemoembolization of Hepatocellular Carcinoma. J. Vasc. Interv. Radiol. 2018, 29, 425–431. [Google Scholar] [CrossRef]
- Abdelsalam, H.; Emara, D.M.; Hassouna, E.M. The efficacy of TACE; how can automated feeder software help? Egypt. J. Radiol. Nucl. Med. 2022, 53, 43. [Google Scholar] [CrossRef]
- Lanza, C.; Carriero, S.; Buijs, E.F.M.; Mortellaro, S.; Pizzi, C.; Sciacqua, L.V.; Biondetti, P.; Angileri, S.A.; Ianniello, A.A.; Ierardi, A.M.; et al. Robotics in Interventional Radiology: Review of Current and Future Applications. Technol. Cancer Res. Treat. 2023, 22, 15330338231152084. [Google Scholar] [CrossRef]
- Kim, D.J.; Chul-Nam, I.; Park, S.E.; Kim, D.R.; Lee, J.S.; Kim, B.S.; Choi, G.M.; Kim, J.; Won, J.H. Added Value of Cone-Beam Computed Tomography for Detecting Hepatocellular Carcinomas and Feeding Arteries during Transcatheter Arterial Chemoembolization Focusing on Radiation Exposure. Medicina 2023, 59, 1121. [Google Scholar] [CrossRef]
- Zeiler, S.R.; Wasserman, B.A. Vessel Wall Imaging: A Powerful Diagnostic Tool but Not a Substitute for Biopsies. AJNR Am. J. Neuroradiol. 2021, 42, E79. [Google Scholar] [CrossRef] [PubMed]
- Braak, S.J.; van Strijen, M.J.; van Leersum, M.; van Es, H.W.; van Heesewijk, J.P. Real-Time 3D fluoroscopy guidance during needle interventions: Technique, accuracy, and feasibility. AJR Am. J. Roentgenol. 2010, 194, W445–W451. [Google Scholar] [CrossRef]
- Shinde, P.; Jadhav, A.; Gupta, K.K.; Dhoble, S. Quantification of 6D Inter-Fraction Tumour Localisation Errors in Tongue and Prostate Cancer Using Daily kV-CBCT for 1000 IMRT and VMAT Treatment Fractions. Radiat. Prot. Dosim. 2022, 198, 1265–1281. [Google Scholar] [CrossRef]
- Schernthaner, R.E.; Duran, R.; Chapiro, J.; Wang, Z.; Geschwind, J.F.; Lin, M. A new angiographic imaging platform reduces radiation exposure for patients with liver cancer treated with transarterial chemoembolization. Eur. Radiol. 2015, 25, 3255–3262. [Google Scholar] [CrossRef]
- Floridi, C.; Radaelli, A.; Abi-Jaoudeh, N.; Grass, M.; Lin, M.; Chiaradia, M.; Giovagnoni, A.; Brunese, L.; Wood, B.; Carrafiello, G.; et al. C-arm cone-beam computed tomography in interventional oncology: Technical aspects and clinical applications. Radiol. Med. 2014, 119, 521–532. [Google Scholar] [CrossRef] [PubMed]
- Abdel-Rehim, M.; Ronot, M.; Sibert, A.; Vilgrain, V. Assessment of liver ablation using cone beam computed tomography. World J. Gastroenterol. 2015, 21, 517–524. [Google Scholar] [CrossRef] [PubMed]
- Busser, W.M.; Braak, S.J.; Futterer, J.J.; van Strijen, M.J.; Hoogeveen, Y.L.; de Lange, F.; Schultze Kool, L.J. Cone beam CT guidance provides superior accuracy for complex needle paths compared with CT guidance. Br. J. Radiol. 2013, 86, 20130310. [Google Scholar] [CrossRef]
- Wallace, M.J.; Kuo, M.D.; Glaiberman, C.; Binkert, C.A.; Orth, R.C.; Soulez, G.; Technology Assessment Committee of the Society of Interventional Radiology. Three-dimensional C-arm cone-beam CT: Applications in the interventional suite. J. Vasc. Interv. Radiol. 2008, 19, 799–813. [Google Scholar] [CrossRef]
- Finos, K.; Datta, S.; Sedrakyan, A.; Milsom, J.W.; Pua, B.B. Mixed reality in interventional radiology: A focus on first clinical use of XR90 augmented reality-based visualization and navigation platform. Expert Rev. Med. Devices 2024, 21, 679–688. [Google Scholar] [CrossRef] [PubMed]
- Lang, M.; Ghandour, S.; Rikard, B.; Balasalle, E.K.; Rouhezamin, M.R.; Zhang, H.; Uppot, R.N. Medical Extended Reality for Radiology Education and Training. J. Am. Coll. Radiol. 2024, 21, 1583–1594. [Google Scholar] [CrossRef]
- Briganti, F.; Tortora, M.; Loiudice, G.; Tarantino, M.; Guida, A.; Buono, G.; Marseglia, M.; Caranci, F.; Tortora, F. Utility of virtual stenting in treatment of cerebral aneurysms by flow diverter devices. Radiol. Med. 2023, 128, 480–491. [Google Scholar] [CrossRef]
- Elsakka, A.; Park, B.J.; Marinelli, B.; Swinburne, N.C.; Schefflein, J. Virtual and Augmented Reality in Interventional Radiology: Current Applications, Challenges, and Future Directions. Tech. Vasc. Interv. Radiol. 2023, 26, 100919. [Google Scholar] [CrossRef]
- Nielsen, C.A.; Lonn, L.; Konge, L.; Taudorf, M. Simulation-Based Virtual-Reality Patient-Specific Rehearsal Prior to Endovascular Procedures: A Systematic Review. Diagnostics 2020, 10, 500. [Google Scholar] [CrossRef] [PubMed]
- Grange, L.; Grange, R.; Bertholon, S.; Morisson, S.; Martin, I.; Boutet, C.; Grange, S. Virtual reality for interventional radiology patients: A preliminary study. Support. Care Cancer 2024, 32, 416. [Google Scholar] [CrossRef]
- Lake, K.; Mc Kittrick, A.; Desselle, M.; Padilha Lanari Bo, A.; Abayasiri, R.A.M.; Fleming, J.; Baghaei, N.; Kim, D.D. Cybersecurity and Privacy Issues in Extended Reality Health Care Applications: Scoping Review. JMIR XR Spat. Comput. 2024, 1, e59409. [Google Scholar] [CrossRef]
- Rudschies, C.; Schneider, I. Ethical, legal, and social implications (ELSI) of virtual agents and virtual reality in healthcare. Soc. Sci. Med. 2024, 340, 116483. [Google Scholar] [CrossRef]
- Zhou, S.; Gromala, D.; Wang, L. Ethical Challenges of Virtual Reality Technology Interventions for the Vulnerabilities of Patients With Chronic Pain: Exploration of Technician Responsibility. J. Med. Internet Res. 2023, 25, e49237. [Google Scholar] [CrossRef]
- Chlorogiannis, D.D.; Charalampopoulos, G.; Bale, R.; Odisio, B.; Wood, B.J.; Filippiadis, D.K. Innovations in Image-Guided Procedures: Unraveling Robot-Assisted Non-Hepatic Percutaneous Ablation. Semin. Intervent. Radiol. 2024, 41, 113–120. [Google Scholar] [CrossRef] [PubMed]
- Beaman, C.B.; Kaneko, N.; Meyers, P.M.; Tateshima, S. A Review of Robotic Interventional Neuroradiology. AJNR Am. J. Neuroradiol. 2021, 42, 808–814. [Google Scholar] [CrossRef] [PubMed]
- Rueda, M.A.; Riga, C.T.; Hamady, M.S. Robotics in Interventional Radiology: Past, Present, and Future. Arab. J. Interv. Radiol. 2021, 2, 56–63. [Google Scholar] [CrossRef]
- Levy, S.; Goldberg, S.N.; Roth, I.; Shochat, M.; Sosna, J.; Leichter, I.; Flacke, S. Clinical evaluation of a robotic system for precise CT-guided percutaneous procedures. Abdom. Radiol. 2021, 46, 5007–5016. [Google Scholar] [CrossRef]
- Kettenbach, J.; Kronreif, G.; Figl, M.; Furst, M.; Birkfellner, W.; Hanel, R.; Bergmann, H. Robot-assisted biopsy using ultrasound guidance: Initial results from in vitro tests. Eur. Radiol. 2005, 15, 765–771. [Google Scholar] [CrossRef]
- Berger, J.; Unger, M.; Landgraf, L.; Bieck, R.; Neumuth, T.; Melzer, A. Assessment of Natural User Interactions for Robot-Assisted Interventions. Curr. Dir. Biomed. Eng. 2018, 4, 165–168. [Google Scholar] [CrossRef]
- Christou, A.S.; Amalou, A.; Lee, H.; Rivera, J.; Li, R.; Kassin, M.T.; Varble, N.; Tsz Ho Tse, Z.; Xu, S.; Wood, B.J. Image-Guided Robotics for Standardized and Automated Biopsy and Ablation. Semin. Intervent. Radiol. 2021, 38, 565–575. [Google Scholar] [CrossRef]
- Zheng, W.; Wu, J.; Xia, W.; Zuo, R.; Chang, X.; Yin, H.; Li, C.; Zhang, C. Whole-Workflow Robotic-Assisted Percutaneous Endoscopic Lumbar Discectomy via a Two-Step Access Method: Technical Report and Preliminary Results. J. Pain Res. 2025, 18, 4361–4371. [Google Scholar] [CrossRef]
- Rafii-Tari, H.; Payne, C.J.; Yang, G.Z. Current and emerging robot-assisted endovascular catheterization technologies: A review. Ann. Biomed. Eng. 2014, 42, 697–715. [Google Scholar] [CrossRef]
- Mendes Pereira, V.; Cancelliere, N.M.; Nicholson, P.; Radovanovic, I.; Drake, K.E.; Sungur, J.M.; Krings, T.; Turk, A. First-in-human, robotic-assisted neuroendovascular intervention. J. Neurointerv. Surg. 2020, 12, 338–340. [Google Scholar] [CrossRef] [PubMed]
- Alderliesten, T.; Konings, M.K.; Niessen, W.J. Modeling friction, intrinsic curvature, and rotation of guide wires for simulation of minimally invasive vascular interventions. IEEE Trans. Biomed. Eng. 2007, 54, 29–38. [Google Scholar] [CrossRef]
- Allaqaband, S.; Solis, J.; Kazemi, S.; Bajwa, T.; American Heart Association; American College of Cardiology. Endovascular treatment of peripheral vascular disease. Curr. Probl. Cardiol. 2006, 31, 711–760. [Google Scholar] [CrossRef] [PubMed]
- Rao, S. Robot-assisted transarterial chemoembolization for hepatocellular carcinoma: Initial evaluation of safety, feasibility, success and outcomes using the Magellan system. J. Vasc. Interv. Radiol. 2015, 26, S12. [Google Scholar] [CrossRef]
- Gunduz, S.; Albadawi, H.; Oklu, R. Robotic Devices for Minimally Invasive Endovascular Interventions: A New Dawn for Interventional Radiology. Adv. Intell. Syst. 2020, 3, 2000181. [Google Scholar] [CrossRef]
- Najafi, G.; Kreiser, K.; Abdelaziz, M.; Hamady, M.S. Current State of Robotics in Interventional Radiology. Cardiovasc. Intervent. Radiol. 2023, 46, 549–561. [Google Scholar] [CrossRef] [PubMed]
- European Society of Radiology. Abdominal applications of ultrasound fusion imaging technique: Liver, kidney, and pancreas. Insights Imaging 2019, 10, 6. [Google Scholar] [CrossRef]
- Biondetti, P.; Ierardi, A.M.; Casiraghi, E.; Caruso, A.; Grillo, P.; Carriero, S.; Lanza, C.; Angileri, S.A.; Sangiovanni, A.; Iavarone, M.; et al. Clinical Impact of a Protocol Involving Cone-Beam CT (CBCT), Fusion Imaging and Ablation Volume Prediction in Percutaneous Image-Guided Microwave Ablation in Patients with Hepatocellular Carcinoma Unsuitable for Standard Ultrasound (US) Guidance. J. Clin. Med. 2023, 12, 7598. [Google Scholar] [CrossRef]
- Abi-Jaoudeh, N.; Kruecker, J.; Kadoury, S.; Kobeiter, H.; Venkatesan, A.M.; Levy, E.; Wood, B.J. Multimodality image fusion-guided procedures: Technique, accuracy, and applications. Cardiovasc. Intervent. Radiol. 2012, 35, 986–998. [Google Scholar] [CrossRef]
- Tacher, V.; Kobeiter, H. State of the Art of Image Guidance in Interventional Radiology. J. Belg. Soc. Radiol. 2018, 102, 7. [Google Scholar] [CrossRef]
- McNally, M.M.; Scali, S.T.; Feezor, R.J.; Neal, D.; Huber, T.S.; Beck, A.W. Three-dimensional fusion computed tomography decreases radiation exposure, procedure time, and contrast use during fenestrated endovascular aortic repair. J. Vasc. Surg. 2015, 61, 309–316. [Google Scholar] [CrossRef]
- Zhong, B.Y.; Jia, Z.Z.; Zhang, W.; Liu, C.; Ying, S.H.; Yan, Z.P.; Ni, C.F. Application of Cone-beam Computed Tomography in Interventional Therapies for Liver Malignancy: A Consensus Statement by the Chinese College of Interventionalists. J. Clin. Transl. Hepatol. 2024, 12, 886–891. [Google Scholar] [CrossRef]
- Angle, J.F. Cone-beam CT: Vascular applications. Tech. Vasc. Interv. Radiol. 2013, 16, 144–149. [Google Scholar] [CrossRef] [PubMed]
- Seah, J.; Boeken, T.; Sapoval, M.; Goh, G.S. Prime Time for Artificial Intelligence in Interventional Radiology. Cardiovasc. Intervent. Radiol. 2022, 45, 283–289. [Google Scholar] [CrossRef]
- Charalambous, S.; Klontzas, M.E.; Kontopodis, N.; Ioannou, C.V.; Perisinakis, K.; Maris, T.G.; Damilakis, J.; Karantanas, A.; Tsetis, D. Radiomics and machine learning to predict aggressive type 2 endoleaks after endovascular aneurysm repair: A proof of concept. Acta. Radiol. 2022, 63, 1293–1299. [Google Scholar] [CrossRef]
- Daye, D.; Staziaki, P.V.; Furtado, V.F.; Tabari, A.; Fintelmann, F.J.; Frenk, N.E.; Shyn, P.; Tuncali, K.; Silverman, S.; Arellano, R.; et al. CT Texture Analysis and Machine Learning Improve Post-ablation Prognostication in Patients with Adrenal Metastases: A Proof of Concept. Cardiovasc. Intervent. Radiol. 2019, 42, 1771–1776. [Google Scholar] [CrossRef]
- Sinha, I.; Aluthge, D.P.; Chen, E.S.; Sarkar, I.N.; Ahn, S.H. Machine Learning Offers Exciting Potential for Predicting Postprocedural Outcomes: A Framework for Developing Random Forest Models in IR. J. Vasc. Interv. Radiol. 2020, 31, 1018–1024.e4. [Google Scholar] [CrossRef]
- Neri, E.; Aghakhanyan, G.; Zerunian, M.; Gandolfo, N.; Grassi, R.; Miele, V.; Giovagnoni, A.; Laghi, A.; SIRM Expert Group on Artificial Intelligence. Explainable AI in radiology: A white paper of the Italian Society of Medical and Interventional Radiology. Radiol. Med. 2023, 128, 755–764. [Google Scholar] [CrossRef] [PubMed]
- van Timmeren, J.E.; Cester, D.; Tanadini-Lang, S.; Alkadhi, H.; Baessler, B. Radiomics in medical imaging-how-to guide and critical reflection. Insights Imaging 2020, 11, 91. [Google Scholar] [CrossRef]
- Sheng, R.; Zheng, B.; Zhang, Y.; Sun, W.; Yang, C.; Zeng, M. A preliminary study of developing an MRI-based model for postoperative recurrence prediction and treatment direction of intrahepatic cholangiocarcinoma. Radiol. Med. 2024, 129, 1766–1777. [Google Scholar] [CrossRef]
- Jiang, Y.; Zhou, K.; Sun, Z.; Wang, H.; Xie, J.; Zhang, T.; Sang, S.; Islam, M.T.; Wang, J.Y.; Chen, C.; et al. Non-invasive tumor microenvironment evaluation and treatment response prediction in gastric cancer using deep learning radiomics. Cell. Rep. Med. 2023, 4, 101146. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Z.; Luo, T.; Yan, M.; Shen, H.; Tao, K.; Zeng, J.; Yuan, J.; Fang, M.; Zheng, J.; Bermejo, I.; et al. Voxel-level radiomics and deep learning for predicting pathologic complete response in esophageal squamous cell carcinoma after neoadjuvant immunotherapy and chemotherapy. J. Immunother. Cancer 2025, 13, e011149. [Google Scholar] [CrossRef]
- Granata, V.; Fusco, R.; De Muzio, F.; Brunese, M.C.; Setola, S.V.; Ottaiano, A.; Cardone, C.; Avallone, A.; Patrone, R.; Pradella, S.; et al. Radiomics and machine learning analysis by computed tomography and magnetic resonance imaging in colorectal liver metastases prognostic assessment. Radiol. Med. 2023, 128, 1310–1332. [Google Scholar] [CrossRef] [PubMed]
- Wang, W.; Peng, Y.; Feng, X.; Zhao, Y.; Seeruttun, S.R.; Zhang, J.; Cheng, Z.; Li, Y.; Liu, Z.; Zhou, Z. Development and Validation of a Computed Tomography-Based Radiomics Signature to Predict Response to Neoadjuvant Chemotherapy for Locally Advanced Gastric Cancer. JAMA Netw. Open. 2021, 4, e2121143. [Google Scholar] [CrossRef]
- Yang, M.; Liu, H.; Dai, Q.; Yao, L.; Zhang, S.; Wang, Z.; Li, J.; Duan, Q. Treatment Response Prediction Using Ultrasound-Based Pre-, Post-Early, and Delta Radiomics in Neoadjuvant Chemotherapy in Breast Cancer. Front. Oncol. 2022, 12, 748008. [Google Scholar] [CrossRef]
- Zheng, C.; Gu, X.T.; Huang, X.L.; Wei, Y.C.; Chen, L.; Luo, N.B.; Lin, H.S.; Jin-Yuan, L. Nomogram based on clinical and preoperative CT features for predicting the early recurrence of combined hepatocellular-cholangiocarcinoma: A multicenter study. Radiol. Med. 2023, 128, 1460–1471. [Google Scholar] [CrossRef]
- Qin, S.; Liu, K.; Chen, Y.; Zhou, Y.; Zhao, W.; Yan, R.; Xin, P.; Zhu, Y.; Wang, H.; Lang, N. Prediction of pathological response and lymph node metastasis after neoadjuvant therapy in rectal cancer through tumor and mesorectal MRI radiomic features. Sci. Rep. 2024, 14, 21927. [Google Scholar] [CrossRef] [PubMed]
- Liu, C.; Zhao, W.; Xie, J.; Lin, H.; Hu, X.; Li, C.; Shang, Y.; Wang, Y.; Jiang, Y.; Ding, M.; et al. Development and validation of a radiomics-based nomogram for predicting a major pathological response to neoadjuvant immunochemotherapy for patients with potentially resectable non-small cell lung cancer. Front. Immunol. 2023, 14, 1115291. [Google Scholar] [CrossRef]
- Buijs, E.; Maggioni, E.; Mazziotta, F.; Lega, F.; Carrafiello, G. Clinical impact of AI in radiology department management: A systematic review. Radiol. Med. 2024, 129, 1656–1666. [Google Scholar] [CrossRef]
- Chehab, M.A.; Brinjikji, W.; Copelan, A.; Venkatesan, A.M. Navigational Tools for Interventional Radiology and Interventional Oncology Applications. Semin. Interv. Radiol. 2015, 32, 416–427. [Google Scholar] [CrossRef]
- Chehab, M.; Kouri, B.E.; Miller, M.J.; Venkatesan, A.M. Image Fusion Technology in Interventional Radiology. Tech. Vasc. Interv. Radiol. 2023, 26, 100915. [Google Scholar] [CrossRef]
- Boeken, T.; Pellerin, O.; Bourreau, C.; Palle, J.; Gallois, C.; Zaanan, A.; Taieb, J.; Lahlou, W.; Di Gaeta, A.; Al Ahmar, M.; et al. Clinical value of sequential circulating tumor DNA analysis using next-generation sequencing and epigenetic modifications for guiding thermal ablation for colorectal cancer metastases: A prospective study. Radiol. Med. 2024, 129, 1530–1542. [Google Scholar] [CrossRef] [PubMed]
- Khanna, N.N.; Maindarkar, M.A.; Viswanathan, V.; Fernandes, J.F.E.; Paul, S.; Bhagawati, M.; Ahluwalia, P.; Ruzsa, Z.; Sharma, A.; Kolluri, R.; et al. Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment. Healthcare 2022, 10, 2493. [Google Scholar] [CrossRef]
- Ahmed, M.I.; Spooner, B.; Isherwood, J.; Lane, M.; Orrock, E.; Dennison, A. A Systematic Review of the Barriers to the Implementation of Artificial Intelligence in Healthcare. Cureus 2023, 15, e46454. [Google Scholar] [CrossRef] [PubMed]
- Contaldo, M.T.; Pasceri, G.; Vignati, G.; Bracchi, L.; Triggiani, S.; Carrafiello, G. AI in Radiology: Navigating Medical Responsibility. Diagnostics 2024, 14, 1506. [Google Scholar] [CrossRef] [PubMed]
- Brady, A.P.; Neri, E. Artificial Intelligence in Radiology-Ethical Considerations. Diagnostics 2020, 10, 231. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).