Article

A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging

by Deepshikha Bhati 1,*, Fnu Neha 1 and Md Amiruzzaman 2
1 Department of Computer Science, Kent State University, Kent, OH 44242, USA
2 Department of Computer Science, West Chester University, West Chester, PA 19383, USA
* Author to whom correspondence should be addressed.
J. Imaging 2024, 10(10), 239; https://doi.org/10.3390/jimaging10100239
Submission received: 3 August 2024 / Revised: 14 September 2024 / Accepted: 21 September 2024 / Published: 25 September 2024
(This article belongs to the Section Medical Imaging)

Abstract: The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis.

1. Introduction

Medical imaging (MI) is a cornerstone of modern healthcare, providing critical insights for diagnosing, treating, and monitoring various diseases. Traditionally, MI encompassed mesoscopic imaging techniques such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Positron Emission Tomography (PET). However, with recent technological advancements, the scope of MI has significantly broadened to include high-resolution histopathology, a vital subspecialty of pathology that focuses on examining tissue samples at microscopic and molecular levels. This area now leverages advanced techniques such as digital pathology and computational analysis.
The integration of artificial intelligence (AI) has revolutionized high-resolution histopathology, enhancing diagnostic accuracy and resolution. This expansion from traditional mesoscopic imaging to advanced microscopic techniques represents a significant evolution in MI. The field now includes histological, cellular, and molecular pathology, driven by AI advancements that support ongoing developments in diagnostic precision and therapeutic strategies. For instance, recent studies highlight the role of AI in analyzing electron microscopy images for disease monitoring [1,2] and improving deep learning applications in histopathology [3,4].
Traditional image analysis methods, reliant on handcrafted features and expert knowledge, are often time-consuming and prone to errors. Machine learning (ML) approaches, including Support Vector Machines (SVMs), decision trees, random forests, and logistic regression, have improved efficiency and accuracy in tasks such as image segmentation and disease classification. However, these methods still require manual feature selection and extraction. The advent of deep learning (DL) has transformed medical image analysis by automatically learning and extracting hierarchical features from large volumes of data [5,6,7,8,9,10,11,12,13]. This progress has provided healthcare professionals with valuable insights, enabling more accurate diagnoses and enhanced patient care.
Despite the impressive performance of DL models, challenges related to interpretability and transparency persist [14,15,16]. The opaque nature of these models raises concerns about their reliability in healthcare, where understanding diagnostic decisions is crucial. Interpretability in AI-driven healthcare models fosters trust and reliability by allowing practitioners to comprehend and verify model outputs. It ensures ethical and legal accountability, supports clinical decision-making, and helps identify biases and errors, enhancing fairness and accuracy.
Efforts to improve the interpretability of DL models in MI are ongoing, with researchers developing techniques to clarify model decision-making processes [17,18,19]. This paper contributes to this field through several key aspects:
1. Comprehensive Survey: We offer a thorough survey of innovative approaches for interpreting and visualizing DL models in MI, including a broad range of techniques aimed at enhancing model transparency and trust.
2. Methodological Review: We provide an in-depth review of current methodologies, focusing on post-hoc visualization techniques such as perturbation-based, gradient-based, decomposition-based, trainable attention (TA)-based methods, and vision transformers (ViT). We evaluate each method’s effectiveness and applicability in MI.
3. Clinical Relevance: We emphasize the importance of interpretability techniques in clinical settings, demonstrating how they lead to more reliable and actionable insights from DL models, thus supporting better decision-making in healthcare.
4. Future Directions: We outline future research directions in model interpretability and visualization, highlighting the need for more robust and scalable techniques that can handle the complexity of DL models while ensuring practical utility in medical applications.
Our survey covers innovative approaches for interpreting and visualizing DL models in MI. As illustrated in Figure 1, we explore various DL models and techniques, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), transformer-based architectures, autoencoders, Local Interpretable Model-agnostic Explanations (LIME), Gradient-weighted Class Activation Mapping (Grad-CAM), Layer-Wise Relevance Propagation (LRP), attention-based methods, and vision transformers (ViTs).
This review distinguishes itself by expanding the definition of medical imaging to include high-resolution histopathology and digital pathology, thus broadening the scope of traditional mesoscopic imaging techniques. This approach not only highlights recent advancements but also sets the stage for future research in interpretability and visualization within the expanding field of medical imaging.
The rest of this paper is organized into several sections, each with multiple subsections. Section 2 describes the research methodology. Section 3 focuses on interpreting model design and workflow. Section 4 reviews DL models in MI. Section 5 presents an overview of post-hoc interpretation and visualization techniques. Section 6 compares different interpretation methods. Section 7 discusses current challenges and future directions, and Section 8 concludes the work.

2. Research Methodology

Several comprehensive reviews of explainable AI in medical image analysis have been published [20,21,22,23,24]. While these reviews cover a broad range of topics, some critical areas, such as trainable attention (TA)-based methods, vision transformers, and their applications, have been overlooked. Our review aims to fill this gap by providing an extensive overview of various domains within medical imaging, addressing key aspects such as Domain, Task, Modality, Performance, and Technique.
This research employs the Systematic Literature Review (SLR) method, which involves several stages. The research questions guiding this study are as follows:
  • What innovative methods exist for interpreting and visualizing deep learning models in medical imaging?
  • How effective are post-hoc visualization techniques (perturbation-based, gradient-based, decomposition-based, TA-based, and ViT) in improving model transparency?
  • What is the clinical relevance of interpretability techniques for actionable insights from deep learning models in healthcare?
  • What are the future research directions for model interpretability and visualization in medical applications?
The survey examines over 400 recent papers on explainable AI (XAI) in medical image analysis. Relevant contributions were identified using keywords such as “deep learning”, “convolutional neural networks”, “medical imaging”, “surveys”, “interpretation”, “visualization”, and “review”. Sources included arXiv, bioRxiv, medRxiv, Google Scholar, Scopus, and ScienceDirect, with searches focused on titles. Studies without results on medical image data or using only standard neural networks with manually designed features were excluded. In cases of similar work, the most significant publications were selected.
The findings are presented comprehensively, including a detailed description of the research methodology to enable replication. The literature search results, relevant articles, and their quality evaluations are summarized in overview tables. Drawing from expertise in applying XAI techniques to medical image analysis, ongoing challenges and future research directions are also discussed.

3. Interpreting Model Design and Workflow

Interpreting model design and workflow involves examining the hidden layers of convolutional neural networks (CNNs). This can be achieved through methods such as:
1. Autoencoders for Learning Latent Representations
2. Visualizing High-Dimensional Latent Data in a Two-Dimensional Space
3. Visualizing Filters and Activations in Feature Maps

3.1. Autoencoders for Learning Latent Representations

Autoencoders (AEs) are DL models for unsupervised feature learning [25], with applications in anomaly detection [26], image compression [27], and representation learning [28]. They consist of an encoder that creates latent representations and a decoder that reconstructs the image. Variants include variational autoencoders (VAEs) and adversarial autoencoders (AAEs). In medical imaging, AEs detect abnormalities by comparing input images with their reconstructions and highlighting areas of high reconstruction loss. For instance, a VAE has been used to reconstruct OCT retinal images to detect pathologies [29], and an AAE has been used to localize brain lesions in MRI images [30]. Convolutional AEs have detected nuclei in histopathology images by combining learned representations with thresholding [31].
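To make this concrete, the following minimal PyTorch sketch (not taken from the surveyed works) shows a convolutional autoencoder whose per-pixel reconstruction error can be thresholded to flag potentially abnormal regions; the architecture, input size, and threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder for single-channel images."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # H/2
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # H/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_map(model, image, threshold=0.1):
    """Highlight pixels whose reconstruction error exceeds a threshold."""
    model.eval()
    with torch.no_grad():
        recon = model(image)
    error = (image - recon).abs()            # per-pixel reconstruction loss
    return error, (error > threshold).float()

# Usage on a dummy 128x128 scan (in practice, train on healthy images first).
model = ConvAutoencoder()
scan = torch.rand(1, 1, 128, 128)
error, mask = anomaly_map(model, scan)
```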

3.2. Visualizing High-Dimensional Latent Data in a Two-Dimensional Space

CNNs produce high-dimensional features, making visualization challenging. Dimensionality reduction techniques such as Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (tSNE) simplify these data. PCA performs linear transformations, while tSNE [32] uses nonlinear methods to map high-dimensional data to lower dimensions. tSNE is effective for visualizing patterns and clusters, for example in abdominal ultrasound and histopathology image classification. The constraint-based embedding technique [33], which uses a divide-and-conquer algorithm to preserve k-nearest neighbors in 2D projections, has been used to assess how well deep belief networks separate brain MRI images of schizophrenic and healthy patients, although both tSNE and constraint-based embedding struggled to separate the raw data.
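As an illustration, a minimal scikit-learn sketch of this workflow is given below; the 512-dimensional features and three classes are placeholder assumptions standing in for real CNN embeddings and diagnostic labels.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Assume `features` are CNN embeddings for N images (e.g., a 512-D penultimate
# layer) and `labels` are their diagnostic classes; random data stands in here.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))
labels = rng.integers(0, 3, size=200)

# PCA first reduces dimensionality linearly; t-SNE then maps to 2D nonlinearly.
pca_features = PCA(n_components=50).fit_transform(features)
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(pca_features)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="viridis", s=10)
plt.title("t-SNE of CNN features")
plt.show()
```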

3.3. Visualizing Filters and Activations in Feature Maps

A convolutional block extracts local features from input images through convolution filters, ReLU or GELU activations, and pooling layers. Filter visualization reveals a CNN’s feature extraction capabilities, with initial layers capturing basic elements and later layers capturing intricate patterns. In medical imaging, filter visualization has been used to compare filters in Computer-Aided Detection (CAD) systems for 3D Computed Tomography (CT) images [9]. Larger filters offer more insights but require more memory. Feature map visualization, which represents layer outputs after activation, highlights active features and can indicate training issues. It has been used in tasks such as skin lesion classification [34], fetal facial plane recognition in ultrasound [35], brain lesion segmentation in MRI [6], and Alzheimer’s diagnosis with PET/MRI [36].
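A minimal sketch of feature map visualization with a PyTorch forward hook follows; the ResNet-18 backbone, the chosen layer, and the random input are illustrative assumptions (a trained medical-imaging model and a preprocessed scan would be used in practice).

```python
import torch
import torchvision.models as models
import matplotlib.pyplot as plt

# Register a forward hook on an early convolutional layer and plot a few of its
# activation maps; the backbone, layer, and random input are illustrative.
model = models.resnet18(weights=None).eval()
activations = {}

def save_activation(module, inputs, output):
    activations["feat"] = output.detach()

model.layer1[0].conv1.register_forward_hook(save_activation)

image = torch.rand(1, 3, 224, 224)          # stand-in for a preprocessed scan
with torch.no_grad():
    model(image)

feature_maps = activations["feat"][0]       # (channels, H, W)
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for i, ax in enumerate(axes):
    ax.imshow(feature_maps[i].numpy(), cmap="gray")   # one feature map per panel
    ax.axis("off")
plt.show()
```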

4. Deep Learning Models in Medical Imaging

Convolutional neural networks (CNNs) are essential in DL for MI. CNNs are adept at processing X-rays, Computed Tomography (CT) scans, and Magnetic Resonance Imaging (MRI) through their hierarchical feature representations. Studies [5,6,7] have demonstrated their effectiveness in various medical image analysis tasks. For instance, in segmentation tasks, CNNs excel at delineating organ boundaries or identifying anomalies within medical images, providing valuable insights for accurate diagnosis and treatment planning. Recurrent Neural Networks (RNNs) excel in the temporal modeling of dynamic imaging sequences, such as functional MRI or video-based imaging, by capturing temporal patterns [37]. Generative Adversarial Networks (GANs) are valuable for image synthesis, data augmentation, and anomaly detection, generating synthetic images and learning normal patterns [38,39,40].

Transformer-Based Architectures

Transformer-based architectures, including bidirectional encoder representations from transformers (BERT) [41,42], and generative pre-trained transformer (GPT) [43], are emerging for tasks like disease prediction, image reconstruction, and capturing complex dependencies in medical images.

5. Interpretation and Visualization Techniques

In recent years, numerous explainable artificial intelligence (XAI) techniques have been developed to enhance the interpretability of DL models, particularly in MI. These techniques can be broadly categorized into Perturbation-Based, Gradient-Based, Decomposition, and Attention methods.
The timeline presented in Figure 2 illustrates the development of these XAI techniques over the years. The points on the timeline are color-coded by category, including Dimensionality Reduction, Feature Visualization, Class Activation Mapping, Saliency Mapping, Prediction Difference Analysis, Grad-CAM, Integrated Gradients, Guided Backpropagation, Occlusion Sensitivity, Trainable Attention, Guided Grad-CAM, Layer-Wise Relevance Propagation, Deconvolution, LIME, Backpropagation, Autoencoder, Meaningful Perturbation, SHAP, and Attention. Notably, the Gradient-Based category is the most densely populated, with CAM and Grad-CAM being among the most popular entries. The timeline also reveals a higher density of developments between 2017 and 2020. Table 1 compares the visualization techniques (Gradient-Based, Perturbation-Based, Decomposition-Based (LRP), and Trainable Attention models) in terms of model dependency, access to model parameters, and computational efficiency.

5.1. Perturbation-Based Methods

Perturbation-based methods evaluate how input changes affect model outputs to determine feature importance. By altering specific image regions, these methods identify areas that significantly influence predictions, typically visualized with heatmaps. As highlighted in recent surveys, perturbation-based XAI methods are crucial for exploring CNN models by systematically altering inputs and observing output changes, which is vital for understanding models used in safety-critical areas where transparency is essential [44]. Techniques like Integrated Gradients (IG), Local Interpretable Model-agnostic Explanations (LIME), and Occlusion Sensitivity (OS) are used in various domains, including breast cancer detection, eye disease classification, and brain MRI analysis (see Table 2). The table categorizes studies by domain, task, modality, performance, and technique. These methods also test model sensitivity to input variations, ensuring robust interpretations.

5.1.1. Occlusion

Zeiler and Fergus [55] introduced an occlusion method that assesses the impact on model output when parts of an image are obstructed. Kermany et al. [53] utilized this method to interpret optical coherence tomography images for diagnosing retinal pathologies. A major limitation of occlusion is its high computational cost, since inference must be run for each occluded image region, and the cost grows with image resolution and the desired heatmap resolution.
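A minimal sketch of occlusion sensitivity is shown below, assuming a PyTorch classifier that outputs class logits; the patch size, stride, and fill value are illustrative choices.

```python
import torch

def occlusion_sensitivity(model, image, target_class, patch=16, stride=16,
                          fill_value=0.0):
    """Slide an occluding patch over the image and record the drop in the
    target-class probability; larger drops mark more influential regions."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class].item()
        _, _, H, W = image.shape
        heatmap = torch.zeros((H - patch) // stride + 1,
                              (W - patch) // stride + 1)
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, :, y:y + patch, x:x + patch] = fill_value
                prob = torch.softmax(model(occluded), dim=1)[0, target_class]
                heatmap[i, j] = base - prob.item()   # importance of this patch
    return heatmap
```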

5.1.2. Local Interpretable Model-Agnostic Explanations (LIME)

Recent studies emphasize the growing importance of local interpretation methods, such as LIME, which offer clear interpretability with lower computational complexity, making them suitable for real-time applications [56]. Ribeiro et al. [57] introduced local interpretable model-agnostic explanations (LIME), which identify the superpixels (groups of connected pixels with similar intensities) that most influence a prediction. Seah et al. [54] applied LIME to identify congestive heart failure in chest radiographs. LIME offers an advantage over occlusion by preserving the context of the altered image portions, since they are not completely blocked, as shown in Figure 3.
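The following is a simplified, from-scratch sketch of the LIME idea (superpixel perturbation plus a weighted linear surrogate), not the reference implementation; `predict_fn` is an assumed wrapper returning class probabilities for an H×W×3 image, and the kernel width and sample count are illustrative.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def lime_image_explanation(image, predict_fn, target_class,
                           n_segments=50, n_samples=500, seed=0):
    """LIME-style explanation: switch superpixels on/off, fit a weighted
    linear surrogate, and return one importance weight per superpixel."""
    rng = np.random.default_rng(seed)
    segments = slic(image, n_segments=n_segments, start_label=0)
    n_sp = segments.max() + 1

    masks = rng.integers(0, 2, size=(n_samples, n_sp))      # binary on/off
    baseline = image.mean(axis=(0, 1))                      # mean-color fill
    preds = np.empty(n_samples)
    for k in range(n_samples):
        perturbed = image.copy()
        for sp in np.where(masks[k] == 0)[0]:
            perturbed[segments == sp] = baseline            # "hide" superpixel
        preds[k] = predict_fn(perturbed)[target_class]

    # Weight samples by similarity to the original (all superpixels kept).
    distances = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(distances ** 2) / 0.25)
    surrogate = Ridge(alpha=1.0).fit(masks, preds, sample_weight=weights)
    return segments, surrogate.coef_        # per-superpixel importance
```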

5.1.3. Integrated Gradients

Sundararajan et al. [47] introduced integrated gradients (IG), which measure pixel importance by computing gradients across images interpolated between the original image and a baseline image (typically one with all zero intensity values). Sayres et al. [46] found that model-predicted grades and heatmaps improved the accuracy of diabetic retinopathy grading by readers.
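A minimal PyTorch sketch of integrated gradients follows, assuming a classifier that outputs logits and an all-zero baseline; the number of interpolation steps is an illustrative choice.

```python
import torch

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Approximate IG: average gradients along a straight path from a baseline
    (all zeros by default) to the input, then scale by (input - baseline)."""
    if baseline is None:
        baseline = torch.zeros_like(image)
    model.eval()
    total_grads = torch.zeros_like(image)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (image - baseline)).requires_grad_(True)
        score = model(point)[0, target_class]
        grad, = torch.autograd.grad(score, point)
        total_grads += grad
    return (image - baseline) * total_grads / steps    # per-pixel attribution
```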

5.2. Gradient-Based Methods

Backpropagation, used for weight adjustment in neural network training, is also employed in model interpretation methods to compute gradients. Unlike training, these methods do not alter weights but use gradients to highlight important image areas. The development of specialized deep learning networks like FISH-Net, which optimizes detection through innovative techniques such as rotated Gaussian heatmaps and noise refinement, highlights the potential of deep learning in achieving high precision and sensitivity in medical imaging tasks [58]. Figure 4 shows examples of gradient-based attribution methods for model interpretability.

5.2.1. Saliency Maps

Saliency maps, introduced by Simonyan et al. [60], use gradient information to explain how deep convolutional networks classify images. They are used for class maximization and image-specific class saliency maps. Class maximization generates an image that maximizes the activation score $S_c(I)$ for a class $c$, subject to an $L_2$ regularization penalty:
$\arg\max_{I} \; S_c(I) - \lambda \lVert I \rVert_2^2$
Yi et al. [13] applied this to visualize malignant and benign breast masses. Image-specific class saliency maps create heatmaps showing each pixel’s significance in classification, computed as:
$\mathrm{Sal}_c(x) = \dfrac{\partial F_c(x)}{\partial x}$
Dubost et al. [61] used these maps in a weakly supervised method for segmenting brain MRI structures. Saliency maps have also been utilized in diagnosing heart diseases in chest X-rays [12], classifying breast masses in mammography [62] with accuracy ranging from 85% to 92.9%, and identifying pediatric elbow fractures in X-rays [63] with an accuracy of 88.0% and an area under the curve (AUC) of 95.0%. Moreover, iterative saliency maps [64] enhance less obvious image regions by generating a saliency map, inpainting prominent areas, and iterating the process until the image classification changes or a limit is reached. This approach, applied to retinal fundus images for diabetic retinopathy grading, demonstrated higher sensitivity than traditional saliency maps. However, saliency maps have limitations: they do not distinguish whether a pixel supports or contradicts a class, and their effectiveness diminishes in binary classification.
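A minimal PyTorch sketch of an image-specific class saliency map is given below, assuming a classifier that outputs logits; reducing over channels with a maximum follows common practice.

```python
import torch

def saliency_map(model, image, target_class):
    """Image-specific class saliency: gradient of the class score with respect
    to the input, reduced to one value per pixel via a maximum over channels."""
    model.eval()
    image = image.clone().requires_grad_(True)
    model(image)[0, target_class].backward()
    return image.grad.abs().max(dim=1)[0]    # (N, H, W) heatmap
```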

5.2.2. Guided Backpropagation

Guided backpropagation, introduced by Springenberg et al. [65], builds on the saliency map approach by Simonyan et al. [60] and the deconvnet concept by Zeiler and Fergus [55]. It improves gradient backpropagation through ReLU layers, where negative activations are set to zero during the forward pass. Guided backpropagation discards gradients where either the forward activation or the backward gradient is negative, producing heatmaps that highlight pixels positively contributing to the classification.
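A minimal sketch of guided backpropagation using PyTorch backward hooks on ReLU modules follows; the VGG-16 backbone and random input are illustrative assumptions. The hook simply clamps negative gradients to zero, since the ordinary ReLU backward pass already zeroes locations with negative activations.

```python
import torch
import torch.nn as nn
import torchvision.models as models

def guided_backprop(model, image, target_class):
    """Guided backpropagation: during the backward pass, ReLU layers pass on
    only positive gradients (negative forward activations are already zeroed
    by the ordinary ReLU backward)."""
    handles = []

    def relu_hook(module, grad_input, grad_output):
        return (torch.clamp(grad_input[0], min=0.0),)

    for module in model.modules():
        if isinstance(module, nn.ReLU):
            module.inplace = False            # in-place ops break full hooks
            handles.append(module.register_full_backward_hook(relu_hook))

    image = image.clone().requires_grad_(True)
    model.eval()
    model(image)[0, target_class].backward()
    for h in handles:
        h.remove()
    return image.grad                         # per-pixel guided gradients

# Illustrative usage; in practice the trained medical-imaging model is used.
model = models.vgg16(weights=None)
heatmap = guided_backprop(model, torch.rand(1, 3, 224, 224), target_class=0)
```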
In 2017, Gao and Noble [8] applied guided backpropagation to ultrasound images for fetal heartbeat localization. They found that the heatmaps remained consistent despite variations in the heart’s appearance, size, position, and contrast. Conversely, Böhle et al. [66] discovered that guided backpropagation was less effective for visualizing Alzheimer’s disease in brain MRIs compared to other methods. Similarly, Dubost et al. [67] achieved an intraclass correlation coefficient (ICC) of 93.0% in brain MRI detection using guided backpropagation. Wang et al. [68] obtained an average accuracy of 93.7% in brain MRI classification with this technique. Gessert et al. [69] reported an accuracy of 99.0% in cardiovascular classification using OCT images. Wickstrom et al. [70] achieved a 94.9% accuracy in gastrointestinal segmentation using endoscopy. Lastly, Jamaludin et al. [71] reported an accuracy of 82.5% in musculoskeletal spine classification using MRI images with guided backpropagation.

5.2.3. Class Activation Maps (CAM)

Class Activation Mapping (CAM), introduced by Zhou et al. [72], visualizes regions of an image most influential in a neural network’s classification decision. CAM is computed as a weighted sum of feature maps from the final convolutional layer, using weights from the fully connected layer following global average pooling [73]. For a specific class c and image x:
$\mathrm{CAM}_c(x) = \sum_k w_k^c \, f_k(x)$
This heatmap highlights the regions most relevant for classification. CAM has been applied in various medical imaging applications, such as segmenting lung nodules in thoracic CT scans [74] and differentiating between benign and malignant breast masses in mammograms [10]. However, CAM’s effectiveness depends on the network architecture, which must have a global average pooling (GAP) layer followed by a fully connected layer. While Zhou et al. [75] originally used GAP, Oquab et al. [76] demonstrated that global max pooling and log-sum-exponential pooling can also be used, with the latter yielding finer localization. Table 3 summarizes CAM’s effectiveness across different medical imaging tasks, covering domain tasks, modalities, and performance metrics.
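A minimal sketch of the CAM computation is shown below, assuming a network with the required GAP-plus-fully-connected head; `feature_maps` would come from the final convolutional layer and `fc_weights` from the classifier weight matrix (e.g., `model.fc.weight` in a ResNet-style network).

```python
import torch
import torch.nn.functional as F

def class_activation_map(feature_maps, fc_weights, target_class, out_size):
    """CAM: weighted sum of the final convolutional feature maps, using the
    fully connected weights of the target class, upsampled to the image size.

    feature_maps: (1, K, h, w) output of the final convolutional layer
    fc_weights:   (num_classes, K) weight matrix of the classifier layer
    """
    weights = fc_weights[target_class]                         # (K,)
    cam = torch.einsum("k,kxy->xy", weights, feature_maps[0])  # weighted sum
    cam = torch.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
    return F.interpolate(cam[None, None], size=out_size,
                         mode="bilinear", align_corners=False)[0, 0]
```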

5.2.4. Grad-CAM

Grad-CAM, an extension of CAM by Selvaraju et al. [124], broadens its application to any network architecture and output, including image segmentation and captioning. It bypasses the global pooling layer and weights feature maps directly with gradients calculated via backpropagation from a target class. The gradients of the output for class $c$ with respect to the feature maps $A^k$ are averaged globally, multiplied by $A^k$, and passed through a ReLU activation to discard negative values:
$\mathrm{Grad\text{-}CAM}_c(x) = \mathrm{ReLU}\!\left( \sum_k \left( \dfrac{1}{Z} \sum_i \sum_j \dfrac{\partial y^c}{\partial A_{ij}^k} \right) A^k \right)$
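A minimal PyTorch sketch of this computation follows, assuming access to a chosen target layer (e.g., the last convolutional block of the backbone); hooks capture its activations and gradients, which are then combined as in the equation above.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, target_class):
    """Grad-CAM: weight the target layer's feature maps by the spatially
    averaged gradients of the class score, then apply ReLU and upsample."""
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: feats.update(a=o.detach()))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0].detach()))

    model.eval()
    score = model(image)[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove()
    h2.remove()

    A = feats["a"][0]                        # (K, h, w) feature maps
    alpha = grads["a"][0].mean(dim=(1, 2))   # (K,) globally averaged gradients
    cam = torch.relu(torch.einsum("k,kxy->xy", alpha, A))
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return F.interpolate(cam[None, None], size=image.shape[-2:],
                         mode="bilinear", align_corners=False)[0, 0]
```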
Garg et al. employed grad-CAM visualizations to identify discriminative regions of magnetoencephalography images in the task of detecting eye-blink artifacts [59]. The authors found that the regions of the eye highlighted by grad-CAM are the same regions that human experts rely on. Table 4 summarizes the effectiveness of Grad-CAM across various medical imaging tasks, highlighting domains, tasks, modalities, and performance metrics.

5.3. Decomposition-Based Methods

Decomposition-based techniques for model interpretation focus on breaking down a model’s prediction into a heatmap showing each pixel’s contribution to the final decision. These techniques, such as LRP, have been widely applied across different domains.

Layer-Wise Relevance Propagation (LRP)

Layer-Wise Relevance Propagation (LRP), introduced by Bach et al. in 2015 [170], offers an alternative to gradient-based techniques like saliency mapping, guided backpropagation, and Grad-CAM. Instead of relying on gradients, LRP distributes the output of the final layer back through the network to calculate relevance scores for each neuron. This process is repeated recursively from the final layer to the input layer, generating a relevance heatmap that can be overlaid on the input image. Further properties of LRP and details of its theoretical basis are given by Montavon et al. [171], and comparisons of LRP to other interpretation methods can be found in [172,173,174].
The relevance score $R_{i \leftarrow k}^{(l,\,l+1)}$ received by neuron $i$ in layer $l$ from neuron $k$ in layer $l+1$ is defined as:
$R_{i \leftarrow k}^{(l,\,l+1)} = R_k^{(l+1)} \, \dfrac{a_i w_{ik}}{\sum_h a_h w_{hk}}$
The overall relevance score for neuron $i$ in layer $l$ is then:
$R_i^{(l)} = \sum_k R_{i \leftarrow k}^{(l,\,l+1)}$
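A minimal sketch of this propagation rule for a stack of fully connected layers is given below (an epsilon-stabilized variant of the rule above); real LRP implementations handle convolutional, pooling, and normalization layers with dedicated rules.

```python
import torch
import torch.nn as nn

def lrp_linear(layers, x, target_class, eps=1e-6):
    """Minimal LRP for a stack of Linear(+ReLU) layers: relevance is
    redistributed backwards in proportion to each input's contribution
    a_i * w_ik, as in the rule above (with a small epsilon stabilizer)."""
    with torch.no_grad():
        # Forward pass, storing the activation entering each layer.
        activations = [x]
        for layer in layers:
            x = layer(x)
            activations.append(x)

        # Start from the relevance of the target class only.
        relevance = torch.zeros_like(activations[-1])
        relevance[0, target_class] = activations[-1][0, target_class]

        for layer, a in zip(reversed(layers), reversed(activations[:-1])):
            if isinstance(layer, nn.Linear):
                z = a @ layer.weight.t() + layer.bias        # (1, out)
                stab = z.sign() + (z == 0).float()           # treat exact zeros as +
                s = relevance / (z + eps * stab)
                relevance = a * (s @ layer.weight)           # (1, in)
            # ReLU layers pass relevance through unchanged.
    return relevance

# Illustrative two-layer network on a flattened 8x8 patch.
net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))
pixel_relevance = lrp_linear(list(net), torch.rand(1, 64), target_class=1)
```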
LRP has been applied in MI, such as diagnosing multiple sclerosis (MS) and Alzheimer’s disease (AD) using MRI scans. For MS, LRP heatmaps highlighted hyperintense lesions and affected brain areas [175] as shown in Figure 5, while for AD, they emphasized the hippocampal volume, a critical region for diagnosis [66]. LRP has been found to provide clearer distinctions compared to gradient-based methods and has been used in frameworks like DeepLight for linking brain regions with cognitive states [176].

5.4. Trainable Attention Models

Trainable Attention (TA) mechanisms provide a dynamic approach to model interpretation by integrating attention modules into neural networks. Introduced for CNNs by Jetley et al. [179], these soft attention modules generate attention maps that highlight important image parts. They compute compatibility scores $c_s$ between local features $\ell_s$ and a global feature $g$ (e.g., via dot products or learned parameters) and normalize them with a softmax:
$a_s = \dfrac{\exp(c_s)}{\sum_{i=1}^{n} \exp(c_{s_i})}$
The output $g_a$ aggregates the local features, weighted by the attention map:
$g_a = \sum_{s=1}^{n} a_s \, \ell_s$
This method enhances signals from compatible features while reducing those from less compatible ones. Applications of attention mechanisms in medical imaging include the Attention U-Net for organ segmentation in abdominal CT scans [177], fetal ultrasound classification, and breast mass segmentation in mammograms [180]. Additionally, they have improved melanoma lesion classification [181] and osteoarthritis grading in knee X-rays [182]. In cancer diagnostics, the CACNET method integrates attention mechanisms with Mask R-CNN to enhance nuclear segmentation and reduce noise interference, significantly improving the accuracy of CAC identification [183]. Attention-weighted RL (AWRL) models combine self-attention mechanisms with value function approximation to effectively filter out irrelevant features and enhance decision-making processes in complex tasks [184]. The Trainable Feature Matching Attention Network (TFMAN), incorporating non-local and channel attention, exemplifies how trainable attention mechanisms can augment representation capabilities in CNNs for image super-resolution [185]. Though optimal configurations are application-specific, attention mechanisms are valued for their interpretability and performance enhancement.
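A minimal PyTorch sketch of such a soft attention module is shown below; the additive combination of local and global features and the 1×1 convolution scorer are illustrative design choices in the spirit of, but not identical to, the cited formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Trainable soft attention in the spirit of the equations above: score the
    compatibility between each local feature l_s and a global feature g,
    softmax over spatial positions, and pool the re-weighted local features."""
    def __init__(self, channels):
        super().__init__()
        self.scorer = nn.Conv2d(channels, 1, kernel_size=1)    # learned compatibility

    def forward(self, local_feats, global_feat):
        # local_feats: (N, C, H, W); global_feat: (N, C)
        combined = local_feats + global_feat[:, :, None, None]
        scores = self.scorer(combined)                         # c_s: (N, 1, H, W)
        attn = F.softmax(scores.flatten(2), dim=-1)            # a_s over positions
        weighted = local_feats.flatten(2) * attn               # a_s * l_s
        g_a = weighted.sum(dim=-1)                             # (N, C) attended output
        return g_a, attn.view_as(scores)                       # map for visualization

# Illustrative shapes: a 14x14 feature grid with 256 channels.
attention = SoftAttention(channels=256)
g_a, attn_map = attention(torch.rand(2, 256, 14, 14), torch.rand(2, 256))
```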
Table 5 provides an overview of various studies using TA models, highlighting the domains, tasks, modalities including MRI and histology, and performance metrics such as accuracy and F1-score, demonstrating the broad applicability and effectiveness of TA models in MI.

5.5. Vision Transformers

Vision Transformers (ViTs) have emerged as a prominent alternative to convolutional neural networks (CNNs) in medical imaging. Unlike CNNs, which use local receptive fields to capture spatial hierarchies, ViTs employ self-attention to model long-range dependencies and global context [42,196,197]. By partitioning images into fixed-size patches treated as a sequence, ViTs utilize transformer encoder layers, effectively capturing complex anatomical structures and pathological patterns.
ViTs have been employed to automate the Tanner-Whitehouse 3 (TW3) algorithm for bone age assessment, achieving clinically interpretable results with predictive accuracy comparable to that of experienced orthopedic surgeons [198]. ViTs have also demonstrated improved performance in diagnosing conditions such as tuberculosis, pneumothorax, and COVID-19 by leveraging self-supervision and self-training through knowledge distillation [199]. In the domain of medical image registration, ViTs have been shown to enhance the accuracy of volumetric image alignment significantly, outperforming traditional methods by capturing long-range spatial dependencies [200]. ViTs have also been applied to 3D cryogenic electron tomography (cryoET) data, with CryoViT outperforming CNNs in segmenting complex organelles like mitochondria, particularly when training data are limited [201].
ViTs have shown superior performance in segmentation, classification, and detection tasks, achieving high accuracy in segmenting tumors and organs in MRI and CT scans, as reflected in Dice scores. Their interpretability is enhanced through attention maps, gradient-based methods, and occlusion sensitivity, which aid in visualizing model predictions. These advancements highlight ViTs’ potential to improve diagnostic accuracy and provide deeper insights into medical image analysis, as discussed in Table 6. The areas on which the model correctly focuses its predictions for the test image are presented in Figure 5; the regions of focus identified by the ViT model overlap significantly with the areas containing white blood cells [178].
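As an illustration of how attention maps can be turned into an interpretability heatmap, the following sketch implements attention rollout over a list of per-layer attention matrices; the 12-layer, 197-token configuration in the usage example is an illustrative assumption corresponding to a standard ViT on 224×224 inputs with 16×16 patches.

```python
import torch

def attention_rollout(attn_per_layer, discard_cls=True):
    """Attention rollout: average attention over heads, add the residual
    (identity) connection, re-normalize, and multiply across layers to estimate
    how much each input patch influences the [CLS] token.

    attn_per_layer: list of (N, heads, T, T) attention matrices, T = 1 + patches.
    """
    N, _, T, _ = attn_per_layer[0].shape
    rollout = torch.eye(T).expand(N, T, T).clone()
    for attn in attn_per_layer:
        a = attn.mean(dim=1)                     # average over heads: (N, T, T)
        a = a + torch.eye(T)                     # account for the residual path
        a = a / a.sum(dim=-1, keepdim=True)      # re-normalize rows
        rollout = a @ rollout                    # accumulate across layers
    # Influence of each patch on the [CLS] token (drop the CLS-to-CLS entry).
    return rollout[:, 0, 1:] if discard_cls else rollout[:, 0, :]

# Illustrative use with random attention maps from a 12-layer ViT on 14x14 patches;
# reshape the result to (14, 14) to overlay it on the input image.
layers = [torch.softmax(torch.rand(1, 12, 197, 197), dim=-1) for _ in range(12)]
patch_scores = attention_rollout(layers)         # shape (1, 196)
```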

6. Comparison of Different Interpretation Methods

6.1. Categorization by Visualization Technique

Visualization techniques in DL can be categorized based on their application and effectiveness. Table 7 summarizes various visualization techniques used in DL for interpretability. It categorizes methods based on their tasks, body parts, modalities, accuracy, and evaluation metrics. This table highlights that techniques like CAM and Grad-CAM are effective for image classification and localization across different modalities such as X-ray and MRI, achieving high accuracy. LRP is noted for its accuracy in segmentation tasks, while IG is utilized for classification with notable AUC-Receiver Operating Characteristic (ROC) scores. Attention-based methods improve performance and interpretability by focusing on relevant regions, whereas perturbation-based methods assess model robustness. LIME provides model-agnostic explanations, and trainable attention models dynamically enhance feature focus.

6.2. Categorization by Body Parts, Modality, and Accuracy

Table 7 provides a concise overview of imaging techniques categorized by anatomical context (Body Parts). It lists various modalities such as MRI, X-ray, and ultrasound, and highlights specific techniques used for different body parts. For instance, CAM and Grad-CAM are prominent in brain imaging with high accuracy, while LRP and attention-based methods excel in breast imaging. The table also emphasizes the adaptation of methods to address challenges such as speckle noise in ultrasound imaging.
The advanced segmentation techniques discussed include the improved V-net algorithm, which enhances liver and tumor segmentation using distance metric-based loss functions, and the LViT model, which integrates medical text annotations to improve segmentation performance in multimodal datasets [238,239]. Additionally, Transformers have been applied broadly in medical image analysis, significantly improving performance in tasks such as segmentation and classification [240].
The imaging techniques covered in the studies include CT, dermatoscopy, diabetic retinopathy (DR), endoscopy, fundus photography, histology, histopathology, mammography, magnetoencephalography (MEG), MRI, OCT, PET, photography, ultrasound, and X-ray. These studies spanned from 2014 to 2020, with the majority published in 2019 and 2020, as shown in Figure 6.

6.3. Categorization by Task

This section organizes the techniques and their applications across different tasks, highlighting performance metrics and examples for clarity. Table 8 organizes interpretability techniques according to their tasks, including classification, segmentation, and detection. It details the applications, performance metrics, and specific examples for each task. Techniques like CAM, Grad-CAM, and TA models are effective for classification tasks, providing high accuracy and AUC-ROC scores. LRP and Integrated Gradient are highlighted for segmentation tasks, with metrics like dice similarity coefficient (DSC) and intersection over union (IoU). Detection tasks benefit from methods such as saliency maps and CAM, with metrics including mean average precision (mAP) and sensitivity.

7. Current Challenges and Future Directions

7.1. Current Challenges

Despite significant advancements, several challenges remain in the interpretability and visualization of DL models in MI:
Scalability and Efficiency: Many interpretability methods, such as occlusion and perturbation-based techniques, are computationally intensive. This limits their scalability, especially with high-resolution medical images that require real-time analysis.
Clinical Integration: Translating interpretability techniques into clinical practice requires seamless integration with existing workflows and systems. This includes ensuring that the visualizations are intuitive for non-technical healthcare practitioners and that they provide actionable insights.
Robustness and Generalization: Interpretability methods must be robust across diverse patient populations and medical imaging modalities. Models trained on specific datasets might not generalize well to other contexts, leading to potential biases and inaccuracies in interpretations.
Standardization and Validation: There is a lack of standardized metrics and benchmarks for evaluating the effectiveness of interpretability methods. Rigorous validation in clinical settings is essential to establish the reliability and trustworthiness of these techniques.
Ethical and Legal Considerations: The opacity of deep learning models raises ethical and legal concerns, especially in healthcare where decisions can have critical consequences. Ensuring transparency, accountability, and fairness in AI-driven diagnostics is paramount.

7.2. Future Directions

To address the challenges and enhance the field of medical image interpretability, future research should focus on the following directions:
Expansion to High-Resolution Histopathology: As the field of medical imaging evolves to include high-resolution histopathology and digital pathology, future research should explicitly consider these advanced techniques. This includes developing interpretability methods tailored to microscopic and molecular-level images, which require different approaches compared to traditional mesoscopic imaging methods.
Development of Lightweight Methods: Creating computationally efficient interpretability techniques that can handle high-resolution images and deliver results in real-time is crucial. This involves optimizing existing methods and exploring new algorithmic approaches that balance accuracy with computational efficiency.
Enhanced Clinical Collaboration: Collaborative efforts between AI researchers, clinicians, and medical practitioners are necessary to design interpretability methods that are both clinically relevant and user-friendly. This could include the development of interactive visualization tools that allow clinicians to intuitively explore and understand model outputs.
Robustness to Variability: Developing interpretability techniques that are robust to variations in imaging modalities, patient demographics, and clinical conditions is essential. This requires extensive training on diverse datasets and continuous validation across different settings to ensure that methods remain effective and reliable in varied contexts.
Establishment of Standards: Creating standardized benchmarks and validation protocols for interpretability methods will aid in objectively assessing their effectiveness and reliability. This includes the development of common datasets and metrics for comparative evaluations to facilitate consistency and transparency in the field.
Ethical Frameworks: Integrating ethical considerations into the design and deployment of interpretability methods is critical. This involves ensuring that models are transparent, explainable, and free from biases, as well as addressing privacy and data security concerns. Ethical frameworks will support the responsible use of AI in medical imaging.
Hybrid Approaches: Combining different interpretability techniques, such as perturbation-based and gradient-based methods, can provide more comprehensive insights into model behavior. Hybrid approaches can leverage the strengths of various methods, enhancing overall interpretability and providing a more nuanced understanding of model decisions.
By addressing these directions, the field can advance towards more effective, reliable, and clinically relevant interpretability methods in medical imaging, paving the way for better integration of AI technologies in healthcare.

8. Conclusions

In conclusion, the integration of interpretability and visualization techniques into DL models for MI holds immense potential for advancing healthcare diagnostics and treatment planning. While significant progress has been made, challenges related to scalability, clinical integration, robustness, standardization, and ethical considerations persist. Addressing these challenges requires ongoing collaboration between AI researchers, clinicians, and healthcare practitioners. Future research should focus on developing efficient and clinically relevant interpretability methods, establishing standardized evaluation protocols, and ensuring ethical and transparent AI applications in healthcare. By overcoming these hurdles, we can enhance the trustworthiness, reliability, and clinical impact of DL models in MI, ultimately leading to better patient outcomes and more informed clinical decision-making.

Author Contributions

Conceptualization, D.B. and F.N.; Methodology, D.B., F.N. and M.A.; Validation, D.B., F.N. and M.A.; Formal analysis, D.B. and F.N.; Writing—review and editing, D.B., F.N. and M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by Kent State University’s Open Access APC Support Fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are presented in the main text.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Neikirk, K.; Lopez, E.G.; Marshall, A.G.; Alghanem, A.; Krystofiak, E.; Kula, B.; Smith, N.; Shao, J.; Katti, P.; Hinton, A.O., Jr. Call to action to properly utilize electron microscopy to measure organelles to monitor disease. Eur. J. Cell Biol. 2023, 102, 151365. [Google Scholar] [CrossRef] [PubMed]
  2. Galaz-Montoya, J.G. The advent of preventive high-resolution structural histopathology by artificial-intelligence-powered cryogenic electron tomography. Front. Mol. Biosci. 2024, 11, 1390858. [Google Scholar] [CrossRef] [PubMed]
  3. Banerji, S.; Mitra, S. Deep learning in histopathology: A review. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2022, 12, e1439. [Google Scholar] [CrossRef]
  4. Niazi, M.K.K.; Parwani, A.V.; Gurcan, M.N. Digital pathology and artificial intelligence. Lancet Oncol. 2019, 20, e253–e261. [Google Scholar] [CrossRef] [PubMed]
  5. Hu, P.; Wu, F.; Peng, J.; Bao, Y.; Chen, F.; Kong, D. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 399–411. [Google Scholar] [CrossRef]
  6. Kamnitsas, K.; Ledig, C.; Newcombe, V.F.; Simpson, J.P.; Kane, A.D.; Menon, D.K.; Rueckert, D.; Glocker, B. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med. Image Anal. 2017, 36, 61–78. [Google Scholar] [CrossRef]
  7. Roth, H.R.; Lu, L.; Farag, A.; Shin, H.C.; Liu, J.; Turkbey, E.B.; Summers, R.M. Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part I 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 556–564. [Google Scholar]
  8. Gao, Y.; Alison Noble, J. Detection and characterization of the fetal heartbeat in free-hand ultrasound sweeps with weakly-supervised two-streams convolutional networks. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017; Proceedings, Part II 20. Springer: Berlin/Heidelberg, Germany, 2017; pp. 305–313. [Google Scholar]
  9. Roth, H.R.; Lu, L.; Liu, J.; Yao, J.; Seff, A.; Cherry, K.; Kim, L.; Summers, R.M. Improving computer-aided detection using convolutional neural networks and random view aggregation. IEEE Trans. Med. Imaging 2015, 35, 1170–1181. [Google Scholar] [CrossRef]
  10. Kim, S.T.; Lee, J.H.; Lee, H.; Ro, Y.M. Visually interpretable deep network for diagnosis of breast masses on mammograms. Phys. Med. Biol. 2018, 63, 235025. [Google Scholar] [CrossRef]
  11. Yang, X.; Do Yang, J.; Hwang, H.P.; Yu, H.C.; Ahn, S.; Kim, B.W.; You, H. Segmentation of liver and vessels from CT images and classification of liver segments for preoperative liver surgical planning in living donor liver transplantation. Comput. Methods Programs Biomed. 2018, 158, 41–52. [Google Scholar] [CrossRef]
  12. Chen, X.; Shi, B. Deep mask for X-ray based heart disease classification. arXiv 2018, arXiv:1808.08277. [Google Scholar]
  13. Yi, D.; Sawyer, R.L.; Cohn III, D.; Dunnmon, J.; Lam, C.; Xiao, X.; Rubin, D. Optimizing and visualizing deep learning for benign/malignant classification in breast tumors. arXiv 2017, arXiv:1705.06362. [Google Scholar]
  14. Hengstler, M.; Enkel, E.; Duelli, S. Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technol. Forecast. Soc. Chang. 2016, 105, 105–120. [Google Scholar] [CrossRef]
  15. Nundy, S.; Montgomery, T.; Wachter, R.M. Promoting trust between patients and physicians in the era of artificial intelligence. JAMA 2019, 322, 497–498. [Google Scholar] [CrossRef] [PubMed]
  16. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef]
  17. Jia, X.; Ren, L.; Cai, J. Clinical implementation of AI technologies will require interpretable AI models. Med. Phys. 2020, 47, 1–4. [Google Scholar] [CrossRef]
  18. Reyes, M.; Meier, R.; Pereira, S.; Silva, C.A.; Dahlweid, F.M.; Tengg-Kobligk, H.v.; Summers, R.M.; Wiest, R. On the interpretability of artificial intelligence in radiology: Challenges and opportunities. Radiol. Artif. Intell. 2020, 2, e190043. [Google Scholar] [CrossRef]
  19. Gastounioti, A.; Kontos, D. Is it time to get rid of black boxes and cultivate trust in AI? Radiol. Artif. Intell. 2020, 2, e200088. [Google Scholar] [CrossRef]
  20. Guo, R.; Wei, J.; Sun, L.; Yu, B.; Chang, G.; Liu, D.; Zhang, S.; Yao, Z.; Xu, M.; Bu, L. A survey on advancements in image-text multimodal models: From general techniques to biomedical implementations. Comput. Biol. Med. 2024, 178, 108709. [Google Scholar] [CrossRef]
  21. Rasool, N.; Bhat, J.I. Brain tumour detection using machine and deep learning: A systematic review. Multimed. Tools Appl. 2024, 1–54. [Google Scholar] [CrossRef]
  22. Huff, D.T.; Weisman, A.J.; Jeraj, R. Interpretation and visualization techniques for deep learning models in medical imaging. Phys. Med. Biol. 2021, 66, 04TR01. [Google Scholar] [CrossRef]
  23. Hohman, F.; Kahng, M.; Pienta, R.; Chau, D.H. Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Trans. Vis. Comput. Graph. 2018, 25, 2674–2693. [Google Scholar] [CrossRef] [PubMed]
  24. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed]
  25. Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P.A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 1096–1103. [Google Scholar]
  26. Kiran, B.R.; Thomas, D.M.; Parakkal, R. An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos. J. Imaging 2018, 4, 36. [Google Scholar] [CrossRef]
  27. Theis, L.; Shi, W.; Cunningham, A.; Huszár, F. Lossy image compression with compressive autoencoders. In Proceedings of the International Conference on Learning Representations, Virtually, 25–29 April 2022. [Google Scholar]
  28. Tschannen, M.; Bachem, O.; Lucic, M. Recent advances in autoencoder-based representation learning. arXiv 2018, arXiv:1812.05069. [Google Scholar]
  29. Uzunova, H.; Ehrhardt, J.; Kepp, T.; Handels, H. Interpretable explanations of black box classifiers applied on medical images by meaningful perturbations using variational autoencoders. In Proceedings of the Medical Imaging 2019: Image Processing, San Diego, CA, USA, 19–21 February 2019; SPIE: Bellingham, DC, USA, 2019; Volume 10949, pp. 264–271. [Google Scholar]
  30. Chen, X.; You, S.; Tezcan, K.C.; Konukoglu, E. Unsupervised lesion detection via image restoration with a normative prior. Med. Image Anal. 2020, 64, 101713. [Google Scholar] [CrossRef]
  31. Hou, L.; Nguyen, V.; Kanevsky, A.B.; Samaras, D.; Kurc, T.M.; Zhao, T.; Gupta, R.R.; Gao, Y.; Chen, W.; Foran, D.; et al. Sparse autoencoder for unsupervised nucleus detection and representation in histopathology images. Pattern Recognit. 2019, 86, 188–200. [Google Scholar] [CrossRef]
  32. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  33. Plis, S.M.; Hjelm, D.R.; Salakhutdinov, R.; Allen, E.A.; Bockholt, H.J.; Long, J.D.; Johnson, H.J.; Paulsen, J.S.; Turner, J.A.; Calhoun, V.D. Deep learning for neuroimaging: A validation study. Front. Neurosci. 2014, 8, 229. [Google Scholar] [CrossRef]
  34. Stoyanov, D.; Taylor, Z.; Kia, S.M.; Oguz, I.; Reyes, M.; Martel, A.; Maier-Hein, L.; Marquand, A.F.; Duchesnay, E.; Löfstedt, T.; et al. Understanding and Interpreting Machine Learning in Medical Image Computing Applications; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  35. Yu, Z.; Tan, E.L.; Ni, D.; Qin, J.; Chen, S.; Li, S.; Lei, B.; Wang, T. A deep convolutional neural network-based framework for automatic fetal facial standard plane recognition. IEEE J. Biomed. Health Inform. 2017, 22, 874–885. [Google Scholar] [CrossRef]
  36. Zhang, F.; Li, Z.; Zhang, B.; Du, H.; Wang, B.; Zhang, X. Multi-modal deep learning model for auxiliary diagnosis of Alzheimer’s disease. Neurocomputing 2019, 361, 185–195. [Google Scholar] [CrossRef]
  37. Al’Aref, S.J.; Anchouche, K.; Singh, G.; Slomka, P.J.; Kolli, K.K.; Kumar, A.; Pandey, M.; Maliakal, G.; Van Rosendael, A.R.; Beecy, A.N.; et al. Clinical applications of machine learning in cardiovascular disease and its relevance to cardiac imaging. Eur. Heart J. 2019, 40, 1975–1986. [Google Scholar] [CrossRef]
  38. Nie, D.; Trullo, R.; Lian, J.; Petitjean, C.; Ruan, S.; Wang, Q.; Shen, D. Medical image synthesis with context-aware generative adversarial networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017; Proceedings, Part III 20. Springer: Berlin/Heidelberg, Germany, 2017; pp. 417–425. [Google Scholar]
  39. Frid-Adar, M.; Diamant, I.; Klang, E.; Amitai, M.; Goldberger, J.; Greenspan, H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 2018, 321, 321–331. [Google Scholar] [CrossRef]
  40. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552. [Google Scholar] [CrossRef] [PubMed]
  41. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  42. Al-Hammuri, K.; Gebali, F.; Kanan, A.; Chelvan, I.T. Vision transformer architecture and applications in digital health: A tutorial and survey. Vis. Comput. Ind. Biomed. Art 2023, 6, 14. [Google Scholar] [CrossRef]
  43. Lecler, A.; Duron, L.; Soyer, P. Revolutionizing radiology with GPT-based models: Current applications, future possibilities and limitations of ChatGPT. Diagn. Interv. Imaging 2023, 104, 269–274. [Google Scholar] [CrossRef]
  44. Ivanovs, M.; Kadikis, R.; Ozols, K. Perturbation-based methods for explaining deep neural networks: A survey. Pattern Recognit. Lett. 2021, 150, 228–234. [Google Scholar] [CrossRef]
  45. Papanastasopoulos, Z.; Samala, R.K.; Chan, H.P.; Hadjiiski, L.; Paramagul, C.; Helvie, M.A.; Neal, C.H. Explainable AI for medical imaging: Deep-learning CNN ensemble for classification of estrogen receptor status from breast MRI. In Proceedings of the Medical Imaging 2020: Computer-Aided Diagnosis, Houston, TX, USA, 16–19 February 2020; SPIE: Bellingham, DC, USA, 2020; Volume 11314, pp. 228–235. [Google Scholar]
  46. Sayres, R.; Taly, A.; Rahimy, E.; Blumer, K.; Coz, D.; Hammel, N.; Webster, D.R. Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. Ophthalmology 2019, 126, 552–564. [Google Scholar] [CrossRef]
  47. Sundararajan, M.; Taly, A.; Yan, Q. Axiomatic attribution for deep networks. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 3319–3328. [Google Scholar]
  48. Rajaraman, S.; Candemir, S.; Thoma, G.; Antani, S. Visualizing and explaining deep learning predictions for pneumonia detection in pediatric chest radiographs. In Proceedings of the Medical Imaging 2019: Computer-Aided Diagnosis, San Diego, CA, USA, 17–20 February 2019; SPIE: Bellingham, DC, USA, 2019; Volume 10950, pp. 200–211. [Google Scholar]
  49. Malhi, A.; Kampik, T.; Pannu, H.; Madhikermi, M.; Främling, K. Explaining machine learning-based classifications of in-vivo gastral images. In Proceedings of the 2019 Digital Image Computing: Techniques and Applications (DICTA), Perth, Australia, 2–4 December 2019; pp. 1–7. [Google Scholar]
  50. Dubost, F.; Adams, H.; Bortsova, G.; Ikram, M.A.; Niessen, W.; Vernooij, M.; De Bruijne, M. 3D regression neural network for the quantification of enlarged perivascular spaces in brain MRI. Med. Image Anal. 2019, 51, 89–100. [Google Scholar] [CrossRef]
  51. Shahamat, H.; Abadeh, M.S. Brain MRI analysis using a deep learning based evolutionary approach. Neural Netw. 2020, 126, 218–234. [Google Scholar] [CrossRef]
  52. Gecer, B.; Aksoy, S.; Mercan, E.; Shapiro, L.G.; Weaver, D.L.; Elmore, J.G. Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks. Pattern Recognit. 2018, 84, 345–356. [Google Scholar] [CrossRef] [PubMed]
  53. Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F.; et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018, 172, 1122–1131. [Google Scholar] [CrossRef] [PubMed]
  54. Seah, J.C.; Tang, J.S.; Kitchen, A.; Gaillard, F.; Dixon, A.F. Chest radiographs in congestive heart failure: Visualizing neural network learning. Radiology 2019, 290, 514–522. [Google Scholar] [CrossRef] [PubMed]
  55. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part I 13. Springer: Berlin/Heidelberg, Germany, 2014; pp. 818–833. [Google Scholar]
  56. Liang, Y.; Li, S.; Yan, C.; Li, M.; Jiang, C. Explaining the black-box model: A survey of local interpretation methods for deep neural networks. Neurocomputing 2021, 419, 168–182. [Google Scholar] [CrossRef]
  57. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should i trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–16 August 2016; pp. 1135–1144. [Google Scholar]
  58. Xu, X.; Li, C.; Lan, X.; Fan, X.; Lv, X.; Ye, X.; Wu, T. A lightweight and robust framework for circulating genetically abnormal cells (CACs) identification using 4-color fluorescence in situ hybridization (FISH) image and deep refined learning. J. Digit. Imaging 2023, 36, 1687–1700. [Google Scholar] [CrossRef]
  59. Garg, P.; Davenport, E.; Murugesan, G.; Wagner, B.; Whitlow, C.; Maldjian, J.; Montillo, A. Using convolutional neural networks to automatically detect eye-blink artifacts in magnetoencephalography without resorting to electrooculography. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017; Proceedings, Part III 20. Springer: Berlin/Heidelberg, Germany, 2017; pp. 374–381. [Google Scholar]
  60. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv 2013. [Google Scholar]
  61. Dubost, F.; Bortsova, G.; Adams, H.; Ikram, A.; Niessen, W.J.; Vernooij, M.; De Bruijne, M. Gp-unet: Lesion detection from weak labels with a 3d regression network. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Quebec City, QC, Canada, 11–13 September 2017; pp. 214–221. [Google Scholar]
  62. Lévy, D.; Jain, A. Breast mass classification from mammograms using deep convolutional neural networks. arXiv 2016. [Google Scholar]
  63. Rayan, J.C.; Reddy, N.; Kan, J.H.; Zhang, W.; Annapragada, A. Binomial classification of pediatric elbow fractures using a deep learning multiview approach emulating radiologist decision making. Radiol. Artif. Intell. 2019, 1, e180015. [Google Scholar] [CrossRef]
  64. Liefers, B.; González-Gonzalo, C.; Klaver, C.; van Ginneken, B.; Sánchez, C.I. Dense segmentation in selected dimensions: Application to retinal optical coherence tomography. In Proceedings of the International Conference on Medical Imaging with Deep Learning (MIDL), London, UK, 8–10 July 2019; PMLR. pp. 337–346. [Google Scholar]
  65. Springenberg, J.T.; Dosovitskiy, A.; Brox, T.; Riedmiller, M. Striving for simplicity: The all convolutional net. arXiv 2014. [Google Scholar]
  66. Böhle, M.; Eitel, F.; Weygandt, M.; Ritter, K. Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer’s disease classification. Front. Aging Neurosci. 2019, 11, 456892. [Google Scholar] [CrossRef]
  67. Dubost, F.; Yilmaz, P.; Adams, H.; Bortsova, G.; Ikram, M.A.; Niessen, W.; Vernooij, M.; de Bruijne, M. Enlarged perivascular spaces in brain MRI: Automated quantification in four regions. Neuroimage 2019, 185, 534–544. [Google Scholar] [CrossRef] [PubMed]
  68. Wang, X.; Liang, X.; Jiang, Z.; Nguchu, B.A.; Zhou, Y.; Wang, Y.; Wang, H.; Li, Y.; Zhu, Y.; Wu, F.; et al. Decoding and mapping task states of the human brain via deep learning. Hum. Brain Mapp. 2020, 41, 1505–1519. [Google Scholar] [CrossRef] [PubMed]
  69. Gessert, N.; Latus, S.; Abdelwahed, Y.S.; Leistner, D.M.; Lutz, M.; Schlaefer, A. Bioresorbable scaffold visualization in IVOCT images using CNNs and weakly supervised localization. In Proceedings of the Medical Imaging 2019: Image Processing, San Diego, CA, USA, 19–21 February 2019; SPIE: Bellingham, DC, USA, 2019; Volume 10949, pp. 606–612. [Google Scholar]
  70. Wickstrøm, K.; Kampffmeyer, M.; Jenssen, R. Uncertainty and interpretability in convolutional neural networks for semantic segmentation of colorectal polyps. Med. Image Anal. 2020, 60, 101619. [Google Scholar] [CrossRef] [PubMed]
  71. Jamaludin, A.; Kadir, T.; Zisserman, A. SpineNet: Automated classification and evidence visualization in spinal MRIs. Med. Image Anal. 2017, 41, 63–73. [Google Scholar] [CrossRef] [PubMed]
  72. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
  73. Lin, M.; Chen, Q.; Yan, S. Network in network. arXiv 2013. [Google Scholar]
  74. Feng, X.; Lipton, Z.C.; Yang, J.; Small, S.A.; Provenzano, F.A.; Initiative, A.D.N.; Initiative, F.L.D.N. Estimating brain age based on a uniform healthy population with deep learning and structural magnetic resonance imaging. Neurobiol. Aging 2020, 91, 15–25. [Google Scholar] [CrossRef]
  75. Zhao, G.; Zhou, B.; Wang, K.; Jiang, R.; Xu, M. Respond-CAM: Analyzing deep models for 3D imaging data by visualizations. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, 16–20 September 2018; Proceedings, Part I. Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 485–492. [Google Scholar]
  76. Oquab, M.; Bottou, L.; Laptev, I.; Sivic, J. Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1717–1724. [Google Scholar]
  77. Woerl, A.C.; Eckstein, M.; Geiger, J.; Wagner, D.C.; Daher, T.; Stenzel, P.; Fernandez, A.; Hartmann, A.; Wand, M.; Roth, W.; et al. Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides. Eur. Urol. 2020, 78, 256–264. [Google Scholar] [CrossRef]
  78. Ahmad, A.; Sarkar, S.; Shah, A.; Gore, S.; Santosh, V.; Saini, J.; Ingalhalikar, M. Predictive and discriminative localization of IDH genotype in high grade gliomas using deep convolutional neural nets. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 372–375. [Google Scholar]
  79. Shinde, S.; Prasad, S.; Saboo, Y.; Kaushick, R.; Saini, J.; Pal, P.K.; Ingalhalikar, M. Predictive markers for Parkinson’s disease using deep neural nets on neuromelanin sensitive MRI. Neuroimage Clin. 2019, 22, 101748. [Google Scholar] [CrossRef]
  80. Chakraborty, S.; Aich, S.; Kim, H.C. Detection of Parkinson’s disease from 3T T1 weighted MRI scans using 3D convolutional neural network. Diagnostics 2020, 10, 402. [Google Scholar] [CrossRef]
  81. Choi, H.; Kim, Y.K.; Yoon, E.J.; Lee, J.Y.; Lee, D.S.; Initiative, A.D.N. Cognitive signature of brain FDG PET based on deep learning: Domain transfer from Alzheimer’s disease to Parkinson’s disease. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 403–412. [Google Scholar] [CrossRef]
  82. Huang, Z.; Zhu, X.; Ding, M.; Zhang, X. Medical image classification using a light-weighted hybrid neural network based on PCANet and DenseNet. IEEE Access 2020, 8, 24697–24712. [Google Scholar] [CrossRef]
  83. Kim, C.; Kim, W.H.; Kim, H.J.; Kim, J. Weakly-supervised US breast tumor characterization and localization with a box convolution network. In Proceedings of the Medical Imaging 2020: Computer-Aided Diagnosis, Houston, TX, USA, 16–19 February 2020; SPIE: Bellingham, WA, USA, 2020; Volume 11314, pp. 298–304. [Google Scholar]
  84. Luo, L.; Chen, H.; Wang, X.; Dou, Q.; Lin, H.; Zhou, J.; Li, G.; Heng, P.A. Deep angular embedding and feature correlation attention for breast MRI cancer analysis. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Proceedings, Part IV 22. Springer: Berlin/Heidelberg, Germany, 2019; pp. 504–512. [Google Scholar]
  85. Yi, P.H.; Lin, A.; Wei, J.; Yu, A.C.; Sair, H.I.; Hui, F.K.; Hager, G.D.; Harvey, S.C. Deep-learning-based semantic labeling for 2D mammography and comparison of complexity for machine learning tasks. J. Digit. Imaging 2019, 32, 565–570. [Google Scholar] [CrossRef] [PubMed]
  86. Lee, J.; Nishikawa, R.M. Detecting mammographically occult cancer in women with dense breasts using deep convolutional neural network and Radon Cumulative Distribution Transform. J. Med. Imaging 2019, 6, 044502. [Google Scholar] [CrossRef] [PubMed]
  87. Qi, X.; Zhang, L.; Chen, Y.; Pi, Y.; Chen, Y.; Lv, Q.; Yi, Z. Automated diagnosis of breast ultrasonography images using deep neural networks. Med. Image Anal. 2019, 52, 185–198. [Google Scholar] [CrossRef]
  88. Xi, P.; Guan, H.; Shu, C.; Borgeat, L.; Goubran, R. An integrated approach for medical abnormality detection using deep patch convolutional neural networks. Vis. Comput. 2020, 36, 1869–1882. [Google Scholar] [CrossRef]
  89. Zhou, L.Q.; Wu, X.L.; Huang, S.Y.; Wu, G.G.; Ye, H.R.; Wei, Q.; Bao, L.Y.; Deng, Y.B.; Li, X.R.; Cui, X.W.; et al. Lymph node metastasis prediction from primary breast cancer US images using deep learning. Radiology 2020, 294, 19–28. [Google Scholar] [CrossRef]
  90. Dunnmon, J.A.; Yi, D.; Langlotz, C.P.; Ré, C.; Rubin, D.L.; Lungren, M.P. Assessment of convolutional neural networks for automated classification of chest radiographs. Radiology 2019, 290, 537–544. [Google Scholar] [CrossRef]
  91. Huang, Z.; Fu, D. Diagnose chest pathology in X-ray images by learning multi-attention convolutional neural network. In Proceedings of the 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 24–26 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 294–299. [Google Scholar]
  92. Khakzar, A.; Albarqouni, S.; Navab, N. Learning interpretable features via adversarially robust optimization. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Proceedings, Part VI 22. Springer: Berlin/Heidelberg, Germany, 2019; pp. 793–800. [Google Scholar]
  93. Kumar, D.; Sankar, V.; Clausi, D.; Taylor, G.W.; Wong, A. Sisc: End-to-end interpretable discovery radiomics-driven lung cancer prediction via stacked interpretable sequencing cells. IEEE Access 2019, 7, 145444–145454. [Google Scholar] [CrossRef]
  94. Lei, Y.; Tian, Y.; Shan, H.; Zhang, J.; Wang, G.; Kalra, M.K. Shape and margin-aware lung nodule classification in low-dose CT images via soft activation mapping. Med. Image Anal. 2020, 60, 101628. [Google Scholar] [CrossRef]
  95. Tang, Y.X.; Tang, Y.B.; Peng, Y.; Yan, K.; Bagheri, M.; Redd, B.A.; Brandon, C.J.; Lu, Z.; Han, M.; Xiao, J.; et al. Automated abnormality classification of chest radiographs using deep convolutional neural networks. NPJ Digit. Med. 2020, 3, 70. [Google Scholar] [CrossRef]
  96. Wang, K.; Zhang, X.; Huang, S. KGZNet: Knowledge-guided deep zoom neural networks for thoracic disease classification. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1396–1401. [Google Scholar]
  97. Yi, P.H.; Kim, T.K.; Yu, A.C.; Bennett, B.; Eng, J.; Lin, C.T. Can AI outperform a junior resident? Comparison of deep neural network to first-year radiology residents for identification of pneumothorax. Emerg. Radiol. 2020, 27, 367–375. [Google Scholar] [CrossRef] [PubMed]
  98. Liu, H.; Wang, L.; Nan, Y.; Jin, F.; Wang, Q.; Pu, J. SDFN: Segmentation-based deep fusion network for thoracic disease classification in chest X-ray images. Comput. Med. Imaging Graph. 2019, 75, 66–73. [Google Scholar] [CrossRef] [PubMed]
  99. Ahmad, M.; Kasukurthi, N.; Pande, H. Deep learning for weak supervision of diabetic retinopathy abnormalities. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 573–577. [Google Scholar]
  100. Liao, W.; Zou, B.; Zhao, R.; Chen, Y.; He, Z.; Zhou, M. Clinical interpretable deep learning model for glaucoma diagnosis. IEEE J. Biomed. Health Inform. 2019, 24, 1405–1412. [Google Scholar] [CrossRef] [PubMed]
  101. Perdomo, O.; Rios, H.; Rodríguez, F.J.; Otálora, S.; Meriaudeau, F.; Müller, H.; González, F.A. Classification of diabetes-related retinal diseases using a deep learning approach in optical coherence tomography. Comput. Methods Programs Biomed. 2019, 178, 181–189. [Google Scholar] [CrossRef]
  102. Shen, Y.; Sheng, B.; Fang, R.; Li, H.; Dai, L.; Stolte, S.; Qin, J.; Jia, W.; Shen, D. Domain-invariant interpretable fundus image quality assessment. Med. Image Anal. 2020, 61, 101654. [Google Scholar] [CrossRef]
  103. Wang, X.; Chen, H.; Ran, A.R.; Luo, L.; Chan, P.P.; Tham, C.C.; Chang, R.T.; Mannil, S.S.; Cheung, C.Y.; Heng, P.A. Towards multi-center glaucoma OCT image screening with semi-supervised joint structure and function multi-task learning. Med. Image Anal. 2020, 63, 101695. [Google Scholar] [CrossRef]
  104. Jiang, H.; Yang, K.; Gao, M.; Zhang, D.; Ma, H.; Qian, W. An interpretable ensemble deep learning model for diabetic retinopathy disease classification. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2045–2048. [Google Scholar]
  105. Tu, Z.; Gao, S.; Zhou, K.; Chen, X.; Fu, H.; Gu, Z.; Cheng, J.; Yu, Z.; Liu, J. SUNet: A lesion regularized model for simultaneous diabetic retinopathy and diabetic macular edema grading. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1378–1382. [Google Scholar]
  106. Kumar, D.; Taylor, G.W.; Wong, A. Discovery radiomics with CLEAR-DR: Interpretable computer aided diagnosis of diabetic retinopathy. IEEE Access 2019, 7, 25891–25896. [Google Scholar] [CrossRef]
  107. Liu, C.; Han, X.; Li, Z.; Ha, J.; Peng, G.; Meng, W.; He, M. A self-adaptive deep learning method for automated eye laterality detection based on color fundus photography. PLoS ONE 2019, 14, e0222025. [Google Scholar] [CrossRef]
  108. Narayanan, B.N.; Hardie, R.C.; De Silva, M.S.; Kueterman, N.K. Hybrid machine learning architecture for automated detection and grading of retinal images for diabetic retinopathy. J. Med. Imaging 2020, 7, 034501. [Google Scholar] [CrossRef]
  109. Everson, M.; Herrera, L.G.P.; Li, W.; Luengo, I.M.; Ahmad, O.; Banks, M.; Magee, C.; Alzoubaidi, D.; Hsu, H.; Graham, D.; et al. Artificial intelligence for the real-time classification of intrapapillary capillary loop patterns in the endoscopic diagnosis of early oesophageal squamous cell carcinoma: A proof-of-concept study. United Eur. Gastroenterol. J. 2019, 7, 297–306. [Google Scholar] [CrossRef]
  110. García-Peraza-Herrera, L.C.; Everson, M.; Lovat, L.; Wang, H.P.; Wang, W.L.; Haidry, R.; Stoyanov, D.; Ourselin, S.; Vercauteren, T. Intrapapillary capillary loop classification in magnification endoscopy: Open dataset and baseline methodology. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 651–659. [Google Scholar] [CrossRef] [PubMed]
  111. Wang, S.; Xing, Y.; Zhang, L.; Gao, H.; Zhang, H. Deep convolutional neural network for ulcer recognition in wireless capsule endoscopy: Experimental feasibility and optimization. Comput. Math. Methods Med. 2019, 2019, 7546215. [Google Scholar] [CrossRef] [PubMed]
  112. Yan, C.; Xu, J.; Xie, J.; Cai, C.; Lu, H. Prior-aware CNN with multi-task learning for colon images analysis. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 254–257. [Google Scholar]
  113. Heinemann, F.; Birk, G.; Stierstorfer, B. Deep learning enables pathologist-like scoring of NASH models. Sci. Rep. 2019, 9, 18454. [Google Scholar] [CrossRef] [PubMed]
  114. Kiani, A.; Uyumazturk, B.; Rajpurkar, P.; Wang, A.; Gao, R.; Jones, E.; Yu, Y.; Langlotz, C.P.; Ball, R.L.; Montine, T.J.; et al. Impact of a deep learning assistant on the histopathologic classification of liver cancer. NPJ Digit. Med. 2020, 3, 23. [Google Scholar] [CrossRef]
  115. Chang, G.H.; Felson, D.T.; Qiu, S.; Guermazi, A.; Capellini, T.D.; Kolachalama, V.B. Assessment of knee pain from MR imaging using a convolutional Siamese network. Eur. Radiol. 2020, 30, 3538–3548. [Google Scholar] [CrossRef]
  116. Yi, P.H.; Kim, T.K.; Wei, J.; Shin, J.; Hui, F.K.; Sair, H.I.; Hager, G.D.; Fritz, J. Automated semantic labeling of pediatric musculoskeletal radiographs using deep learning. Pediatr. Radiol. 2019, 49, 1066–1070. [Google Scholar] [CrossRef]
  117. Li, W.; Zhuang, J.; Wang, R.; Zhang, J.; Zheng, W.S. Fusing metadata and dermoscopy images for skin disease diagnosis. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1996–2000. [Google Scholar]
  118. Xie, Y.; Zhang, J.; Xia, Y.; Shen, C. A mutual bootstrapping model for automated skin lesion segmentation and classification. IEEE Trans. Med Imaging 2020, 39, 2482–2493. [Google Scholar] [CrossRef]
  119. Kim, Y.; Lee, K.J.; Sunwoo, L.; Choi, D.; Nam, C.M.; Cho, J.; Kim, J.; Bae, Y.J.; Yoo, R.E.; Choi, B.S.; et al. Deep learning in diagnosis of maxillary sinusitis using conventional radiography. Investig. Radiol. 2019, 54, 7–15. [Google Scholar] [CrossRef]
  120. Wang, L.; Zhang, L.; Zhu, M.; Qi, X.; Yi, Z. Automatic diagnosis for thyroid nodules in ultrasound images by deep neural networks. Med. Image Anal. 2020, 61, 101665. [Google Scholar] [CrossRef]
  121. Huang, Y.; Chung, A.C. Evidence localization for pathology images using weakly supervised learning. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Proceedings, Part I 22. Springer: Berlin/Heidelberg, Germany, 2019; pp. 613–621. [Google Scholar]
  122. Kim, I.; Rajaraman, S.; Antani, S. Visual interpretation of convolutional neural network predictions in classifying medical image modalities. Diagnostics 2019, 9, 38. [Google Scholar] [CrossRef]
  123. Tang, C. Discovering Unknown Diseases with Explainable Automated Medical Imaging. In Proceedings of the Medical Image Understanding and Analysis: 24th Annual Conference, MIUA 2020, Oxford, UK, 15–17 July 2020; Proceedings 24. Springer: Berlin/Heidelberg, Germany, 2020; pp. 346–358. [Google Scholar]
  124. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
  125. Hilbert, A.; Ramos, L.A.; van Os, H.J.; Olabarriaga, S.D.; Tolhuisen, M.L.; Wermer, M.J.; Marquering, H.A. Data-efficient deep learning of radiological image data for outcome prediction after endovascular treatment of patients with acute ischemic stroke. Comput. Biol. Med. 2019, 115, 103516. [Google Scholar] [CrossRef] [PubMed]
  126. Kim, B.H.; Ye, J.C. Understanding graph isomorphism network for rs-fMRI functional connectivity analysis. Front. Neurosci. 2020, 14, 630. [Google Scholar] [CrossRef] [PubMed]
  127. Liao, L.; Zhang, X.; Zhao, F.; Lou, J.; Wang, L.; Xu, X.; Li, G. Multi-branch deformable convolutional neural network with label distribution learning for fetal brain age prediction. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 424–427. [Google Scholar]
  128. Natekar, P.; Kori, A.; Krishnamurthi, G. Demystifying brain tumor segmentation networks: Interpretability and uncertainty analysis. Front. Comput. Neurosci. 2020, 14, 6. [Google Scholar] [CrossRef] [PubMed]
  129. Pereira, S.; Meier, R.; Alves, V.; Reyes, M.; Silva, C.A. Automatic brain tumor grading from MRI data using convolutional neural networks and quality assessment. In Proceedings of the Understanding and Interpreting Machine Learning in Medical Image Computing Applications: First International Workshops, MLCN 2018, DLF 2018, and iMIMIC 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 16–20 September 2018; Proceedings 1. Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 106–114. [Google Scholar]
  130. Pominova, M.; Artemov, A.; Sharaev, M.; Kondrateva, E.; Bernstein, A.; Burnaev, E. Voxelwise 3D convolutional and recurrent neural networks for epilepsy and depression diagnostics from structural and functional MRI data. In Proceedings of the 2018 IEEE International Conference on Data Mining Workshops (ICDMW), Singapore, 17–20 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 299–307. [Google Scholar]
  131. Xie, B.; Lei, T.; Wang, N.; Cai, H.; Xian, J.; He, M.; Xie, H. Computer-aided diagnosis for fetal brain ultrasound images using deep convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1303–1312. [Google Scholar] [CrossRef]
  132. El Adoui, M.; Drisis, S.; Benjelloun, M. Multi-input deep learning architecture for predicting breast tumor response to chemotherapy using quantitative MR images. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1491–1500. [Google Scholar] [CrossRef]
  133. Obikane, S.; Aoki, Y. Weakly supervised domain adaptation with point supervision in histopathological image segmentation. In Proceedings of the Pattern Recognition: ACPR 2019 Workshops, Auckland, New Zealand, 26 November 2019; Proceedings 5. Springer: Singapore, 2020; pp. 127–140. [Google Scholar]
  134. Candemir, S.; White, R.D.; Demirer, M.; Gupta, V.; Bigelow, M.T.; Prevedello, L.M.; Erdal, B.S. Automated coronary artery atherosclerosis detection and weakly supervised localization on coronary CT angiography with a deep 3-dimensional convolutional neural network. Comput. Med Imaging Graph. 2020, 83, 101721. [Google Scholar] [CrossRef]
  135. Cong, C.; Kato, Y.; Vasconcellos, H.D.; Lima, J.; Venkatesh, B. Automated stenosis detection and classification in X-ray angiography using deep neural network. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1301–1308. [Google Scholar]
  136. Huo, Y.; Terry, J.G.; Wang, J.; Nath, V.; Bermudez, C.; Bao, S.; Landman, B.A. Coronary calcium detection using 3D attention identical dual deep network based on weakly supervised learning. In Proceedings of the Medical Imaging 2019: Image Processing, San Diego, CA, USA, 19–21 February 2019; SPIE: Bellingham, WA, USA, 2019; Volume 10949, pp. 308–315. [Google Scholar]
  137. Patra, A.; Noble, J.A. Incremental learning of fetal heart anatomies using interpretable saliency maps. In Proceedings of the Medical Image Understanding and Analysis: 23rd Conference, MIUA 2019, Liverpool, UK, 24–26 July 2019; Proceedings 23. Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 129–141. [Google Scholar]
  138. Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. Explainable deep learning for pulmonary disease and coronavirus COVID-19 detection from X-rays. Comput. Methods Programs Biomed. 2020, 196, 105608. [Google Scholar] [CrossRef]
  139. Chen, B.; Li, J.; Lu, G.; Zhang, D. Lesion location attention guided network for multi-label thoracic disease classification in chest X-rays. IEEE J. Biomed. Health Inform. 2019, 24, 2016–2027. [Google Scholar] [CrossRef]
  140. He, J.; Shang, L.; Ji, H.; Zhang, X. Deep learning features for lung adenocarcinoma classification with tissue pathology images. In Proceedings of the Neural Information Processing: 24th International Conference, ICONIP 2017, Guangzhou, China, 14–18 November 2017; Proceedings, Part IV. Springer International Publishing: Berlin/Heidelberg, Germany, 2017; Volume 24, pp. 742–751. [Google Scholar]
  141. Hosny, A.; Parmar, C.; Coroller, T.P.; Grossmann, P.; Zeleznik, R.; Kumar, A.; Aerts, H.J. Deep learning for lung cancer prognostication: A retrospective multi-cohort radiomics study. PLoS Med. 2018, 15, e1002711. [Google Scholar] [CrossRef]
  142. Humphries, S.M.; Notary, A.M.; Centeno, J.P.; Strand, M.J.; Crapo, J.D.; Silverman, E.K.; For the Genetic Epidemiology of COPD (COPDGene) Investigators. Deep learning enables automatic classification of emphysema pattern at CT. Radiology 2020, 294, 434–444. [Google Scholar] [CrossRef]
  143. Ko, H.; Chung, H.; Kang, W.S.; Kim, K.W.; Shin, Y.; Kang, S.J.; Lee, J. COVID-19 pneumonia diagnosis using a simple 2D deep learning framework with a single chest CT image: Model development and validation. J. Med Internet Res. 2020, 22, e19569. [Google Scholar] [CrossRef] [PubMed]
  144. Mahmud, T.; Rahman, M.A.; Fattah, S.A. CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization. Comput. Biol. Med. 2020, 122, 103869. [Google Scholar] [CrossRef] [PubMed]
  145. Paul, R.; Schabath, M.; Gillies, R.; Hall, L.; Goldgof, D. Convolutional Neural Network ensembles for accurate lung nodule malignancy prediction 2 years in the future. Comput. Biol. Med. 2020, 122, 103882. [Google Scholar] [CrossRef]
  146. Philbrick, K.A.; Yoshida, K.; Inoue, D.; Akkus, Z.; Kline, T.L.; Weston, A.D.; Erickson, B.J. What does deep learning see? Insights from a classifier trained to predict contrast enhancement phase from CT images. Am. J. Roentgenol. 2018, 211, 1184–1193. [Google Scholar] [CrossRef] [PubMed]
  147. Qin, R.; Wang, Z.; Jiang, L.; Qiao, K.; Hai, J.; Chen, J.; Yan, B. Fine-Grained Lung Cancer Classification from PET and CT Images Based on Multidimensional Attention Mechanism. Complexity 2020, 2020, 6153657. [Google Scholar] [CrossRef]
  148. Teramoto, A.; Yamada, A.; Kiriyama, Y.; Tsukamoto, T.; Yan, K.; Zhang, L.; Fujita, H. Automated classification of benign and malignant cells from lung cytological images using deep convolutional neural network. Inform. Med. Unlocked 2019, 16, 100205. [Google Scholar] [CrossRef]
  149. Xu, R.; Cong, Z.; Ye, X.; Hirano, Y.; Kido, S.; Gyobu, T.; Tomiyama, N. Pulmonary textures classification via a multi-scale attention network. IEEE J. Biomed. Health Inform. 2019, 24, 2041–2052. [Google Scholar] [CrossRef]
  150. Vila-Blanco, N.; Carreira, M.J.; Varas-Quintana, P.; Balsa-Castro, C.; Tomas, I. Deep neural networks for chronological age estimation from OPG images. IEEE Trans. Med Imaging 2020, 39, 2374–2384. [Google Scholar] [CrossRef]
  151. Kim, M.; Han, J.C.; Hyun, S.H.; Janssens, O.; Van Hoecke, S.; Kee, C.; De Neve, W. Medinoid: Computer-aided diagnosis and localization of glaucoma using deep learning. Appl. Sci. 2019, 9, 3064. [Google Scholar] [CrossRef]
  152. Martins, J.; Cardoso, J.S.; Soares, F. Offline computer-aided diagnosis for Glaucoma detection using fundus images targeted at mobile devices. Comput. Methods Programs Biomed. 2020, 192, 105341. [Google Scholar] [CrossRef]
  153. Meng, Q.; Hashimoto, Y.; Satoh, S. How to extract more information with less burden: Fundus image classification and retinal disease localization with ophthalmologist intervention. IEEE J. Biomed. Health Inform. 2020, 24, 3351–3361. [Google Scholar] [CrossRef] [PubMed]
  154. Wang, R.; Fan, D.; Lv, B.; Wang, M.; Zhou, Q.; Lv, C.; Xie, G.; Wang, L. OCT image quality evaluation based on deep and shallow features fusion network. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1561–1564. [Google Scholar]
  155. Zhang, R.; Tan, S.; Wang, R.; Manivannan, S.; Chen, J.; Lin, H.; Zheng, W.S. Biomarker localization by combining CNN classifier and generative adversarial network. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Proceedings, Part I 22. Springer: Berlin/Heidelberg, Germany, 2019; pp. 209–217. [Google Scholar]
  156. Chen, X.; Lin, L.; Liang, D.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.H.; Chen, Y.W.; Tong, R.; Wu, J. A dual-attention dilated residual network for liver lesion classification and localization on CT images. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 235–239. [Google Scholar]
  157. Itoh, H.; Lu, Z.; Mori, Y.; Misawa, M.; Oda, M.; Kudo, S.e.; Mori, K. Visualising decision-reasoning regions in computer-aided pathological pattern diagnosis of endoscytoscopic images based on CNN weights analysis. In Proceedings of the Medical Imaging 2020: Computer-Aided Diagnosis, Houston, TX, USA, 16–19 February 2020; SPIE: Bellingham, WA, USA, 2020; Volume 11314, pp. 761–768. [Google Scholar]
  158. Korbar, B.; Olofson, A.M.; Miraflor, A.P.; Nicka, C.M.; Suriawinata, M.A.; Torresani, L.; Suriawinata, A.A.; Hassanpour, S. Looking under the hood: Deep neural network visualization to interpret whole-slide image analysis outcomes for colorectal polyps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 69–75. [Google Scholar]
  159. Kowsari, K.; Sali, R.; Ehsan, L.; Adorno, W.; Ali, A.; Moore, S.; Amadi, B.; Kelly, P.; Syed, S.; Brown, D. Hmic: Hierarchical medical image classification, a deep learning approach. Information 2020, 11, 318. [Google Scholar] [CrossRef] [PubMed]
  160. Wang, J.; Cui, Y.; Shi, G.; Zhao, J.; Yang, X.; Qiang, Y.; Du, Q.; Ma, Y.; Kazihise, N.G.F. Multi-branch cross attention model for prediction of KRAS mutation in rectal cancer with t2-weighted MRI. Appl. Intell. 2020, 50, 2352–2369. [Google Scholar] [CrossRef]
  161. Cheng, C.T.; Ho, T.Y.; Lee, T.Y.; Chang, C.C.; Chou, C.C.; Chen, C.C.; Chung, I.; Liao, C.H. Application of a deep learning algorithm for detection and visualization of hip fractures on plain pelvic radiographs. Eur. Radiol. 2019, 29, 5469–5477. [Google Scholar] [CrossRef] [PubMed]
  162. Gupta, V.; Demirer, M.; Bigelow, M.; Sarah, M.Y.; Joseph, S.Y.; Prevedello, L.M.; White, R.D.; Erdal, B.S. Using transfer learning and class activation maps supporting detection and localization of femoral fractures on anteroposterior radiographs. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1526–1529. [Google Scholar]
  163. Zhang, B.; Tan, J.; Cho, K.; Chang, G.; Deniz, C.M. Attention-based cnn for kl grade classification: Data from the osteoarthritis initiative. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 731–735. [Google Scholar]
  164. von Schacky, C.E.; Sohn, J.H.; Liu, F.; Ozhinsky, E.; Jungmann, P.M.; Nardo, L.; Posadzy, M.; Foreman, S.C.; Nevitt, M.C.; Link, T.M.; et al. Development and validation of a multitask deep learning model for severity grading of hip osteoarthritis features on radiographs. Radiology 2020, 295, 136–145. [Google Scholar] [CrossRef]
  165. Lee, J.H.; Ha, E.J.; Kim, D.; Jung, Y.J.; Heo, S.; Jang, Y.H.; An, S.H.; Lee, K. Application of deep learning to the diagnosis of cervical lymph node metastasis from thyroid cancer with CT: External validation and clinical utility for resident training. Eur. Radiol. 2020, 30, 3066–3072. [Google Scholar] [CrossRef]
  166. Langner, T.; Wikström, J.; Bjerner, T.; Ahlström, H.; Kullberg, J. Identifying morphological indicators of aging with neural networks on large-scale whole-body MRI. IEEE Trans. Med. Imaging 2019, 39, 1430–1437. [Google Scholar] [CrossRef]
  167. Li, C.; Yao, G.; Xu, X.; Yang, L.; Zhang, Y.; Wu, T.; Sun, J. DCSegNet: Deep learning framework based on divide-and-conquer method for liver segmentation. IEEE Access 2020, 8, 146838–146846. [Google Scholar] [CrossRef]
  168. Mohamed Musthafa, M.; Mahesh, T.R.; Vinoth Kumar, V.; Guluwadi, S. Enhancing brain tumor detection in MRI images through explainable AI using Grad-CAM with Resnet 50. BMC Med. Imaging 2024, 24, 107. [Google Scholar]
  169. Wang, C.W.; Khalil, M.A.; Lin, Y.J.; Lee, Y.C.; Chao, T.K. Detection of erbb2 and cen17 signals in fluorescent in situ hybridization and dual in situ hybridization for guiding breast cancer her2 target therapy. Artif. Intell. Med. 2023, 141, 102568. [Google Scholar] [CrossRef]
  170. Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.R.; Samek, W. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 2015, 10, e0130140. [Google Scholar] [CrossRef]
  171. Montavon, G.; Lapuschkin, S.; Binder, A.; Samek, W.; Müller, K.R. Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognit. 2017, 65, 211–222. [Google Scholar] [CrossRef]
  172. Samek, W.; Binder, A.; Montavon, G.; Lapuschkin, S.; Müller, K.R. Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2660–2673. [Google Scholar] [CrossRef] [PubMed]
  173. Kohlbrenner, M.; Bauer, A.; Nakajima, S.; Binder, A.; Samek, W.; Lapuschkin, S. Towards best practice in explaining neural network decisions with LRP. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–7. [Google Scholar]
  174. Arquilla, K.; Gajera, I.D.; Darling, M.; Bhati, D.; Singh, A.; Guercio, A. Exploring Fine-Grained Feature Analysis for Bird Species Classification using Layer-wise Relevance Propagation. In Proceedings of the 2024 IEEE World AI IoT Congress (AIIoT), Melbourne, Australia, 29–31 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 625–631. [Google Scholar]
  175. Eitel, F.; Soehler, E.; Bellmann-Strobl, J.; Brandt, A.U.; Ruprecht, K.; Giess, R.M.; Kuchling, J.; Asseyer, S.; Weygandt, M.; Haynes, J.D.; et al. Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation. Neuroimage Clin. 2019, 24, 102003. [Google Scholar] [CrossRef] [PubMed]
  176. Thomas, A.W.; Heekeren, H.R.; Müller, K.R.; Samek, W. Analyzing neuroimaging data through recurrent deep learning models. Front. Neurosci. 2019, 13, 1321. [Google Scholar] [CrossRef] [PubMed]
  177. Schlemper, J.; Oktay, O.; Schaap, M.; Heinrich, M.; Kainz, B.; Glocker, B.; Rueckert, D. Attention gated networks: Learning to leverage salient regions in medical images. Med. Image Anal. 2019, 53, 197–207. [Google Scholar] [CrossRef]
  178. Katar, O.; Yildirim, O. An explainable vision transformer model based white blood cells classification and localization. Diagnostics 2023, 13, 2459. [Google Scholar] [CrossRef]
  179. Jetley, S.; Lord, N.A.; Lee, N.; Torr, P.H. Learn to pay attention. arXiv 2018, arXiv:1804.02391. [Google Scholar]
  180. Li, S.; Dong, M.; Du, G.; Mu, X. Attention dense-u-net for automatic breast mass segmentation in digital mammogram. IEEE Access 2019, 7, 59037–59047. [Google Scholar] [CrossRef]
  181. Yan, Y.; Kawahara, J.; Hamarneh, G. Melanoma recognition via visual attention. In Proceedings of the Information Processing in Medical Imaging: 26th International Conference, IPMI 2019, Hong Kong, China, 2–7 June 2019; Proceedings 26. Springer: Berlin/Heidelberg, Germany, 2019; pp. 793–804. [Google Scholar]
  182. Górriz, M.; Antony, J.; McGuinness, K.; Giró-i Nieto, X.; O’Connor, N.E. Assessing knee OA severity with CNN attention-based end-to-end architectures. In Proceedings of the International Conference on Medical Imaging with Deep Learning, PMLR, London, UK, 8–10 July 2019; pp. 197–214. [Google Scholar]
  183. Xu, X.; Li, C.; Fan, X.; Lan, X.; Lu, X.; Ye, X.; Wu, T. Attention Mask R-CNN with edge refinement algorithm for identifying circulating genetically abnormal cells. Cytom. Part A 2023, 103, 227–239. [Google Scholar] [CrossRef]
  184. Bramlage, L.; Cortese, A. Generalized attention-weighted reinforcement learning. Neural Netw. 2022, 145, 10–21. [Google Scholar] [CrossRef] [PubMed]
  185. Chen, Q.; Shao, Q. Single image super-resolution based on trainable feature matching attention network. Pattern Recognit. 2024, 149, 110289. [Google Scholar] [CrossRef]
  186. Dubost, F.; Adams, H.; Yilmaz, P.; Bortsova, G.; van Tulder, G.; Ikram, M.A.; Niessen, W.; Vernooij, M.W.; de Bruijne, M. Weakly supervised object detection with 2D and 3D regression neural networks. Med. Image Anal. 2020, 65, 101767. [Google Scholar] [CrossRef] [PubMed]
  187. Lian, C.; Liu, M.; Wang, L.; Shen, D. End-to-end dementia status prediction from brain mri using multi-task weakly-supervised attention network. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Proceedings, Part IV 22. Springer: Berlin/Heidelberg, Germany, 2019; pp. 158–167. [Google Scholar]
  188. Wang, H.; Feng, J.; Zhang, Z.; Su, H.; Cui, L.; He, H.; Liu, L. Breast mass classification via deeply integrating the contextual information from multi-view data. Pattern Recognit. 2018, 80, 42–52. [Google Scholar] [CrossRef]
  189. Li, L.; Xu, M.; Liu, H.; Li, Y.; Wang, X.; Jiang, L.; Wang, Z.; Fan, X.; Wang, N. A large-scale database and a CNN model for attention-based glaucoma detection. IEEE Trans. Med Imaging 2019, 39, 413–424. [Google Scholar] [CrossRef]
  190. Yang, H.; Kim, J.Y.; Kim, H.; Adhikari, S.P. Guided soft attention network for classification of breast cancer histopathology images. IEEE Trans. Med Imaging 2019, 39, 1306–1315. [Google Scholar] [CrossRef]
  191. Pesce, E.; Withey, S.J.; Ypsilantis, P.P.; Bakewell, R.; Goh, V.; Montana, G. Learning to detect chest radiographs containing pulmonary lesions using visual attention networks. Med. Image Anal. 2019, 53, 26–38. [Google Scholar] [CrossRef]
  192. Singla, S.; Gong, M.; Ravanbakhsh, S.; Sciurba, F.; Poczos, B.; Batmanghelich, K.N. Subject2Vec: Generative-discriminative approach from a set of image patches to a vector. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, 16–20 September 2018; Proceedings, Part I. Springer: Berlin/Heidelberg, Germany, 2018; pp. 502–510. [Google Scholar]
  193. Sun, J.; Darbehani, F.; Zaidi, M.; Wang, B. Saunet: Shape attentive u-net for interpretable medical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, 4–8 October 2020; Proceedings, Part IV 23. Springer: Berlin/Heidelberg, Germany, 2020; pp. 797–806. [Google Scholar]
  194. Zhu, Z.; Ding, X.; Zhang, D.; Wang, L. Weakly-supervised balanced attention network for gastric pathology image localization and classification. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–4. [Google Scholar]
  195. Barata, C.; Celebi, M.E.; Marques, J.S. Explainable skin lesion diagnosis using taxonomies. Pattern Recognit. 2021, 110, 107413. [Google Scholar] [CrossRef]
  196. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  197. Srivastava, A.; Chandra, M.; Saha, A.; Saluja, S.; Bhati, D. Current Advances in Locality-Based and Feature-Based Transformers: A Review. In Proceedings of the International Conference on Data & Information Sciences, Edinburgh, UK, 11–13 August 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 321–335. [Google Scholar]
  198. Wu, J.; Mi, Q.; Zhang, Y.; Wu, T. SVTNet: Automatic bone age assessment network based on TW3 method and vision transformer. Int. J. Imaging Syst. Technol. 2024, 34, e22990. [Google Scholar] [CrossRef]
  199. Park, S.; Kim, G.; Oh, Y.; Seo, J.B.; Lee, S.M.; Kim, J.H.; Moon, S.; Lim, J.K.; Park, C.M.; Ye, J.C. Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation. Nat. Commun. 2022, 13, 3848. [Google Scholar] [CrossRef]
  200. Chen, J.; Frey, E.C.; He, Y.; Segars, W.P.; Li, Y.; Du, Y. Transmorph: Transformer for unsupervised medical image registration. Med. Image Anal. 2022, 82, 102615. [Google Scholar] [CrossRef]
  201. Gupte, S.R.; Hou, C.; Wu, G.H.; Galaz-Montoya, J.G.; Chiu, W.; Yeung-Levy, S. CryoViT: Efficient Segmentation of Cryogenic Electron Tomograms with Vision Foundation Models. bioRxiv 2024. [Google Scholar] [CrossRef]
  202. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar]
  203. Karimi, D.; Vasylechko, S.D.; Gholipour, A. Convolution-free medical image segmentation using transformers. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part I 24. Springer: Berlin/Heidelberg, Germany, 2021; pp. 78–88. [Google Scholar]
  204. Yun, B.; Wang, Y.; Chen, J.; Wang, H.; Shen, W.; Li, Q. Spectr: Spectral transformer for hyperspectral pathology image segmentation. arXiv 2021, arXiv:2103.03604. [Google Scholar] [CrossRef]
  205. Wenxuan, W.; Chen, C.; Meng, D.; Hong, Y.; Sen, Z.; Jiangyun, L. Transbts: Multimodal brain tumor segmentation using transformer. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 109–119. [Google Scholar]
  206. Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H.R.; Xu, D. Unetr: Transformers for 3d medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 574–584. [Google Scholar]
  207. Li, S.; Sui, X.; Luo, X.; Xu, X.; Liu, Y.; Goh, R. Medical image segmentation using squeeze-and-expansion transformers. arXiv 2021, arXiv:2105.09511. [Google Scholar]
  208. Zhang, Y.; Higashita, R.; Fu, H.; Xu, Y.; Zhang, Y.; Liu, H.; Zhang, J.; Liu, J. A multi-branch hybrid transformer network for corneal endothelial cell segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part I 24. Springer: Berlin/Heidelberg, Germany, 2021; pp. 99–108. [Google Scholar]
  209. Lin, A.; Chen, B.; Xu, J.; Zhang, Z.; Lu, G.; Zhang, D. Ds-transunet: Dual swin transformer u-net for medical image segmentation. IEEE Trans. Instrum. Meas. 2022, 71, 1–15. [Google Scholar] [CrossRef]
  210. Li, Y.; Cai, W.; Gao, Y.; Li, C.; Hu, X. More than encoder: Introducing transformer decoder to upsample. In Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Las Vegas, NV, USA, 6–8 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1597–1602. [Google Scholar]
  211. Xu, G.; Zhang, X.; He, X.; Wu, X. Levit-unet: Make faster encoders with transformer for medical image segmentation. In Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV), Xiamen, China, 13–15 October 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 42–53. [Google Scholar]
  212. Chang, Y.; Menghan, H.; Guangtao, Z.; Xiao-Ping, Z. Transclaw u-net: Claw u-net with transformers for medical image segmentation. arXiv 2021, arXiv:2107.05188. [Google Scholar]
  213. Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 205–218. [Google Scholar]
  214. Petit, O.; Thome, N.; Rambour, C.; Themyr, L.; Collins, T.; Soler, L. U-net transformer: Self and cross attention for medical image segmentation. In Proceedings of the Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, 27 September 2021; Proceedings 12. Springer: Berlin/Heidelberg, Germany, 2021; pp. 267–276. [Google Scholar]
  215. Xie, Y.; Zhang, J.; Shen, C.; Xia, Y. Cotr: Efficiently bridging cnn and transformer for 3d medical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part III 24. Springer: Berlin/Heidelberg, Germany, 2021; pp. 171–180. [Google Scholar]
  216. Gao, Y.; Zhou, M.; Metaxas, D.N. UTNet: A hybrid transformer architecture for medical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part III 24. Springer: Berlin/Heidelberg, Germany, 2021; pp. 61–71. [Google Scholar]
  217. Chen, B.; Liu, Y.; Zhang, Z.; Lu, G.; Kong, A.W.K. Transattunet: Multi-level attention-guided u-net with transformer for medical image segmentation. IEEE Trans. Emerg. Top. Comput. Intell. 2023. [Google Scholar] [CrossRef]
  218. Dong, B.; Wang, W.; Fan, D.P.; Li, J.; Fu, H.; Shao, L. Polyp-pvt: Polyp segmentation with pyramid vision transformers. arXiv 2021, arXiv:2108.06932. [Google Scholar] [CrossRef]
  219. Shen, Z.; Yang, H.; Zhang, Z.; Zheng, S. Automated kidney tumor segmentation with convolution and transformer network. In International Challenge on Kidney and Kidney Tumor Segmentation; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–12. [Google Scholar]
  220. Deng, K.; Meng, Y.; Gao, D.; Bridge, J.; Shen, Y.; Lip, G.; Zhao, Y.; Zheng, Y. Transbridge: A lightweight transformer for left ventricle segmentation in echocardiography. In Proceedings of the Simplifying Medical Ultrasound: Second International Workshop, ASMUS 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, 27 September 2021; Proceedings 2. Springer: Berlin/Heidelberg, Germany, 2021; pp. 63–72. [Google Scholar]
  221. Jia, Q.; Shu, H. Bitr-unet: A cnn-transformer combined network for mri brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Singapore, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 3–14. [Google Scholar]
  222. Hatamizadeh, A.; Nath, V.; Tang, Y.; Yang, D.; Roth, H.R.; Xu, D. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In Proceedings of the International MICCAI Brainlesion Workshop, Singapore, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 272–284. [Google Scholar]
  223. Li, Y.; Wang, S.; Wang, J.; Zeng, G.; Liu, W.; Zhang, Q.; Jin, Q.; Wang, Y. Gt u-net: A u-net like group transformer network for tooth root segmentation. In Proceedings of the Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, 27 September 2021; Proceedings 12. Springer: Berlin/Heidelberg, Germany, 2021; pp. 386–395. [Google Scholar]
  224. Gheflati, B.; Rivaz, H. Vision transformers for classification of breast ultrasound images. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, UK, 11–15 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 480–483. [Google Scholar]
  225. Zheng, Y.; Gindra, R.; Betke, M.; Beane, J.E.; Kolachalama, V.B. A deep learning based graph-transformer for whole slide image classification. medRxiv 2021. [Google Scholar] [CrossRef]
  226. Yu, S.; Ma, K.; Bi, Q.; Bian, C.; Ning, M.; He, N.; Li, Y.; Liu, H.; Zheng, Y. Mil-vt: Multiple instance learning enhanced vision transformer for fundus image classification. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part VIII 24. Springer: Berlin/Heidelberg, Germany, 2021; pp. 45–54. [Google Scholar]
  227. Sun, R.; Li, Y.; Zhang, T.; Mao, Z.; Wu, F.; Zhang, Y. Lesion-aware transformers for diabetic retinopathy grading. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10938–10947. [Google Scholar]
  228. Perera, S.; Adhikari, S.; Yilmaz, A. Pocformer: A lightweight transformer architecture for detection of covid-19 using point of care ultrasound. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 195–199. [Google Scholar]
  229. Park, S.; Kim, G.; Kim, J.; Kim, B.; Ye, J.C. Federated split vision transformer for COVID-19 CXR diagnosis using task-agnostic training. arXiv 2021, arXiv:2111.01338. [Google Scholar]
  230. Shome, D.; Kar, T.; Mohanty, S.N.; Tiwari, P.; Muhammad, K.; AlTameem, A.; Zhang, Y.; Saudagar, A.K.J. Covid-transformer: Interpretable COVID-19 detection using vision transformer for healthcare. Int. J. Environ. Res. Public Health 2021, 18, 11086. [Google Scholar] [CrossRef]
  231. Liu, C.; Yin, Q. Automatic diagnosis of covid-19 using a tailored transformer-like network. In Proceedings of the Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2021; Volume 2010, p. 012175. [Google Scholar]
  232. Park, S.; Kim, G.; Oh, Y.; Seo, J.B.; Lee, S.M.; Kim, J.H.; Moon, S.; Lim, J.K.; Ye, J.C. Vision transformer for COVID-19 cxr diagnosis using chest X-ray feature corpus. arXiv 2021, arXiv:2103.07055. [Google Scholar]
  233. Gao, X.; Qian, Y.; Gao, A. COVID-VIT: Classification of COVID-19 from CT chest images based on vision transformer models. arXiv 2021, arXiv:2107.01682. [Google Scholar]
  234. Mondal, A.K.; Bhattacharjee, A.; Singla, P.; Prathosh, A. xViTCOS: Explainable vision transformer based COVID-19 screening using radiography. IEEE J. Transl. Eng. Health Med. 2021, 10, 1–10. [Google Scholar] [CrossRef] [PubMed]
  235. Hsu, C.C.; Chen, G.L.; Wu, M.H. Visual transformer with statistical test for covid-19 classification. arXiv 2021, arXiv:2107.05334. [Google Scholar]
  236. Zhang, L.; Wen, Y. A transformer-based framework for automatic COVID19 diagnosis in chest CTs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 513–518. [Google Scholar]
  237. Ambita, A.A.E.; Boquio, E.N.V.; Naval, P.C., Jr. CoViT-GAN: Vision transformer for COVID-19 detection in CT scan images with self-attention GAN for data augmentation. In Proceedings of the International Conference on Artificial Neural Networks, Bratislava, Slovakia, 14 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 587–598. [Google Scholar]
  238. Zhang, Y.; Pan, X.; Li, C.; Wu, T. 3D liver and tumor segmentation with CNNs based on region and distance metrics. Appl. Sci. 2020, 10, 3794. [Google Scholar] [CrossRef]
  239. Azad, R.; Kazerouni, A.; Heidari, M.; Aghdam, E.K.; Molaei, A.; Jia, Y.; Jose, A.; Roy, R.; Merhof, D. Advances in medical image analysis with vision transformers: A comprehensive review. Med. Image Anal. 2023, 103000. [Google Scholar] [CrossRef]
  240. Li, Z.; Li, Y.; Li, Q.; Wang, P.; Guo, D.; Lu, L.; Jin, D.; Zhang, Y.; Hong, Q. Lvit: Language meets vision transformer in medical image segmentation. IEEE Trans. Med. Imaging 2023. [Google Scholar] [CrossRef]
Figure 1. Overview of deep learning models and techniques in medical imaging. The diagram illustrates the main categories covered in this survey: model types, approaches to understanding model structure and functionality, and interpretation and visualization techniques. It highlights specific methods such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), transformer-based architectures, autoencoders, Local Interpretable Model-agnostic Explanations (LIME), Integrated Gradients (IG), Gradient-weighted Class Activation Mapping (Grad-CAM), Layer-Wise Relevance Propagation (LRP), attention mechanisms, and Vision Transformers.
Figure 2. Timeline of XAI technique development in medical imaging applications.
Figure 3. Example uses of perturbation-based attribution methods for model interpretability, comparing several interpretation approaches for identifying congestive heart failure on chest X-rays (Seah et al., 2018) [54].
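For readers who want to reproduce this style of analysis, the sketch below illustrates occlusion-based perturbation in its simplest form: a patch of the input is masked and the resulting drop in the predicted class probability is recorded as a coarse importance map. It is a generic illustration, not the pipeline of the cited study; model, image, and target_class are assumed placeholders for a trained PyTorch classifier, a preprocessed 1 × C × H × W tensor, and the class index of interest.

    import torch

    def occlusion_sensitivity(model, image, target_class, patch=32, stride=16, fill=0.0):
        # image: preprocessed tensor of shape (1, C, H, W); model returns class logits.
        model.eval()
        with torch.no_grad():
            base = torch.softmax(model(image), dim=1)[0, target_class].item()
            _, _, H, W = image.shape
            rows = (H - patch) // stride + 1
            cols = (W - patch) // stride + 1
            heatmap = torch.zeros(rows, cols)
            for i, y in enumerate(range(0, H - patch + 1, stride)):
                for j, x in enumerate(range(0, W - patch + 1, stride)):
                    occluded = image.clone()
                    occluded[:, :, y:y + patch, x:x + patch] = fill  # mask one region
                    prob = torch.softmax(model(occluded), dim=1)[0, target_class].item()
                    heatmap[i, j] = base - prob  # large drop = important region
        return heatmap

Regions whose occlusion causes the largest probability drop are the ones the model relies on most; in practice the heatmap is upsampled and overlaid on the original image, as in Figure 3.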
Figure 4. Examples of gradient-based attribution methods for model interpretability. (A) Class maximization visualization of malignant and benign breast masses on mammograms [13]. (B) Integrated gradients visualizing evidence of diabetic retinopathy on retinal fundus images [46]. (C) Visualization of malignant and benign breast masses [13]. (D) Guided backpropagation applied to ultrasound images for fetal heartbeat localization [8]. (E) Differentiation between benign and malignant breast masses in mammograms [10]. (F) Grad-CAM visualizations identifying discriminative regions in magnetoencephalography images for detecting eye-blink artifacts [59].
Figure 4. Examples of gradient-based attribution methods for model interpretability. (A) Class maximization visualization of malignant and benign breast masses on mammograms [13]. (B) Integrated gradients visualizing evidence of diabetic retinopathy on retinal fundus images [46]. (C) Visualization of malignant and benign breast masses [13]. (D) Guided backpropagation applied to ultrasound images for fetal heartbeat localization [8]. (E) Differentiation between benign and malignant breast masses in mammograms [10]. (F) Grad-CAM visualizations identifying discriminative regions in magnetoencephalography images for detecting eye-blink artifacts [59].
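The gradient-based maps in Figure 4 can be approximated with a few lines of autograd code. The sketch below shows vanilla gradient saliency in the spirit of Simonyan et al. [60] and a basic integrated-gradients approximation; it is illustrative only, and model and image are assumed to be a trained differentiable classifier and a preprocessed input tensor of shape 1 × C × H × W.

    import torch

    def saliency_map(model, image, target_class):
        # Vanilla gradient saliency: |d score / d input|, max over channels.
        model.eval()
        image = image.clone().requires_grad_(True)
        model(image)[0, target_class].backward()
        return image.grad.abs().max(dim=1)[0].squeeze(0)  # (H, W) map

    def integrated_gradients(model, image, target_class, baseline=None, steps=50):
        # Riemann approximation of the path integral of gradients from a baseline
        # (here an all-zero image) to the actual input.
        model.eval()
        if baseline is None:
            baseline = torch.zeros_like(image)
        total = torch.zeros_like(image)
        for alpha in torch.linspace(0.0, 1.0, steps):
            point = (baseline + alpha * (image - baseline)).requires_grad_(True)
            model(point)[0, target_class].backward()
            total += point.grad
        return (image - baseline) * total / steps  # per-pixel attributions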
Figure 5. Example uses of decomposition-based and attention-based attribution methods for model interpretability: layer-wise relevance propagation for diagnosing multiple sclerosis on brain MRI (Eitel et al., 2019) [175], the Attention U-Net for organ segmentation in abdominal CT scans [177], and the regions on which an explainable Vision Transformer model correctly focuses its predictions on test images [178].
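To make the decomposition idea concrete, the following didactic sketch implements the epsilon-rule of layer-wise relevance propagation [170] for a small stack of fully connected ReLU layers. It is deliberately simplified (convolutional and pooling layers need analogous rules) and does not reproduce the pipelines of the studies shown in Figure 5; layers, x, and target_class are assumed inputs.

    import torch

    def lrp_epsilon(layers, x, target_class, eps=1e-6):
        # layers: list of torch.nn.Linear modules; ReLU is applied between them,
        # but not after the final (logit-producing) layer.
        activations = [x]
        for i, layer in enumerate(layers):
            x = layer(x)
            if i < len(layers) - 1:
                x = torch.relu(x)
            activations.append(x)
        # Start from the relevance of the target logit and propagate it backwards.
        relevance = torch.zeros_like(activations[-1])
        relevance[0, target_class] = activations[-1][0, target_class]
        for layer, a in zip(reversed(layers), reversed(activations[:-1])):
            z = layer(a) + eps            # stabilised pre-activations
            s = relevance / z             # share of relevance per output neuron
            c = s @ layer.weight          # redistribute through the weights
            relevance = a * c             # relevance assigned to the layer input
        return relevance                  # one relevance score per input feature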
Figure 6. Bubble chart providing a concise overview of imaging techniques categorized by anatomical context.
Table 1. Comparison of the main categories of visualization techniques.
Attributes | Perturbation | Gradient | Decomposition | Trainable Attention Models
Model Dependency | Model-agnostic | Differentiable | Model-specific | Model-specific
Access to Model Parameters | No | Yes | Yes | Yes
Computational Efficiency | Slower | Faster | Varies | Varies
Table 2. Overview of Various Studies Using Perturbation-Based Methods in Medical Imaging.
Domain | Task | Modality | Performance | Technique | Citation
Breast | Classification | MRI | N/A | IG | [45]
Eye | Classification | DR | Accuracy: 95.5% | IG | [46]
Multiple | Classification | DR | N/A | IG | [47]
Chest | Detection | X-ray | Accuracy: 94.9%, AUC: 97.4% | LIME | [48]
Gastrointestinal | Classification | Endoscopy | Accuracy: 97.9% | LIME | [49]
Brain | Segmentation, Detection | MRI | ICC: 93.0% | OS | [50]
Brain | Classification | MRI | Accuracy: 85.0% | OS | [51]
Breast | Detection, Classification | Histology | Accuracy: 55.0% | OS | [52]
Eye, Chest | Classification, Detection | OCT, X-ray | Eye Accuracy: 94.7%, Chest Accuracy: 92.8% | OS | [53]
Chest | Classification | X-ray | AUC: 82.0% | OS, IG, LIME | [54]
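Several entries in Table 2 use LIME. A typical usage pattern with the open-source lime package is sketched below; the predict_fn is a placeholder for an actual classifier, the random image stands in for a preprocessed scan, and keyword defaults may vary slightly between package versions.

    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    def predict_fn(batch):
        # Placeholder classifier: replace with your own model's predicted
        # probabilities of shape (N, num_classes).
        return np.tile([0.3, 0.7], (len(batch), 1))

    image = np.random.rand(224, 224, 3)  # stand-in for a preprocessed medical image

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(image, predict_fn,
                                             top_labels=1, num_samples=200)
    label = explanation.top_labels[0]
    masked, mask = explanation.get_image_and_mask(label, positive_only=True,
                                                  num_features=5, hide_rest=False)
    overlay = mark_boundaries(masked, mask)  # superpixel outline over the image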
Table 3. Performance metrics of various Medical Imaging tasks across different modalities using CAM.
Domain-Task | Modality | Performance | Citation
Bladder Classification | Histology | Mean Accuracy: 69.9% | [77]
Brain Classification | MRI | Accuracy: 86.7% | [78,79]
Brain Detection | MRI, PET, CT | Accuracy: 90.2–95.3%, F1: 91.6–94.3% | [80,81]
Breast Classification | X-ray, Ultrasound, MRI | Accuracy: 83.0–89.0% | [82,83,84,85]
Breast Detection | X-ray, Ultrasound | Mean AUC: 81.0%, AUC: Mt-Net 98.0%, Sn-Net 92.8%, Accuracy: 92.5% | [86,87,88,89]
Chest Classification | X-ray, CT | Accuracy: 97.8%, Average AUC: 75.5–96.0% | [90,91,92,93,94,95,96,97]
Chest Segmentation | X-ray | Accuracy: 95.8% | [98]
Eye Classification | Fundus Photography, OCT, CT | F1: 95.0%, Precision: 93.0%, AUC: 88.0–99.0% | [99,100,101,102,103]
Eye Detection | Fundus Photography | Accuracy: 73.2–99.1%, AUC: 99.0% | [104,105,106,107,108]
Gastrointestinal (GI) Classification | Endoscopy | Mean Accuracy: 93.2% | [109,110,111,112]
Liver Classification, Segmentation | Histology | Mean Accuracy: 87.5% | [113,114]
Musculoskeletal Classification | MRI, X-ray | Accuracy: 86.0%, AUC: 85.3% | [115,116]
Skin Classification, Segmentation | Dermatoscopy | Accuracy: 83.6%, F1: 82.7% | [117,118]
Skull Classification | X-ray | AUC: 88.0–93.0% | [119]
Thyroid Classification | Ultrasound | Accuracy: 87.3%, AUC: 90.1% | [120]
Lymph Node Classification, Detection | Histology | Accuracy: 91.9%, AUC: 97.0% | [121]
Various Classification | CT, MRI, Ultrasound, X-ray, Fundoscopy | F1: 98.0%, Accuracy: 98.0% | [122,123]
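The class activation maps summarized in Table 3 follow the construction of Zhou et al. [72]: for architectures that end in global average pooling and a single fully connected layer, the class-specific weights are projected back onto the last convolutional feature maps. A minimal sketch is shown below; feature_maps, fc_weights, and target_class are assumed inputs, not artifacts of any particular study.

    import torch
    import torch.nn.functional as F

    def class_activation_map(feature_maps, fc_weights, target_class, out_size=(224, 224)):
        # feature_maps: (1, K, h, w) output of the last convolutional layer
        # fc_weights:   (num_classes, K) weights of the final linear layer after GAP
        w = fc_weights[target_class]                           # (K,)
        cam = (w[:, None, None] * feature_maps[0]).sum(dim=0)  # weighted sum over channels
        cam = torch.relu(cam)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        return F.interpolate(cam[None, None], size=out_size,
                             mode="bilinear", align_corners=False)[0, 0]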
Table 4. Performance metrics of various medical imaging tasks across different modalities using Grad-CAM.
Domain-Task | Modality | Performance | Citation
Brain Classification | MRI | 81.6–94.2% accuracy | [125,126,127,128,129,130]
Brain Detection | Ultrasound | 94.2% accuracy | [131]
Breast Classification | MRI | 91.0% AUC | [132]
Breast Segmentation | Histology | 95.6% accuracy | [133]
Cardiovascular | CT, X-ray, Ultrasound | 81.2–92.7% accuracy, AUC (81.0–96.3%) | [134,135,136,137]
Chest Classification | X-ray, CT, Histology | 72.0–99.9% accuracy, AUC (70.0–97.9%) | [138,139,140,141,142,143,144,145,146,147,148,149]
Dental Classification | X-ray | 85.4% accuracy, 92.5% AUC | [150]
Eye Classification | Fundus, OCT | 81–97.5% accuracy, AUC (48.1–99.2%) | [151,152,153,154,155]
Gastrointestinal (GI) Classification | CT, Endoscopy, Histology, MRI | 86.9–93.7% accuracy | [156,157,158,159,160]
Musculoskeletal | X-ray | 74.8–96.3% accuracy | [161,162,163,164]
Thyroid Classification | CT | 82.8% accuracy, 88.4% AUC | [165]
Whole-Body Scans | MRI | R2 value of 83.0% | [166]
Liver Segmentation | CT scans | 96% accuracy (LiTS) | [167]
Brain Tumor Detection | MRI images | 98.52% accuracy | [168]
Breast Cancer | DISH and FISH images | 97% accuracy | [169]
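Grad-CAM [124], used by the studies in Table 4, generalizes CAM to arbitrary architectures by weighting the feature maps of a chosen convolutional layer with the global-average-pooled gradients of the class score. A minimal PyTorch sketch using forward and backward hooks is shown below; model, image, and target_layer are assumed placeholders for a trained classifier, a preprocessed input tensor, and the convolutional module to explain.

    import torch
    import torch.nn.functional as F

    def grad_cam(model, image, target_class, target_layer):
        acts, grads = [], []
        h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
        h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
        try:
            model.eval()
            model.zero_grad()
            model(image)[0, target_class].backward()
        finally:
            h1.remove()
            h2.remove()
        a = acts[0].detach()                        # (1, K, h, w) activations
        g = grads[0].detach()                       # (1, K, h, w) gradients
        weights = g.mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
        cam = torch.relu((weights * a).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
        return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]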
Table 5. Overview of Studies Using Trainable Attention Models in Medical Imaging.
Domain | Task | Modality | Performance | Citation
Brain | Detection | MRI | Accuracy: 76.5% | [186]
Brain | Detection, Classification | MRI | CC: 61.3–64.8%, RMSE: 1.503–5.701 | [187]
Breast | Classification | X-ray | Accuracy: 85.0%, AUC: 89.0% | [188]
Breast | Segmentation | Mammography | Accuracy: 78.4%, F1: 82.2% | [189]
Breast | Classification | Histology | Accuracy: 90.3%, AUC: 98.4% | [190]
Chest | Detection | X-ray | Accuracy: 73.0–84.0% | [191]
Chest | Classification | CT | Accuracy: 87.6% | [192]
Chest | Segmentation | MRI | Accuracy: 91.3% | [193]
Eye | Detection | Fundus Photography | Accuracy: 96.2%, AUC: 98.3% | [180]
Gastrointestinal (GI) | Classification | Histology | Accuracy: 88.4% | [194]
Skin | | Dermatoscopy | | [195]
Skin | Classification | Dermatoscopy | Average Precision: 67.2%, AUC: 88.3% | [181]
Female Reproductive System, Stomach | Classification, Segmentation | CT, Fetal Ultrasound | Ultrasound Classification: Accuracy 97.7–98.0%, F1 92.2–93.3%; CT Segmentation: Recall 75.1–83.5% | [177]
Skeletal (Joint) | Classification | X-ray | Accuracy: 64.3% | [182]
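Trainable (additive) attention gates of the kind summarized in Table 5 re-weight intermediate feature maps, so the learned attention coefficients can be visualized directly as saliency maps. The module below is a minimal sketch in the spirit of Attention U-Net; the channel sizes and the assumption that the gating signal has already been resampled to the skip connection's resolution are illustrative choices, not details of the cited architectures.

```python
# Minimal additive attention-gate sketch (Attention U-Net style).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weights skip-connection features x using a coarser gating signal g."""
    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # x: (B, C_x, H, W) skip features; g: (B, C_g, H, W) gating signal,
        # assumed already resampled to the same spatial size as x.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * attn, attn  # attn can be upsampled and overlaid on the image

# Usage sketch: gate = AttentionGate(64, 128, 32); out, attn_map = gate(skip, gating)
```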
Table 6. Overview of Studies Using Vision Transformers in Medical Imaging.
Domain | Task | Modality | Performance | Citation
Stomach | Segmentation | CT, MRI | Dice Score: 77.5%, Hausdorff Distance: 31.7% | [202]
Brain, Pancreas, Hippocampus | Segmentation | MRI, CT | Dice Scores: Brain 87.9%, Pancreas 83.6%, Hippocampus 88.1% | [203]
Bile Duct | Segmentation | Hyperspectral | Average Dice Score: 75.2% | [204]
Brain | Segmentation | MRI | Dice Scores: Enhancing Tumor 78.7%, Whole Tumor 90.1%, Tumor Core 81.7% | [205]
Brain, Spleen | Segmentation | MRI, CT | Dice Score: 89.1% | [206]
Eye, Rectal, Brain | Segmentation | Fundus, Colonoscopy, MRI | Average Dice Score: 91.7% | [207]
Eye | Segmentation | Pathology | Dice Score: 78.6%, F1: 82.1% | [208]
Multi-organ | Segmentation | Colonoscopy, Histology | Average Dice Score: 86.8% | [209]
Aorta, Gallbladder, Kidney, Liver, Pancreas, Spleen, Stomach | Segmentation | MRI, CT | Average Dice Score: 78.1–80.4% | [210,211,212,213,214,215]
Heart | Segmentation | MRI | Average Dice Score: 88.3% | [216]
Skin, Chest | Segmentation | X-ray, CT | Average Dice Scores: Skin 90.7%, Chest 86.6% | [217]
Rectal | Segmentation | Colonoscopy, Histology | Average Dice Score: 91.7% | [218]
Kidney | Segmentation | CT | Dice Score: 92.3% | [219]
Heart | Segmentation | Echocardiography | Dice Score: 91.4% | [220]
Brain | Segmentation | MRI | Dice Score: 91.3–93.5% | [221,222]
Teeth | Segmentation | X-ray | Dice Score: 92.5% | [223]
Breast | Classification | Ultrasound | Accuracy: 86.7%, AUC: 95.0% | [224]
Lung | Classification | Microscopy | Accuracy: 97.5% | [225]
Eye | Classification | Fundus | Accuracy: 95.9%, AUC: 96.3% | [226,227]
Chest | Classification | Ultrasound | Accuracy: 93.9% | [228]
Chest | Classification | X-ray | Average AUC: 93.1%; Accuracy: COVID 98.0%, Pneumonia 92.0% | [229,230,231,232]
Lung | Classification | CT | F1: 76.0% | [233]
Chest | Classification | X-ray, CT | Overall Accuracy: 87.2–98.1%, F1: 93.5% | [234,235,236,237]
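For Vision Transformers, the attention weights themselves are a common visualization target. The sketch below implements attention rollout, one widely used way of aggregating per-layer attention into a patch-relevance map; it assumes the model exposes one (batch, heads, tokens, tokens) attention matrix per layer and that token 0 is the [CLS] token, which holds for most ViT implementations but is not guaranteed for the specific models cited in Table 6.

```python
# Minimal attention-rollout sketch for visualizing ViT patch relevance.
import torch

def attention_rollout(attn_list, residual_alpha=0.5):
    """attn_list: per-layer attention tensors of shape (B, heads, N, N).
    Averages heads, mixes in the identity to account for residual connections,
    and multiplies the layers to trace information flow back to the patches."""
    rollout = None
    for attn in attn_list:
        a = attn.mean(dim=1)                               # average over heads -> (B, N, N)
        eye = torch.eye(a.size(-1), device=a.device)
        a = residual_alpha * a + (1.0 - residual_alpha) * eye
        a = a / a.sum(dim=-1, keepdim=True)                # re-normalize rows
        rollout = a if rollout is None else a @ rollout    # compose layer by layer
    # relevance of each image patch to the [CLS] token (token 0)
    return rollout[:, 0, 1:]
```

The returned vector can be reshaped to the patch grid and upsampled to the input resolution to obtain a heatmap comparable to CAM or Grad-CAM outputs.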
Table 7. Comparison of Visualization Techniques.
Visualization Technique | Task | Body Parts | Modality | Accuracy | Evaluation Metric
CAM | Image classification and localization | Brain, chest, abdomen | X-ray, MRI, CT scans | 85.0–95.0% | Accuracy for classification; IoU for localization tasks
Grad-CAM | Image classification and localization | Brain, chest, abdomen | X-ray, MRI, CT scans | 85.0–95.0% | Accuracy for classification; IoU for localization tasks
LRP | Segmentation, classification | Brain, liver, lungs | MRI, CT scans | 90.0% | Dice coefficient for segmentation accuracy
IG | Image classification | Breast, lung, spine | X-ray, MRI | 80.0–92.0% | AUC-ROC for classification
Attention-based | Image classification, object detection | Brain, chest | X-ray, MRI | 5.0% to 10.0% | Accuracy for classification; mAP for object detection
LIME | Local explanations for model predictions | N/A | N/A | N/A | Task-specific metrics
Gradient-based | Visualize feature importance | N/A | N/A | N/A | Feature importance metrics, SHAP values, Grad-CAM++
Vision Transformer | Dynamically attend to relevant features | Various body parts | Various modalities | N/A | Task-specific metrics
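Of the techniques compared in Table 7, Integrated Gradients (IG) is straightforward to express in a few lines. The sketch below approximates the underlying path integral with a simple Riemann sum; the zero-image baseline and the number of steps are illustrative choices, and `model` stands for any differentiable classifier rather than a particular cited system.

```python
# Minimal Integrated Gradients (IG) sketch.
import torch

def integrated_gradients(model, image, target_class, steps=50, baseline=None):
    """Averages gradients along a straight path from a baseline image to the
    input and scales by (input - baseline), giving per-pixel attributions."""
    model.eval()
    if baseline is None:
        baseline = torch.zeros_like(image)   # assumption: black image as baseline
    total_grads = torch.zeros_like(image)
    for alpha in torch.linspace(0.0, 1.0, steps):
        interp = (baseline + alpha * (image - baseline)).requires_grad_(True)
        score = model(interp)[0, target_class]
        grad, = torch.autograd.grad(score, interp)
        total_grads += grad
    return (image - baseline) * total_grads / steps
```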
Table 8. Techniques Organized by Task.
Task | Techniques | Application | Performance Metrics | Examples
Classification | CAM, Grad-CAM, Attention, ViTs | Disease diagnosis, organ identification | Accuracy, AUC-ROC, Precision, Recall | Disease diagnosis: high AUC for cancer detection (e.g., mammograms); organ identification: CAM for liver segmentation or brain MRI; ViTs: high accuracy in lung and breast cancer classification
Segmentation | LRP, IG, ViTs | Tumor segmentation, anatomical structure delineation | Dice Similarity Coefficient (DSC), Intersection over Union (IoU) | Tumor segmentation: accurate tumor boundary delineation; anatomical structures: IG for cardiac structures in CT scans; ViTs: high DSC scores in brain and stomach segmentation
Detection | Saliency maps, CAM, Attention, ViTs | Lesion detection, nodule localization | Mean Average Precision (mAP), Sensitivity, Specificity | Lesion detection: saliency maps for skin cancer detection; nodule localization: CAM for lung nodule detection in CT scans; ViTs: improved lesion detection across modalities
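Since Table 8 reports segmentation quality with DSC and IoU, the following sketch shows how both metrics are typically computed from binary masks; the epsilon smoothing term is an illustrative choice to avoid division by zero on empty masks.

```python
# Minimal sketch of the Dice Similarity Coefficient (DSC) and IoU on binary masks.
import torch

def dice_and_iou(pred_mask, true_mask, eps=1e-8):
    """pred_mask, true_mask: binary tensors of the same shape."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    dice = (2 * intersection + eps) / (pred.sum() + true.sum() + eps)
    iou = (intersection + eps) / (pred.sum() + true.sum() - intersection + eps)
    return dice.item(), iou.item()
```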