Article

Impact of H&E Stain Normalization on Deep Learning Models in Cancer Image Classification: Performance, Complexity, and Trade-Offs

Nuwan Madusanka, Pramudini Jayalath, Dileepa Fernando, Lasith Yasakethu and Byeong-Il Lee

1 Digital Healthcare Research Center, Pukyong National University, Busan 48513, Republic of Korea
2 Institute of Biochemistry, Faculty of Mathematics and Natural Science, University of Cologne, 50923 Cologne, Germany
3 School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
4 Department of Software Engineering, Sri Lanka Technological Campus (SLTC), Padukka 10500, Sri Lanka
5 Division of Smart Healthcare, College of Information Technology and Convergence, Pukyong National University, Busan 48513, Republic of Korea
6 Department of Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Busan 48513, Republic of Korea
* Author to whom correspondence should be addressed.
Cancers 2023, 15(16), 4144; https://doi.org/10.3390/cancers15164144
Submission received: 26 June 2023 / Revised: 28 July 2023 / Accepted: 2 August 2023 / Published: 17 August 2023
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Precision Oncology)

Simple Summary

This research study investigates the impact of stain normalization on deep learning models for cancer image classification by evaluating model performance, complexity, and trade-offs. The primary objective is to assess the improvement in accuracy, performance, and resource optimization of deep learning models through the standardization of visual appearance in histopathology images using stain normalization techniques, alongside batch size and image size optimization. The findings provide valuable insights for selecting appropriate deep learning models in achieving precise cancer classification, considering the effects of H&E stain normalization and computational resource availability. This study contributes to the existing knowledge on the performance, complexity, and trade-offs associated with applying deep learning models to cancer image classification tasks.

Abstract

Accurate classification of cancer images plays a crucial role in diagnosis and treatment planning. Deep learning (DL) models have shown promise in achieving high accuracy, but their performance can be influenced by variations in Hematoxylin and Eosin (H&E) staining techniques. In this study, we investigate the impact of H&E stain normalization on the performance of DL models in cancer image classification. We evaluate the performance of VGG19, VGG16, ResNet50, MobileNet, Xception, and InceptionV3 on a dataset of H&E-stained cancer images. Our findings reveal that while VGG16 exhibits strong performance, VGG19 and ResNet50 demonstrate limitations in this context. Notably, stain normalization techniques significantly improve the performance of less complex models such as MobileNet and Xception. These models emerge as competitive alternatives with lower computational complexity, reduced resource requirements, and high computational efficiency. The results highlight the importance of optimizing less complex models through stain normalization to achieve accurate and reliable cancer image classification. This research holds tremendous potential for advancing the development of computationally efficient cancer classification systems, ultimately benefiting cancer diagnosis and treatment.

1. Introduction

Histopathology image analysis plays a crucial role in cancer diagnosis and treatment. With the advancements in deep learning (DL) techniques, the use of DL models for cancer image classification has shown promising results [1]. However, the performance and reliability of these models heavily rely on the quality and consistency of input data. Histology images are commonly stained using Hematoxylin and Eosin (H&E) to enhance tissue contrast and aid in visual interpretation. However, variations in staining protocols and equipment can introduce visual inconsistencies among images, potentially affecting the performance of DL models [2,3].
Histology image stain normalization techniques have emerged as a means to address these visual inconsistencies by standardizing the appearance of images. By applying stain normalization methods, it is possible to remove or reduce staining variations and ensure a consistent visual representation of the underlying tissue structures. This normalization process holds the potential to improve the accuracy, reliability, and resource utilization of DL models for cancer image classification tasks.
In this research study, we investigate the impact of histology image stain normalization on the performance of DL models in cancer image classification [4,5]. We conduct a comprehensive analysis using Generative Adversarial Network (GAN)-based stain normalization and evaluate its impact on DL model performance, complexity, and trade-offs within the context of cancer classification tasks [6,7].
Furthermore, the study aims to explore the optimization of batch size and image size, which are important parameters in DL model training, to maximize the benefits of stain normalization in less complex models. By finding the optimal combination of these parameters, the study aims to enhance the overall performance of DL models in cancer image classification.

2. Materials and Methods

2.1. Dataset

This research utilizes two publicly available breast cancer datasets for training the GAN models and evaluating the performance of DL models in multiclass breast cancer classification. The following provides a description of the datasets used:
The CAMELYON16 Challenge Dataset was utilized to train the GAN models for stain normalization in two domains: Aperio and Hamamatsu. These two domains represent different imaging scanners commonly used in histopathology. This dataset consists of 400 whole-slide images (WSIs) of sentinel lymph nodes, obtained from two distinct datasets collected at Radboud University Medical Center (Nijmegen, The Netherlands) and the University Medical Center Utrecht (Utrecht, The Netherlands). The first training dataset consists of 170 WSIs of lymph nodes, with 100 of them being normal slides and 70 containing metastases [8]. Additionally, there is a second training dataset consisting of 100 WSIs, including 60 normal slides and 40 slides containing metastases. The test dataset consists of 130 WSIs collected from both universities. Figure 1 shows histopathology images from the same stained slide captured using the Aperio Scanscope XT and Hamamatsu Nanozoomer 2.0-HT scanners.
ICIAR 2018 Breast Cancer Histology (BACH) Grand Challenge Dataset: This dataset consists of 400 training and 100 test H&E-stained microscopy images with a resolution of 2048 × 1536 pixels. The images were scanned using a Leica DM 2000 LED microscope with a pixel resolution of 0.42 × 0.42 µm. Two expert pathologists labeled the images into four classes. While the labels of training images are available, the labels of test images are withheld [9]. This dataset exhibits significant color variability, making it suitable for color normalization tasks and evaluating the performance of automated cancer diagnostic systems. In this research, the dataset was used for performing multiclass classification of breast histopathology images, specifically classifying them into normal, benign, in situ, and invasive carcinoma classes. Figure 2 shows microscopy images labeled with the predominant cancer type present in each image. The images showcase different cancer types, providing valuable insights into the variations in staining patterns and characteristics within the dataset.

2.2. Data Preprocessing

The primary objective of data preprocessing is to convert the original whole-slide and microscopy images into manageable patch images of sizes 128 × 128, 256 × 256, and 512 × 512, which are suitable for subsequent tasks such as stain normalization and classification. The ICIAR 2018 dataset comprised 2048 × 1536 Tag Image File Format (TIFF) images [10].
For the ICIAR 2018 dataset, the preprocessing phase involved the generation of image patches. This process commenced by applying Otsu thresholding to remove the background from the images, effectively separating the foreground (tissue) from the background and improving the subsequent patch generation process. After the removal of the background, patches were generated at ×40 magnification, resulting in the creation of Portable Network Graphic (PNG) patch images, each with dimensions of 128 × 128, 256 × 256, and 512 × 512 pixels. The generation of multiple patch image sizes aimed to explore the impact of image size variation on subsequent tasks, including stain normalization and classification.
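As a concrete illustration of this step, the following is a minimal patch-extraction sketch in Python. The file names, the non-overlapping grid, and the 10% minimum-tissue threshold are illustrative assumptions; the paper does not specify these details.

```python
import numpy as np
from PIL import Image
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu

def extract_patches(image_path, patch_size, min_tissue_fraction=0.1):
    """Yield tissue-containing square patches from one microscopy image."""
    image = np.array(Image.open(image_path).convert("RGB"))
    gray = rgb2gray(image)
    # Otsu thresholding: tissue is darker than the bright background.
    tissue_mask = gray < threshold_otsu(gray)
    h, w = gray.shape
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            window = tissue_mask[y:y + patch_size, x:x + patch_size]
            if window.mean() >= min_tissue_fraction:  # skip background tiles
                yield image[y:y + patch_size, x:x + patch_size]

# Save PNG patches at the three sizes studied in the paper
# (the input and output file names here are hypothetical).
for size in (128, 256, 512):
    for i, patch in enumerate(extract_patches("b001.tif", patch_size=size)):
        Image.fromarray(patch).save(f"b001_{size}_{i:04d}.png")
```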

2.3. Stain Normalization

Stain normalization in histopathological images aims to standardize the appearance and address color inconsistencies caused by staining protocols, slide preparation techniques, and imaging conditions. This process adjusts the color properties of stained images to achieve uniformity across diverse samples. The evaluation of stain normalization techniques on the performance of DL models for cancer image classification is crucial in enhancing classification performance and developing efficient and accurate systems.
In this study, we employed three specific Generative Adversarial Networks (GANs), namely StainGAN, MultipathGAN, and CycleGAN for the purpose of stain normalization in histopathological images [11,12,13,14]. The utilization of GANs provides several benefits, including the ability to learn intricate mappings between staining protocols, generate realistic normalized images, and enhance the standardization process in cancer image analysis [15,16,17,18,19].
Figure 3 provides a visual representation of the stain normalization results using different GAN models. This visualization offers valuable insights into the impact of these models on enhancing image quality for histopathological analysis. It demonstrates the effectiveness of the GANs in normalizing stained images and improving their visual quality, thereby contributing to more accurate and reliable histopathological analysis.
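To make the normalization step concrete, the sketch below applies a trained generator to a patch. The saved-model file name is hypothetical, and the [-1, 1] scaling assumes a tanh-output generator, which is typical of CycleGAN-style models but is not stated in the paper.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Hypothetical file: a trained StainGAN-style generator saved as a Keras model.
generator = tf.keras.models.load_model("staingan_generator.h5", compile=False)

def normalize_patch(patch):
    """Map one RGB uint8 patch into the target stain domain."""
    x = patch.astype(np.float32) / 127.5 - 1.0           # scale to [-1, 1]
    y = generator.predict(x[np.newaxis], verbose=0)[0]   # one-image batch
    return ((y + 1.0) * 127.5).clip(0, 255).astype(np.uint8)

patch = np.array(Image.open("b001_256_0000.png"))
Image.fromarray(normalize_patch(patch)).save("b001_256_0000_norm.png")
```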

2.4. Generative Adversarial Networks (GANs) Performance Evaluation Metrics

In order to proceed with the performance evaluation of the DL models, we conducted an evaluation of stain normalization processes to select the most suitable stain normalization GAN. This evaluation encompassed assessing the performance of different stain normalization GANs and subsequently evaluating the quality of the generated images. To ensure a comprehensive evaluation, appropriate metrics were employed in this process as follows.
The Structural Similarity Index (SSIM) is a widely used metric for assessing the similarity between two images. It takes into account three components: luminance, contrast, and structure. The SSIM ranges between −1 and 1, where a value of 1 indicates a perfect match [20,21].
The equation for SSIM is as follows:
$SSIM = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$
In this equation, $\mu_x$ and $\mu_y$ are the means and $\sigma_x$ and $\sigma_y$ the standard deviations of the intensity values of the two images, respectively, and $\sigma_{xy}$ is the covariance between the two images' intensities. The constants $c_1$ and $c_2$ stabilize the division when the denominator is close to zero.
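In practice, the SSIM can be computed directly with scikit-image; the following is a sketch mirroring the definition above, not the authors' code, and the file names are placeholders.

```python
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

reference = np.array(Image.open("reference.png").convert("RGB"))
generated = np.array(Image.open("generated.png").convert("RGB"))

# channel_axis=2 computes SSIM per RGB channel and averages the result.
score = structural_similarity(reference, generated, channel_axis=2)
print(f"SSIM: {score:.3f}")  # 1.0 would indicate a perfect match
```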
The Fréchet Inception Distance (FID) is another metric used to evaluate the quality and diversity of generated images. It measures the similarity between the distribution of real images and the distribution of generated images in feature space, as captured by a pre-trained Inception model. A lower FID indicates better image quality and diversity [22]. Mathematically, the Fréchet Distance is used to compute the distance between two multivariate normal distributions. For a univariate normal distribution, the Fréchet Distance is given as
$d(X, Y) = (\mu_X - \mu_Y)^2 + (\sigma_X - \sigma_Y)^2$
where $\mu$ and $\sigma$ are the means and standard deviations of the two normal distributions $X$ and $Y$.
In the context of GAN evaluation, the FID utilizes feature distances calculated with a pre-trained Inception V3 model. The use of activations from the Inception V3 model to summarize each image gives the FID value.
The Fréchet Inception Distance for multivariate normal distributions is given by
$FID = \lVert \mu_X - \mu_Y \rVert^2 + \mathrm{Tr}\left(\Sigma_X + \Sigma_Y - 2\sqrt{\Sigma_X \Sigma_Y}\right)$
where $X$ and $Y$ are the real and generated embeddings (activations from the Inception model), assumed to follow two multivariate normal distributions, $\mu_X$ and $\mu_Y$ are their mean vectors, $\mathrm{Tr}$ is the trace of the matrix, and $\Sigma_X$ and $\Sigma_Y$ are the covariance matrices of the embeddings.
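A minimal FID sketch following this definition is shown below; it assumes the InceptionV3 pooling features for real and generated patches have already been extracted into two (N, 2048) NumPy arrays.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats, fake_feats):
    """Fréchet distance between Gaussians fitted to two feature sets."""
    mu_x, mu_y = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_x = np.cov(real_feats, rowvar=False)
    cov_y = np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny numerical imaginary parts
    return np.sum((mu_x - mu_y) ** 2) + np.trace(cov_x + cov_y - 2.0 * covmean)
```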
The Inception Score (IS) is a metric used to evaluate the quality and diversity of generated images. It measures how well the generated images fool a pre-trained Inception model. A higher IS indicates better image quality and diversity [23].
The equation for IS is as follows:
$IS(G) = \exp\left(\mathbb{E}_{x \sim p_g}\, D_{KL}\left(p(y \mid x) \,\|\, p(y)\right)\right)$
where $x \sim p_g$ indicates that $x$ is an image sampled from $p_g$, $D_{KL}(p \,\|\, q)$ is the KL-divergence between the distributions $p$ and $q$, $p(y \mid x)$ is the conditional class distribution, and $p(y) = \int_x p(y \mid x)\, p_g(x)\, dx$ is the marginal class distribution.
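The IS can likewise be computed from the Inception class probabilities; the sketch below assumes `probs` is an (N, 1000) array of InceptionV3 softmax outputs for the generated patches.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """exp of the mean KL-divergence between p(y|x) and the marginal p(y)."""
    p_y = probs.mean(axis=0, keepdims=True)                 # marginal p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))  # KL(p(y|x) || p(y))
    return float(np.exp(kl.sum(axis=1).mean()))
```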
Additionally, other commonly used metrics for evaluating stain normalization techniques include Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). MSE quantifies the average squared difference between pixel values of the generated and reference images, indicating improved stain normalization with lower MSE values. PSNR measures the ratio between the maximum possible image power and noise power, providing insights into image quality.
$RMSE = \sqrt{\frac{1}{n_x n_y} \sum_{i,j}^{n_x, n_y} \left(r(i,j) - t(i,j)\right)^2}$
where $r$ and $t$ denote the reference and generated images of size $n_x \times n_y$.
The PSNR computes the peak signal-to-noise ratio, in decibels, between two images, and this ratio is used as a quality measure between the generated image and a target image. The higher the PSNR, the better the quality of the generated image.
$PSNR = 20 \log_{10}\left(\frac{R}{RMSE}\right)$
where $R$ is the maximum possible pixel intensity.
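Both measures follow directly from the definitions above; a short NumPy sketch:

```python
import numpy as np

def rmse(reference, generated):
    """Root mean squared error between two images of equal shape."""
    diff = reference.astype(np.float64) - generated.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(reference, generated, peak=255.0):
    """Peak signal-to-noise ratio in decibels for 8-bit images."""
    return 20.0 * np.log10(peak / rmse(reference, generated))
```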

2.5. Image Classification

After generating stain-normalized images using stain normalization GANs, the subsequent step involves performing image classification. In this section, the focus lies in employing deep learning (DL) models to classify stained histopathological cancer images into distinct categories, including benign, in situ, invasive, and normal. The objective is to evaluate the efficacy of the stain normalization techniques in enhancing the classification accuracy of DL models [24,25]. By assessing the performance of DL models on the stain-normalized images, the study aims to determine the impact of stain normalization on the accuracy and reliability of cancer image classification.
The ICIAR 2018 Breast Cancer Histology dataset is used for the image classification process. This dataset consists of stain-normalized images that have undergone the previously explained stain normalization techniques. The dataset is divided into training, validation, and testing sets to ensure proper evaluation of the models' performance, and the same split of images is used for training, validation, and testing across the different DL models [26,27,28]. Six DL models, ranging from less complex to highly complex, are employed for image classification: MobileNet, XceptionNet, InceptionV3, ResNet50, VGG16, and VGG19, chosen based on their proven performance in image classification tasks and their compatibility with the stained histopathological images [29,30,31,32]. Table 1 provides an overview of the DL models used for cancer image classification, including their model size, parameter count, and depth.
In training the DL models, stain-unnormalized and stain-normalized image patches were used separately. Both datasets consisted of image patches of three distinct sizes: 128 × 128, 256 × 256, and 512 × 512. This selection enabled an examination of how different image sizes affected the performance of the DL models on each dataset.
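A sketch of how these classifiers can be instantiated at each patch size is given below, using the standard keras.applications backbones with a four-class softmax head. The global-average-pooling head, optimizer, and loss are illustrative assumptions; the paper does not describe its exact training configuration (and Table 5 suggests the VGG variants retained input-size-dependent dense layers).

```python
import tensorflow as tf
from tensorflow.keras import layers, applications

BACKBONES = {
    "MobileNet": applications.MobileNet,
    "Xception": applications.Xception,
    "InceptionV3": applications.InceptionV3,
    "ResNet50": applications.ResNet50,
    "VGG16": applications.VGG16,
    "VGG19": applications.VGG19,
}

def build_classifier(name, image_size, num_classes=4):
    """Backbone without its ImageNet top, plus a 4-class softmax head."""
    base = BACKBONES[name](include_top=False, weights="imagenet",
                           input_shape=(image_size, image_size, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier("VGG16", image_size=512)
```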
By incorporating both stain-unnormalized and stain-normalized datasets, the intention was to compare the performance of the DL models on stain-unnormalized images with their performance on images that underwent stain normalization. Through this analysis, the effectiveness of stain normalization in enhancing the models’ classification performance could be assessed.
Furthermore, the utilization of varying image patch sizes and batch sizes allowed for an evaluation of the impact of image resolution on the performance, efficiency, resource utilization, and trade-offs of the DL models. This facilitated the identification of the optimal image size and batch size that yielded the most favorable classification results. By systematically exploring different image resolutions, the study aimed to determine the resolution that strikes the best balance between accuracy and computational efficiency, providing insights into the optimal image size for the classification task.
Figure 4 illustrates the workflow of stain-normalized image classification using a variety of deep learning models. The figure depicts the sequential steps involved in the classification process, highlighting the key stages and interactions between different components.
The evaluation of the image classification results involves analyzing metrics such as accuracy, precision, recall, and F1-score. These metrics provide quantitative measures of the models’ performance in correctly classifying the stained histopathological images into their respective categories. Additionally, the performance of the image classification models is compared with and without the application of stain normalization techniques to assess the impact of the normalization process on classification accuracy.
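These metrics can be obtained with scikit-learn, as sketched below; `model` is the classifier from the previous sketch, and the one-hot test labels and weighted averaging are assumptions, since the paper does not state its averaging scheme.

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_pred = model.predict(x_test).argmax(axis=1)  # x_test: held-out patches
y_true = y_test.argmax(axis=1)                 # assuming one-hot labels

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="weighted"))
print("Recall   :", recall_score(y_true, y_pred, average="weighted"))
print("F1-score :", f1_score(y_true, y_pred, average="weighted"))
```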

3. Results and Discussion

3.1. GAN Models Selection

GANs’ performance evaluation metrics collectively provide quantitative measures to assess the quality, diversity, and visual fidelity of stain-normalized images. By considering both perceptual and statistical aspects, these metrics contribute to a comprehensive assessment of the generated image quality, enabling informed decision-making in stain normalization research. Table 2 presents a comprehensive summary of the evaluation of various stain normalization methods, using a range of quantitative metrics.
Figure 5 represents the variation of evaluation metrics (SSIM, FID, and IS) with respect to the iteration number for GANs evaluation.
These graphs offer a visual depiction of the dynamic changes in metric values throughout the training process, providing valuable insights into the performance and convergence of the GANs. By observing the trends and fluctuations in the evaluation metrics, researchers can assess the progress and effectiveness of the GAN models and make informed decisions regarding their training and optimization.
The evaluation metrics used in this study provide valuable insights into the performance of each stain normalization method. Through comprehensive analysis, the results clearly indicate that StainGAN exhibits superior performance compared to other stain normalization GANs. The evaluation metrics highlight the effectiveness of StainGAN in achieving accurate and consistent stain normalization, making it a promising choice for enhancing image quality and standardization in histopathological analysis.

3.2. Deep Learning Model Performance in Cancer Classification

In the image classification phase, various DL models, including MobileNet, XceptionNet, InceptionV3, ResNet50, VGG16, and VGG19, were employed to classify the stain-normalized histopathological images into different cancer categories. The models' performance was evaluated at the different image sizes using metrics such as accuracy, precision, recall, and F1-score [8,23,33,34].
Table 3 summarizes the classification performance of different DL models on the dataset with varying image sizes (128 × 128, 256 × 256, and 512 × 512) without stain normalization. The primary objective of this table is to demonstrate the models’ effectiveness in cancer classification when stain normalization is not employed.
When analyzing the performance metrics, it is evident that VGG16 consistently achieved the highest accuracy scores across all image sizes. Specifically, it attained an accuracy of 68.22% on the 128 × 128 image size, 71.59% on the 256 × 256 image size, and 76.80% on the 512 × 512 image size. Similarly, among the models examined, Xception demonstrated strong performance, particularly excelling on the 256 × 256 and 512 × 512 image sizes with accuracies of 68.92% and 75.25%, respectively.
In contrast, ResNet50 exhibited comparatively lower accuracy scores across all image sizes, suggesting a relatively weaker performance on this stain-unnormalized dataset. These findings emphasize the influence of image size on the models' classification performance. Notably, larger image resolutions, such as the 512 × 512 size, tend to yield higher accuracy scores, potentially augmenting the models' ability to discern subtle patterns and features within the histopathological images.
Table 4 summarizes the performance of different DL models on the dataset with varying image sizes (128 × 128, 256 × 256, and 512 × 512) with stain normalization. The primary objective of this table is to demonstrate the models’ effectiveness in cancer classification when stain normalization is employed.
The analysis of the results from the provided Table 4 reveals significant insights into the performance of DL models trained on stain-normalized datasets at different image sizes. Across all image sizes, VGG16 consistently achieved the highest accuracy scores on the stain-normalized dataset. It obtained accuracy values of 74.24% for the 128 × 128 image size, 85.29% for the 256 × 256 image size, and 88.64% for the 512 × 512 image size. These results demonstrate the robustness of VGG16 in accurately classifying cancer samples when stain normalization is applied. The XceptionNet exhibited strong performance on the stain-normalized dataset, particularly excelling on the 256 × 256 and 512 × 512 image sizes with accuracies of 82.92% and 85.13%, respectively. This suggests that XceptionNet is effective in capturing relevant features and patterns in histopathological images, even after stain normalization.
In contrast, ResNet50 showed relatively lower accuracy scores across all image sizes on the stain-normalized dataset, indicating its comparatively weaker performance in this context. This suggests that ResNet50 might struggle to fully exploit the benefits of stain normalization for improving classification accuracy. Examining precision, recall, and F1-score, VGG16 consistently achieved high scores across all image sizes on the stain-normalized dataset. This demonstrates that VGG16 not only achieved high accuracy but also exhibited a good balance between true positives and false positives, resulting in high precision and recall values.
The results emphasize the influence of image size on the models’ performance, when stain normalization is applied. Larger image resolutions, such as the 512 × 512 size, tend to yield higher accuracy scores, indicating the potential enhancement in capturing subtle patterns and features within stain-normalized histopathological images.
The analysis of the results presented in Table 3 and Table 4 unveiled a notable enhancement in the classification accuracy of the DL models upon the application of stain normalization techniques. The incorporation of stain-normalized images, which were generated using stain normalization GANs, ensured a consistent visual depiction of tissue structures across diverse samples. This consistency, in terms of color and intensity, played a crucial role in enabling the DL models to extract more meaningful features. Consequently, the models exhibited improved accuracy in cancer classification tasks. These findings emphasize the effectiveness of stain normalization in standardizing the image data and enhancing the models’ capacity to discern relevant patterns and structures, thus leading to more accurate classification outcomes.

3.3. Computational Complexity Analysis

The computational complexity analysis aimed to investigate the impact of input image sizes and batch sizes on the resource utilization of DL models used in cancer image classification. The analysis involved a comprehensive comparison of various performance metrics, including the number of parameters and image size in the DL models, processing speed in relation to both image size and batch size, FLOPs (floating-point operations) relative to image size, and the correlation of image size and batch size with GPU usage. This evaluation encompassed diverse DL models, each employing different input image sizes and batch sizes.
Table 5 provides information on different models, including their respective image sizes, number of parameters, and FLOPs (floating-point operations) measured in millions. The FLOPs served as a measure of computational complexity, providing insights into the computational demands of the models.
Through meticulous analysis of these performance metrics, this investigation yielded invaluable insights into the trade-offs, complexities, and resource requirements associated with DL models deployed in breast cancer image classification tasks that encompass diverse input image sizes and batch sizes.
The experiments were conducted using a computer system with specific specifications to ensure efficient execution of the DL experiments. The computer system used for these experiments was equipped with an Intel Core i5 processor running at 3.5 GHz, 64 GB of RAM, and a high-performance NVIDIA GeForce RTX 4090 graphics card.
The choice of this computer system was driven by the need for substantial computational power to handle the large-scale DL tasks involved in training and evaluating the models. The inclusion of the NVIDIA GeForce RTX 4090 graphics card ensured accelerated training and inference processes, leveraging the card’s parallel computing capabilities.
Figure 6 illustrates the relationship between the number of FLOPs and the number of parameters in different DL models. The FLOPs metric provides insights into the computational complexity of the models, reflecting the number of arithmetic operations required for processing the input data.
In Figure 6, the marker size indicates the input image size, and the plot shows how the number of FLOPs changes as the number of parameters varies across different models. Each point on the graph represents a specific model configuration, with the x-axis denoting the number of parameters and the y-axis representing the corresponding number of FLOPs.
In the case of the models MobileNet, Xception, InceptionV3, and ResNet50, the number of training parameters remained constant regardless of the input image size. However, the number of floating-point operations (FLOPs) performed during model inference varied based on the input image size. This means that as the size of the input image increased, the computational workload in terms of FLOPs also increased. This insight is valuable for optimizing computational efficiency and resource allocation when utilizing these models, as it allows for better understanding of the computational requirements associated with different input sizes.
To evaluate the effect of increased image sizes on classification performance, the accuracy of each model was measured using the test dataset. Additionally, the processing speed, quantified as the number of images processed per second (IPS), was examined to identify disparities in computational efficiency. The findings of this investigation are summarized in Table 6, offering a comprehensive overview of diverse DL models. The table presents relevant information such as image sizes, the number of images processed per second, and the corresponding batch sizes. By conducting a comparative assessment of processing speeds across different image sizes and batch sizes, valuable insights were obtained concerning the computational efficiency and capability of each model to handle varying workloads.
The results presented in Table 3 and Table 4 provide evidence that increasing the image size has a positive impact on cancer classification performance of the DL models. The larger image sizes allow for capturing more detailed information, leading to improved accuracy in classification. However, it is important to consider the potential challenges associated with increasing image size, such as computational complexity and increased training and inference time.
To further investigate the impact on processing speed, we examined the relationship between batch size and image size on DL model performance. We explored how varying these parameters influenced the computational demands of the models during training and inference. By analyzing the processing speed, we aimed to identify an optimal balance between image size and batch size that maximizes both classification accuracy and computational efficiency.
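A throughput measurement of this kind can be sketched as follows; the warm-up call, the random input batch, and the 50-batch timing window are illustrative choices rather than the authors' protocol.

```python
import time
import numpy as np

def images_per_second(model, image_size, batch_size, n_batches=50):
    """Time batched inference and report throughput in images per second."""
    batch = np.random.rand(batch_size, image_size, image_size, 3).astype("float32")
    model.predict(batch, verbose=0)  # warm-up excludes graph-tracing cost
    start = time.perf_counter()
    for _ in range(n_batches):
        model.predict(batch, verbose=0)
    elapsed = time.perf_counter() - start
    return n_batches * batch_size / elapsed

print(images_per_second(model, image_size=512, batch_size=8))
```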
Table 6 provides information about the processing speed of different DL models measured in IPS for various image sizes (128 × 128, 256 × 256, 512 × 512) and batch sizes (1, 2, 4, 8, 16, 32, 64, 128, 256).
Figure 7 illustrates the relationship between IPS and batch size in various DL models as the image size changes. The IPS metric serves as a valuable indicator of the computational efficiency of the models, quantifying the number of images processed within a second across different image sizes and batch sizes.
Analyzing the results, it is evident that increasing the image size generally leads to a decrease in processing speed across all models. This is expected since larger images contain more pixels and require more computational resources, resulting in a reduced number of images processed per second.
Furthermore, Table 6 shows that the batch size also influences the processing speed. Generally, as the batch size increases, the processing speed improves, indicating better utilization of parallel processing capabilities. However, there is a diminishing return in speed improvements beyond a certain batch size, as the models may experience limitations in memory or computational capacity.
Considering specific models, it can be observed that MobileNet generally achieves the highest processing speeds compared to the other models, particularly at larger image sizes and batch sizes. InceptionV3 also demonstrates consistently fast processing speeds across different image sizes and batch sizes.
This relationship provides meaningful insights into the models’ ability to handle larger workloads and deliver faster processing speeds, which are crucial considerations for optimizing DL model performance in real-world applications.
Furthermore, our analysis included an examination of memory utilization to investigate the impact of larger input image sizes and varying batch sizes on the utilization of Graphics Processing Unit (GPU) memory. Table 7 provides insights into GPU memory utilization in DL models, highlighting the impact of image and batch sizes on resource demands.
Among the DL models analyzed, MobileNet consistently demonstrates lower GPU memory requirements compared to others, irrespective of image size or batch size. Xception and InceptionV3 also exhibit relatively low GPU memory usage, with InceptionV3 occasionally showing slightly higher requirements than Xception. ResNet50 generally demands more GPU memory than MobileNet, Xception, and InceptionV3. Notably, VGG16 and VGG19 exhibit the highest GPU memory usage across the analyzed models, even with smaller image sizes and batch sizes. These findings emphasize the importance of considering GPU memory limitations when selecting a DL model, as higher memory requirements may restrict the feasible batch size or image size. Employing optimization techniques such as model pruning can be beneficial in reducing the GPU memory footprint.
A subset of data corresponding to specific batch sizes is absent in the 256 × 256 and 512 × 512 image resolutions for certain DL models. This discrepancy arises from the inherent limitations of computer GPU memory. The constrained memory capacity of the GPU hindered the feasibility of processing and storing the entire dataset for these batch sizes at the aforementioned higher image resolutions. Consequently, the absence of data points in the experimental results directly stems from these hardware limitations.
Figure 8 illustrates the relationship between GPU memory utilization and batch size in different DL models as the image size changes. The GPU memory utilization metric serves as a valuable indicator of the computational resource demands of the models, quantifying the amount of GPU memory used during the training of DL models for different image sizes and batch sizes.
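One hedged way to collect such numbers is TensorFlow's experimental memory-statistics API (available from TF 2.5 on a visible GPU); whether the authors used this API or an external tool such as nvidia-smi is not stated, and the training data variables here are assumed.

```python
import tensorflow as tf

# Reset the peak-memory counter, run one training epoch, then read the peak.
tf.config.experimental.reset_memory_stats("GPU:0")
model.fit(x_train, y_train, batch_size=16, epochs=1, verbose=0)  # assumed data
info = tf.config.experimental.get_memory_info("GPU:0")
print(f"Peak GPU memory: {info['peak'] / 2**20:.0f} MiB")
```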
The absence of data for certain batch sizes of DL models in the 256 × 256 and 512 × 512 image sizes is attributed to the inability to complete the processing task due to insufficient GPU memory. These larger image sizes necessitate a substantial amount of memory for processing, and when the allocated GPU memory falls short, the task cannot be executed successfully. This limitation arises from the inherent physical limitations of the GPU memory capacity, which imposes restrictions on the sizes of images that can be processed. As a result, data collection and analysis for batch sizes in these image sizes were not possible due to the impracticality of storing and manipulating the necessary data within the available memory resources. This underscores the critical importance of effective memory resource management and considering the hardware limitations when working with DL models that involve large image sizes and batch sizes. Employing memory optimization techniques and adopting memory-efficient architectures can help alleviate these constraints and enable the processing of larger image sizes within the constraints of the available GPU memory.
These insights played a crucial role in identifying potential challenges and limitations related to GPU memory capacity, providing invaluable guidance for optimizing the selection of image sizes and batch sizes. The ultimate goal of these optimization efforts was to maximize the effective utilization of available computational resources and ensure the smooth execution of the DL models. This approach aimed to strike a balance between resource efficiency and computational performance, allowing for efficient utilization of the GPU and facilitating optimal model performance.

4. Discussion

The findings of our study highlight the effectiveness of DL models for cancer classification and the importance of stain normalization techniques in improving their performance. Our evaluation revealed that VGG16 emerged as the best model for cancer classification, thanks to its deep architecture and large number of parameters. However, it is worth noting that other models such as MobileNet and Xception can also deliver competitive performance when stain normalization techniques are properly applied.
Stain normalization plays a critical role in addressing the challenge of staining variations. These stain variations can introduce image appearance discrepancies and hinder accurate classification. By applying H&E stain normalization on histopathology images, these variations can be mitigated, allowing the models to focus on relevant features and patterns. The significance of stain normalization is particularly pronounced for less complex models such as MobileNet, Xception, and Inception, as their fewer parameters may limit their ability to capture subtle differences caused by staining variations.
Our results indicate that stain normalization greatly impacts the performance of these less complex models, enabling them to achieve improved accuracy and efficiency in stain-normalized cancer image classification tasks. This finding emphasizes the need to consider both the model architecture and the implementation of stain normalization techniques when developing cancer classification models. By incorporating stain normalization into the workflow, researchers and developers can enhance the performance of less complex models and achieve results comparable to more complex models such as VGG16.
Furthermore, the efficiency and computational complexity of MobileNet and Xception make them attractive candidates for practical applications. These models offer a good balance between performance and resource requirements, making them suitable for deployment in real-world scenarios where computational resources may be limited.
Although our evaluation showcased the impressive performance of various DL models for cancer classification, it is important to acknowledge the limitations and failures of certain models. In our study, the InceptionV3, ResNet50, and VGG19 models exhibited subpar performance compared to the other models, highlighting their limitations in accurately classifying cancer images.
VGG19, an extension of the VGG16 model, features a deeper architecture with 19 layers. While the increased depth allows for more complex representations to be learned, it also introduces challenges such as vanishing gradients and increased computational requirements. In our study, we found that despite its increased depth, VGG19 did not yield superior performance compared to VGG16. This suggests that the additional layers may not have provided substantial benefits for cancer classification, potentially due to diminishing returns or the risk of overfitting the data.
ResNet50, on the other hand, is a popular residual network architecture that addresses the vanishing gradient problem by introducing residual connections. These connections enable the gradient to flow directly from earlier layers to later layers, facilitating the training of deeper networks. However, in our experiments, ResNet50 did not perform as well as some of the other models, including VGG16. This could be attributed to several factors, such as the complexity of cancer images and the specific dataset used in our study. It is possible that ResNet50’s architecture may not have been optimized for capturing the relevant features and patterns in stain-normalized cancer images.
The low performance of complex DL models such as InceptionV3 and ResNet50 in our study highlights the importance of model selection and the need for careful consideration when choosing an appropriate architecture for stain-normalized cancer image classification. It is crucial to assess the suitability of a model's architecture, complexity, efficiency, and ability to capture relevant features within the context of the specific dataset and task at hand. While InceptionV3 and ResNet50 may have demonstrated strong performance in other domains or with different datasets, their performance limitations in our study underscore the need for thorough evaluation and model selection in the context of stain-normalized cancer classification.

5. Conclusions

In conclusion, our study highlights the significant impact of H&E stain normalization on the selection of DL models for cancer image classification. While VGG16 exhibited strong performance, InceptionV3 and ResNet50 faced limitations in this context. Notably, stain normalization techniques greatly enhanced the performance of less complex models such as MobileNet, Xception, and Inception. These models emerged as competitive alternatives with lower computational complexity, improved computational efficiency, and reduced resource requirements. The findings underscore the importance of optimizing less complex models through stain normalization, achieving accurate and reliable cancer image classification while striking a balance between performance, complexity, efficiency, and trade-offs. This research holds tremendous potential for advancing the development of computationally efficient cancer classification systems.

Author Contributions

Conceptualization, N.M. and B.-I.L.; methodology, N.M.; software, L.Y.; validation, D.F. and L.Y.; formal analysis, N.M., P.J. and D.F.; investigation, B.-I.L.; resources, B.-I.L.; data curation, P.J.; writing—original draft preparation, N.M.; writing—review and editing, N.M. and B.-I.L.; visualization, P.J. and L.Y.; supervision, B.-I.L.; project administration, B.-I.L.; funding acquisition, B.-I.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The publicly shared datasets are available at https://camelyon17.grand-challenge.org/ (accessed on 22 February 2023), https://iciar2018-challenge.grand-challenge.org/Dataset/ (accessed on 16 March 2023). The codes are available from the author (N.M.; [email protected]) upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Campanella, G.; Hanna, M.G.; Geneslaw, L.; Miraflor, A.; Silva, V.W.K.; Busam, K.J.; Brogi, E.; Reuter, V.E.; Klimstra, D.S.; Fuchs, T.J. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat. Med. 2019, 25, 1301–1309. [Google Scholar] [CrossRef]
  2. Madabhushi, A.; Lee, G. Image analysis and machine learning in digital pathology: Challenges and opportunities. Med. Image Anal. 2016, 33, 170–175. [Google Scholar] [CrossRef] [PubMed]
  3. Gurcan, M.N.; Boucheron, L.E.; Can, A.; Madabhushi, A.; Rajpoot, N.M.; Yener, B. Histopathological Image Analysis: A Review. IEEE Rev. Biomed. Eng. 2009, 2, 147–171. [Google Scholar] [CrossRef]
  4. Colling, R.; Pitman, H.; Oien, K.; Rajpoot, N. Artificial intelligence in digital pathology: A roadmap to routine use in clinical practice. J. Pathol. Inform. 2018, 9, 1. [Google Scholar] [CrossRef] [PubMed]
  5. Hartman, D.J.; Pantanowitz, L.; McHugh, J.S. Artificial intelligence in pathology: Challenges and opportunities. J. Pathol. Inform. 2019, 10, 16. [Google Scholar]
  6. Lee, J.-S.; Ma, Y.-X. Stain Style Transfer for Histological Images Using S3CGAN. Sensors 2022, 22, 1044. [Google Scholar] [CrossRef]
  7. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A Survey on Deep Learning in Medical Image Analysis. Medical Image Analysis; Elsevier: Amsterdam, The Netherlands, 2017; pp. 60–88. [Google Scholar] [CrossRef]
  8. Ehteshami, B.; Veta, M.; van Diest, P.J.; van Ginneken, B.; Karssemeijer, N.; Litjens, G.; van der Laak, J.A.W.M.; The CAMELYON16 Consortium. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer. JAMA 2017, 318, 2199–2210. [Google Scholar] [CrossRef]
  9. Aresta, G.; Araújo, T.; Kwok, S.; Chennamsetty, S.S.; Safwan, M.; Alex, V.; Marami, B.; Prastawa, M.; Chan, M.; Donovan, M.; et al. BACH: Grand challenge on breast cancer histology images. Med. Image Anal. 2019, 56, 122–136. [Google Scholar] [CrossRef]
  10. Kothari, S.; Phan, J.H.; Stokes, T.H.; Wang, M.D. Pathology imaging informatics for quantitative analysis of whole-slide images. J. Am. Med. Inform. Assoc. 2013, 20, 1099–1108. [Google Scholar] [CrossRef]
  11. Tarek Shaban, M.; Baur, C.; Nava, N.; Albarqouni, S. StainGAN: Stain style transfer for digital histopathology images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2020; pp. 7948–7957. [Google Scholar]
  12. Runz, M.; Rusche, D.; Schmidt, S.; Weihrauch, M.R.; Hesser, J.; Weis, C.-A. Normalization of HE-stained histological images using cycle consistent generative adversarial networks. Diagn. Pathol. 2021, 16, 71. [Google Scholar] [CrossRef]
  13. Wang, S.; Yang, D.M.; Ruan, S.; Zhang, H. MultipathGAN: Towards multi-path generative adversarial networks for stain style transfer. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2018; pp. 191–199. [Google Scholar]
  14. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1125–1134. [Google Scholar]
  15. Shen, Y.; Luo, Y.; Shen, D.; Ke, J. RandStainNA: Learning Stain-Agnostic Features from Histology Slides by Bridging Stain Augmentation and Normalization. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2022; pp. 212–221. [Google Scholar] [CrossRef]
  16. Deng, S.; Zhang, X.; Yan, W.; Chang, E.I.-C.; Fan, Y.; Lai, M.; Xu, Y. Deep learning in digital pathology image analysis: A survey. Front. Med. 2020, 14, 470–487. [Google Scholar] [CrossRef] [PubMed]
  17. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. A dataset for breast cancer histopathological image classification. IEEE Trans. Biomed. Eng. 2016, 63, 1455–1462. [Google Scholar] [CrossRef]
  18. Altini, N.; Marvulli, T.M.; Zito, F.A.; Caputo, M.; Tommasi, S.; Azzariti, A.; Brunetti, A.; Prencipe, B.; Mattioli, E.; De Summa, S.; et al. The role of unpaired image-to-image translation for stain color normalization in colorectal cancer histology classification. Comput. Methods Programs Biomed. 2023, 234, 107511. [Google Scholar] [CrossRef]
  19. Salvi, M.; Molinari, F.; Acharya, U.R.; Molinaro, L.; Meiburger, K.M. Impact of stain normalization and patch selection on the performance of convolutional neural networks in histological breast and prostate cancer classification. Comput. Methods Programs Biomed. Update 2021, 1, 100004. [Google Scholar] [CrossRef]
  20. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  21. Treder, M.S.; Codrai, R.; Tsvetanov, K.A. Quality assessment of anatomical MRI images from generative adversarial networks: Human assessment and image quality metrics. J. Neurosci. Methods 2022, 374, 109579. [Google Scholar] [CrossRef] [PubMed]
  22. Ronquillo, N.; Harguess, J. On Evaluating Video-based Generative Adversarial Networks (GANs). In Proceedings of the 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 9–11 October 2018; pp. 1–7. [Google Scholar]
  23. Malhotra, R.; Sharma, K.; Kumar, K.; Rath, N. Integrating SSIM in GANs to Generate High-Quality Brain MRI Images. In Data Engineering and Communication Technology; Reddy, K.A., Devi, B.R., George, B., Raju, K.S., Eds.; Springer: Singapore, 2021; pp. 419–426. [Google Scholar]
  24. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437. [Google Scholar] [CrossRef]
  25. Hameed, Z.; Garcia-Zapirain, B.; Aguirre, J.J.; Isaza-Ruget, M.A. Multiclass classification of breast cancer histopathology images using multilevel features of deep convolutional neural network. Sci. Rep. 2022, 12, 15600. [Google Scholar] [CrossRef] [PubMed]
  26. Kumaraswamy, E.; Kumar, S.; Sharma, M. An Invasive Ductal Carcinomas Breast Cancer Grade Classification Using an Ensemble of Convolutional Neural Networks. Diagnostics 2023, 13, 1977. [Google Scholar] [CrossRef]
  27. Kundale, J.; Dhage, S. Classification of Breast Cancer using Histology images: Handcrafted and Pre-Trained Features Based Approach. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1074, 012008. [Google Scholar] [CrossRef]
  28. Munien, C.; Viriri, S. Classification of Hematoxylin and Eosin-Stained Breast Cancer Histology Microscopy Images Using Transfer Learning with EfficientNets. Comput. Intell. Neurosci. 2021, 2021, 1–17. [Google Scholar] [CrossRef]
  29. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  30. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef]
  31. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  32. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  33. Srinidhi, C.L.; Ciga, O.; Martel, A.L. Deep neural network models for computational histopathology: A survey. Med. Image Anal. 2020, 67, 101813. [Google Scholar] [CrossRef] [PubMed]
  34. Bejnordi, B.E.; Litjens, G.; Timofeeva, N.; Otte-Holler, I.; Homeyer, A.; Karssemeijer, N.; van der Laak, J.A. Stain Specific Standardization of Whole-Slide Histopathological Images. IEEE Trans. Med. Imaging 2015, 35, 404–415. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) Image acquired using Aperio Scanscope XT scanner. (b) Image acquired using Hamamatsu Nanozoomer 2.0-HT scanner. Scale bar, 16.95 µm.
Figure 2. Microscopy images labeled with the predominant cancer type in each image: (a) Normal, (b) benign, (c) in situ carcinoma, and (d) invasive carcinoma. Scale bar, 164 µm.
Figure 3. CycleGAN, MultipathGAN, and StainGAN normalized sample images from the dataset, labeled with the predominant cancer type in each image. (a) Benign, (b) in situ carcinoma, (c) invasive carcinoma, and (d) normal. Scale bar, 31.5 µm.
Figure 4. Workflow of stain-normalized image classification using diverse deep learning models. Scale bar, 11,698.35 µm.
Figure 5. Variation of evaluation metrics (SSIM, FID, and IS) with respect to the iteration number for GANs evaluation. (a) Variation of SSIM values; (b) variation of FID values; (c) variation of IS score.
Figure 6. Relationship between model complexity and computational efficiency in deep learning: FLOPs versus parameters.
Figure 7. Relationship between model computational efficiency and batch size in different input image sizes in deep learning models: (a) Image size 128 × 128, (b) image size 256 × 256, and (c) image size 512 × 512.
Figure 8. Relationship between GPU memory utilization and batch size in different input image sizes in deep learning models: (a) image size 128 × 128, (b) image size 256 × 256, and (c) image size 512 × 512.
Table 1. Deep learning models used for cancer image classification.

Model | Size (MB) | Number of Parameters | Model Depth
MobileNet | 16 | 4.3 M | 55
MobileNetV2 | 14 | 3.5 M | 105
XceptionNet | 88 | 22.9 M | 81
InceptionV3 | 92 | 23.9 M | 189
ResNet50 | 98 | 25.6 M | 107
VGG16 | 528 | 138.4 M | 16
VGG19 | 549 | 143.7 M | 19
Table 2. Comparative evaluation of stain normalization methods based on image quality metrics.

Stain Normalization GANs | SSIM | FID | IS | PSNR | RMSE
StainGAN | 0.868 ± 0.024 | 23.417 ± 3.135 | 1.549 ± 0.064 | 15.805 ± 0.862 | 42.604 ± 4.671
MultipathGAN | 0.854 ± 0.034 | 29.135 ± 5.434 | 1.505 ± 0.102 | 15.548 ± 2.031 | 45.186 ± 9.409
CycleGAN | 0.844 ± 0.036 | 29.291 ± 5.156 | 1.519 ± 0.114 | 15.357 ± 1.883 | 44.426 ± 8.519
Table 3. Classification performance of deep learning models on the dataset without stain normalization and with different image sizes.

Image Size | Deep Learning Model | Accuracy | Precision | Recall | F1-Score
128 × 128 | MobileNet | 0.6269 | 0.6487 | 0.6269 | 0.6220
128 × 128 | Xception | 0.6267 | 0.6287 | 0.6248 | 0.6254
128 × 128 | InceptionV3 | 0.6121 | 0.6280 | 0.6122 | 0.6112
128 × 128 | ResNet50 | 0.5657 | 0.5725 | 0.5658 | 0.5664
128 × 128 | VGG16 | 0.6822 | 0.7003 | 0.6822 | 0.6838
128 × 128 | VGG19 | 0.5784 | 0.5832 | 0.5784 | 0.5787
256 × 256 | MobileNet | 0.6527 | 0.7113 | 0.6560 | 0.6429
256 × 256 | Xception | 0.6892 | 0.6889 | 0.6892 | 0.6881
256 × 256 | InceptionV3 | 0.6268 | 0.6486 | 0.6269 | 0.6220
256 × 256 | ResNet50 | 0.5983 | 0.6203 | 0.5984 | 0.5954
256 × 256 | VGG16 | 0.7159 | 0.7318 | 0.7159 | 0.7168
256 × 256 | VGG19 | 0.6607 | 0.7317 | 0.6664 | 0.6515
512 × 512 | MobileNet | 0.7131 | 0.7414 | 0.7143 | 0.7135
512 × 512 | Xception | 0.7525 | 0.7571 | 0.7525 | 0.7535
512 × 512 | InceptionV3 | 0.6877 | 0.6894 | 0.6861 | 0.6862
512 × 512 | ResNet50 | 0.6156 | 0.6162 | 0.6150 | 0.6155
512 × 512 | VGG16 | 0.7680 | 0.7867 | 0.7680 | 0.7681
512 × 512 | VGG19 | 0.7584 | 0.7666 | 0.7584 | 0.7599
Table 4. Classification performance of deep learning models on the dataset with stain normalization and with different image sizes.

Image Size | Deep Learning Model | Accuracy | Precision | Recall | F1-Score
128 × 128 | MobileNet | 0.6741 | 0.6767 | 0.6741 | 0.6733
128 × 128 | Xception | 0.6805 | 0.7756 | 0.6847 | 0.6829
128 × 128 | InceptionV3 | 0.6643 | 0.6688 | 0.6635 | 0.6652
128 × 128 | ResNet50 | 0.6146 | 0.6158 | 0.6146 | 0.6149
128 × 128 | VGG16 | 0.7424 | 0.7699 | 0.7424 | 0.7382
128 × 128 | VGG19 | 0.6587 | 0.6676 | 0.6587 | 0.6594
256 × 256 | MobileNet | 0.8277 | 0.8357 | 0.8277 | 0.8267
256 × 256 | Xception | 0.8292 | 0.8411 | 0.8292 | 0.8310
256 × 256 | InceptionV3 | 0.7669 | 0.7726 | 0.7669 | 0.7678
256 × 256 | ResNet50 | 0.7199 | 0.7311 | 0.7199 | 0.7206
256 × 256 | VGG16 | 0.8529 | 0.8618 | 0.8529 | 0.8536
256 × 256 | VGG19 | 0.8072 | 0.8232 | 0.8072 | 0.8083
512 × 512 | MobileNet | 0.8672 | 0.8722 | 0.8672 | 0.8652
512 × 512 | Xception | 0.8513 | 0.8632 | 0.8512 | 0.8531
512 × 512 | InceptionV3 | 0.7637 | 0.7846 | 0.7638 | 0.7632
512 × 512 | ResNet50 | 0.7339 | 0.7466 | 0.7339 | 0.7345
512 × 512 | VGG16 | 0.8864 | 0.8869 | 0.8864 | 0.8862
512 × 512 | VGG19 | 0.8130 | 0.8138 | 0.8130 | 0.8121
Table 5. Comparison of deep learning models: image size, number of parameters, and FLOPs.

Model Name | Image Size | Number of Parameters (×10⁶) | FLOPs (×10⁶)
MobileNet | 128 × 128 | 4.23 | 372.97
MobileNet | 256 × 256 | 4.23 | 1485.43
MobileNet | 512 × 512 | 4.23 | 5934.96
Xception | 128 × 128 | 22.86 | 239.03
Xception | 256 × 256 | 22.86 | 958.55
Xception | 512 × 512 | 22.86 | 3851.63
InceptionV3 | 128 × 128 | 23.82 | 1528.10
InceptionV3 | 256 × 256 | 23.82 | 7848.18
InceptionV3 | 512 × 512 | 23.82 | 35,399.45
ResNet50 | 128 × 128 | 25.58 | 2532.11
ResNet50 | 256 × 256 | 25.58 | 10,101.11
ResNet50 | 512 × 512 | 25.58 | 40,362.60
VGG16 | 128 × 128 | 69.15 | 10,133.14
VGG16 | 256 × 256 | 169.81 | 40,407.33
VGG16 | 512 × 512 | 572.47 | 161,504.10
VGG19 | 128 × 128 | 74.46 | 12,851.05
VGG19 | 256 × 256 | 175.12 | 51,278.97
VGG19 | 512 × 512 | 577.78 | 204,990.64
Table 6. Processing speed (images per second) of different image and batch sizes in deep learning models.

Model Name | Image Size | Batch Size
 | | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256
MobileNet | 128 × 128 | 540.15 | 1081.93 | 2015.89 | 3603.89 | 5401.05 | 6719.05 | 7660.02 | 7451.87 | 7728.02
MobileNet | 256 × 256 | 499.39 | 845.63 | 1239.18 | 1627.30 | 1889.37 | 1904.56 | 1921.14 | 1938.62 | 1950.49
MobileNet | 512 × 512 | 245.67 | 330.38 | 415.58 | 441.72 | 473.93 | 475.73 | 477.47 | 476.42 | 470.84
Xception | 128 × 128 | 350.60 | 677.29 | 1283.17 | 2542.02 | 3508.50 | 4409.72 | 5038.36 | 4865.04 | 4978.35
Xception | 256 × 256 | 323.11 | 606.33 | 830.59 | 1119.34 | 1228.95 | 1196.50 | 1232.76 | 1235.98 | 1272.97
Xception | 512 × 512 | 176.37 | 236.75 | 273.99 | 280.28 | 285.12 | 272.57 | 260.54 | 244.76 | 229.11
InceptionV3 | 128 × 128 | 181.49 | 360.02 | 699.41 | 1340.37 | 2401.51 | 3956.92 | 5846.22 | 6462.29 | 7017.33
InceptionV3 | 256 × 256 | 174.06 | 340.00 | 618.27 | 1063.91 | 1344.54 | 1521.71 | 1653.22 | 1710.76 | 1752.93
InceptionV3 | 512 × 512 | 153.89 | 254.46 | 324.17 | 363.19 | 394.28 | 405.36 | 417.29 | 421.13 | 424.68
ResNet50 | 128 × 128 | 158.43 | 430.28 | 686.12 | 897.26 | 1076.86 | 1101.83 | 1156.99 | 1202.26 | 1225.30
ResNet50 | 256 × 256 | 233.74 | 429.29 | 657.80 | 880.03 | 1052.60 | 1119.81 | 1164.72 | 1207.70 | -
ResNet50 | 512 × 512 | 146.97 | 207.08 | 247.62 | 276.85 | 293.37 | 299.92 | 305.13 | 307.33 | -
VGG16 | 128 × 128 | 588.77 | 1015.01 | 1674.91 | 2399.89 | 3011.98 | 3516.27 | 3896.29 | 3979.16 | 4111.39
VGG16 | 256 × 256 | 416.60 | 595.79 | 761.39 | 895.84 | 995.87 | 1004.95 | 1025.18 | 1038.52 | -
VGG16 | 512 × 512 | 189.76 | 224.26 | 250.75 | 254.00 | 259.98 | 259.52 | 264.67 | 269.04 | -
VGG19 | 128 × 128 | 497.74 | 894.28 | 1472.74 | 2077.66 | 2649.24 | 3087.24 | 3462.04 | 3537.24 | 3676.34
VGG19 | 256 × 256 | 361.73 | 524.49 | 676.96 | 794.41 | 881.99 | 888.68 | 915.30 | 932.38 | -
VGG19 | 512 × 512 | 166.89 | 198.35 | 222.40 | 227.34 | 232.69 | 233.35 | 240.80 | - | -
Table 7. GPU memory utilization in different deep learning models with image and batch sizes.

Model Name | Image Size | Batch Size
 | | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256
MobileNet | 128 × 128 | 654.72 | 686.84 | 714.36 | 824.92 | 1032.46 | 1362.24 | 1874.98 | 2368.62 | 3264.04
MobileNet | 256 × 256 | 1096.16 | 1136.68 | 1208.65 | 1312.42 | 1492.93 | 1772.06 | 2483.44 | 2994.19 | 3846.82
MobileNet | 512 × 512 | 1134.66 | 1486.16 | 1992.82 | 2192.60 | 2572.84 | 3024.18 | 3582.36 | 4548.02 | 5994.56
Xception | 128 × 128 | 698.00 | 718.26 | 764.82 | 866.90 | 1098.16 | 1408.38 | 1896.30 | 2414.00 | 3316.94
Xception | 256 × 256 | 1121.04 | 1208.32 | 1297.45 | 1386.82 | 1568.10 | 1689.06 | 2578.16 | 3145.26 | 3994.14
Xception | 512 × 512 | 1206.81 | 1538.72 | 2008.46 | 2264.82 | 2612.00 | 3165.78 | 3685.70 | 4726.12 | 6492.24
InceptionV3 | 128 × 128 | 758.14 | 796.52 | 846.28 | 948.92 | 1164.18 | 1476.82 | 1992.46 | 2536.28 | 3476.21
InceptionV3 | 256 × 256 | 1184.34 | 1268.68 | 1352.25 | 1462.00 | 1685.10 | 1926.56 | 2786.41 | 3369.72 | 4279.15
InceptionV3 | 512 × 512 | 1276.46 | 1684.82 | 2048.60 | 2314.00 | 2688.84 | 3246.21 | 3872.65 | 4981.09 | 6974.26
ResNet50 | 128 × 128 | 862.54 | 986.14 | 1236.82 | 1692.74 | 2362.68 | 3494.82 | 5676.46 | 8648.94 | 12,426.17
ResNet50 | 256 × 256 | 1162.88 | 1368.62 | 1668.48 | 2486.84 | 3822.62 | 6574.48 | 10,136.83 | 16,264.43 | -
ResNet50 | 512 × 512 | 1326.76 | 1578.56 | 2186.26 | 3124.33 | 5294.14 | 9364.69 | 14,664.14 | 21,672.97 | -
VGG16 | 128 × 128 | 982.46 | 1032.42 | 1216.94 | 1635.54 | 2468.26 | 3981.46 | 6686.72 | 9436.84 | 14,281.27
VGG16 | 256 × 256 | 1562.25 | 1784.19 | 2014.73 | 2986.29 | 4686.24 | 7682.42 | 12,984.61 | 19,842.47 | -
VGG16 | 512 × 512 | 1824.64 | 2662.73 | 3264.48 | 4496.12 | 6462.62 | 9962.63 | 16,962.78 | 24,462.92 | -
VGG19 | 128 × 128 | 1062.66 | 1263.78 | 1469.93 | 1865.03 | 2694.04 | 4268.82 | 7252.92 | 10,721.82 | 15,386.22
VGG19 | 256 × 256 | 1782.42 | 1984.82 | 2348.56 | 3146.94 | 4984.56 | 8086.48 | 14,628.94 | 21,485.39 | -
VGG19 | 512 × 512 | 2142.36 | 2696.42 | 3468.76 | 4686.62 | 6662.64 | 10,264.53 | 18,492.62 | - | -
