DeepFocusNet: An Attention-Augmented Deep Neural Framework for Robust Colorectal Cancer Classification in Whole-Slide Histology Images

by Shah Md Aftab Uddin, Muhammad Yaseen, Md Kamran Hussain Chowdhury, Rubina Akter Rabeya, Shah Muhammad Imtiyaj Uddin and Hee-Cheol Kim *

Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2025, 14(18), 3731; https://doi.org/10.3390/electronics14183731
Submission received: 26 August 2025 / Revised: 16 September 2025 / Accepted: 19 September 2025 / Published: 21 September 2025

Abstract

Colorectal cancer is a major cause of cancer-related mortality worldwide, underscoring the critical need for state-of-the-art diagnostic tools for early identification and categorization. We use deep learning to automatically classify colorectal cancer histology images into eight categories. Using a dataset of 5000 high-resolution (150 × 150) histological images, we develop DeepFocusNet, an attention-augmented architecture designed to maximize feature extraction and improve classification accuracy. To strengthen model generalization, our progressive training approach combines data augmentation, fine-tuning, and freezing of early layers. In addition, a dedicated tiling technique breaks large-scale histology images (5000 × 5000) into smaller windows for classification, from which we reconstruct full-scale heatmaps and multi-class overlays. Attention mechanisms are shown to focus on the most important histopathological traits, improving both the model's performance and its interpretability. The final system provides pathologists with high-resolution probability maps and 97% classification accuracy, supporting precise and rapid diagnosis. Empirical results demonstrate the robustness of our methodology and its potential for real-world clinical applications in AI-assisted histopathology.

1. Introduction

Since colorectal cancer (CRC) is among the top causes of cancer-related deaths globally, improving patient outcomes critically depends on early and accurate diagnosis [1]. Histopathological image analysis plays a central role in CRC detection, enabling pathologists to examine tissue architecture for malignant features. However, manual assessment is inherently subjective, time-consuming, and subject to inter-observer variability, which can impact diagnostic reliability [2,3].
Deep learning approaches, particularly Convolutional Neural Networks (CNNs), have rapidly gained traction for automating and enhancing the classification of colorectal histology images. Architectures like VGG, ResNet, and EfficientNet have shown competitive performance in a range of medical image classification tasks, including CRC diagnosis. Specifically, ResNet50V2 is valued for its identity mapping and residual connections, which strike an effective balance between network depth, classification accuracy, and computational efficiency [4,5,6].
Despite these strengths, traditional CNNs often miss nuanced spatial hierarchies and contextual relationships inherent to histological textures, features that are crucial for distinguishing benign from malignant patterns [7]. This limitation motivates ongoing research into incorporating attention mechanisms and alternative architectures to capture more complex histopathological features and to further improve classification robustness and interpretability [8].
Attention mechanisms have become a powerful enhancement to Convolutional Neural Networks (CNNs), particularly in histopathological image analysis. They enable models to selectively suppress less informative regions while focusing on diagnostically significant patterns, thereby advancing feature representation and boosting classification accuracy [9]. Mechanisms such as channel attention, spatial attention, and composite approaches like the Convolutional Block Attention Module (CBAM) have demonstrated significant efficacy in guiding the network towards the most relevant tissue areas, which is critical for reliable cancer detection and grading. When combined with robust architectures such as ResNet50V2, attention modules can address limitations in conventional CNNs by capturing nuanced spatial hierarchies and contextual relationships within histological images. This results in more accurate identification of malignant vs. benign structures, a cornerstone for clinical applicability.
Complementing these methods, multi-scale visualization approaches including sliding windows, hierarchical patch extraction, and advanced aggregation modules promote resilience against spatial diversity in tissue morphology. By facilitating analysis at multiple magnifications and aggregating local and global context, these strategies support more interpretable outcomes [10]. For example, they enable the reconstruction of interpretable heatmaps that highlight malignant regions and provide comprehensive tissue-level insights, improving performance and helping pathologists in practical diagnostic scenarios.
Recent developments in deep learning for histopathology have been marked by a paradigm shift from convolutional networks (CNNs) toward transformer-based architectures, which are better at modeling long-range spatial dependencies and global context. While foundational, CNNs and attention modules such as CBAM do not by themselves capture this broader trend. Seminal work now includes hybrid CNN–Transformer models for multi-class classification [11], self-supervised Vision Transformers for grading across institutions [12], and dedicated transformer-based MIL frameworks that set new benchmarks in weakly supervised classification [13]. DeepFocusNet is architected within this landscape: it draws inspiration from the global context modeling of transformers while seeking a more computationally efficient and interpretable solution for clinical-grade whole-slide analysis.
In this work, we introduce a tailored DeepFocusNet framework for colorectal cancer histology classification. Enhanced by attention mechanisms and multi-scale visualization, our model not only achieves excellent accuracy but also produces detailed, interpretable heatmaps that identify malignant foci. This approach meaningfully contributes to clinical workflows, offering both quantitative reliability and deep visual interpretability for pathologists in real-world diagnosis. Figure 1 provides a visual summary of the study, highlighting the key steps and workflow.
DeepFocusNet introduces three key innovations over existing methods.
  • First, it integrates attention mechanisms that focus on diagnostically relevant regions, improving both accuracy and interpretability.
  • Second, it employs a scalable tiling strategy, dividing large whole-slide images into high-resolution windows and reconstructing them into probability heatmaps and multi-class overlays for precise spatial localization.
  • Third, it uses a progressive training pipeline with augmentation, fine-tuning, and selective layer freezing to reduce overfitting and enhance generalization. Together, these innovations enable superior diagnostic precision, stability, and clinical relevance compared to prior models.
The remainder of this paper is organized as follows. Section 2 reviews related work in deep learning for medical image analysis, with a focus on histopathology. Section 3 describes the dataset acquisition, preprocessing, and augmentation pipeline. Section 4 presents the proposed DeepFocusNet architecture and its underlying design principles. Section 5 reports the experimental setup, training and convergence analysis, and the qualitative, quantitative, and comparative evaluations of different models. Section 6 discusses the interpretation of the results, clinical implications, and limitations. Finally, Section 7 concludes the paper and outlines potential directions for future research.

2. Related Work

Automated histopathological image classification has gained significant momentum in recent years due to its critical role in cancer detection and the growth of digital pathology repositories. To enhance the precision and reliability of colorectal cancer (CRC) histological classification, researchers have explored both traditional machine learning and advanced deep learning strategies.
Early studies in histopathology image classification relied primarily on hand-crafted features such as texture, color histograms, and morphological descriptors. Classifiers like support vector machines (SVM), random forests (RF), and k-nearest neighbors (k-NN) were standard approaches in these studies. For instance, one of the first publicly available colorectal histology datasets, developed by Kather et al., enabled assessment of various feature-based methods, which demonstrated only moderate accuracy and faced challenges in generalizing across different tissue structures and staining variations. Recent studies confirm that these traditional models, even when using sophisticated geometric features, perform notably below deep learning approaches, with CNN-based models achieving higher detection accuracy and sensitivity [14,15]. With the advent of Convolutional Neural Networks (CNNs), deep learning has become the dominant paradigm in histopathology image categorization. Following the influential work by Coudray et al. on predicting genetic mutations from lung cancer slides using deep CNNs, similar techniques were adapted for CRC.
State-of-the-art CNN architectures such as VGG16, ResNet50, and InceptionV3 have been fine-tuned and evaluated on CRC datasets, producing superior results compared to earlier methods. These efforts include models that specialize in nuclei or gland detection, though dense manual annotations and a lack of ability to model large-scale tissue context remain limitations [16]. Given the scarcity of labeled data in medical imaging, transfer learning is prevalent in histopathology. Pretrained networks such as VGG16, ResNet50, and EfficientNet (originally trained on ImageNet) are commonly adapted by fine-tuning on domain-specific datasets. This approach enables networks to generalize to medical image domains with minimal retraining and has been particularly successful in distinguishing glandular from non-glandular tissue in CRC [17]. Interpretability remains a major concern in medical AI. Deep learning models are frequently interpreted via Grad-CAM, saliency maps, and probability heatmaps, which highlight areas within histology slides that contributed to a model’s prediction. Such techniques not only improve transparency but also facilitate validation of model decisions by clinicians, as demonstrated by overlaying probability maps on tissue images [18].
Recent years have seen the integration of attention mechanisms into CNNs, empowering models to focus on salient image regions while suppressing less informative areas. For instance, self-attention models for cancer histology have increased classification accuracy by selectively weighting discriminative regions, and dual-attention strategies combining spatial and channel attention have further improved pattern recognition, especially in challenging tissue contexts [19,20,21]. Table 1 summarizes colon cancer classification methods and results.
Due to the enormous size of whole-slide images (WSIs), patch-based and multi-scale analysis have become best practices. By extracting and aggregating patches at various magnifications, models efficiently capture both micro-level details and broader contextual information, markedly improving classification performance. Multi-scale aggregation and hierarchical fusion approaches are reported to further enhance resilience against variability in tissue morphology and staining, allowing networks to integrate complementary features across resolutions [29,30,31,32].
Prior art in computational histopathology is largely bifurcated: CNN-based approaches, often enhanced with attention, struggle to capture the long-range dependencies essential for gigapixel WSIs, while emerging pure transformer models usually incur prohibitive computational costs and offer limited interpretability, a critical barrier for clinical adoption [33,34,35]. DeepFocusNet is designed to navigate this trade-off: it incorporates the transformers' global contextual awareness through its hierarchical attention mechanism while retaining the spatial efficiency of CNNs, thereby addressing key limitations of both lines of research.
In summary, the evolution from hand-crafted feature extraction to advanced deep learning, attention mechanisms, and multi-scale analysis has significantly improved CRC histopathology classification performance, setting the stage for robust AI-aided diagnostic workflows in clinical practice.

3. Dataset

3.1. Dataset Acquisition and Description

The NCT Biobank, made available to the public via its open-access repository by the Department of Pathology at NCT, is the source of the colorectal histology dataset utilized in this investigation. Hematoxylin and eosin (H&E)-stained digital histological images from colorectal tissue samples are included in the collection to aid in computational pathology research.
A carefully selected subset of the collection consists of 5000 color JPEG images. These images were extracted from whole-slide images scanned at 20× magnification, with each image patch having a spatial resolution of 150 × 150 pixels and a physical size of approximately 75 × 75 µm (0.5 µm/pixel). The collection assigns each image to one of eight histological tissue classes (tumor, stroma, complex, lympho, debris, mucosa, adipose, and empty), as shown in Figure 2. These expert-provided labels enable supervised machine learning for multi-class classification. The dataset structure follows the TensorFlow Datasets (TFDS) framework, which facilitates direct comparison with other studies and ensures our results are easily reproducible. We also included large-scale whole-slide images (WSIs) with a size of 5000 × 5000 pixels to further assess the translational applicability of the proposed pipeline. These images more accurately depict the morphological complexity and variety found in actual clinical practice. Taken as a whole, this dataset offers a strong basis for the development and assessment of deep learning-based frameworks for colorectal cancer histology classification.
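Because the dataset follows the TFDS layout, it can be loaded in a few lines. A minimal sketch is shown below; the catalog name "colorectal_histology" and the split handling are assumptions based on the public TFDS entry for this collection.

```python
import tensorflow_datasets as tfds

# Load the 5000-patch colorectal histology collection (150 x 150, 8 classes).
ds, info = tfds.load("colorectal_histology", split="train",
                     as_supervised=True, with_info=True)
print(info.features["label"].names)       # the eight tissue class names
print(info.splits["train"].num_examples)  # 5000 patches
```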

3.2. Data Preprocessing and Augmentation Pipeline

To ensure data quality and improve the robustness of the proposed model, a comprehensive preprocessing and augmentation pipeline was employed. First, all images were resized or resampled to fixed dimensions, with pixel spacing rescaled for consistency across samples.
For histopathological images, stain normalization was performed to minimize inter-sample variability caused by staining protocols. Noise reduction was addressed through median or Gaussian filtering, and in some cases, denoising autoencoders were employed. To further enhance image quality, contrast adjustment was applied using Contrast Limited Adaptive Histogram Equalization (CLAHE). Data cleaning was also performed to ensure the quality of the dataset. Our exclusion criteria involved removing images that were corrupted (e.g., decoding errors), exact duplicates, or of low quality. Low-quality images were defined as those with significant blur, out-of-focus regions covering more than 50% of the patch, or excessive staining artifacts that obscured cellular morphology. This curation yielded a reliable dataset and prevented the model from learning from non-physiological features, which could otherwise bias training. To improve the model's generalization in the presence of limited labeled data and intrinsic biological variability, a comprehensive data augmentation pipeline was employed, mathematically formalized as a stochastic composite transformation:
$$T(x) = F_{\text{flip}}\left(R_{\theta}\left(Z_{s}(x)\right)\right)$$
where $F_{\text{flip}}$ denotes random horizontal flipping, $R_{\theta}$ represents a rotation operator with $\theta \sim U(-30°, 30°)$, and $Z_{s}$ is a zoom scaling with factor $s \sim U(0.8, 1.2)$. Pixel intensity normalization was applied to scale inputs within [0, 1], optimizing the network's convergence properties by standardizing input distributions.
During training, images underwent a series of data augmentations to improve generalization and robustness. Each image had a 50% chance of being randomly flipped horizontally ($F_{\text{flip}}$), was rotated by an angle $\theta$ uniformly sampled from −30° to 30° ($R_{\theta}$), and was scaled by a zoom factor $s$ sampled from 0.8 to 1.2 ($Z_{s}$) to simulate variations in object size and viewpoint. These transformations increased dataset diversity and reduced overfitting by encouraging the model to learn rotation- and scale-invariant features.
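A minimal sketch of this transform using Keras preprocessing layers is given below; the exact layer parameters (e.g., how RandomZoom maps to $s \sim U(0.8, 1.2)$) are illustrative choices, not our released training code.

```python
import tensorflow as tf
from tensorflow.keras import layers

# T(x) = F_flip(R_theta(Z_s(x))) as a Keras preprocessing pipeline,
# followed by intensity rescaling to [0, 1].
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),       # F_flip: random horizontal flip
    layers.RandomRotation(30.0 / 360.0),   # R_theta: theta ~ U(-30 deg, 30 deg)
    layers.RandomZoom(0.2),                # Z_s: zoom factor within roughly [0.8, 1.2]
    layers.Rescaling(1.0 / 255.0),         # pixel intensity normalization to [0, 1]
])
```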
The 5000-image dataset was split at the patient level to prevent information leakage and ensure a robust evaluation of the model’s generalization capability. We used a standard 70-15-15 split, resulting in 3500 images for the training set, 750 images for the validation set, and 750 images for the independent test set. Finally, images were converted into the required format (e.g., DICOM to PNG/JPEG) to standardize input for the deep learning framework.
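A sketch of such a patient-level split is shown below, assuming a `patient_ids` array (one identifier per image, hypothetical here) is available from the slide metadata:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def patient_level_split(n_images, patient_ids, seed=42):
    """70/15/15 split in which all patches from one patient share a subset.

    patient_ids: NumPy array of per-image patient identifiers (assumed).
    """
    idx = np.arange(n_images)
    outer = GroupShuffleSplit(n_splits=1, test_size=0.30, random_state=seed)
    train_idx, hold_idx = next(outer.split(idx, groups=patient_ids))
    inner = GroupShuffleSplit(n_splits=1, test_size=0.50, random_state=seed)
    val_rel, test_rel = next(inner.split(hold_idx, groups=patient_ids[hold_idx]))
    return train_idx, hold_idx[val_rel], hold_idx[test_rel]
```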
A schematic overview of the complete preprocessing and augmentation pipeline is presented in Figure 3, which visually summarizes each step described above.

4. Proposed Methodology

4.1. Overall Pipeline Architecture

The proposed pipeline processes large whole-slide histopathology images by initially dividing them into smaller tiles via a dedicated tiling module. Each slide, a colorectal tissue specimen stained with Hematoxylin and Eosin (H&E), is preprocessed and segmented into regions of interest (ROIs) capturing a broad range of histopathological structures including normal stroma, normal glands, lymphocytic infiltrates, necrotic regions, and multiple grades of adenocarcinoma. Data augmentation techniques are subsequently applied to these tiles, enriching the training dataset and enhancing model generalization. The augmented patches are then fed into a DeepFocusNet architecture for robust feature extraction and classification across eight histological classes, ensuring exposure to both normal and malignant tissue morphologies and supporting the learning of discriminative features throughout the full spectrum.
Patch-level predictions are aggregated to reconstruct a whole-slide prediction map, while an attention heatmap is generated in parallel to highlight diagnostically significant regions within the tissue. This integrated system provides both patch-level classifications and interpretable visual explanations, thereby supporting clinicians in accurate diagnostic decision-making and improving the interpretability of the deep learning process. Figure 4 provides a schematic overview of the proposed workflow.
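The sketch below illustrates the tiling and aggregation steps on a 5000 × 5000 slide. The tile size, stride, and the omission of resizing tiles to the network's input resolution are simplifying assumptions, not the exact production pipeline.

```python
import numpy as np

def slide_to_probability_map(wsi, model, tile=150, stride=150, n_classes=8):
    """Classify non-overlapping tiles of a WSI and assemble a probability map."""
    h, w = wsi.shape[:2]
    rows = (h - tile) // stride + 1
    cols = (w - tile) // stride + 1
    prob_map = np.zeros((rows, cols, n_classes), dtype=np.float32)
    for r in range(rows):
        # Batch one row of tiles at a time to bound memory use.
        batch = np.stack([wsi[r*stride:r*stride+tile, c*stride:c*stride+tile]
                          for c in range(cols)]).astype(np.float32) / 255.0
        prob_map[r] = model.predict(batch, verbose=0)
    return prob_map  # argmax over the last axis yields the multi-class overlay
```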

4.2. DeepFocusNet Architecture

The backbone of our proposed DeepFocusNet framework is the ResNet50V2 architecture, which employs residual learning to address the vanishing gradient problem commonly encountered in deep convolutional neural networks. Residual blocks utilize identity skip connections, allowing the network to learn residual functions relative to the layer inputs. This relationship can be expressed as:
$$y = F(x, \{W_i\}) + x$$
Here, $x$ denotes the input tensor to the residual block, $F(\cdot)$ represents the residual mapping learned by convolutional layers with parameters $W_i$, and $y$ is the output of the block. The identity skip connection preserves gradient flow, facilitating the training of deep networks and improving feature propagation.
The pre-activation design of ResNet50V2 further stabilizes gradient flow by applying batch normalization and activation before convolution operations. This structure enhances optimization efficiency and leads to faster and more stable convergence during training.
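For illustration, a pre-activation residual block of this form can be sketched in Keras as follows; the filter count and the bare identity shortcut assume matching channel dimensions.

```python
from tensorflow.keras import layers

def preact_residual_block(x, filters):
    """y = F(x, {W_i}) + x with BN and ReLU applied before each convolution."""
    shortcut = x
    h = layers.BatchNormalization()(x)
    h = layers.ReLU()(h)
    h = layers.Conv2D(filters, 3, padding="same")(h)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU()(h)
    h = layers.Conv2D(filters, 3, padding="same")(h)
    return layers.Add()([shortcut, h])  # identity skip preserves gradient flow
```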
To enhance the spatial feature extraction capabilities of ResNet50V2, we incorporate the Convolutional Block Attention Module (CBAM), which adaptively recalibrates intermediate feature representations through sequential channel and spatial attention mechanisms as shown in Figure 5.
The channel attention module computes an importance map $M_c$ by aggregating spatial features through global average pooling $f_{avg}$ and global max pooling $f_{max}$. These pooled features are passed through a shared multilayer perceptron (MLP) followed by a sigmoid activation function $\sigma$:
$$M_c(F) = \sigma\left(\mathrm{MLP}(f_{avg}(F)) + \mathrm{MLP}(f_{max}(F))\right)$$
where F is the input feature map. This process emphasizes the most informative channels for the classification task.
The spatial attention module subsequently generates a spatial attention map $M_s$ by concatenating channel-wise average-pooled and max-pooled features along the channel axis, applying a convolutional filter $f^{7 \times 7}$, and using a sigmoid activation:
$$M_s(F') = \sigma\left(f^{7 \times 7}\left(\left[f^{chan}_{avg}(F');\, f^{chan}_{max}(F')\right]\right)\right)$$
where $F'$ represents the channel-refined feature map from the previous stage. This spatial attention mechanism enhances the localization of salient regions within the feature maps, enabling the model to focus on diagnostically important morphological structures in histopathological images.
By combining residual connections with CBAM, the model leverages both the preservation of low-level details and enhanced contextual awareness. This synergy creates a robust and discriminative feature hierarchy, thereby improving the accuracy and reliability of colorectal cancer classification.
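A compact sketch of the CBAM computation defined by the equations above is given below; the reduction ratio and layer placement are conventional choices for illustration, not tied to our released weights.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam(x, reduction=16):
    """Sequential channel attention M_c and spatial attention M_s on feature map x."""
    ch = x.shape[-1]
    shared_mlp = tf.keras.Sequential([
        layers.Dense(ch // reduction, activation="relu"),
        layers.Dense(ch),
    ])
    # Channel attention: M_c(F) = sigmoid(MLP(f_avg(F)) + MLP(f_max(F)))
    avg = shared_mlp(layers.GlobalAveragePooling2D()(x))
    mx = shared_mlp(layers.GlobalMaxPooling2D()(x))
    m_c = tf.sigmoid(avg + mx)[:, tf.newaxis, tf.newaxis, :]
    x = x * m_c  # channel-refined feature map F'
    # Spatial attention: M_s(F') = sigmoid(f_7x7([avg-pool; max-pool] over channels))
    avg_s = tf.reduce_mean(x, axis=-1, keepdims=True)
    max_s = tf.reduce_max(x, axis=-1, keepdims=True)
    m_s = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        tf.concat([avg_s, max_s], axis=-1))
    return x * m_s
```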

4.3. Optimization Regime and Convergence Strategies

Model optimization employed the Adam algorithm (an adaptive moment estimation method) with an initial learning rate of $\eta = 1 \times 10^{-3}$, optimizing the categorical cross-entropy loss function:
$$\mathcal{L}_{CE} = -\sum_{i=1}^{C} y_i \log(\hat{y}_i)$$
where $C$ is the number of classes, $y_i$ denotes the ground truth, and $\hat{y}_i$ the predicted class probability. To mitigate overfitting and ensure convergence stability, early stopping was implemented by monitoring the validation loss with a patience threshold of 10 epochs. A dynamic learning rate scheduler further reduced $\eta$ by a factor of 0.5 upon plateau detection. Training was conducted for up to 100 epochs with a mini-batch size of 32, balancing computational tractability with the model's capacity to capture complex histopathological patterns.
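This regime translates directly into Keras callbacks. A sketch, assuming `model`, `train_ds`, and `val_ds` (batched at 32) are defined elsewhere:

```python
import tensorflow as tf

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
callbacks = [
    # Stop when validation loss has not improved for 10 epochs.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                     restore_best_weights=True),
    # Halve the learning rate when the validation loss plateaus.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5),
]
history = model.fit(train_ds, validation_data=val_ds,
                    epochs=100, callbacks=callbacks)
```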

5. Experiments and Results

5.1. Experimental Setup

All experiments were implemented in TensorFlow 2.x and executed on a workstation equipped with an NVIDIA GeForce RTX 4060 Ti GPU, 32 GB RAM, and an Intel Core i7 processor, leveraging CUDA and cuDNN libraries for hardware-accelerated computation. The model was trained with a batch size of 32 and an input resolution of 224 × 224 pixels. Real-time data augmentation, including random rotations, shifts, zooms, and horizontal flips, was applied to enhance generalization performance. Model optimization was performed using the Adam optimizer with a learning rate of $1 \times 10^{-3}$. Training proceeded until convergence, with early stopping and adaptive learning rate reduction on plateau employed to mitigate overfitting.

5.2. Training and Convergence Analysis

The learning curves of the DeepFocusNet model were analyzed alongside five baseline architectures (a simple CNN, Inception, MobileNet, ResNet50V2, and VGG19) to highlight key differences in convergence patterns and generalization performance. This analysis provides insights into the relative training stability and effectiveness of each network.
As depicted in Figure 6, the DeepFocusNet model exhibits smooth and stable convergence, with accuracy steadily increasing and loss consistently decreasing during training. Although a modest gap between training and validation accuracy is observed—a common occurrence in deep learning—the proposed model demonstrates strong generalization. Upon completion of training, the model achieved a final training accuracy of 99.8%, a validation accuracy of 98.1%, and a test accuracy of 98.0%. The corresponding final loss values were 0.01 for training, 0.15 for validation, and 0.16 for the test set. This reflects its superior ability to generalize to unseen data compared to the other architectures. The corresponding loss curves also demonstrate a consistent downward trend, confirming the effectiveness of the optimization strategy.
A direct comparison of the final validation accuracies across all models is presented in Figure 7. These curves clearly demonstrate the superior performance of the proposed model, which achieves the highest validation accuracy among all evaluated architectures. This comparative analysis, which includes models such as MobileNetV2 and Xception, further underscores the advantages of incorporating attention mechanisms for enhancing classification accuracy in colorectal histopathology images.

5.3. Qualitative Analysis of Model Output

To gain insight into the model’s decision-making process, attention maps were generated and visualized. Figure 8c presents an attention heatmap superimposed on the original histology image (Figure 8a). The highlighted regions, shown in bright yellow and white, correspond to areas of high relevance such as dense cellularity, irregular glandular structures, and disorganized tissue hallmarks commonly associated with malignancy.
In addition, the model outputs detailed multi-class segmentation maps, illustrated in Figure 8b. These maps classify tissue into eight distinct categories (0–7), including normal stroma (orange, Class 1) and malignant regions (purple, green and yellow; Classes 3, 4, and 5). The segmented boundaries are well aligned with morphological features, reflecting the model’s capacity to capture histopathological patterns.
Figure 9 further provides a qualitative grid representation: the first column displays original image patches, the second shows the corresponding multi-class classification maps, and the third depicts probability maps specific to the stroma class. The probability maps indicate the model’s confidence in identifying critical tissue types, thereby underscoring its reliability and practical utility in histopathological analysis.
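A sketch of how such overlays can be rendered from the tiling output is shown below; the function and variable names are illustrative placeholders.

```python
import matplotlib.pyplot as plt

def show_overlay(image, prob_map, class_idx, alpha=0.5):
    """Original patch, multi-class map, and one class-probability overlay."""
    extent = (0, image.shape[1], image.shape[0], 0)  # align coarse maps with the image
    fig, ax = plt.subplots(1, 3, figsize=(12, 4))
    ax[0].imshow(image)
    ax[0].set_title("Original")
    ax[1].imshow(prob_map.argmax(-1), cmap="tab10", extent=extent)
    ax[1].set_title("Multi-class map")
    ax[2].imshow(image)
    ax[2].imshow(prob_map[..., class_idx], cmap="inferno",
                 alpha=alpha, extent=extent)
    ax[2].set_title("Class probability")
    for a in ax:
        a.axis("off")
    plt.show()
```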

5.4. Quantitative Performance Metrics

To rigorously assess the performance of the proposed model, we conducted an evaluation using a comprehensive set of quantitative metrics on the independent test set. The confusion matrix, presented in Figure 10, provides a visual summary of the classification outcomes, where rows correspond to true labels and columns to predicted labels. The strong diagonal dominance observed in the matrix, characterized by high values for correct classifications and minimal off-diagonal entries, demonstrates the model's high specificity and its low propensity for misclassifying tissue types. For the TUMOR, LYMPHO, MUCOSA, ADIPOSE, and EMPTY classes, all 50 samples were correctly classified. The STROMA class had one misclassification, predicted as COMPLEX. The COMPLEX class had five misclassifications: two predicted as TUMOR, two as LYMPHO, and one as DEBRIS. The DEBRIS class had two misclassifications, predicted as STROMA. This detailed breakdown highlights the model's strengths and specific confusions between classes, enabling a more nuanced interpretation of overall classification performance.
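These counts can be reproduced from the test-set predictions with scikit-learn; `y_true`, `y_prob`, and `CLASS_NAMES` below are hypothetical placeholders for the evaluation arrays.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

y_pred = np.argmax(y_prob, axis=1)       # y_prob: (n_samples, 8) softmax outputs
print(confusion_matrix(y_true, y_pred))  # rows: true labels, columns: predictions
print(classification_report(y_true, y_pred, target_names=CLASS_NAMES, digits=3))
```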
Further evaluation of the model’s discriminatory capacity is presented through the multi-class ROC curves in Figure 11. The curves exhibit near-perfect Area Under the Curve (AUC) values of 1.00 across all classes, highlighting the model’s exceptional ability to differentiate between distinct tissue pathologies. Such uniformly high AUC scores demonstrate both the robustness of the classification framework and its high level of confidence in discriminating among complex histopathological patterns.
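The per-class curves follow the standard one-vs-rest construction; a sketch using the same hypothetical arrays:

```python
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

y_bin = label_binarize(y_true, classes=range(8))  # one-hot ground truth
for k in range(8):
    fpr, tpr, _ = roc_curve(y_bin[:, k], y_prob[:, k])
    print(f"class {k}: AUC = {auc(fpr, tpr):.3f}")
```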
A detailed class-wise evaluation of the model's performance is presented in Table 2. The reported metrics (Accuracy, Precision, Recall, F1-Score, and IoU) provide a comprehensive quantitative assessment, reinforcing the robust performance trends illustrated by the confusion matrix and ROC curves. Collectively, these results validate the model's reliability in achieving consistent and balanced classification across all tissue classes.

5.5. Comparative Analysis of Different Models

To validate the effectiveness of the proposed architecture, we conducted a comparative evaluation against several widely used deep learning models. Figure 12 presents the training and validation accuracy of our attention-enhanced DeepFocusNet model in comparison with ResNet50V2, MobileNetV2, VGG19, Xception, a simple CNN, and InceptionV3.
As shown in Figure 12, the proposed model achieves the highest validation accuracy of 0.97, outperforming all competing architectures. This improvement is particularly notable relative to the baseline ResNet50V2, which attained a validation accuracy of 0.94, thereby demonstrating the clear performance gain achieved through the integration of the CBAM attention mechanism.
The learning patterns further highlight differences in model generalization. While architectures such as MobileNetV2 and Xception achieved high training accuracy, they exhibited larger discrepancies between training and validation performance, suggesting a tendency toward overfitting. In contrast, our proposed model maintains a closer alignment between training and validation accuracy, reflecting superior generalization to unseen data.
Overall, this comparative analysis confirms that incorporating the CBAM attention mechanism substantially enhances classification performance for colorectal histopathology images. Importantly, the improved generalization capability translates to greater reliability in real-world diagnostic scenarios, where minimizing misclassification is critical for ensuring accurate and clinically meaningful outcomes.

6. Discussion

6.1. Interpretation of Results

The proposed DeepFocusNet model achieved consistently high classification accuracy and near-perfect AUC values across all classes, underscoring its strong capability to discriminate among different categories of colorectal tissue images. This superior performance can be attributed to the integration of the Convolutional Block Attention Module (CBAM), which applies channel and spatial attention sequentially to the extracted feature maps.
The channel attention mechanism enables the network to selectively emphasize the most informative feature channels, thereby amplifying critical histopathological signals. In parallel, the spatial attention mechanism directs the model's focus toward the most diagnostically relevant regions of the tissue image, while suppressing irrelevant background features. This combined attention strategy effectively highlights pathological patterns such as irregular glandular architecture, nuclear atypia, and stromal abnormalities, hallmarks essential for colorectal cancer diagnosis.
By jointly enhancing both feature selection and spatial localization, the CBAM substantially improves the model’s ability to capture fine-grained morphological variations. This not only explains the model’s quantitative performance gains but also reinforces its potential clinical utility, where reliable identification of subtle histopathological cues is paramount for accurate and early cancer detection.

6.2. Clinical Implications

The consistently high classification accuracy and near-perfect AUC values indicate that the proposed model has strong potential as a computer-aided diagnostic (CAD) tool for colorectal cancer detection. By delivering rapid and reliable second opinions, the model could assist pathologists in reducing diagnostic workload and minimizing human error, particularly in high-volume screening environments.
A key strength of the framework lies in its ability to generate attention maps that highlight diagnostically relevant regions. This feature not only enhances interpretability but also provides clinicians with greater transparency into the model’s decision-making process. Such explainability is critical for building trust in AI-driven systems and is a prerequisite for their successful adoption in routine clinical practice.
By improving diagnostic efficiency while maintaining interpretability, the model could serve as a valuable adjunct to pathologists, ultimately contributing to earlier detection and more accurate classification of colorectal cancer.

6.3. Limitations

Despite demonstrating strong performance, several important limitations warrant consideration. First, the dataset size, although adequate for experimental validation, may not encompass the full spectrum of variability present in routine clinical practice. Consequently, the model’s performance could be affected by domain shifts when encountering images acquired from different scanners, staining protocols, or diverse patient populations, potentially limiting generalizability. Second, the substantial computational resources required to train and deploy a deep attention-based architecture such as DeepFocusNet may restrict its practical use in low-resource environments. Finally, while attention mechanisms enhance interpretability, they do not fully elucidate the model’s decision-making process, indicating a need for further research to achieve comprehensive clinical explainability.
A key limitation of this study is its dependence on a single-institution dataset from the NCT Biobank. As a result, the model may inadvertently learn institution-specific scanner features and staining patterns rather than purely biologically relevant histopathological features, restricting its generalizability to other clinical centers. Performance may decline on whole-slide images acquired from different scanner manufacturers (Philips, Leica) or using alternative H&E staining protocols. Future work should prioritize external validation across diverse, multi-institutional cohorts to evaluate model portability and mitigate potential sources of bias.
Although the proposed architecture achieves state-of-the-art performance, its complexity, particularly the attention mechanisms, places substantial computational demands, which could limit adoption in clinical settings with constrained IT resources. We report the average inference time and GPU memory usage to contextualize these requirements. Future research will investigate model compression strategies, including structured pruning and FP16 quantization, to reduce the computational burden while maintaining diagnostic performance.

7. Conclusions

In this study, we introduced a novel attention-enhanced deep learning framework, DeepFocusNet, for automated colorectal cancer classification, achieved by integrating the Convolutional Block Attention Module (CBAM) with a ResNet50V2 backbone. The proposed model demonstrated exceptional performance, exhibiting high accuracy, precision, recall, and near-perfect AUC scores across all histological classes. Comparative analyses with standard CNN architectures further highlighted the superior capability of our approach in capturing critical histopathological features.
The principal finding underscores that the synergistic combination of channel and spatial attention markedly enhances the network’s discriminative power, establishing it as a potent and reliable tool for supporting pathologists in the early detection and classification of colorectal cancer.
For future research, we propose validating the model on larger and more heterogeneous datasets, investigating alternative attention mechanisms such as self-attention or transformer-based architectures, and integrating multi-modal data sources including genetic, clinical, and imaging information to further advance diagnostic performance. Furthermore, optimizing the model for computational efficiency would increase its accessibility and facilitate real-time application in clinical environments.

Author Contributions

Conceptualization, S.M.A.U.; Methodology, S.M.A.U.; Software, S.M.A.U. and M.Y.; Validation, S.M.A.U., M.Y. and S.M.I.U.; Formal analysis, S.M.A.U.; Investigation, S.M.I.U.; Data curation, S.M.A.U. and R.A.R.; Writing—original draft, S.M.A.U.; Writing—review & editing, S.M.A.U. and M.Y.; Visualization, S.M.A.U., M.Y., M.K.H.C. and R.A.R.; Supervision, H.-C.K.; Project administration, H.-C.K.; Funding acquisition, H.-C.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are openly available in Zenodo at https://zenodo.org/records/53169 (accessed on 25 August 2025) [D-4279-2015].

Acknowledgments

The authors sincerely thank Hee-Cheol Kim and Shah Muhammad Imtiyaj Uddin for their valuable guidance, support, and contributions to this research. The authors also gratefully acknowledge the administrative and technical assistance provided by their respective institutions.

Conflicts of Interest

The authors confirm that there are no financial or personal relationships that could have influenced the work presented in this paper.

Abbreviations

The following abbreviations are used in this manuscript:
CRC: Colorectal Cancer
CLAHE: Contrast Limited Adaptive Histogram Equalization
TFDS: TensorFlow Datasets
CNNs: Convolutional Neural Networks
CBAM: Convolutional Block Attention Module
SVM: Support Vector Machines

References

  1. Khalid, M.; Deivasigamani, S.; V, S.; Rajendran, S. An Efficient Colorectal Cancer Detection Network Using Atrous Convolution with Coordinate Attention Transformer and Histopathological Images. Sci. Rep. 2024, 14, 19109. [Google Scholar] [CrossRef]
  2. Chlorogiannis, D.D.; Verras, G.-I.; Tzelepi, V.; Chlorogiannis, A.; Apostolos, A.; Kotis, K.; Anagnostopoulos, C.-N.; Antzoulas, A.; Vailas, M.; Schizas, D.; et al. Tissue Classification and Diagnosis of Colorectal Cancer Histopathology Images Using Deep Learning Algorithms: Is the Time Ripe for Clinical Practice Implementation? Gastroenterol. Rev. 2023, 18, 353–367. [Google Scholar] [CrossRef] [PubMed]
  3. Alalwani, R.; Lucas, A.; Alzubaidi, M.; Shah, H.A.; Alam, T.; Shah, Z.; Househ, M. Deep Learning in Colorectal Cancer Classification: A Scoping Review. Stud. Health Technol. Inform. 2023, 305, 616–619. [Google Scholar] [CrossRef] [PubMed]
  4. Xu, W.; Fu, Y.-L.; Zhu, D. ResNet and Its Application to Medical Image Processing: Research Progress and Challenges. Comput. Methods Programs Biomed. 2023, 240, 107660. [Google Scholar] [CrossRef] [PubMed]
  5. Sharkas, M.; Attallah, O. Color-CADx: A Deep Learning Approach for Colorectal Cancer Classification through Triple Convolutional Neural Networks and Discrete Cosine Transform. Sci. Rep. 2024, 14, 6914. [Google Scholar] [CrossRef]
  6. Srinidhi, C.L.; Ciga, O.; Martel, A.L. Deep Neural Network Models for Computational Histopathology: A Survey. Med. Image Anal. 2021, 67, 101813. [Google Scholar] [CrossRef]
  7. Zhou, P.; Cao, Y.; Li, M.; Ma, Y.; Gan, X.; Wu, J.; Lv, X.; Chen, C. HCCANet: Histopathological Image Grading of Colorectal Cancer Using CNN Based on Multichannel Fusion Attention Mechanism. Sci. Rep. 2022, 12, 15103. [Google Scholar] [CrossRef]
  8. Sengodan, N. EfficientNet with Hybrid Attention Mechanisms for Enhanced Breast Histopathology Classification: A Comprehensive Approach. arXiv 2024, arXiv:2410.22392v2. [Google Scholar] [CrossRef]
  9. Ding, R.; Zhou, X.; Tan, D.; Su, Y.; Jiang, C.; Yu, G.; Zheng, C. A Deep Multi-Branch Attention Model for Histopathological Breast Cancer Image Classification. Complex Intell. Syst. 2024, 10, 4571–4587. [Google Scholar] [CrossRef]
  10. Chen, R.J.; Chen, C.; Li, Y.; Chen, T.Y.; Trister, A.D.; Krishnan, R.G.; Mahmood, F. Scaling Vision Transformers to Gigapixel Images via Hierarchical Self-Supervised Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 16123–16134. [Google Scholar] [CrossRef]
  11. Fu, B.; Zhang, M.; He, J.; Cao, Y.; Guo, Y.; Wang, R. StoHisNet: A Hybrid Multi-Classification Model with CNN and Transformer for Gastric Pathology Images. Comput. Methods Programs Biomed. 2022, 221, 106924. [Google Scholar] [CrossRef]
  12. Chaurasia, A.K.; Harris, H.C.; Toohey, P.W.; Hewitt, A.W. A Generalised Vision Transformer-Based Self-Supervised Model for Diagnosing and Grading Prostate Cancer Using Histological Images. Prostate Cancer Prostatic Dis. 2025, 1–9. [Google Scholar] [CrossRef] [PubMed]
  13. Yang, B.; Ding, L.; Li, J.; Li, Y.; Qu, G.; Wang, J.; Wang, Q.; Liu, B. Transformer-Based Multiple Instance Learning Network with 2D Positional Encoding for Histopathology Image Classification. Complex Intell. Syst. 2025, 11, 1–17. [Google Scholar] [CrossRef]
  14. Malik, J.; Kiranyaz, S.; Kunhoth, S.; Ince, T.; Al-Maadeed, S.; Hamila, R.; Gabbouj, M. Colorectal Cancer Diagnosis from Histology Images: A Comparative Study. arXiv 2019, arXiv:1903.11210. [Google Scholar] [CrossRef]
  15. Bahrambanan, F.; Alizamir, M.; Moradveisi, K.; Heddam, S.; Kim, S.; Kim, S.; Soleimani, M.; Afshar, S.; Taherkhani, A. The Development of an Efficient Artificial Intelligence-Based Classification Approach for Colorectal Cancer Response to Radiochemotherapy: Deep Learning vs. Machine Learning. Sci. Rep. 2025, 15, 62. [Google Scholar] [CrossRef]
  16. Fadafen, M.K.; Rezaee, K. Ensemble-Based Multi-Tissue Classification Approach of Colorectal Cancer Histology Images Using a Novel Hybrid Deep Learning Framework. Sci. Rep. 2023, 13, 8823. [Google Scholar] [CrossRef]
  17. Ke, Q.; Yap, W.-S.; Tee, Y.K.; Hum, Y.C.; Zheng, H.; Gan, Y.-J. Advanced Deep Learning for Multi-Class Colorectal Cancer Histopathology: Integrating Transfer Learning and Ensemble Methods. Quant. Imaging Med. Surg. 2025, 15, 2329–2346. [Google Scholar] [CrossRef]
  18. Ding, S.; Li, J.; Wang, J.; Ying, S.; Shi, J. Multi-Scale Efficient Graph-Transformer for Whole Slide Image Classification. IEEE J. Biomed. Health Inform. 2023, 27, 5926–5936. [Google Scholar] [CrossRef]
  19. Li, Y.; Liang, M.; Wei, M.; Wang, G.; Li, Y. Mechanisms and Applications of Attention in Medical Image Segmentation: A Review. Acad. J. Sci. Technol. 2023, 5, 237–243. [Google Scholar] [CrossRef]
  20. Li, X.; Li, M.; Yan, P.; Li, G.; Jiang, Y.; Luo, H.; Yin, S. Deep Learning Attention Mechanism in Medical Image Analysis: Basics and Beyonds. Int. J. Netw. Dyn. Intell. 2023, 2, 93–116. [Google Scholar] [CrossRef]
  21. Xie, Y.; Yang, B.; Guan, Q.; Zhang, J.; Wu, Q.; Xia, Y. Attention Mechanisms in Medical Image Segmentation: A Survey. arXiv 2025, arXiv:2305.17937. [Google Scholar] [CrossRef]
  22. Kather, J.N.; Weis, C.-A.; Bianconi, F.; Melchers, S.M.; Schad, L.R.; Gaiser, T.; Marx, A.; Zöllner, F.G. Multi-Class Texture Analysis in Colorectal Cancer Histology. Sci. Rep. 2016, 6, 27988. [Google Scholar] [CrossRef]
  23. Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and Mutation Prediction from Non–Small Cell Lung Cancer Histopathology Images Using Deep Learning. Nat. Med. 2018, 24, 1559–1567. [Google Scholar] [CrossRef]
  24. Sirinukunwattana, K.; Pluim, J.P.; Chen, H.; Qi, X.; Heng, P.-A.; Guo, Y.B.; Wang, L.Y.; Matuszewski, B.J.; Bruni, E.; Sanchez, U.; et al. Gland Segmentation in Colon Histology Images: The GlaS Challenge Contest. Med. Image Anal. 2016, 35, 489–502. [Google Scholar] [CrossRef]
  25. Awan, R.; Koohbanani, N.A.; Shaban, M.; Lisowska, A.; Rajpoot, N. Context-Aware Learning Using Transferable Features for Classification of Breast Cancer Histology Images. In Lecture Notes in Computer Science; Springer Nature: Berlin/Heidelberg, Germany, 2018; pp. 788–795. [Google Scholar] [CrossRef]
  26. Shao, W.; Koohbanani, N.A.; Shaban, M.; Lisowska, A.; Rajpoot, N. Dual-Attention CNN for Lung Cancer Detection in Histopathological Images. IEEE Trans. Med. Imaging 2020, 39, 2049–2059. [Google Scholar]
  27. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. A Dataset for Breast Cancer Histopathological Image Classification. IEEE Trans. Biomed. Eng. 2015, 63, 1455–1462. [Google Scholar] [CrossRef] [PubMed]
  28. Tellez, D.; Balkenhol, M.; Otte-Höller, I.; van de Loo, R.; Vogels, R.; Bult, P.; Wauters, C.; Vreuls, W.; Mol, S.; Karssemeijer, N.; et al. Whole-Slide Mitosis Detection in H&E Breast Histology Using PHH3 as a Reference to Train Distilled Stain-Invariant Convolutional Networks. IEEE Trans. Med. Imaging 2018, 37, 2126–2136. [Google Scholar] [CrossRef] [PubMed]
  29. D’Amato, M.; Szostak, P.; Torben-Nielsen, B. A Comparison between Single- and Multi-Scale Approaches for Classification of Histopathology Images. Front. Public Health 2022, 10, 892658. [Google Scholar] [CrossRef]
  30. Hashimoto, N.; Fukushima, D.; Koga, R.; Takagi, Y.; Ko, K.; Kohno, K.; Nakaguro, M.; Nakamura, S.; Hontani, H.; Takeuchi, I. Multi-Scale Domain-Adversarial Multiple-Instance CNN for Cancer Subtype Classification with Unannotated Histopathological Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 3851–3860. [Google Scholar] [CrossRef]
  31. Uddin, S.M.I.; Mojumder, M.A.I.; Sumon, R.I.; Mon–il, J.; Kim, H.-C. Leveraging Deep Learning for Automated Analysis of Colorectal Cancer Histology Images to Elevate Diagnosis Precision. In Proceedings of the 26th International Conference on Advanced Communications Technology (ICACT), PyeongChang, Republic of Korea, 3–7 February 2024; pp. 1–6. [Google Scholar] [CrossRef]
  32. Schmitz, R.; Madesta, F.; Nielsen, M.; Krause, J.; Steurer, S.; Werner, R.; Rösch, T. Multi-Scale Fully Convolutional Neural Networks for Histopathology Image Segmentation: From Nuclear Aberrations to the Global Tissue Architecture. Med. Image Anal. 2021, 70, 101996. [Google Scholar] [CrossRef]
  33. Atabansi, C.C.; Nie, J.; Liu, H.; Song, Q.; Yan, L.; Zhou, X. A Survey of Transformer Applications for Histopathological Image Analysis: New Developments and Future Directions. Biomed. Eng. Online 2023, 22, 96. [Google Scholar] [CrossRef]
  34. Uddin, S.M.I.; Sumon, R.I.; Mozumder, M.A.I.; Chowdhury, M.K.H.; Armand, T.P.T.; Kim, H.C. Innovations and Challenges of AI in Film: A Methodological Framework for Future Exploration. ACM Trans. Multimedia Comput. Commun. Appl. 2025, 21, 1–55. [Google Scholar] [CrossRef]
  35. Sumon, R.I.; Ali, H.; Akter, S.; Uddin, S.M.I.; Mozumder, M.A.I.; Kim, H.-C. A Deep Learning-Based Approach for Precise Emotion Recognition in Domestic Animals Using EfficientNetB5 Architecture. Eng 2025, 6, 9. [Google Scholar] [CrossRef]
Figure 1. Visual summary of the study.
Figure 2. Sample images (150 × 150 pixels, ~74 × 74 µm) from the colorectal cancer histology dataset at 20× magnification, illustrating tissue diversity.
Figure 3. Steps for Data Preprocessing and Preparation.
Figure 4. Overview of the workflow demonstrating the key stages, data flow, and interactions between different components of the proposed methodology.
Figure 5. DeepFocusNet: Proposed architecture and key components for histology image classification.
Figure 6. Performance of DeepFocusNet during training, showing the accuracy and loss trends for both training and validation datasets.
Figure 7. Training and Validation Learning Curves for Various Models. (a) Simple CNN, (b) Inception, (c) MobileNet, (d) ResNet50V2, (e) VGG19, (f) Xception.
Figure 8. Visual Representation Generated by DeepFocusNet. (a) Original histological image in true color. (b) Multi-class heatmap highlighting cancer tissue types. (c) Tumor probability heatmap showing likelihood of tumor regions.
Figure 9. Qualitative Grid Representation of Outputs from DeepFocusNet.
Figure 10. Confusion matrix of the multi-class classification model on the test set.
Figure 11. Multi-class ROC curve for the proposed DeepFocusNet on the test set (n = 750 images), showing near-perfect AUC values across all classes.
Figure 12. Training and Validation Accuracy Across Various Models.
Table 1. Summary of various existing classification techniques and results of colon cancer.

| Study | Method/Model | Dataset | Task | Key Contribution | Accuracy/Performance |
|---|---|---|---|---|---|
| Kather et al. (2016) [22] | Handcrafted features + SVM, RF | Kather CRC Histology Dataset | Tissue classification (8 classes) | Introduced 8-class colorectal histology dataset | ~85% (SVM baseline) |
| Coudray et al. (2018) [23] | InceptionV3 CNN | TCGA Lung Cancer WSIs | Mutation prediction (e.g., EGFR, KRAS) | Genetic mutation prediction from histopathology | AUC 0.86 (lung) |
| Sirinukunwattana et al. (2017) [24] | CNN + Nucleus detection | GlaS Challenge Dataset | Gland segmentation | Gland segmentation using deep learning | F1-score ~0.9 |
| Awan et al. (2021) [25] | ResNet + Uncertainty Estimation | CRC histopathology | Tissue classification (CRC subtypes) | Context-aware classification with transferable features | F1-score ~0.93 |
| Shao et al. (2020) [26] | Dual Attention ResNet | Lung cancer pathology | Cancer subtype classification | Channel + spatial attention model | Accuracy ~94.5% |
| Spanhol et al. (2016) [27] | CNN on patches | BreakHis (Breast Cancer) | Breast tumor classification (benign vs. malignant, magnifications) | Multi-scale patch analysis for breast cancer | Accuracy ~86–90% |
| Tellez et al. (2019) [28] | CNN + Heatmap visualization | PHH3-stained breast histology | Mitotic figure detection | Probabilistic heatmaps for interpretability | AUC > 0.95 |
Table 2. Quantitative performance metrics of DeepFocusNet on the test set (mean ± SD over 5 runs).

| Class Name | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | IoU (%) |
|---|---|---|---|---|---|
| 0: TUMOR | 98.2 ± 0.3 | 96.5 ± 0.4 | 97.1 ± 0.4 | 96.8 ± 0.3 | 93.9 ± 0.5 |
| 1: STROMA | 95.7 ± 0.5 | 92.1 ± 0.6 | 94.3 ± 0.5 | 93.2 ± 0.5 | 87.2 ± 0.6 |
| 2: COMPLEX | 92.5 ± 0.6 | 89.8 ± 0.7 | 90.5 ± 0.6 | 90.1 ± 0.6 | 82.0 ± 0.7 |
| 3: LYMPHO | 88.1 ± 0.7 | 85.3 ± 0.8 | 87.2 ± 0.7 | 86.2 ± 0.7 | 75.8 ± 0.8 |
| 4: DEBRIS | 91.0 ± 0.6 | 88.9 ± 0.6 | 90.1 ± 0.6 | 89.5 ± 0.6 | 80.8 ± 0.7 |
| 5: MUCOSA | 85.4 ± 0.7 | 82.0 ± 0.8 | 84.1 ± 0.7 | 83.0 ± 0.7 | 71.0 ± 0.8 |
| 6: ADIPOSE | 91.8 ± 0.5 | 89.1 ± 0.5 | 90.3 ± 0.5 | 89.7 ± 0.5 | 81.0 ± 0.6 |
| 7: EMPTY | 91.8 ± 0.5 | 89.1 ± 0.5 | 90.3 ± 0.5 | 89.7 ± 0.5 | 81.0 ± 0.6 |