Enhancing AI-Driven Diagnosis of Invasive Ductal Carcinoma with Morphologically Guided and Interpretable Deep Learning
Abstract
1. Introduction
2. Related Works
3. Materials and Methods
3.1. Dataset
3.2. Data Management and Data Folds
3.3. Multi-Band Visual Feature Extraction and Fusion
3.3.1. Gabor Wavelets
3.3.2. Orientation Invariant Features
3.3.3. Linear Fusion
3.4. Deep Learning Models
3.4.1. Model Architectures
3.4.2. Hyperparameter Settings
3.4.3. Hardware Environment
3.5. Evaluation Metrics
4. Results
4.1. Optimal Learning Rate
4.2. Additional Dense Layer
4.3. Response Fusion Types and Loss Functions
4.4. Optimal Weights and Scales
4.5. Model Analysis and Benchmarking
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Full Term |
|---|---|
| MIR | Mortality-to-Incidence Ratio |
| BCa | Breast Cancer |
| IDC | Invasive Ductal Carcinoma |
| AI | Artificial Intelligence |
| CAD | Computer-Aided Diagnosis |
| ML | Machine Learning |
| DL | Deep Learning |
| CNN | Convolutional Neural Network |
| ANN | Artificial Neural Network |
| WSI | Whole Slide Image |
| GPU | Graphics Processing Unit |
| H&E | Hematoxylin and Eosin |
| BrT | Breast Tumor |
| FC | Fully Connected |
| ANOVA | Analysis of Variance |
| AD | Alzheimer’s Disease |
| XAI | Explainable AI |
| MRI | Magnetic Resonance Image |
| CAM | Class Activation Mapping |
| ROI | Region of Interest |
| PNG | Portable Network Graphics |
| BCE | Binary Cross-Entropy |
| CCE | Categorical Cross-Entropy |
| LR | Learning Rate |
| PC | Personal Computer |
| OS | Operating System |
| CPU | Central Processing Unit |
| TP | True Positive |
| TN | True Negative |
| FP | False Positive |
| FN | False Negative |
| ROC | Receiver Operating Characteristic |
| AUC | Area Under the Curve |
Model A: λ = 4 and α = 0.90 (CCE)

| Metric | VGG-16 | VGG-19 | ResNet-50 | DenseNet-121 | p ¹ |
|---|---|---|---|---|---|
| Accuracy | 0.7890 ± 0.0051 | 0.7896 ± 0.0087 | 0.8125 ± 0.0069 | 0.7921 ± 0.0059 | 0.003 *** |
| Precision | 0.7938 ± 0.0142 | 0.8063 ± 0.0069 | 0.8179 ± 0.0089 | 0.7921 ± 0.0110 | 0.038 ** |
| Recall | 0.7770 ± 0.0236 | 0.7588 ± 0.0335 | 0.8092 ± 0.0126 | 0.7919 ± 0.0112 | 0.084 * |
| Specificity | 0.8011 ± 0.0218 | 0.8198 ± 0.0171 | 0.8160 ± 0.0126 | 0.7923 ± 0.0152 | 0.219 |
| F1-Score | 0.7849 ± 0.0058 | 0.7813 ± 0.0151 | 0.8134 ± 0.0066 | 0.7919 ± 0.0062 | 0.005 ** |
Model B: λ = 16 and α = 0.90 (BCE)

| Metric | VGG-16 | VGG-19 | ResNet-50 | DenseNet-121 | p ¹ |
|---|---|---|---|---|---|
| Accuracy | 0.7882 ± 0.0080 | 0.7931 ± 0.0065 | 0.8062 ± 0.0045 | 0.7964 ± 0.0049 | 0.023 ** |
| Precision | 0.7837 ± 0.0075 | 0.7975 ± 0.0137 | 0.8029 ± 0.0107 | 0.7997 ± 0.0110 | 0.201 |
| Recall | 0.7894 ± 0.0209 | 0.7872 ± 0.0357 | 0.8057 ± 0.0142 | 0.7940 ± 0.0051 | 0.740 |
| Specificity | 0.7871 ± 0.0079 | 0.7988 ± 0.0253 | 0.8071 ± 0.0085 | 0.7989 ± 0.0134 | 0.492 |
| F1-Score | 0.7863 ± 0.0112 | 0.7915 ± 0.0131 | 0.8041 ± 0.0043 | 0.7968 ± 0.0035 | 0.154 |
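The tables above report accuracy, precision, recall, specificity, and F1-score for each backbone. As a reference, these metrics are conventionally derived from confusion-matrix counts (the TP, TN, FP, and FN defined in the Abbreviations). The sketch below shows the standard formulas; the counts in the example are purely illustrative and are not drawn from the paper's results.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the five metrics reported in the results tables from
    confusion-matrix counts (standard definitions, not paper-specific)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": recall,
        "specificity": specificity,
        "f1": f1,
    }

# Example with hypothetical counts (200 test patches)
metrics = classification_metrics(tp=80, tn=82, fp=18, fn=20)
print({name: round(value, 4) for name, value in metrics.items()})
```

The per-model values in the tables are means ± standard deviations over the data folds (Section 3.2), and the p ¹ column reports an ANOVA test across the four architectures for each metric.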
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jarujunawong, S.; Horkaew, P. Enhancing AI-Driven Diagnosis of Invasive Ductal Carcinoma with Morphologically Guided and Interpretable Deep Learning. Appl. Sci. 2025, 15, 6883. https://doi.org/10.3390/app15126883
Jarujunawong S, Horkaew P. Enhancing AI-Driven Diagnosis of Invasive Ductal Carcinoma with Morphologically Guided and Interpretable Deep Learning. Applied Sciences. 2025; 15(12):6883. https://doi.org/10.3390/app15126883
Chicago/Turabian Style: Jarujunawong, Suphakon, and Paramate Horkaew. 2025. "Enhancing AI-Driven Diagnosis of Invasive Ductal Carcinoma with Morphologically Guided and Interpretable Deep Learning" Applied Sciences 15, no. 12: 6883. https://doi.org/10.3390/app15126883
APA Style: Jarujunawong, S., & Horkaew, P. (2025). Enhancing AI-Driven Diagnosis of Invasive Ductal Carcinoma with Morphologically Guided and Interpretable Deep Learning. Applied Sciences, 15(12), 6883. https://doi.org/10.3390/app15126883