Gastric Cancer Image Classification: A Comparative Analysis and Feature Fusion Strategies
Abstract
1. Introduction
- We proposed a comparative analysis of various HC and deep features across four different ML classifiers to identify the most stable and best-performing feature–classifier pairs for classifying gastric cancer histopathological images and distinguishing between normal and abnormal cells;
- We explored and analyzed various feature fusion techniques to determine their effectiveness in enhancing classification accuracy in the task at hand;
- Since different magnifications highlight unique tissue features, we conducted a cross-magnification experiment to assess the impact of varying image resolutions on classification performance, providing insights into the efficacy of using multiple magnifications in pathology image analysis;
- We thoroughly evaluated the GasHisSDB dataset and compared our results with state-of-the-art techniques.
2. Related Work
3. Materials and Methods
3.1. Dataset
3.2. Feature Extraction Methods
3.2.1. Invariant Moments
- Chebyshev (discrete Tchebichef) polynomial of order p, defined for $x = 0, 1, \ldots, N-1$:
  $t_p(x) = p! \sum_{k=0}^{p} (-1)^{p-k} \binom{N-1-k}{p-k} \binom{p+k}{p} \binom{x}{k}$
- Legendre polynomial of order p, defined on $[-1, 1]$ via the Rodrigues formula:
  $P_p(x) = \frac{1}{2^p \, p!} \frac{d^p}{dx^p} \left( x^2 - 1 \right)^p$
- Zernike polynomial of order n with repetition m ($|m| \le n$, $n - |m|$ even), defined on the unit disk:
  $V_{nm}(\rho, \theta) = R_{nm}(\rho) \, e^{jm\theta}$, where $R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s (n-s)!}{s! \left( \frac{n+|m|}{2} - s \right)! \left( \frac{n-|m|}{2} - s \right)!} \rho^{n-2s}$
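As a small numerical illustration (ours, not from the paper; the function name is arbitrary), the following Python sketch evaluates the Zernike radial polynomial $R_{nm}(\rho)$ directly from the factorial formula above:

```python
# Minimal sketch: Zernike radial polynomial R_{nm}(rho) from its factorial form.
from math import factorial

def zernike_radial(n: int, m: int, rho: float) -> float:
    """R_{nm}(rho) for 0 <= rho <= 1; returns 0 when n - |m| is odd."""
    m = abs(m)
    if (n - m) % 2 != 0:
        return 0.0
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

# R_{4,2}(rho) = 4 rho^4 - 3 rho^2, so R_{4,2}(0.5) = -0.5.
print(zernike_radial(4, 2, 0.5))
```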
3.2.2. Texture Features
3.2.3. Color Features
3.2.4. Deep Features
3.3. Classification Methods
3.4. Performance Evaluation Measures
- True Negatives (TNs): instances correctly predicted as negative.
- False Positives (FPs): instances incorrectly predicted as positive.
- False Negatives (FNs): instances incorrectly predicted as negative.
- True Positives (TPs): instances correctly predicted as positive.
- Accuracy (A) is the ratio of correct predictions to the total number of predictions:
  $A = \frac{TP + TN}{TP + TN + FP + FN}$
- Precision (P) is the ratio of TPs to the sum of TPs and FPs, indicating the classifier’s efficiency in predicting positive instances:
  $P = \frac{TP}{TP + FP}$
- Recall (R), also known as sensitivity, is the ratio of TPs to the sum of TPs and FNs:
  $R = \frac{TP}{TP + FN}$
- Specificity (S) is the ratio of TNs to the sum of TNs and FPs:
  $S = \frac{TN}{TN + FP}$
- F1-score (F1) is the harmonic mean of P and R, considering both FPs and FNs:
  $F1 = \frac{2 \cdot P \cdot R}{P + R}$
- Matthews Correlation Coefficient (MCC) is a comprehensive measure that considers all elements of the confusion matrix (TP, TN, FP, FN). Ranging from $-1$ to $+1$, it provides a high score only when the classifier performs well in both the positive and negative classes:
  $MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$
- Balanced accuracy (BACC) is defined as the mean of specificity and sensitivity:
  $BACC = \frac{R + S}{2}$
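For quick reference, here is a small Python sketch (ours, not from the paper) that computes all of the above measures from the four confusion-matrix counts; it assumes no zero denominators:

```python
# Minimal sketch: the evaluation measures above, from TP, TN, FP, FN counts.
from math import sqrt

def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    a = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    p = tp / (tp + fp)                    # precision
    r = tp / (tp + fn)                    # recall / sensitivity
    s = tn / (tn + fp)                    # specificity
    f1 = 2 * p * r / (p + r)              # harmonic mean of P and R
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    bacc = (r + s) / 2                    # balanced accuracy
    return dict(A=a, P=p, R=r, S=s, F1=f1, MCC=mcc, BACC=bacc)

print(metrics(tp=90, tn=80, fp=20, fn=10))
```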
3.5. Experimental Setup
4. Experimental Results
4.1. HC Feature Performance
4.2. Deep Feature Performance
4.3. Feature Fusion Performance
4.4. Cross-Magnification Performance
- Results of testing on S-B: In the first experiment, the classifiers were evaluated on the S-B test set. Combining LBP and DenseNet-201 features yielded varied results across classifiers. RF outperformed the others, achieving an accuracy of 89.04%, an F1-score of 90.76%, and a balanced accuracy of 89.76%. The SVM also performed strongly, particularly with a precision of 97.42%, though it lagged behind RF in recall.
- Results of testing on S-C: The second experiment, with testing on the S-C sub-database, proved more challenging for the classifiers, as reflected in the generally lower performance values. With the LBP + DenseNet-201 combination, RF remained the most reliable classifier, with an accuracy of 78.89% and an F1-score of 79.73%. While achieving a high precision of 96.18%, the SVM struggled with recall and balanced accuracy, indicating a possible dependence on resolution-specific pixel information. A minimal sketch of this cross-magnification protocol is given below.
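For concreteness, here is a scikit-learn sketch of the cross-magnification protocol (assumed tooling; the feature matrices are random placeholders standing in for the fused LBP + DenseNet-201 descriptors of the S-A and S-B sub-databases):

```python
# Minimal sketch: fit on features from one sub-database, test on another.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, f1_score

rng = np.random.default_rng(0)
X_sa, y_sa = rng.random((300, 1930)), rng.integers(0, 2, 300)  # train: S-A features
X_sb, y_sb = rng.random((200, 1930)), rng.integers(0, 2, 200)  # test: S-B features

rf = RandomForestClassifier(random_state=0).fit(X_sa, y_sa)
pred = rf.predict(X_sb)
print("F1:", f1_score(y_sb, pred))
print("BACC:", balanced_accuracy_score(y_sb, pred))
```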
4.5. Comparison with the State of the Art
5. Discussion
5.1. On the HC vs. Deep Feature Comparison
5.2. On the Feature Fusion Performance
5.3. On the Cross-Magnification Performance
5.4. Limitations
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning |
---|---|
ML | Machine Learning |
DL | Deep Learning |
HC | Handcrafted |
CNN | Convolutional Neural Network |
GC | Gastric Cancer |
EGC | Early-Stage Gastric Cancer |
AGC | Advanced Gastric Cancer |
CV | Computer Vision |
WSI | Whole Slide Image |
LGFFN | Lightweight Gated Fully Fused Network |
GHI | Gated Hybrid Input |
CH | Chebyshev Moment |
LM | Second-Order Legendre Moment |
ZM | Zernike Moment |
HAR | Rotation-Invariant Haralick |
LBP | Local Binary Pattern |
Hist | Histogram |
AC | Autocorrelogram |
Haar | Haar-Like |
DT | Decision Tree |
kNN | k-Nearest Neighbor |
SVM | Support Vector Machine |
RF | Random Forest |
TN | True Negative |
FP | False Positive |
FN | False Negative |
TP | True Positive |
A | Accuracy |
P | Precision |
R | Recall |
S | Specificity |
F1 | F1-Score |
MCC | Matthews Correlation Coefficient |
BACC | Balanced Accuracy |
Cval | Cross-Validation |
Appendix A. Further Results
Descriptor | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|
AC | 62.78 | 69.39 | 68.97 | 53.26 | 69.18 | 22.20 | 61.12 |
Haar | 59.70 | 62.56 | 83.31 | 23.43 | 71.46 | 8.33 | 53.37 |
Hist | 68.78 | 74.49 | 73.71 | 61.22 | 74.10 | 34.84 | 67.46 |
HAR | 68.78 | 74.85 | 72.99 | 62.32 | 73.91 | 35.10 | 67.66 |
LBP | 71.22 | 76.23 | 76.26 | 63.47 | 76.25 | 39.74 | 69.87 |
CH_1 | 71.11 | 76.15 | 76.17 | 63.35 | 76.16 | 39.52 | 69.76 |
CH_2 | 71.05 | 76.12 | 76.07 | 63.35 | 76.09 | 39.41 | 69.71 |
LM | 71.32 | 76.16 | 76.64 | 63.16 | 76.40 | 39.87 | 69.90 |
ZM | 58.06 | 65.74 | 64.24 | 48.57 | 64.98 | 12.73 | 56.40 |
Appendix B
Descriptor | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|
AC | 57.34 | 70.04 | 51.66 | 66.06 | 59.46 | 17.42 | 58.86 |
Haar | 42.17 | 56.99 | 18.40 | 78.67 | 27.82 | −3.61 | 48.53 |
Hist | 64.64 | 71.97 | 68.15 | 59.24 | 70.01 | 27.07 | 63.70 |
HAR | 61.06 | 68.53 | 66.05 | 53.41 | 67.26 | 19.29 | 59.73 |
LBP | 69.51 | 75.32 | 73.86 | 62.82 | 74.58 | 36.50 | 68.34 |
CH_1 | 66.16 | 72.41 | 71.28 | 58.29 | 71.84 | 29.45 | 64.78 |
CH_2 | 65.46 | 72.09 | 70.14 | 58.29 | 71.10 | 28.24 | 64.21 |
LM | 66.32 | 72.57 | 71.38 | 58.55 | 71.97 | 29.81 | 64.97 |
ZM | 57.40 | 65.03 | 64.19 | 46.97 | 64.60 | 11.12 | 55.58 |
Descriptor | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|
AlexNet | 75.00 | 79.47 | 79.19 | 68.57 | 79.33 | 47.72 | 73.88 |
DarkNet-19 | 78.25 | 82.04 | 82.04 | 72.42 | 82.04 | 54.46 | 77.23 |
DarkNet-53 | 81.64 | 84.78 | 84.95 | 76.57 | 84.86 | 61.55 | 80.76 |
DenseNet-201 | 84.92 | 87.51 | 87.60 | 80.80 | 87.56 | 68.42 | 84.20 |
EfficientNet B0 | 78.79 | 82.62 | 82.29 | 73.41 | 82.46 | 55.64 | 77.85 |
Inception-v3 | 74.54 | 79.21 | 78.60 | 68.30 | 78.90 | 46.81 | 73.45 |
Inception-ResNet-v2 | 73.52 | 78.02 | 78.35 | 66.10 | 78.18 | 44.49 | 72.22 |
ResNet-18 | 76.73 | 81.20 | 80.13 | 71.50 | 80.66 | 51.46 | 75.82 |
ResNet-50 | 81.30 | 84.62 | 84.47 | 76.42 | 84.55 | 60.87 | 80.45 |
ResNet-101 | 80.13 | 83.67 | 83.48 | 74.97 | 83.58 | 58.42 | 79.23 |
VGG19 | 76.40 | 80.97 | 79.79 | 71.20 | 80.37 | 50.80 | 75.49 |
Xception | 79.38 | 83.30 | 82.49 | 74.59 | 82.89 | 56.94 | 78.54 |
Descriptor | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|
AlexNet | 80.85 | 83.80 | 84.77 | 74.82 | 84.28 | 59.78 | 79.80 |
DarkNet-19 | 83.99 | 86.47 | 87.20 | 79.05 | 86.84 | 66.40 | 83.13 |
DarkNet-53 | 88.25 | 88.89 | 92.11 | 82.32 | 90.48 | 75.25 | 87.22 |
DenseNet-201 | 88.21 | 89.04 | 91.84 | 82.63 | 90.42 | 75.16 | 87.23 |
EfficientNet B0 | 87.53 | 89.01 | 90.60 | 82.82 | 89.80 | 73.79 | 86.71 |
Inception-ResNet-v2 | 76.13 | 79.89 | 80.98 | 68.69 | 80.43 | 49.85 | 74.83 |
Inception-v3 | 80.88 | 83.65 | 85.04 | 74.48 | 84.34 | 59.80 | 79.76 |
ResNet-101 | 87.89 | 88.57 | 91.87 | 81.79 | 90.19 | 74.48 | 86.83 |
ResNet-18 | 84.21 | 85.89 | 88.47 | 77.68 | 87.16 | 66.73 | 83.07 |
ResNet-50 | 86.97 | 88.20 | 91.09 | 80.41 | 89.62 | 73.19 | 85.75 |
VGG19 | 83.35 | 85.94 | 86.94 | 77.95 | 86.44 | 65.88 | 82.45 |
Xception | 85.15 | 86.74 | 89.44 | 78.95 | 88.07 | 70.17 | 84.20 |
References
1. Ilic, M.; Ilic, I. Epidemiology of stomach cancer. World J. Gastroenterol. 2022, 28, 1187.
2. Hu, W.; Li, C.; Li, X.; Rahaman, M.M.; Ma, J.; Zhang, Y.; Chen, H.; Liu, W.; Sun, C.; Yao, Y.; et al. GasHisSDB: A new gastric histopathology image dataset for computer aided diagnosis of gastric cancer. Comput. Biol. Med. 2022, 142, 105207.
3. Hirasawa, T.; Aoyama, K.; Tanimoto, T.; Ishihara, S.; Shichijo, S.; Ozawa, T.; Ohnishi, T.; Fujishiro, M.; Matsuo, K.; Fujisaki, J.; et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer 2018, 21, 653–660.
4. Zhao, Y.; Hu, B.; Wang, Y.; Yin, X.; Jiang, Y.; Zhu, X. Identification of gastric cancer with convolutional neural networks: A systematic review. Multim. Tools Appl. 2022, 81, 11717–11736.
5. Xie, K.; Peng, J. Deep learning-based gastric cancer diagnosis and clinical management. J. Radiat. Res. Appl. Sci. 2023, 16, 100602.
6. Yong, M.P.; Hum, Y.C.; Lai, K.W.; Lee, Y.L.; Goh, C.H.; Yap, W.S.; Tee, Y.K. Histopathological gastric cancer detection on GasHisSDB dataset using deep ensemble learning. Diagnostics 2023, 13, 1793.
7. Yoon, H.J.; Kim, S.; Kim, J.H.; Keum, J.S.; Oh, S.I.; Jo, J.; Chun, J.; Youn, Y.H.; Park, H.; Kwon, I.G.; et al. A lesion-based convolutional neural network improves endoscopic detection and depth prediction of early gastric cancer. J. Clin. Med. 2019, 8, 1310.
8. Hu, W.; Chen, H.; Liu, W.; Li, X.; Sun, H.; Huang, X.; Grzegorzek, M.; Li, C. A comparative study of gastric histopathology sub-size image classification: From linear regression to visual transformer. Front. Med. 2022, 9, 1072109.
9. Zhang, K.; Wang, H.; Cheng, Y.; Liu, H.; Gong, Q.; Zeng, Q.; Zhang, T.; Wei, G.; Wei, Z.; Chen, D. Early gastric cancer detection and lesion segmentation based on deep learning and gastroscopic images. Sci. Rep. 2024, 14, 7847.
10. Marini, N.; Otálora, S.; Podareanu, D.; van Rijthoven, M.; van der Laak, J.; Ciompi, F.; Müller, H.; Atzori, M. Multi_Scale_Tools: A Python Library to Exploit Multi-Scale Whole Slide Images. Front. Comput. Sci. 2021, 3, 684521.
11. Ashtaiwi, A. Optimal Histopathological Magnification Factors for Deep Learning-Based Breast Cancer Prediction. Appl. Syst. Innov. 2022, 5, 87.
12. Cao, R.; Tang, L.; Fang, M.; Zhong, L.; Wang, S.; Gong, L.; Li, J.; Dong, D.; Tian, J. Artificial intelligence in gastric cancer: Applications and challenges. Gastroenterol. Rep. 2022, 10, 64.
13. Hu, W.; Li, C.; Rahaman, M.M.; Chen, H.; Liu, W.; Yao, Y.; Sun, H.; Grzegorzek, M.; Li, X. EBHI: A new Enteroscope Biopsy Histopathological H&E Image Dataset for image classification evaluation. Phys. Medica 2023, 107, 102534.
14. Li, S.; Liu, W. LGFFN-GHI: A Local-Global Feature Fuse Network for Gastric Histopathological Image Classification. J. Comput. Commun. 2022, 10, 91–106.
15. Putzu, L.; Loddo, A.; Ruberto, C.D. Invariant Moments, Textural and Deep Features for Diagnostic MR and CT Image Retrieval. In Proceedings of the 19th International Conference on Computer Analysis of Images and Patterns, CAIP 2021, Virtual, 28–30 September 2021; Proceedings, Part I; Lecture Notes in Computer Science; Tsapatsoulis, N., Panayides, A., Theocharides, T., Lanitis, A., Pattichis, C.S., Vento, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2021; Volume 13052, pp. 287–297.
16. Ruberto, C.D.; Loddo, A.; Putzu, L. On The Potential of Image Moments for Medical Diagnosis. J. Imaging 2023, 9, 70.
17. Mukundan, R.; Ong, S.H.; Lee, P.A. Image analysis by Tchebichef moments. IEEE Trans. Image Process. 2001, 10, 1357–1364.
18. Ruberto, C.D.; Putzu, L.; Rodriguez, G. Fast and accurate computation of orthogonal moments for texture analysis. Pattern Recognit. 2018, 83, 498–510.
19. Teh, C.; Chin, R.T. On Image Analysis by the Methods of Moments. IEEE Trans. Pattern Anal. Mach. Intell. 1988, 10, 496–513.
20. Teague, M.R. Image analysis via the general theory of moments. J. Opt. Soc. Am. 1980, 70, 920–930.
21. Wee, C.; Raveendran, P. On the computational aspects of Zernike moments. Image Vis. Comput. 2007, 25, 967–980.
22. Mirjalili, F.; Hardeberg, J.Y. On the Quantification of Visual Texture Complexity. J. Imaging 2022, 8, 248.
23. Putzu, L.; Ruberto, C.D. Rotation Invariant Co-occurrence Matrix Features. In Proceedings of the 19th International Conference on Image Analysis and Processing, ICIAP 2017, Catania, Italy, 11–15 September 2017; Proceedings, Part I; Lecture Notes in Computer Science; Battiato, S., Gallo, G., Schettini, R., Stanco, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10484, pp. 391–401.
24. He, D.C.; Wang, L. Texture unit, texture spectrum, and texture analysis. IEEE Trans. Geosci. Remote Sens. 1990, 28, 509–512.
25. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
26. Van de Weijer, J.; Schmid, C. Coloring Local Feature Extraction. In Proceedings of the 9th European Conference on Computer Vision, ECCV 2006, Graz, Austria, 7–13 May 2006; Proceedings, Part II; Lecture Notes in Computer Science; Leonardis, A., Bischof, H., Pinz, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3952, pp. 334–348.
27. Huang, J.; Kumar, R.; Mitra, M.; Zhu, W.; Zabih, R. Image Indexing Using Color Correlograms. In Proceedings of the 1997 IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’97), San Juan, Puerto Rico, 17–19 June 1997; IEEE Computer Society: Piscataway, NJ, USA, 1997; pp. 762–768.
28. Viola, P.A.; Jones, M.J. Rapid Object Detection using a Boosted Cascade of Simple Features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; IEEE Computer Society: Piscataway, NJ, USA, 2001; pp. 511–518.
29. Bodapati, J.D.; Veeranjaneyulu, N. Feature Extraction and Classification Using Deep Convolutional Neural Networks. J. Cyber Secur. Mobil. 2019, 8, 261–276.
30. Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), Miami, FL, USA, 20–25 June 2009; IEEE Computer Society: Piscataway, NJ, USA, 2009; pp. 248–255.
31. Putzu, L.; Piras, L.; Giacinto, G. Convolutional neural networks for relevance feedback in content based image retrieval. Multim. Tools Appl. 2020, 79, 26995–27021.
32. Wang, H.; Wu, X.; Huang, Z.; Xing, E.P. High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, 13–19 June 2020; IEEE Computer Vision Foundation: Piscataway, NJ, USA, 2020; pp. 8681–8691.
33. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
34. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Piscataway, NJ, USA, 2017; pp. 6517–6525.
35. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
36. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Piscataway, NJ, USA, 2017; pp. 2261–2269.
37. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, CA, USA, 9–15 June 2019; Chaudhuri, K., Salakhutdinov, R., Eds.; PMLR: New York, NY, USA, 2019; Volume 97, pp. 6105–6114.
38. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 27–30 June 2016; IEEE Computer Society: Piscataway, NJ, USA, 2016; pp. 2818–2826.
39. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Singh, S., Markovitch, S., Eds.; AAAI Press: Washington, DC, USA, 2017; pp. 4278–4284.
40. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 27–30 June 2016; IEEE Computer Society: Piscataway, NJ, USA, 2016; pp. 770–778.
41. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; Conference Track Proceedings; Bengio, Y., LeCun, Y., Eds.; ACM: New York, NY, USA, 2015.
42. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Piscataway, NJ, USA, 2017; pp. 1800–1807.
43. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6–11 July 2015; Bach, F.R., Blei, D.M., Eds.; JMLR Workshop and Conference Proceedings; JMLR: Cambridge, MA, USA, 2015; Volume 37, pp. 448–456.
44. Quinlan, J.R. Learning efficient classification procedures and their application to chess end games. In Machine Learning; Springer: Berlin/Heidelberg, Germany, 1983; pp. 463–482.
45. Cover, T.M.; Hart, P.E. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27.
46. Lin, Y.; Lv, F.; Zhu, S.; Yang, M.; Cour, T.; Yu, K.; Cao, L.; Huang, T.S. Large-scale image classification: Fast feature extraction and SVM training. In Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; IEEE Computer Society: Piscataway, NJ, USA, 2011; pp. 1689–1696.
47. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
48. Springenberg, M.; Frommholz, A.; Wenzel, M.; Weicken, E.; Ma, J.; Strodthoff, N. From modern CNNs to vision transformers: Assessing the performance, robustness, and classification strategies of deep learning models in histopathology. Med. Image Anal. 2023, 87, 102809.
49. Fu, X.; Liu, S.; Li, C.; Sun, J. MCLNet: An multidimensional convolutional lightweight network for gastric histopathology image classification. Biomed. Signal Process. Control 2023, 80, 104319.
50. Song, Y.; Wang, T.; Cai, P.; Mondal, S.K.; Sahoo, J.P. A comprehensive survey of few-shot learning: Evolution, applications, challenges, and opportunities. ACM Comput. Surv. 2023, 55, 1–40.
Sub-Database | Size | Abnormal | Normal |
---|---|---|---|
S-A | 160 × 160 pixels | 13,124 | 20,160 |
S-B | 120 × 120 pixels | 24,801 | 40,460 |
S-C | 80 × 80 pixels | 59,151 | 87,500 |
Total | | 97,076 | 148,120 |
CNN | Parameters (M) | Input Shape | Feature Layer | # of Features |
---|---|---|---|---|
AlexNet [33] | 60 | 227 × 227 × 3 | Pen. FC | 4096 |
DarkNet-19 [34] | 20.8 | 256 × 256 × 3 | Conv19 | 1000 |
DarkNet-53 [35] | 20.8 | 256 × 256 × 3 | Conv53 | 1000 |
DenseNet-201 [36] | 25.6 | 224 × 224 × 3 | Avg. Pool | 1920 |
EfficientNetB0 [37] | 5.3 | 224 × 224 × 3 | Avg. Pool | 1280 |
Inception-v3 [38] | 21.8 | 299 × 299 × 3 | Last FC | 1000 |
Inception-ResNet-v2 [39] | 55 | 299 × 299 × 3 | Avg. Pool | 1536 |
ResNet-18 [40] | 11.7 | 224 × 224 × 3 | Pool5 | 512 |
ResNet-50 [40] | 26 | 224 × 224 × 3 | Avg. Pool | 1024 |
ResNet-101 [40] | 44.6 | 224 × 224 × 3 | Pool5 | 1024 |
VGG19 [41] | 144 | 224 × 224 × 3 | Pen. FC | 4096 |
XceptionNet [42] | 22.9 | 299 × 299 × 3 | Avg. Pool | 2048 |
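The paper does not specify an implementation framework; as an assumed PyTorch/torchvision workflow, the following sketch extracts the 1920-dimensional average-pooling features of an ImageNet-pretrained DenseNet-201, matching the feature layer and dimensionality listed in the table above:

```python
# Minimal sketch: deep-feature extraction from a pretrained DenseNet-201.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),  # DenseNet-201 input shape
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(path: str) -> torch.Tensor:
    """Return the global-average-pooled convolutional features, shape (1, 1920)."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    fmap = torch.nn.functional.relu(model.features(x))   # conv feature map
    pooled = torch.nn.functional.adaptive_avg_pool2d(fmap, 1)
    return pooled.flatten(1)
```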
Descriptor | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|
AC | 67.30 | 69.18 | 82.99 | 43.20 | 75.45 | 28.71 | 63.09 |
Haar | 62.18 | 62.38 | 94.59 | 12.38 | 75.18 | 12.45 | 53.49 |
Hist | 44.67 | 96.29 | 9.00 | 99.47 | 16.47 | 17.91 | 54.23 |
HAR | 62.07 | 73.64 | 58.21 | 68.00 | 65.02 | 25.64 | 63.10 |
LBP | 62.64 | 70.00 | 67.06 | 55.85 | 68.50 | 22.69 | 61.46 |
CH_1 | 75.92 | 77.37 | 85.14 | 61.75 | 81.07 | 48.61 | 73.45 |
CH_2 | 72.50 | 71.57 | 90.55 | 44.76 | 79.95 | 40.78 | 67.66 |
LM | 73.32 | 72.65 | 89.73 | 48.11 | 80.29 | 42.61 | 68.92 |
ZM | 63.84 | 69.78 | 71.08 | 52.72 | 70.43 | 23.93 | 61.90 |
Descriptor | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|
AC | 71.76 | 72.89 | 84.97 | 51.47 | 78.47 | 39.09 | 68.22 |
Haar | 62.48 | 62.65 | 94.22 | 13.71 | 75.26 | 13.61 | 53.97 |
Hist | 73.26 | 77.24 | 79.19 | 64.15 | 78.20 | 43.66 | 71.67 |
HAR | 76.78 | 78.69 | 84.55 | 64.84 | 81.52 | 50.63 | 74.69 |
LBP | 79.57 | 80.74 | 87.03 | 68.11 | 83.77 | 56.61 | 77.57 |
CH_1 | 78.11 | 79.99 | 85.17 | 67.28 | 82.50 | 53.56 | 76.22 |
CH_2 | 78.07 | 79.84 | 85.34 | 66.90 | 82.50 | 53.43 | 76.12 |
LM | 78.25 | 80.20 | 85.09 | 67.73 | 82.58 | 53.87 | 76.41 |
ZM | 65.19 | 68.16 | 79.84 | 42.70 | 73.54 | 24.26 | 61.27 |
Descriptor | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|
AlexNet | 72.97 | 77.49 | 76.56 | 65.81 | 77.02 | 42.80 | 71.18 |
DarkNet-19 | 77.86 | 82.20 | 80.82 | 71.95 | 81.50 | 53.28 | 76.39 |
DarkNet-53 | 82.83 | 85.84 | 84.99 | 76.88 | 85.41 | 63.65 | 80.94 |
DenseNet-201 | 86.02 | 89.01 | 86.61 | 81.84 | 87.79 | 69.91 | 84.23 |
EfficientNet B0 | 82.84 | 85.76 | 85.35 | 76.96 | 85.55 | 63.48 | 81.15 |
Inception-v3 | 73.60 | 78.88 | 74.73 | 67.51 | 76.75 | 45.86 | 71.12 |
Inception-ResNet-v2 | 69.87 | 75.03 | 71.65 | 62.78 | 73.29 | 35.68 | 67.21 |
ResNet-18 | 77.78 | 82.17 | 80.82 | 71.86 | 81.49 | 53.13 | 76.34 |
ResNet-50 | 82.77 | 86.11 | 84.34 | 77.87 | 85.22 | 63.43 | 81.11 |
ResNet-101 | 82.52 | 85.76 | 84.71 | 76.81 | 85.23 | 63.00 | 80.76 |
VGG19 | 79.60 | 83.94 | 81.78 | 73.70 | 82.84 | 57.67 | 77.74 |
XceptionNet | 82.24 | 85.75 | 84.22 | 76.22 | 84.97 | 62.53 | 80.22 |
Descriptor | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|
AlexNet | 84.02 | 85.55 | 88.57 | 77.03 | 87.03 | 66.29 | 82.80 |
DarkNet-19 | 88.30 | 88.68 | 92.49 | 81.87 | 90.54 | 75.33 | 87.18 |
DarkNet-53 | 90.30 | 90.72 | 93.55 | 85.30 | 92.11 | 79.58 | 89.42 |
DenseNet-201 | 91.93 | 92.61 | 94.20 | 88.46 | 93.40 | 83.05 | 91.33 |
EfficientNet B0 | 89.89 | 89.96 | 93.77 | 83.92 | 91.83 | 78.71 | 88.85 |
Inception-v3 | 85.52 | 85.64 | 91.42 | 76.46 | 88.44 | 69.39 | 83.94 |
Inception-ResNet-v2 | 83.25 | 84.10 | 89.21 | 74.10 | 86.58 | 64.55 | 81.65 |
ResNet-18 | 86.99 | 87.32 | 91.87 | 79.50 | 89.53 | 72.54 | 85.68 |
ResNet-50 | 89.92 | 90.12 | 93.63 | 84.23 | 91.84 | 78.77 | 88.93 |
ResNet-101 | 89.59 | 89.76 | 93.48 | 83.62 | 91.58 | 78.07 | 88.55 |
VGG19 | 85.98 | 86.61 | 90.92 | 78.40 | 88.71 | 70.41 | 84.66 |
Xception | 88.58 | 89.08 | 92.49 | 82.59 | 90.75 | 75.94 | 87.54 |
Strategy | Classifier | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|---|
LBP + DenseNet-201 | DT | 88.21 | 89.04 | 91.84 | 82.63 | 90.42 | 75.16 | 87.23 |
LBP + DenseNet-201 | SVM | 94.41 | 95.14 | 95.66 | 92.50 | 95.40 | 88.29 | 94.08 |
LBP + DenseNet-201 | RF | 92.16 | 92.83 | 94.35 | 88.80 | 93.58 | 83.53 | 91.57 |
LBP + EfficientNetB0 | DT | 87.55 | 89.01 | 90.63 | 82.82 | 89.81 | 73.82 | 86.72 |
LBP + EfficientNetB0 | SVM | 94.05 | 94.78 | 95.44 | 91.92 | 95.11 | 87.53 | 93.68 |
LBP + EfficientNetB0 | RF | 89.65 | 90.19 | 93.03 | 84.46 | 91.59 | 78.21 | 88.74 |
DenseNet-201 + EfficientNetB0 | DT | 90.30 | 91.05 | 93.13 | 85.94 | 92.08 | 79.59 | 89.54 |
DenseNet-201 + EfficientNetB0 | SVM | 94.89 | 95.76 | 95.81 | 93.49 | 95.78 | 89.31 | 94.65 |
DenseNet-201 + EfficientNetB0 | RF | 91.83 | 92.31 | 94.37 | 87.92 | 93.33 | 82.82 | 91.15 |
LBP + DenseNet-201 + EfficientNetB0 | DT | 90.31 | 91.07 | 93.13 | 85.98 | 92.09 | 79.63 | 89.56 |
LBP + DenseNet-201 + EfficientNetB0 | SVM | 95.03 | 95.86 | 95.93 | 93.64 | 95.90 | 89.59 | 94.79 |
LBP + DenseNet-201 + EfficientNetB0 | RF | 92.26 | 92.67 | 94.72 | 88.50 | 93.68 | 83.74 | 91.61 |
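As an illustration of the fusion strategy evaluated above, the following scikit-learn sketch (assumed tooling; the arrays are random placeholders for real LBP and DenseNet-201 descriptors) concatenates HC and deep features and scores SVM and RF with 5-fold cross-validation:

```python
# Minimal sketch: early fusion by feature concatenation, then 5-fold CV scoring.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
lbp = rng.random((500, 10))       # placeholder LBP descriptors
deep = rng.random((500, 1920))    # placeholder DenseNet-201 features
y = rng.integers(0, 2, 500)       # normal (0) vs. abnormal (1)

fused = np.hstack([lbp, deep])    # early fusion by concatenation

for name, clf in [("SVM", SVC()), ("RF", RandomForestClassifier())]:
    pipe = make_pipeline(StandardScaler(), clf)
    acc = cross_val_score(pipe, fused, y, cv=5, scoring="accuracy")
    print(f"{name}: {acc.mean():.3f} ± {acc.std():.3f}")
```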
Strategy | Classifier | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|---|
LBP + DenseNet-201 | DT | 86.69 | 90.56 | 87.68 | 85.08 | 89.09 | 72.10 | 86.38 |
LBP + DenseNet-201 | SVM | 86.41 | 97.42 | 80.20 | 96.53 | 87.98 | 74.51 | 88.37 |
LBP + DenseNet-201 | RF | 89.04 | 95.12 | 86.78 | 92.74 | 90.76 | 77.87 | 89.76 |
LBP + EfficientNetB0 | DT | 85.17 | 88.66 | 87.25 | 81.79 | 87.95 | 68.71 | 84.52 |
LBP + EfficientNetB0 | SVM | 85.02 | 96.81 | 78.42 | 95.79 | 86.65 | 72.04 | 87.10 |
LBP + EfficientNetB0 | RF | 87.38 | 90.20 | 89.36 | 84.15 | 89.78 | 73.30 | 86.76 |
DenseNet-201 + EfficientNetB0 | DT | 88.43 | 91.65 | 89.50 | 86.69 | 90.56 | 75.66 | 88.09 |
DenseNet-201 + EfficientNetB0 | SVM | 85.88 | 98.40 | 78.50 | 97.92 | 87.33 | 74.19 | 88.21 |
DenseNet-201 + EfficientNetB0 | RF | 89.55 | 94.36 | 88.43 | 91.37 | 91.30 | 78.51 | 89.90 |
LBP + DenseNet-201 + EfficientNetB0 | DT | 88.42 | 91.64 | 89.50 | 86.67 | 90.55 | 75.65 | 88.08 |
LBP + DenseNet-201 + EfficientNetB0 | SVM | 85.82 | 98.45 | 78.36 | 97.98 | 87.26 | 74.12 | 88.17 |
LBP + DenseNet-201 + EfficientNetB0 | RF | 89.56 | 94.79 | 87.99 | 92.12 | 91.26 | 78.67 | 90.05 |
Strategy | Classifier | A | P | R | S | F1 | MCC | BACC |
---|---|---|---|---|---|---|---|---|
LBP + DenseNet-201 | DT | 77.05 | 86.23 | 73.23 | 82.70 | 79.20 | 54.89 | 77.97 |
LBP + DenseNet-201 | SVM | 68.92 | 96.18 | 49.89 | 97.07 | 65.70 | 49.83 | 73.48 |
LBP + DenseNet-201 | RF | 78.89 | 93.29 | 69.62 | 92.60 | 79.73 | 61.41 | 81.11 |
LBP + EfficientNetB0 | DT | 63.58 | 89.19 | 44.34 | 92.05 | 59.23 | 39.09 | 68.20 |
LBP + EfficientNetB0 | SVM | 54.36 | 96.04 | 24.53 | 98.50 | 39.07 | 31.44 | 61.51 |
LBP + EfficientNetB0 | RF | 71.38 | 92.27 | 56.79 | 92.96 | 70.31 | 50.63 | 74.88 |
DenseNet-201 + EfficientNetB0 | DT | 74.70 | 88.69 | 66.02 | 87.55 | 75.69 | 52.89 | 76.78 |
DenseNet-201 + EfficientNetB0 | SVM | 59.96 | 96.81 | 34.02 | 98.34 | 50.34 | 39.00 | 66.18 |
DenseNet-201 + EfficientNetB0 | RF | 79.73 | 92.84 | 71.54 | 91.84 | 80.81 | 62.39 | 81.69 |
LBP + DenseNet-201 + EfficientNetB0 | DT | 74.70 | 88.69 | 66.02 | 87.55 | 75.69 | 52.89 | 76.78 |
LBP + DenseNet-201 + EfficientNetB0 | SVM | 60.31 | 96.72 | 34.66 | 98.26 | 51.03 | 39.39 | 66.46 |
LBP + DenseNet-201 + EfficientNetB0 | RF | 78.44 | 93.88 | 68.33 | 93.41 | 79.09 | 61.10 | 80.87 |
Work | Split (%) | Model Details | A (%) on S-C | A (%) on S-B | A (%) on S-A |
---|---|---|---|---|---|
[2] | 40/40/20 | VGG16 | 96.12 | 96.47 | 95.90 |
[2] | 40/40/20 | ResNet50 | 96.09 | 95.94 | 96.09 |
[48] | 40/20/40 | InceptionV3 trained from scratch | - | - | 98.83 |
[48] | 40/20/40 | InceptionV3 + ResNet50 (feature concatenation) | - | - | 98.80 |
[14] | 60/20/20 | LGFFN | - | - | 96.81 |
[49] | 80/-/20 | MCLNet based on ShuffleNetV2 | 96.28 | 97.95 | 97.85 |
[6] | 40/20/40 | Ensemble | 97.72 | 98.68 | 99.20 |
Ours | 5-fold CVal | SVM with feature fusion | 60.31 * | 85.82 * | 95.03 |
Ours | 5-fold CVal | RF with feature fusion | 78.44 * | 89.56 * | 92.26 |

* Cross-magnification result: the model was trained on S-A and tested on the indicated sub-database (see Section 4.4).