Unsupervised Learning Based on Multiple Descriptors for WSIs Diagnosis
Abstract
1. Introduction
- (1) We propose an unsupervised deep learning model that uses stacked autoencoders to learn multiple low-dimensional latent spaces from an input image and two of its descriptors, the histogram of oriented gradients (HOG) [7] and local binary patterns (LBPs) [8], and fuses these heterogeneous features. The two additional descriptor inputs help the model learn and classify the cellular information of multiple variants robustly when annotated data are scarce or only partially available. Although only three image representations are used as inputs here, the raw image and its HOG and LBP descriptors (see the extraction sketch after this list), the design accommodates as many as future research requires.
- (2) Our model outperforms existing state-of-the-art models on two public benchmark WSI cancer datasets, yielding the best binary-class and multi-class classification scores across four metrics, including the highest overall multi-class accuracy.
- (3) We also discuss the model's performance from several perspectives: the effect of multiple input descriptors and visualization experiments such as confusion matrices and the t-SNE technique, which confirm that our model classifies WSIs in a way that mirrors how pathologists diagnose. The model's ROC curves and AUC values further indicate that it can be used clinically for cancer diagnosis.
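To make the descriptor inputs concrete, the following is a minimal sketch, using scikit-image, of how the three representations fed to the model can be computed for a 40 × 40 patch. The HOG and LBP parameters here (orientations, cell size, neighborhood, binning) are our own illustrative assumptions, chosen only so that the output sizes match the 4800-, 64-, and 255-dimensional inputs listed in Table 1; the paper's exact settings may differ.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog, local_binary_pattern

def descriptor_set(patch_rgb):
    """patch_rgb: float RGB patch of shape (40, 40, 3) with values in [0, 1]."""
    gray = rgb2gray(patch_rgb)

    # HOG: 4 orientations over 10x10-pixel cells -> 4 * 4 * 4 = 64 features,
    # matching the 64-dimensional Feature-HOG input (configuration assumed).
    hog_vec = hog(gray, orientations=4, pixels_per_cell=(10, 10),
                  cells_per_block=(1, 1), feature_vector=True)

    # LBP: 8-neighbour codes; a 255-bin histogram matches the 255-dimensional
    # Feature-LBP input (binning assumed).
    codes = local_binary_pattern(gray, P=8, R=1, method="default")
    lbp_vec, _ = np.histogram(codes, bins=255, range=(0, 255), density=True)

    # Raw image flattened: 40 * 40 * 3 = 4800 features (Feature-RAW).
    raw_vec = patch_rgb.reshape(-1)
    return raw_vec, hog_vec, lbp_vec
```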
2. Related Work
3. Proposed Model and Architecture
3.1. Pre-Processing WSIs
3.1.1. Segment and Annotate Tissue
3.1.2. WSI Patching
3.2. Dataset Preparation
3.2.1. Generating Representations
3.2.2. Training Dataset
3.2.3. Testing Dataset
3.3. Training and Classification
3.3.1. Feature Learning
3.3.2. Classification
3.4. Model Architecture
- (1) Normalize (N): This layer normalizes the color distribution of an over- or under-stained raw image toward a well-stained target image using a simple H&E color-normalization technique [41] (a sketch of this color transfer follows the list).
- (2) Input (I): This layer accepts the normalized image, from which the two image descriptors, HOG and LBPs, are extracted to produce the corresponding feature maps H and L, respectively.
- (3) Feature Representation Set (FRS): This operation flattens each input image into a feature vector before feeding it to the network layers, which significantly reduces the layer operations and prepares the model input for learning.
- (4) Autoencoder Layers (AE-x): An autoencoder is a specific type of feedforward neural network, trained so that its output mirrors its input, and is used here mainly for dimensionality reduction. Each AE consists of three components: an encoder, hidden layers (the latent space), and a decoder. The encoder compresses the input into a lower-dimensional latent representation, and the decoder reconstructs the input from that representation. Because tissue appearance varies across the target images, our architecture provides hidden layers with a sufficient number of operations to represent each descriptor's features, as shown in Table 1 (a Keras sketch of the full architecture follows the list).
- (5) Fused Feature (FF): This operation concatenates the hidden representation learned by each AE in the previous layer, which significantly reduces the data handling and prepares the model for the final classification layers.
- (6) Output (O): The number of output neurons, one per class and normalized using the softmax function, depends on the classification task. In the present study, we conducted multi-class classification (eight or nine classes).
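The N layer's stain normalization can be sketched with the classic Reinhard color-transfer idea cited as [41]: match each color channel's mean and standard deviation to those of a well-stained target image. Reinhard et al. work in the perceptual lαβ space; for simplicity this sketch uses CIELAB, which many H&E-normalization implementations substitute. It illustrates the general technique, not the authors' exact code.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def reinhard_normalize(source_rgb, target_rgb):
    """Map the source's per-channel color statistics onto the target's.
    Both inputs are float RGB images with values in [0, 1]."""
    src, tgt = rgb2lab(source_rgb), rgb2lab(target_rgb)
    out = np.empty_like(src)
    for c in range(3):  # L, a, b channels
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        # Shift and scale so the channel statistics match the target slide.
        out[..., c] = (src[..., c] - s_mu) / s_sd * t_sd + t_mu
    return np.clip(lab2rgb(out), 0.0, 1.0)
```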
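The end-to-end structure in Table 1 can likewise be sketched in Keras: one autoencoder per representation (AE-R, AE-H, AE-L) is trained to reconstruct its input, and the three encoders' latent vectors are then concatenated (FF) and passed to the dense classification head (O). The per-AE latent sizes below (4096, 512, 510) are assumptions chosen only so that the concatenation equals the 5118-dimensional FF vector in Table 1; the paper leaves the split parameterized by H and NL.

```python
from tensorflow.keras import layers, Model

def make_autoencoder(input_dim, latent_dim):
    x_in = layers.Input(shape=(input_dim,))
    z = layers.Dense(latent_dim, activation="relu")(x_in)       # encoder
    x_out = layers.Dense(input_dim, activation="sigmoid")(z)    # decoder
    return Model(x_in, x_out), Model(x_in, z)  # (reconstruction AE, encoder)

# One AE per feature representation; input dimensions follow Table 1.
ae_r, enc_r = make_autoencoder(4800, 4096)   # raw image
ae_h, enc_h = make_autoencoder(64, 512)      # HOG
ae_l, enc_l = make_autoencoder(255, 510)     # LBP

# FF + O: concatenate latent vectors (4096 + 512 + 510 = 5118) and classify.
raw_in = layers.Input(shape=(4800,))
hog_in = layers.Input(shape=(64,))
lbp_in = layers.Input(shape=(255,))
ff = layers.Concatenate()([enc_r(raw_in), enc_h(hog_in), enc_l(lbp_in)])
x = layers.Dense(1024)(ff)            # 5118 * 1024 + 1024 = 5,241,856 params
x = layers.Activation("relu")(layers.BatchNormalization()(x))
x = layers.Dense(512)(x)
x = layers.Activation("relu")(layers.BatchNormalization()(x))
out = layers.Dense(4, activation="softmax")(x)  # 2/4/5 classes per task
classifier = Model([raw_in, hog_in, lbp_in], out)
classifier.compile(optimizer="adam", loss="categorical_crossentropy")
```

In practice, the three AEs would first be fit unsupervised on a reconstruction loss (Section 3.3.1) before their encoders are reused for the supervised classification stage (Section 3.3.2).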
4. Datasets
4.1. ICIAR2018
4.2. Dartmouth Lung Cancer
5. Experimental Setup
5.1. Aspects of Performance Evaluation
5.2. Best Hyperparameters
5.3. Performance Evaluation Metrics
6. Performance Results
6.1. Comparison with Deep Learning Models
6.2. Comparison with Different Deep Encoders
7. Discussion
7.1. Effect of Multiple Descriptors
7.2. Pathologist’s Analysis of the Results
7.3. ROC Curves’ Visualizations
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Shamshirband, S.; Fathi, M.; Dehzangi, A.; Chronopoulos, A.T.; Alinejad-Rokny, H. A review on deep learning approaches in healthcare systems: Taxonomies, challenges, and open issues. J. Biomed. Inform. 2021, 113, 103627.
2. Zhang, Y.; Wei, Y.; Wu, Q.; Zhao, P.; Niu, S.; Huang, J.; Tan, M. Collaborative unsupervised domain adaptation for medical image diagnosis. IEEE Trans. Image Process. 2020, 29, 7834–7844.
3. Lee, J.; Warner, E.; Shaikhouni, S.; Bitzer, M.; Kretzler, M.; Gipson, D.; Pennathur, S.; Bellovich, K.; Bhat, Z.; Gadegbeku, C.; et al. Unsupervised machine learning for identifying important visual features through bag-of-words using histopathology data from chronic kidney disease. Sci. Rep. 2022, 12, 1–13.
4. Yan, J.; Chen, H.; Li, X.; Yao, J. Deep contrastive learning based tissue clustering for annotation-free histopathology image analysis. Comput. Med. Imaging Graph. 2022, 97, 102053.
5. Chikontwe, P.; Kim, M.; Nam, S.J.; Go, H.; Park, S.H. Multiple instance learning with center embeddings for histopathology classification. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 519–528.
6. Wang, X.; Han, T.X.; Yan, S. An HOG-LBP human detector with partial occlusion handling. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 27 September–4 October 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 32–39.
7. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 886–893.
8. Ahonen, T.; Hadid, A.; Pietikainen, M. Face description with local binary patterns: Application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 2037–2041.
9. Yu, K.H.; Zhang, C.; Berry, G.J.; Altman, R.B.; Ré, C.; Rubin, D.L.; Snyder, M. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat. Commun. 2016, 7, 12474.
10. Balazsi, M.; Blanco, P.; Zoroquiain, P.; Levine, M.D.; Burnier, M.N., Jr. Invasive ductal breast carcinoma detector that is robust to image magnification in whole digital slides. J. Med. Imaging 2016, 3, 027501.
11. Barker, J.; Hoogi, A.; Depeursinge, A.; Rubin, D.L. Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles. Med. Image Anal. 2016, 30, 60–71.
12. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. Breast cancer histopathological image classification using convolutional neural networks. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 2560–2567.
13. Zhu, X.; Yao, J.; Zhu, F.; Huang, J. WSISA: Making survival prediction from whole slide histopathological images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7234–7242.
14. Whitney, J.; Corredor, G.; Janowczyk, A.; Ganesan, S.; Doyle, S.; Tomaszewski, J.; Feldman, M.; Gilmore, H.; Madabhushi, A. Quantitative nuclear histomorphometry predicts oncotype DX risk categories for early stage ER+ breast cancer. BMC Cancer 2018, 18, 610.
15. Bahlmann, C.; Patel, A.; Johnson, J.; Ni, J.; Chekkoury, A.; Khurd, P.; Kamen, A.; Grady, L.; Krupinski, E.; Graham, A.; et al. Automated detection of diagnostically relevant regions in H&E stained digital pathology slides. In Proceedings of the Medical Imaging 2012: Computer-Aided Diagnosis, San Diego, CA, USA, 4–9 February 2012; International Society for Optics and Photonics: Bellingham, WA, USA, 2012; Volume 8315, p. 831504.
16. Bejnordi, B.E.; Litjens, G.; Hermsen, M.; Karssemeijer, N.; van der Laak, J.A. A multi-scale superpixel classification approach to the detection of regions of interest in whole slide histopathology images. In Proceedings of the Medical Imaging 2015: Digital Pathology, Orlando, FL, USA, 25–26 February 2015; International Society for Optics and Photonics: Bellingham, WA, USA, 2015; Volume 9420, p. 94200H.
17. Litjens, G.; Sánchez, C.I.; Timofeeva, N.; Hermsen, M.; Nagtegaal, I.; Kovacs, I.; Hulsbergen-Van De Kaa, C.; Bult, P.; Van Ginneken, B.; Van Der Laak, J. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci. Rep. 2016, 6, 26286.
18. Bejnordi, B.E.; Zuidhof, G.; Balkenhol, M.; Hermsen, M.; Bult, P.; van Ginneken, B.; Karssemeijer, N.; Litjens, G.; van der Laak, J. Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images. J. Med. Imaging 2017, 4, 044504.
19. Lin, H.; Chen, H.; Dou, Q.; Wang, L.; Qin, J.; Heng, P.A. ScanNet: A fast and dense scanning framework for metastatic breast cancer detection from whole-slide image. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 539–546.
20. Cruz-Roa, A.; Gilmore, H.; Basavanhally, A.; Feldman, M.; Ganesan, S.; Shih, N.; Tomaszewski, J.; Madabhushi, A.; González, F. High-throughput adaptive sampling for whole-slide histopathology image analysis (HASHI) via convolutional neural networks: Application to invasive breast cancer detection. PLoS ONE 2018, 13, e0196828.
21. Attallah, O. MB-AI-His: Histopathological diagnosis of pediatric medulloblastoma and its subtypes via AI. Diagnostics 2021, 11, 359.
22. Attallah, O.; Zaghlool, S. AI-based pipeline for classifying pediatric medulloblastoma using histopathological and textural images. Life 2022, 12, 232.
23. Anwar, F.; Attallah, O.; Ghanem, N.; Ismail, M.A. Automatic breast cancer classification from histopathological images. In Proceedings of the 2019 International Conference on Advances in the Emerging Computing Technologies (AECT), Al Madinah Al Munawwarah, Saudi Arabia, 10 February 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6.
24. Attallah, O.; Anwar, F.; Ghanem, N.M.; Ismail, M.A. Histo-CADx: Duo cascaded fusion stages for breast cancer diagnosis from histopathological images. PeerJ Comput. Sci. 2021, 7, e493.
25. Dundar, M.M.; Badve, S.; Bilgin, G.; Raykar, V.; Jain, R.; Sertel, O.; Gurcan, M.N. Computerized classification of intraductal breast lesions using histopathological images. IEEE Trans. Biomed. Eng. 2011, 58, 1977–1984.
26. Sudharshan, P.; Petitjean, C.; Spanhol, F.; Oliveira, L.E.; Heutte, L.; Honeine, P. Multiple instance learning for histopathological breast cancer image classification. Expert Syst. Appl. 2019, 117, 103–111.
27. Mercan, C.; Aksoy, S.; Mercan, E.; Shapiro, L.G.; Weaver, D.L.; Elmore, J.G. Multi-instance multi-label learning for multi-class classification of whole slide breast histopathology images. IEEE Trans. Med. Imaging 2017, 37, 316–325.
28. Xia, Y.; Nie, L.; Zhang, L.; Yang, Y.; Hong, R.; Li, X. Weakly supervised multilabel clustering and its applications in computer vision. IEEE Trans. Cybern. 2016, 46, 3220–3232.
29. Doyle, S.; Feldman, M.; Tomaszewski, J.; Madabhushi, A. A boosted Bayesian multiresolution classifier for prostate cancer detection from digitized needle biopsies. IEEE Trans. Biomed. Eng. 2010, 59, 1205–1218.
30. Basavanhally, A.; Ganesan, S.; Feldman, M.; Shih, N.; Mies, C.; Tomaszewski, J.; Madabhushi, A. Multi-field-of-view framework for distinguishing tumor grade in ER+ breast cancer from entire histopathology slides. IEEE Trans. Biomed. Eng. 2013, 60, 2089–2099.
31. Liu, Y.; Gadepalli, K.; Norouzi, M.; Dahl, G.E.; Kohlberger, T.; Boyko, A.; Venugopalan, S.; Timofeev, A.; Nelson, P.Q.; Corrado, G.S.; et al. Detecting cancer metastases on gigapixel pathology images. arXiv 2017, arXiv:1703.02442.
32. Hou, L.; Samaras, D.; Kurc, T.M.; Gao, Y.; Davis, J.E.; Saltz, J.H. Patch-based convolutional neural network for whole slide tissue image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2424–2433.
33. Mahmood, T.; Kim, S.G.; Koo, J.H.; Park, K.R. Artificial intelligence-based tissue phenotyping in colorectal cancer histopathology using visual and semantic features aggregation. Mathematics 2022, 10, 1909.
34. Mobadersany, P.; Yousefi, S.; Amgad, M.; Gutman, D.A.; Barnholtz-Sloan, J.S.; Vega, J.E.V.; Brat, D.J.; Cooper, L.A. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc. Natl. Acad. Sci. USA 2018, 115, E2970–E2979.
35. Guo, W.; Liang, W.; Deng, Q.; Zou, X. A multimodal affinity fusion network for predicting the survival of breast cancer patients. Front. Genet. 2021, 1323.
36. Tong, L.; Sha, Y.; Wang, M.D. Improving classification of breast cancer by utilizing the image pyramids of whole-slide imaging and multi-scale convolutional neural networks. In Proceedings of the 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC), Milwaukee, WI, USA, 15–19 July 2019; IEEE: Piscataway, NJ, USA, 2019; Volume 1, pp. 696–703.
37. Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P.A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 1096–1103.
38. Roy, A.G.; Conjeti, S.; Carlier, S.G.; Houissa, K.; König, A.; Dutta, P.K.; Laine, A.F.; Navab, N.; Katouzian, A.; Sheet, D. Multiscale distribution preserving autoencoders for plaque detection in intravascular optical coherence tomography. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1359–1362.
39. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Volume 1.
40. Kingma, D.P.; Welling, M. Auto-encoding variational Bayes. arXiv 2013, arXiv:1312.6114.
41. Reinhard, E.; Ashikhmin, M.; Gooch, B.; Shirley, P. Color transfer between images. IEEE Comput. Graph. Appl. 2001, 21, 34–41.
42. Aresta, G.; Araújo, T.; Kwok, S.; Chennamsetty, S.S.; Safwan, M.; Alex, V.; Marami, B.; Prastawa, M.; Chan, M.; Donovan, M.; et al. BACH: Grand challenge on breast cancer histology images. Med. Image Anal. 2019, 56, 122–139.
43. Wei, J.W.; Tafe, L.J.; Linnik, Y.A.; Vaickus, L.J.; Tomita, N.; Hassanpour, S. Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks. Sci. Rep. 2019, 9, 1–8.
44. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
45. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
46. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
47. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
48. Wetteland, R.; Engan, K.; Eftestøl, T.; Kvikstad, V.; Janssen, E.A. Multiclass tissue classification of whole-slide histological images using convolutional neural networks. ICPRAM 2019, 1, 320–327.
49. Toğaçar, M.; Özkurt, K.B.; Ergen, B.; Cömert, Z. BreastNet: A novel convolutional neural network model through histopathological images for the diagnosis of breast cancer. Phys. A Stat. Mech. Appl. 2020, 545, 123592.
50. Aatresh, A.A.; Alabhya, K.; Lal, S.; Kini, J.; Saxena, P. LiverNet: Efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1549–1563.
51. Burçak, K.C.; Baykan, Ö.K.; Uğuz, H. A new deep convolutional neural network model for classifying breast cancer histopathological images and the hyperparameter optimisation of the proposed model. J. Supercomput. 2021, 77, 973–989.
52. van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
Table 1. Architecture details of the proposed model.

| Block | Layers | Dimensions | # of Param. | Repetition |
|---|---|---|---|---|
| N | Input-N | N × N × 3 | - | - |
| I | RAW | 40 × 40 × 3 | - | - |
| I | HOG | 8 × 8 × 1 | - | - |
| I | LBP | 16 × 16 × 1 | - | - |
| FRS | Feature-RAW | 4800 | 23,044,800 | - |
| FRS | Feature-HOG | 64 | 4160 | - |
| FRS | Feature-LBP | 255 | 64,770 | - |
| AE-R | Latent Vector | H | Varies | NL |
| AE-H | Latent Vector | H | Varies | NL |
| AE-L | Latent Vector | H | Varies | NL |
| FF | Concatenate Latent Vectors | 5118 | - | - |
| O | Dense | 1024 | 5,241,856 | - |
| O | Batch Normalization | 1024 | 4096 | - |
| O | Dense | 512 | 524,800 | - |
| O | Batch Normalization | 512 | 2048 | - |
| O | Softmax | 2/4/5 | Varies | - |
Table 2. Patch distribution of the ICIAR2018 dataset.

| Classes | Subtypes | # of Patches (Unbalanced) | # of Patches (Balanced) |
|---|---|---|---|
| Non-Carcinoma | Benign (B) | 3770 | 3770 |
| Non-Carcinoma | Normal (N) | 1200 | 1200 |
| Carcinoma | In-Situ (IS) | 1655 | 1655 |
| Carcinoma | Invasive (IV) | 25,590 | 3800 |
| Total | | 32,215 | 10,425 |
Table 3. WSI and patch distribution of the Dartmouth lung cancer dataset.

| Classes | # of WSIs | # of Patches |
|---|---|---|
| Acinar (A) | 9 | 38,611 |
| Lepidic (L) | 5 | 39,092 |
| Micropapillary (M) | 5 | 40,349 |
| Papillary (P) | 4 | 32,228 |
| Solid (S) | 8 | 38,190 |
| Total | 31 | 188,470 |
Table 4. Best hyperparameters of the proposed model.

| Dataset | Parameters | Autoencoder | Classifier (Binary) | Classifier (Multi) |
|---|---|---|---|---|
| ICIAR2018 | Number of Hidden Layers (NL) | 1 | 1 | - |
| | Number of Neurons (K) | Double | Same | - |
| | Activation Map | ReLU | - | |
| | Epochs | 200 | 30 | |
| | Learning Rate | | | |
| | Optimizer | Adam | Adam | |
| | Batch Size | 64 | 128 | 32 |
| Dartmouth | Number of Hidden Layers (NL) | 1 | - | - |
| | Number of Neurons (K) | Same | - | - |
| | Activation Map | ReLU | - | - |
| | Epochs | 200 | - | 40 |
| | Learning Rate | | | |
| | Optimizer | Adam | - | Adam |
| | Batch Size | 512 | - | 64 |
Table 5. Binary-class classification performance comparison with deep learning models.

| Model | Accuracy (Unbal.) | Sensitivity (Unbal.) | Precision (Unbal.) | F1-Score (Unbal.) | Accuracy (Bal.) | Sensitivity (Bal.) | Precision (Bal.) | F1-Score (Bal.) |
|---|---|---|---|---|---|---|---|---|
| Ours | 0.864 | 0.863 | 0.866 | 0.826 | 0.904 | 0.904 | 0.919 | 0.903 |
| ResNet-50 | 0.798 | 0.808 | 0.784 | 0.796 | 0.887 | 0.864 | 0.873 | 0.852 |
| Inception-V3 | 0.833 | 0.843 | 0.807 | 0.824 | 0.864 | 0.874 | 0.853 | 0.840 |
| MobileNet | 0.763 | 0.668 | 0.771 | 0.716 | 0.831 | 0.842 | 0.825 | 0.827 |
| DenseNet-121 | 0.861 | 0.861 | 0.843 | 0.852 | 0.884 | 0.875 | 0.856 | 0.837 |
| RuneCNN | 0.792 | 0.776 | 0.792 | 0.784 | 0.820 | 0.813 | 0.834 | 0.845 |
| BreastNet | 0.830 | 0.823 | 0.814 | 0.820 | 0.861 | 0.852 | 0.846 | 0.834 |
| LiverNet | 0.862 | 0.852 | 0.821 | 0.810 | 0.839 | 0.848 | 0.857 | 0.863 |
| HCNN | 0.854 | 0.859 | 0.835 | 0.842 | 0.885 | 0.863 | 0.854 | 0.871 |
Table 6. Multi-class classification performance comparison with deep learning models on the ICIAR and Dartmouth datasets.

| Dataset | Model | Accuracy (Unbal.) | Sensitivity (Unbal.) | Precision (Unbal.) | F1-Score (Unbal.) | Accuracy (Bal.) | Sensitivity (Bal.) | Precision (Bal.) | F1-Score (Bal.) |
|---|---|---|---|---|---|---|---|---|---|
| ICIAR | Ours | 0.805 | 0.798 | 0.729 | 0.752 | 0.872 | 0.870 | 0.888 | 0.870 |
| | ResNet-50 | 0.789 | 0.705 | 0.681 | 0.693 | 0.851 | 0.843 | 0.864 | 0.850 |
| | Inception-V3 | 0.697 | 0.744 | 0.724 | 0.734 | 0.840 | 0.861 | 0.842 | 0.834 |
| | MobileNet | 0.701 | 0.726 | 0.672 | 0.698 | 0.862 | 0.854 | 0.843 | 0.825 |
| | DenseNet-121 | 0.707 | 0.658 | 0.672 | 0.665 | 0.854 | 0.861 | 0.873 | 0.852 |
| | RuneCNN | 0.642 | 0.608 | 0.656 | 0.631 | 0.832 | 0.824 | 0.827 | 0.834 |
| | BreastNet | 0.763 | 0.758 | 0.774 | 0.759 | 0.842 | 0.836 | 0.821 | 0.815 |
| | LiverNet | 0.784 | 0.776 | 0.754 | 0.763 | 0.856 | 0.861 | 0.843 | 0.827 |
| | HCNN | 0.791 | 0.784 | 0.773 | 0.788 | 0.853 | 0.855 | 0.851 | 0.842 |
| Dartmouth | Ours | - | - | - | - | 0.946 | 0.941 | 0.942 | 0.941 |
| | ResNet-50 | - | - | - | - | 0.914 | 0.911 | 0.899 | 0.905 |
| | Inception-V3 | - | - | - | - | 0.912 | 0.912 | 0.913 | 0.913 |
| | MobileNet | - | - | - | - | 0.884 | 0.909 | 0.873 | 0.891 |
| | DenseNet-121 | - | - | - | - | 0.922 | 0.932 | 0.879 | 0.905 |
| | RuneCNN | - | - | - | - | 0.877 | 0.893 | 0.814 | 0.852 |
| | BreastNet | - | - | - | - | 0.901 | 0.934 | 0.926 | 0.921 |
| | LiverNet | - | - | - | - | 0.911 | 0.920 | 0.933 | 0.930 |
| | HCNN | - | - | - | - | 0.921 | 0.916 | 0.930 | 0.919 |
Table 7. Accuracy (standard deviation in parentheses) of our model compared with different deep encoders.

| Encoders | ICIAR (Unbalanced) | ICIAR (Balanced) | Dartmouth |
|---|---|---|---|
| Ours | 0.805 (0.022) | 0.872 (0.022) | 0.946 (0.019) |
| ResNet-50 | 0.742 (0.040) | 0.854 (0.014) | 0.710 (0.018) |
| Inception-V3 | 0.723 (0.035) | 0.837 (0.009) | 0.761 (0.052) |
| DenseNet-121 | 0.765 (0.022) | 0.824 (0.037) | 0.757 (0.019) |
| MobileNet | 0.667 (0.060) | 0.841 (0.043) | 0.723 (0.013) |
| RuneCNN | 0.641 (0.047) | 0.838 (0.037) | 0.672 (0.027) |
| BreastNet | 0.788 (0.023) | 0.856 (0.043) | 0.826 (0.023) |
| LiverNet | 0.767 (0.034) | 0.859 (0.076) | 0.871 (0.014) |
| HCNN | 0.779 (0.067) | 0.864 (0.034) | 0.865 (0.017) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).