Artificial Intelligence-Based Mitosis Detection in Breast Cancer Histopathology Images Using Faster R-CNN and Deep CNNs
Abstract
1. Introduction
2. Related Works
2.1. Mitosis Detection Using Handcrafted Features
2.2. Mitosis Detection Using Deep Features
3. Contribution
- The proposed technique provides state-of-the-art results in mitosis-detection tasks on the ICPR 2012 and ICPR 2014 contest datasets.
- Faster R-CNN is used in the first stage to perform the primary detection of mitotic cells. We adopt Resnet-50 as the feature-extraction network for the first time, thus obtaining better results compared to other techniques.
- In the proposed technique, a large number of false positives are produced because of the minute differences between mitotic and non-mitotic objects. To reduce the number of false positives, we perform post-processing based on statistical, texture, shape, and color features.
- To further reduce the number of false positives, we perform a score-level fusion of Resnet-50 and a dense convolutional network (Densenet-201). This fusion is used for the first time in the mitotic-cell-detection task and significantly reduces the number of false positives.
- To allow other researchers to perform fair comparisons, our trained models are publicly available in [33].
4. Proposed Method
4.1. Overview of Proposed Approach
4.2. Mitotic-Cell Detection Using Faster R-CNN
4.3. False-Positive Mitotic-Cell Removal via Post-Processing
4.4. Final Classification of Mitotic Cells via Score-Level Fusion of Two CNNs
5. Experiments and Performance Analysis
5.1. Datasets
5.1.1. ICPR 2012 MITOSIS Dataset
5.1.2. ICPR 2014 Dataset
5.2. Data Augmentation
5.3. Experimental Setup and Training
5.3.1. Experimental Setup
5.3.2. Training
Training of Faster R-CNN
Training of Resnet-50 and Densenet-201
5.4. Performance Evaluation of Proposed Method
5.4.1. Performance Evaluation Metric
5.4.2. Ablation Study
5.4.3. Correct and Incorrect Detection Cases with Proposed Method
5.4.4. Cross-Dataset Experiment: TUPAC16
6. Activation Maps and Discussion
- The results of our proposed technique show that recent advances in deep-learning algorithms have narrowed the gap between diagnoses performed by human experts and by computers. Moreover, good performance on both the ICPR 2012 and ICPR 2014 datasets demonstrates the generalization capability of our technique, which may therefore be used for various lesion-detection tasks.
- We have observed that significant variation exists in the sizes of mitotic cells. Therefore, the Faster R-CNN feature-extraction network and anchor-box selection play a key role in the detection of mitotic cells. By using Resnet-50 for feature extraction, we extracted efficient features because Resnet-50 uses skip connections, so the mitotic cell's information is not lost. Moreover, we fixed the anchor scale to 64 instead of 128, 256, or 512 and selected anchor boxes that have an intersection-over-union value less than or equal to 0.5 with the ground truths. Therefore, by using Resnet-50 as the feature-extraction network, fixing the anchor scale to 64, and limiting the number of anchor boxes, we achieved state-of-the-art performance.
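As an illustration of the anchor-filtering rule described above, the following Python sketch keeps only candidate anchors whose intersection-over-union (IoU) with every ground-truth box does not exceed 0.5. The function names and the `(x1, y1, x2, y2)` box format are ours, chosen for illustration; this is not the authors' released code.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def select_anchors(anchors, gt_boxes, iou_thresh=0.5):
    """Keep anchors whose IoU with every ground-truth box is <= iou_thresh,
    following the selection rule described in the text (illustrative helper)."""
    return [a for a in anchors
            if all(iou(a, g) <= iou_thresh for g in gt_boxes)]
```

With the anchor scale fixed to 64, each anchor covers roughly a 64 × 64 region, which matches the typical size range of mitotic cells better than the default larger scales.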
- We have also observed that the performance of Faster R-CNN depends on the underlying feature-extraction network and the RPN. In our case, Faster R-CNN converged rapidly, in only 25 epochs, because of the use of Resnet-50 as the feature-extraction network, the smaller anchor scale, and the limited number of anchor boxes.
- We have observed that some of the false-positive cases exhibit irregular morphology, a dark bluish color, and large variations in texture. These cases can be eliminated by using handcrafted features such as LBP, HOG, and statistical and color features, thereby improving the performance.
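A minimal sketch of such handcrafted descriptors is given below; it computes a basic 3 × 3 LBP histogram together with simple statistical and mean-color features for one candidate region. This is a simplified illustration in Python with NumPy, not the exact feature set or parameter choices used in the paper.

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 local binary pattern code for the centre pixel
    (one of several LBP variants; simplified for illustration)."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= c)

def candidate_features(gray, rgb):
    """Texture (LBP histogram), statistical, and mean-colour descriptors
    for one candidate region; gray is H x W, rgb is H x W x 3."""
    codes = [lbp_code(gray[y - 1:y + 2, x - 1:x + 2])
             for y in range(1, gray.shape[0] - 1)
             for x in range(1, gray.shape[1] - 1)]
    hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
    stats = [gray.mean(), gray.std(), gray.min(), gray.max()]
    colour = rgb.reshape(-1, 3).mean(axis=0).tolist()  # mean R, G, B
    return np.concatenate([hist, stats, colour])
```

The resulting feature vector can then feed a conventional classifier to reject candidates whose texture or color statistics deviate from true mitotic figures.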
- Mitotic-cell-detection techniques [2,24,25,26,27] employ additional classifiers for performance improvement. Although classifiers such as Resnet-50 and Densenet-201 exhibit outstanding performance, owing to Resnet-50's residual learning and skip connections for feature reusability and Densenet-201's feature propagation, feature reusability, and smaller number of parameters, we observed that the performance can be further improved because single-modality data lack uniqueness and universality. Therefore, in our proposed technique, we performed score-level fusion and improved our results over those of state-of-the-art methods.
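Score-level fusion of the two classifiers can be sketched as a weighted average of their softmax outputs. In the sketch below, the fusion weight `w`, the decision threshold, and the assumption that class index 1 corresponds to "mitosis" are all illustrative choices, not values taken from the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_scores(logits_resnet, logits_densenet, w=0.5):
    """Score-level fusion: weighted average of the two classifiers'
    softmax probabilities (w would be tuned on validation data)."""
    p1 = softmax(np.asarray(logits_resnet, dtype=float))
    p2 = softmax(np.asarray(logits_densenet, dtype=float))
    return w * p1 + (1.0 - w) * p2

def is_mitotic(fused, threshold=0.5):
    """Accept a candidate when the fused mitosis probability exceeds
    the threshold (class index 1 = 'mitosis' is assumed here)."""
    return fused[1] > threshold
```

Averaging calibrated scores from two architecturally different networks tends to suppress false positives on which the two models disagree, which matches the role of fusion in the proposed pipeline.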
- Deep-learning networks require a large amount of data for successful training. Owing to the scarcity of labeled data, techniques such as data augmentation are used to enlarge the dataset. Data augmentation for mitotic-cell detection is challenging because there are only minute differences between mitotic and non-mitotic cells. We observed that flipping and translation produce robust augmented data, as shown by the activation maps in Figure 8, where the trained classifiers successfully found features in the test data for decision making.
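The flipping-and-translation augmentation can be sketched as follows; the shift amount is an illustrative choice, not necessarily the setting used for Section 5.2.

```python
import numpy as np

def augment(image, shift=(10, 10)):
    """Flipping and translation augmentation for one patch
    (H x W or H x W x C). Returns the original plus three copies."""
    dy, dx = shift
    shifted = np.zeros_like(image)
    # Shift the content down-right by (dy, dx), zero-padding the border.
    shifted[dy:, dx:] = image[:image.shape[0] - dy, :image.shape[1] - dx]
    return [image,
            np.fliplr(image),  # horizontal flip
            np.flipud(image),  # vertical flip
            shifted]           # translation
```

Unlike color or elastic distortions, these geometric transforms leave the stain appearance and fine chromatin texture untouched, which is why they are safe for classes that differ only in such minute details.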
7. Conclusions
Author Contributions
Acknowledgments
Conflicts of Interest
Appendix A
Layer Type | Sub-Block | Output Size | Number of Filters | Kernel Size | Strides | Paddings | Iterations |
---|---|---|---|---|---|---|---|
Image input layer | 224 × 224 × 3 | ||||||
Conv1 | 112 × 112 × 64 | 64 | 7 × 7 × 3 | 2 | 3 | 1 | |
Maximum pool | 55 × 55 × 64 | 1 | 3 × 3 | 2 | 0 | 1 | |
Conv2 | Conv2-1 (1 × 1 Convolutional Mapping) | 55 × 55 × 64 | 64 | 1 × 1 × 64 | 1 | 0 | 1 |
55 × 55 × 64 | 64 | 3 × 3 × 64 | 1 | 1 | |||
55 × 55 × 256 | 256 | 1 × 1 × 64 | 1 | 0 | |||
55 × 55 × 256 | 256 | 1 × 1 × 64 | 1 | 0 | |||
Conv2-2-Conv2-3 (Identity Mapping) | 55 × 55 × 64 | 64 | 1 × 1 × 256 | 1 | 0 | 2 | |
55 × 55 × 64 | 64 | 3 × 3 × 64 | 1 | 1 | |||
55 × 55 × 256 | 256 | 1 × 1 × 64 | 1 | 0 | |||
Conv3 | Conv3-1 (1 × 1 Convolutional Mapping) | 28 × 28 × 128 | 128 | 1 × 1 × 256 | 2 | 0 | 1 |
28 × 28 × 128 | 128 | 3 × 3 × 128 | 1 | 1 | |||
28 × 28 × 512 | 512 | 1 × 1 × 128 | 1 | 0 | |||
28 × 28 × 512 | 512 | 1 × 1 × 256 | 2 | 0 | |||
Conv3-2-Conv3-4 (Identity Mapping) | 28 × 28 × 128 | 128 | 1 × 1 × 512 | 1 | 0 | 3 |
28 × 28 × 128 | 128 | 3 × 3 × 128 | 1 | 1 | |||
28 × 28 × 512 | 512 | 1 × 1 × 128 | 1 | 0 |||
Conv4 | Conv4-1 (1 × 1 Convolutional Mapping) | 14 × 14 × 256 | 256 | 1 × 1 × 512 | 2 | 0 | 1 |
14 × 14 × 256 | 256 | 3 × 3 × 256 | 1 | 1 |||
14 × 14 × 1024 | 1024 | 1 × 1 × 256 | 1 | 0 |||
14 × 14 × 1024 | 1024 | 1 × 1 × 512 | 2 | 0 | |||
Conv4-2-Conv4-6 (Identity Mapping) | 14 × 14 × 256 | 256 | 1 × 1 × 1024 | 1 | 0 | 5 | |
14 × 14 × 256 | 256 | 3 × 3 × 256 | 1 | 1 |||
14 × 14 × 1024 | 1024 | 1 × 1 × 256 | 1 | 0 | |||
Conv5 | Conv5-1 (1 × 1 Convolutional Mapping) | 7 × 7 × 512 | 512 | 1 × 1 × 1024 | 2 | 0 | 1 |
7 × 7 × 512 | 512 | 3 × 3 × 512 | 1 | 1 | |||
7 × 7 × 2048 | 2048 | 1 × 1 × 512 | 1 | 0 | |||
7 × 7 × 2048 | 2048 | 1 × 1 × 1024 | 2 | 0 | |||
Conv5-2-Conv5-3 (Identity Mapping) | 7 × 7 × 512 | 512 | 1 × 1 × 2048 | 1 | 0 | 2 | |
7 × 7 × 512 | 512 | 3 × 3 × 512 | 1 | 1 | |||
7 × 7 × 2048 | 2048 | 1 × 1 × 512 | 1 | 0 |
Layer Type | Number of Filters | Output Size | Kernel Size | Strides | Paddings |
---|---|---|---|---|---|
Input layer (5_3rd CL feature map) | 7 × 7 × 2048 ||||
6th CL (ReLU) | 512 | 7 × 7 × 2048 | 3 × 3 × 512 | 1 | 1 |
Classification CL (Softmax) | 18 | 7 × 7 × 18 | 1 × 1 × 512 | 1 | 0 |
Regression CL | 36 | 7 × 7 × 36 | 1 × 1 × 512 | 1 | 0
Layer Type | Output Size |
---|---|
Input layer (5_3rd CL feature map and RPN proposal regions) | 7 × 7 × 2048 (height × width × depth), 300 × 4 (ROI coordinates *)
ROI pooling layer | 7 × 7 × 512 (height × width × depth) × 300 |
1st fully connected layer (ReLU) (Dropout) | 4096 × 300 |
2nd fully connected layer (ReLU) (Dropout) | 4096 × 300 |
Classification convolutional layer (Softmax) | 2 × 300 |
Regression fully connected layer | 4 × 300
References
- Ghoncheh, M.; Pournamdar, Z.; Salehiniya, H. Incidence and mortality and epidemiology of breast cancer in the world. Asian Pac. J. Cancer Prev. 2016, 17, 43–46. [Google Scholar] [CrossRef] [Green Version]
- Li, C.; Wang, X.; Liu, W.; Latecki, L.J. DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks. Med. Image Anal. 2018, 45, 121–133. [Google Scholar] [CrossRef] [PubMed]
- Arsalan, M.; Kim, D.S.; Lee, M.B.; Owais, M.; Park, K.R. FRED-Net: Fully residual encoder–decoder network for accurate iris segmentation. Expert Syst. Appl. 2019, 122, 217–241. [Google Scholar] [CrossRef]
- Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182. [Google Scholar] [CrossRef]
- Khan, S.; Islam, N.; Jan, Z.; Ud Din, I.; Rodrigues, J.J.P.C. A novel deep learning based framework for the detection and classification of breast cancer using transfer learning. Pattern Recognit. Lett. 2019, 125, 1–6. [Google Scholar] [CrossRef]
- Arsalan, M.; Kim, D.S.; Owais, M.; Park, K.R. OR-Skip-Net: Outer residual skip network for skin segmentation in non-ideal situations. Expert Syst. Appl. 2020, 141, 1–26. [Google Scholar] [CrossRef]
- Lakshmanaprabu, S.K.; Mohanty, S.N.; Shankar, K.; Arunkumar, N.; Ramirez, G. Optimal deep learning model for classification of lung cancer on CT images. Future Gener. Comput. Syst. 2019, 92, 374–382. [Google Scholar]
- Owais, M.; Arsalan, M.; Choi, J.; Park, K.R. Effective diagnosis and treatment through content-based medical image retrieval (CBMIR) by using artificial intelligence. J. Clin. Med. 2019, 8, 462. [Google Scholar] [CrossRef] [Green Version]
- Arsalan, M.; Owais, M.; Mahmood, T.; Cho, S.W.; Park, K.R. Aiding the diagnosis of diabetic and hypertensive retinopathy using artificial intelligence-based semantic segmentation. J. Clin. Med. 2019, 8, 1446. [Google Scholar] [CrossRef] [Green Version]
- Karim, A.; Mishra, A.; Newton, M.A.H.; Sattar, A. Efficient toxicity prediction via simple features using shallow neural networks and decision trees. ACS Omega 2019, 4, 1874–1888. [Google Scholar] [CrossRef]
- Veta, M.; van Diest, P.J.; Willems, S.M.; Wang, H.; Madabhushi, A.; Cruz-Roa, A.; Gonzalez, F.; Larsen, A.B.L.; Vestergaard, J.S.; Dahl, A.B.; et al. Assessment of algorithms for mitosis detection in breast cancer histopathology images. Med. Image Anal. 2015, 20, 237–248. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Irshad, H. Automated mitosis detection in histopathology using morphological and multi-channel statistics features. J. Pathol. Inf. 2013, 4, 1–6. [Google Scholar] [CrossRef] [PubMed]
- Tashk, A.; Helfroush, M.S.; Danyali, H.; Akbarzadeh, M. An automatic mitosis detection method for breast cancer histopathology slide images based on objective and pixel-wise textural features classification. In Proceedings of the 5th Conference on Information and Knowledge Technology, Shiraz, Iran, 28–30 May 2013; pp. 406–410. [Google Scholar]
- Sommer, C.; Fiaschi, L.; Hamprecht, F.A.; Gerlich, D.W. Learning-based mitotic cell detection in histopathological images. In Proceedings of the 21st International Conference on Pattern Recognition, Tsukuba, Japan, 11–15 November 2012; pp. 2306–2309. [Google Scholar]
- Paul, A.; Dey, A.; Mukherjee, D.P.; Sivaswamy, J.; Tourani, V. Regenerative random forest with automatic feature selection to detect mitosis in histopathological breast cancer images. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 94–102. [Google Scholar]
- Hameed, A.A.; Karlik, B.; Salman, M.S. Back-propagation algorithm with variable adaptive momentum. Knowl. Based Syst. 2016, 114, 79–87. [Google Scholar] [CrossRef]
- Ciresan, D.C.; Giusti, A.; Gambardella, L.M.; Schmidhuber, J. Mitosis detection in breast cancer histology images with deep neural networks. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 22–26 September 2013; pp. 411–418. [Google Scholar]
- Malon, C.D.; Cosatto, E. Classification of mitotic figures with convolutional neural networks and seeded blob features. J. Pathol. Inf. 2013, 4, 1–5. [Google Scholar] [CrossRef]
- Wang, H.; Cruz-Roa, A.; Basavanhally, A.; Gilmore, H.; Shih, N.; Feldman, M.; Tomaszewski, J.; Gonzalez, F.; Madabhushi, A. Cascaded ensemble of convolutional neural networks and handcrafted features for mitosis detection. In Proceedings of the SPIE Medical Imaging, San Diego, CA, USA, 15–20 February 2014; pp. 1–10. [Google Scholar]
- Chen, H.; Dou, Q.; Wang, X.; Qin, J.; Heng, P.-A. Mitosis detection in breast cancer histology images via deep cascaded networks. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 1160–1166. [Google Scholar]
- Madzarov, G.; Gjorgjevikj, D.; Chorbev, I. A Multi-class SVM classifier utilizing binary decision tree. Informatica 2009, 33, 233–241. [Google Scholar]
- Sommer, C.; Straehle, C.; Kothe, U.; Hamprecht, F.A. Ilastik: Interactive learning and segmentation toolkit. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 230–233. [Google Scholar]
- Held, M.; Schmitz, M.H.A.; Fischer, B.; Walter, T.; Neumann, B.; Olma, M.H.; Peter, M.; Ellenberg, J.; Gerlich, D.W. CellCognition: Time-resolved phenotype annotation in high-throughput live cell imaging. Nat. Methods 2010, 7, 747–754. [Google Scholar] [CrossRef] [Green Version]
- Li, C.; Wang, X.; Liu, W.; Latecki, L.J.; Wang, B.; Huang, J. Weakly supervised mitosis detection in breast histopathology images using concentric loss. Med. Image Anal. 2019, 53, 165–178. [Google Scholar] [CrossRef]
- Cai, D.; Sun, X.; Zhou, N.; Han, X.; Yao, J. Efficient mitosis detection in breast cancer histology images by RCNN. In Proceedings of the IEEE 16th International Symposium on Biomedical Imaging, Venice, Italy, 8–11 April 2019; pp. 919–922. [Google Scholar]
- Li, Y.; Mercan, E.; Knezevitch, S.; Elmore, J.G.; Shapiro, L.G. Efficient and accurate mitosis detection—A lightweight RCNN approach. In Proceedings of the 7th International Conference on Pattern Recognition Applications and Methods, Funchal, Portugal, 16–18 January 2018; pp. 69–77. [Google Scholar]
- Dodballapur, V.; Song, Y.; Huang, H.; Chen, M.; Chrzanowski, W.; Cai, W. Mask-driven mitosis detection in histopathology images. In Proceedings of the IEEE 16th International Symposium on Biomedical Imaging, Venice, Italy, 8–11 April 2019; pp. 1855–1859. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [Green Version]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Sengupta, A.; Ye, Y.; Wang, R.; Liu, C.; Roy, K. Going deeper in spiking neural networks: VGG and residual architectures. Front. Neurosci. 2019, 13, 1–10. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [Google Scholar]
- Available online: http://dm.dgu.edu/link.html (accessed on 24 November 2019).
- ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC2015). Available online: http://image-net.org/challenges/LSVRC/2015/ (accessed on 24 November 2019).
- Zhong, Z.; Sun, L.; Huo, Q. An anchor-free region proposal network for Faster R-CNN-based text detection approaches. Int. J. Doc. Anal. Recognit. 2019, 22, 315–327. [Google Scholar] [CrossRef] [Green Version]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Uijlings, J.R.R.; van de Sande, K.E.A.; Gevers, T.; Smeulders, A.W.M. Selective search for object recognition. Int. J. Comput. Vis. 2013, 104, 154–171. [Google Scholar] [CrossRef] [Green Version]
- Ahonen, T.; Hadid, A.; Pietikäinen, M. Face recognition with local binary patterns. In Proceedings of the European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; pp. 469–481. [Google Scholar]
- Mahmood, T.; Ziauddin, S.; Shahid, A.R.; Safi, A. Mitosis detection in breast cancer histopathology images using statistical, color and shape-based features. J. Med. Imaging Health Inf. 2018, 8, 932–938. [Google Scholar] [CrossRef]
- Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
- Kalakech, M.; Porebski, A.; Vandenbroucke, N.; Hamad, D. Unsupervised local binary pattern histogram selection scores for color texture classification. J. Imaging 2018, 4, 112. [Google Scholar] [CrossRef] [Green Version]
- Sicilia, R.; Cordelli, E.; Merone, M.; Luperto, E.; Papalia, R.; Iannello, G.; Soda, P. Early radiomic experiences in classifying prostate cancer aggressiveness using 3D local binary patterns. In Proceedings of the IEEE 32nd International Symposium on Computer-Based Medical Systems, Cordoba, Spain, 5–7 June 2019; pp. 355–360. [Google Scholar]
- Nguyen, D.T.; Cho, S.R.; Shin, K.Y.; Bang, J.W.; Park, K.R. Comparative study of human age estimation with or without preclassification of gender and facial expression. Sci. World J. 2014, 2014, 1–15. [Google Scholar] [CrossRef] [PubMed]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 1–8. [Google Scholar]
- Nazir, M.; Jan, Z.; Sajjad, M. Facial expression recognition using histogram of oriented gradients based transformed features. Clust. Comput. 2018, 21, 539–548. [Google Scholar] [CrossRef]
- Lee, W.-Y.; Ko, K.-E.; Sim, K.-B. Robust lip detection based on histogram of oriented gradient features and convolutional neural network under effects of light and background. Optik 2017, 136, 462–469. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
- He, M.; Horng, S.; Fan, P.; Run, R.-S.; Chen, R.-J.; Lai, J.-L.; Khan, M.K.; Sentosa, K.O. Performance evaluation of score level fusion in multimodal biometric systems. Pattern Recognit. 2010, 43, 1789–1800. [Google Scholar] [CrossRef]
- Yılmaz, M.B.; Yanıkoğlu, B. Score level fusion of classifiers in off-line signature verification. Inf. Fusion 2016, 32, 109–119. [Google Scholar] [CrossRef]
- Castrillón-Santana, M.; Lorenzo-Navarro, J.; Ramón-Balmaseda, E. Multi-scale score level fusion of local descriptors for gender classification in the wild. Multimed. Tools Appl. 2017, 76, 4695–4711. [Google Scholar] [CrossRef] [Green Version]
- Ross, A.; Jain, A. Information fusion in biometrics. Pattern Recognit. Lett. 2003, 24, 2115–2125. [Google Scholar] [CrossRef]
- Roux, L.; Racoceanu, D.; Loménie, N.; Kulikova, M.; Irshad, H.; Klossa, J.; Capron, F.; Genestie, C.; Le Naour, G.; Gurcan, M.N. Mitosis detection in breast cancer histological images an ICPR 2012 contest. J. Pathol. Inf. 2013, 4, 1–7. [Google Scholar] [CrossRef] [PubMed]
- MITOS-ATYPIA-14 Grand Challenge. Available online: https://mitos-atypia-14.grand-challenge.org/ (accessed on 14 November 2019).
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- MATLAB R2019a at a Glance. Available online: https://www.mathworks.com/products/new_products/release2019a.html (accessed on 10 November 2019).
- Intel® Core i7-3770K Processor. Available online: http://ark.intel.com/content/www/us/en/ark/products/65523/intel-core-i7-3770k-processor-8m-cache-up-to-3-90-ghz.html (accessed on 12 November 2019).
- GeForce GTX 1070. Available online: https://www.nvidia.com/ko-kr/geforce/products/10series/geforce-gtx-1070-ti/ (accessed on 12 November 2019).
- Dogo, E.M.; Afolabi, O.J.; Nwulu, N.I.; Twala, B.; Aigbavboa, C.O. A comparative analysis of gradient descent-based optimization algorithms on convolutional neural networks. In Proceedings of the International Conference on Computational Techniques, Electronics, and Mechanical Systems, Belgaum, India, 21–22 December 2018; pp. 92–99. [Google Scholar]
- Mitosis Detection in Breast Cancer Histological Images (MITOS dataset). Available online: http://ludo17.free.fr/mitos_2012/results.html (accessed on 10 November 2019).
- Elston, C.W.; Ellis, I.O. Pathological prognostic factors in breast cancer. I. The value of histological grade in breast cancer: Experience from a large study with long-term follow-up. Histopathology 2002, 41, 154–161. [Google Scholar] [CrossRef] [PubMed]
- Akram, S.U.; Qaiser, T.; Graham, S.; Kannala, J.; Heikkilä, J.; Rajpoot, N. Leveraging unlabeled whole-slide images for mitosis detection. In Proceedings of the International Workshop on Ophthalmic Medical Image Analysis, Granada, Spain, 16–20 September 2018; pp. 69–77. [Google Scholar]
- Paeng, K.; Hwang, S.; Park, S.; Kim, M. A unified framework for tumor proliferation score prediction in breast histopathology. In Proceedings of the 3rd International Workshop on Deep Learning in Medical Image Analysis, Quebec City, QC, Canada, 10–14 September 2017; pp. 231–239. [Google Scholar]
Category | Method | Datasets | Strength | Weakness |
---|---|---|---|---|
Hand-crafted features | Morphological and statistical features with decision tree classifier [12] | ICPR 2012 | Efficient in capturing texture features for mitotic cell segmentation | Low detection performance and computationally expensive |
LBP and SVM classifier [13] | ICPR 2012 | High discriminative power, computational simplicity, and invariance to grayscale changes | Affected by rotation and limited structural information capturing | |
Shape, texture, and intensity features with SVM classifier [14] | ICPR 2012 | Small amount of parameter tuning and low user effort | Low detection performance and object segmentation using open-source software | |
Intensity, texture, and regenerative random forest tree classifier [15] | ICPR 2012 | Good performance for large data | Computationally expensive and complex due to random forest tree | |
Deep features | Sliding-window-based classification [17] | ICPR 2012 | Good detection performance | Computationally expensive |
Combination of color, texture, and shape features, and CNN features with SVM classifier [18] | ICPR 2012 | Easy to accommodate for multi-scanner data without major redesign | Computationally expensive | |
Handcrafted and CNN features, random forest classifier, and CNN [19] | ICPR 2012 | Fast and high precision | Using fixed global and local threshold in object-detection stage | |
FCN model for objects segmentation and CNN for classification [20] | ICPR 2012 | Robust, fast, and high precision | Not suitable for weakly annotated datasets, and object detection stage is computationally expensive | |
Faster R-CNN-based detection and Resnet-50 for classification [2] | ICPR 2012 ICPR 2014 | Good performance and inference time | VGG-16 is used as the feature-extraction network of Faster R-CNN, which suffers from the vanishing-gradient issue | |
Concentric circle approach for objects detection and FCN for segmentation [24] | ICPR 2012 ICPR 2014 TUPAC-16 | Good technique for weakly annotated datasets | Low detection performance | |
Modified Faster R-CNN with Resnet-101 feature-extraction network [25] | ICPR 2014 TUPAC-16 | Less inference time | Resnet-101 could be replaced by a shallower network | |
Lightweight region-based R-CNN [26] | ICPR 2012 ICPR 2014 | No requirement of powerful GPUs | Low detection performance | |
Mask R-CNN for object detection and handcrafted and CNN features [27] | ICPR 2012 ICPR 2014 | Highest performance and inference time | Using expensive GPUs and intensive training | |
Faster R-CNN and score-level fusion of Resnet-50 and Densenet-201 (proposed) | ICPR 2012 ICPR 2014 | High detection performance | Long processing time owing to multiple networks and intensive training |
Technique | Precision | Recall | F1-Measure |
---|---|---|---|
Sommer et al. [14] | 0.519 | 0.798 | 0.629 |
Malon et al. [18] | 0.747 | 0.590 | 0.659 |
Tashk et al. [13,60] | 0.699 | 0.72 | 0.709 |
Irshad [12,60] | 0.698 | 0.74 | 0.718 |
Wang et al. [19] | 0.84 | 0.65 | 0.735 |
Ciresan et al. [17] | 0.88 | 0.70 | 0.782 |
Li et al. [26] | 0.78 | 0.79 | 0.784 |
Chen et al. [20] | 0.804 | 0.772 | 0.788 |
Li et al. [24] | 0.846 | 0.762 | 0.802 |
Paul et al. [15] | 0.835 | 0.811 | 0.823 |
Li et al. [2] | 0.854 | 0.812 | 0.832 |
Proposed method | 0.876 | 0.841 | 0.858 |
Technique | Precision | Recall | F1-Measure |
---|---|---|---|
Li et al. [2] | N.R. | N.R. | 0.572 |
Cai et al. [25] | 0.53 | 0.66 | 0.585 |
Li et al. [24] | 0.495 | 0.785 | 0.607 |
Li et al. [26] | 0.654 | 0.663 | 0.659 |
Dodballapur et al. [27] | 0.58 | 0.82 | 0.68 |
Proposed method | 0.848 | 0.583 | 0.691 |
Technique | Precision | Recall | F1-Measure |
---|---|---|---|
FRCNN | 0.540 | 0.851 | 0.661 |
FRCNN + PP | 0.641 | 0.851 | 0.731 |
FRCNN + PP + D-net | 0.793 | 0.722 | 0.756 |
FRCNN + PP + R-net | 0.769 | 0.792 | 0.780
FRCNN + PP + SF (Proposed) | 0.876 | 0.841 | 0.858 |
Technique | Precision | Recall | F1-Measure |
---|---|---|---|
FRCNN | 0.521 | 0.641 | 0.575 |
FRCNN + PP | 0.536 | 0.64 | 0.584 |
FRCNN + PP + D-net | 0.674 | 0.599 | 0.634 |
FRCNN + PP + R-net | 0.689 | 0.586 | 0.633 |
FRCNN + PP + SF (Proposed) | 0.848 | 0.583 | 0.691 |
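For reference, the F1-measure reported in the tables above is the harmonic mean of precision and recall, which can be checked directly against the table entries:

```python
def f1_measure(precision, recall):
    """F1-measure: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Proposed method, ICPR 2012: f1_measure(0.876, 0.841) ≈ 0.858
# Proposed method, ICPR 2014: f1_measure(0.848, 0.583) ≈ 0.691
```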
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Mahmood, T.; Arsalan, M.; Owais, M.; Lee, M.B.; Park, K.R. Artificial Intelligence-Based Mitosis Detection in Breast Cancer Histopathology Images Using Faster R-CNN and Deep CNNs. J. Clin. Med. 2020, 9, 749. https://doi.org/10.3390/jcm9030749