Deep Learning for Optic Disc Segmentation and Glaucoma Diagnosis on Retinal Images
Abstract
1. Introduction
- (i) A comparative study of OD segmentation using five different deep CNNs as the encoder in the DeepLabv3+ architecture;
- (ii) A comparison of eleven pretrained CNNs as glaucoma classifiers using transfer learning techniques;
- (iii) A comparison of the eleven pretrained CNNs as feature extractors paired with an SVM classifier;
- (iv) An ensemble of the diverse CNN-based learners from P1 and P2 using a probability-averaging strategy.
2. Methodology
2.1. OD Segmentation Using DeepLabv3+ Semantic Segmentation
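The paper does not publish its implementation; the sketch below is a minimal illustration of the Section 2.1 setup, assuming the third-party `segmentation_models_pytorch` package. The same DeepLabv3+ decoder is paired with each candidate encoder so the variants are directly comparable; the encoder name strings are that package's identifiers, not the authors' code.

```python
# Illustrative sketch (not the authors' implementation): DeepLabv3+ with
# five swappable, ImageNet-pretrained encoders for optic disc segmentation.
import torch
import segmentation_models_pytorch as smp

ENCODERS = ["resnet18", "resnet50", "xception",
            "inceptionresnetv2", "mobilenet_v2"]

def build_od_segmenter(encoder_name: str) -> torch.nn.Module:
    """DeepLabv3+ whose head predicts 2 classes: OD vs. non-OD pixels."""
    return smp.DeepLabV3Plus(
        encoder_name=encoder_name,
        encoder_weights="imagenet",   # transfer learning from ImageNet
        in_channels=3,                # RGB fundus image
        classes=2,                    # OD vs. background
    )

models = {name: build_od_segmenter(name) for name in ENCODERS}
```

Because every variant shares the decoder and training setup, differences in Dice coefficient and IoU can be attributed to the encoder alone.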
2.2. Classification of Normal and Glaucoma Retinal Images Using Deep CNNs
- (P1) pretrained CNNs fine-tuned via transfer learning;
- (P2) pretrained deep CNNs as feature extractors;
- (P3) an ensemble of methods P1 and P2.
2.2.1. Transfer Learning Using Pretrained Deep CNNs (Method P1)
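As a hedged illustration of method P1 (not the authors' exact code), the following PyTorch/torchvision sketch swaps the 1000-way ImageNet head of a pretrained network for a two-class normal/glaucoma head and fine-tunes with the Adam optimizer (Kingma and Ba [41]). ResNet-50 stands in for any of the eleven CNNs in Table 2; the learning rate is a placeholder, not a value quoted from the paper.

```python
# Minimal transfer-learning sketch for method P1 (hyperparameters assumed).
import torch
import torch.nn as nn
from torchvision import models

def make_glaucoma_classifier(lr: float = 1e-4):
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, 2)   # normal vs. glaucoma head
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    return net, optimizer, criterion

# One fine-tuning step on a batch of segmented OD crops:
#   logits = net(images); loss = criterion(logits, labels)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```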
2.2.2. Pretrained CNNs as Feature Descriptors and SVM as Classifier (Method P2)
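In method P2 the pretrained CNN is frozen and truncated before its classification layer, so its penultimate activations serve as a fixed feature descriptor for an SVM. The sketch below shows that idea under assumed tooling (torchvision + scikit-learn); `X_train` and `y_train` are hypothetical preprocessed OD crops and 0/1 labels, not data from the paper.

```python
# Illustrative sketch of method P2: deep features + SVM classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()   # emit 2048-D features instead of logits
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> np.ndarray:
    """Map an N x 3 x 224 x 224 batch to an N x 2048 feature matrix."""
    return backbone(images).cpu().numpy()

# svm = SVC(kernel="rbf", probability=True)    # probability=True exposes
# svm.fit(extract_features(X_train), y_train)  # class probabilities for P3
```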
2.2.3. Ensemble Learning of Methods in P1 and P2
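Method P3 combines the base learners by the probability-averaging strategy named in the contributions: each learner from P1/P2 emits per-class probabilities, which are averaged before the final decision. A minimal sketch of that rule (the aggregation itself, not the authors' pipeline):

```python
# Unweighted probability averaging over base learners (method P3).
import numpy as np

def average_ensemble(prob_list: list[np.ndarray]) -> np.ndarray:
    """prob_list holds one (N, 2) probability array per base learner,
    e.g. softmax outputs (P1) or SVC.predict_proba outputs (P2)."""
    mean_prob = np.mean(np.stack(prob_list, axis=0), axis=0)
    return mean_prob.argmax(axis=1)   # 0 = normal, 1 = glaucoma
```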
3. Experimental Results
- True Positive (TP): the number of OD pixels correctly predicted as OD;
- True Negative (TN): the number of non-OD pixels correctly detected as non-OD;
- False Positive (FP): the number of non-OD pixels wrongly detected as OD;
- False Negative (FN): the number of OD pixels wrongly identified as non-OD (these four counts yield the metrics sketched below).
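These pixel counts translate into the accuracy, Dice coefficient, and IoU columns reported in the segmentation results. A minimal sketch of that computation; the formulas are the standard definitions, and the example counts are illustrative, not values from the paper:

```python
def segmentation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard pixel-level metrics from the four counts defined above."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),   # 2TP / (2TP + FP + FN)
        "iou": tp / (tp + fp + fn),            # TP / (TP + FP + FN)
    }

# Example with made-up counts:
# segmentation_metrics(tp=900, tn=98000, fp=60, fn=120)
```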
4. Discussion
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
1. World Health Organization. World Report on Vision; World Health Organization: Geneva, Switzerland, 2019. Available online: http://www.who.int/publications-detail/world-report-on-vision (accessed on 28 February 2020).
2. Gupta, N.; Aung, T.; Congdon, N.; Lerner, F.; Dada, T.; Olawoye, S.; Resnikoff, S.; Wang, N.; Wormald, R. International Council of Ophthalmology Guidelines for Glaucoma Eye Care; International Council of Ophthalmology: San Francisco, CA, USA, 2016.
3. Fu, H.; Cheng, J.; Xu, Y.; Zhang, C.; Wong, D.W.K.; Liu, J.; Cao, X. Disc-aware ensemble network for glaucoma screening from fundus image. IEEE Trans. Med. Imaging 2018, 37, 2493–2501.
4. Lundy, D.C.; Choplin, N.T. Atlas of Glaucoma; Informa: London, UK, 2007.
5. Orlando, J.I.; Fu, H.; Breda, J.B.; van Keer, K.; Bathula, D.R.; Diaz-Pinto, A.; Fang, R.; Heng, P.A.; Kim, J.; Lee, J.; et al. REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med. Image Anal. 2020, 59, 101570.
6. Diaz-Pinto, A.; Morales, S.; Naranjo, V.; Köhler, T.; Mossi, J.M.; Navea, A. CNNs for automatic glaucoma assessment using fundus images: An extensive validation. Biomed. Eng. Online 2019, 18, 29.
7. Zhang, Z.; Yin, F.S.; Liu, J.; Wong, W.K.; Tan, N.M.; Lee, B.H.; Cheng, J.; Wong, T.Y. ORIGA-light: An online retinal fundus image database for glaucoma analysis and research. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 3065–3068.
8. Fumero, F.; Alayón, S.; Sanchez, J.L.; Sigut, J.; Gonzalez-Hernandez, M. RIM-ONE: An open retinal image database for optic nerve evaluation. In Proceedings of the 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), Bristol, UK, 27–30 June 2011; pp. 1–6.
9. Sivaswamy, J.; Krishnadas, S.R.; Joshi, G.D.; Jain, M.; Tabish, A.U.S. Drishti-GS: Retinal image dataset for optic nerve head (ONH) segmentation. In Proceedings of the 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), Beijing, China, 29 April–2 May 2014; pp. 53–56.
10. Budai, A.; Bock, R.; Maier, A.; Hornegger, J.; Michelson, G. Robust vessel segmentation in fundus images. Int. J. Biomed. Imaging 2013, 2013, 154860.
11. Foong, A.W.; Saw, S.M.; Loo, J.L.; Shen, S.; Loon, S.C.; Rosman, M.; Aung, T.; Tan, D.T.; Tai, E.S.; Wong, T.Y. Rationale and methodology for a population-based study of eye diseases in Malay people: The Singapore Malay eye study (SiMES). Ophthalmic Epidemiol. 2007, 14, 25–35.
12. Sng, C.C.; Foo, L.L.; Cheng, C.Y.; Allen, J.C., Jr.; He, M.; Krishnaswamy, G.; Nongpiur, M.E.; Friedman, D.S.; Wong, T.Y.; Aung, T. Determinants of anterior chamber depth: The Singapore Chinese Eye Study. Ophthalmology 2012, 119, 1143–1150.
13. Pan, C.W.; Wong, T.Y.; Chang, L.; Lin, X.Y.; Lavanya, R.; Zheng, Y.F.; Kok, Y.O.; Wu, R.Y.; Aung, T.; Saw, S.M. Ocular biometry in an urban Indian population: The Singapore Indian Eye Study (SINDI). Investig. Ophthalmol. Vis. Sci. 2011, 52, 6636–6642.
14. Cheng, J.; Yin, F.; Wong, D.W.K.; Tao, D.; Liu, J. Sparse dissimilarity-constrained coding for glaucoma screening. IEEE Trans. Biomed. Eng. 2015, 62, 1395–1403.
15. Chakravarty, A.; Sivaswamy, J. Glaucoma classification with a fusion of segmentation and image-based features. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 689–692.
16. Karkuzhali, S.; Manimegalai, D. Computational intelligence-based decision support system for glaucoma detection. Biomed. Res. 2017, 28, 976–1683.
17. Mohamed, N.A.; Zulkifley, M.A.; Zaki, W.M.D.W.; Hussain, A. An automated glaucoma screening system using cup-to-disc ratio via Simple Linear Iterative Clustering superpixel approach. Biomed. Signal Process. Control 2019, 53, 101454.
18. Selvathi, D.; Prakash, N.B.; Gomathi, V.; Hemalakshmi, G.R. Fundus image classification using wavelet-based features in detection of glaucoma. Biomed. Pharmacol. J. 2018, 11, 795–805.
19. Maheshwari, S.; Pachori, R.B.; Acharya, U.R. Automated diagnosis of glaucoma using empirical wavelet transform and correntropy features extracted from fundus images. IEEE J. Biomed. Health Inform. 2016, 21, 803–813.
20. Guo, F.; Mai, Y.; Zhao, X.; Duan, X.; Fan, Z.; Zou, B.; Xie, B. Yanbao: A mobile app using the measurement of clinical parameters for glaucoma screening. IEEE Access 2018, 6, 77414–77428.
21. Bajwa, M.N.; Malik, M.I.; Siddiqui, S.A.; Dengel, A.; Shafait, F.; Neumeier, W.; Ahmed, S. Two-stage framework for optic disc localization and glaucoma classification in retinal fundus images using deep learning. BMC Med. Inform. Decis. Mak. 2019, 19, 136.
22. Gómez-Valverde, J.J.; Antón, A.; Fatti, G.; Liefers, B.; Herranz, A.; Santos, A.; Sánchez, C.I.; Ledesma-Carbayo, M.J. Automatic glaucoma classification using color fundus images based on convolutional neural networks and transfer learning. Biomed. Opt. Express 2019, 10, 892–913.
23. Orlando, J.I.; Prokofyeva, E.; del Fresno, M.; Blaschko, M.B. Convolutional neural network transfer for automated glaucoma identification. In Proceedings of the 12th International Symposium on Medical Information Processing and Analysis, Tandil, Argentina, 26 January 2017; SPIE; Volume 10160, p. 101600U.
24. Asaoka, R.; Tanito, M.; Shibata, N.; Mitsuhashi, K.; Nakahara, K.; Fujino, Y.; Matsuura, M.; Murata, H.; Tokumo, K.; Kiuchi, Y. Validation of a deep learning model to screen for glaucoma using images from different fundus cameras and data augmentation. Ophthalmol. Glaucoma 2019, 2, 224–231.
25. Juneja, M.; Singh, S.; Agarwal, N.; Bali, S.; Gupta, S.; Thakur, N.; Jindal, P. Automated detection of Glaucoma using deep learning convolution network (G-net). Multimed. Tools Appl. 2020, 79, 15531–15553.
26. Win, K.Y.; Maneerat, N.; Hamamoto, K.; Syna, S. A cascade of encoder-decoder with atrous separable convolution and ensemble deep convolutional neural networks for tuberculosis detection. IEEE Access 2020, under review.
27. Karim, M.R.; Rahman, A.; Jares, J.B.; Decker, S.; Beyan, O. A snapshot neural ensemble method for cancer-type prediction based on copy number variations. Neural Comput. Appl. 2019, 1–19.
28. Tiulpin, A.; Thevenot, J.; Rahtu, E.; Lehenkari, P.; Saarakkala, S. Automatic knee osteoarthritis diagnosis from plain radiographs: A deep learning-based approach. Sci. Rep. 2018, 8, 1–10.
29. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
30. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–8 December 2012; pp. 1097–1105.
31. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
32. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
33. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258.
34. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
35. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360.
36. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856.
37. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
38. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
39. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 4, p. 12.
40. Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8697–8710.
41. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
42. Rokach, L. Ensemble-based classifiers. Artif. Intell. Rev. 2010, 33, 1–39.
| Dataset | Normal | Glaucoma | Total | Ground Truth Information | Source | Availability |
|---|---|---|---|---|---|---|
| REFUGE | 1080 | 120 | 1200 | Pixel-wise annotations of OD and OC; localization mask of the fovea; classification labels of normal and glaucomatous | Orlando et al. [5] | Online |
| ACRIMA | 309 | 396 | 705 | Classification labels of normal and glaucomatous | Diaz-Pinto et al. [6] | Online |
| ORIGA | 482 | 168 | 650 | Segmentation masks of OD and OC; classification labels of normal and glaucomatous | Zhang et al. [7] | Online |
| RIM-ONE | 92 | 39 | 131 | Manual segmentation masks of OD; classification labels of normal and glaucomatous | Fumero et al. [8] | Online |
| DRISHTI-GS1 | 31 | 70 | 101 | Manual segmentation masks of the optic nerve head for 50 training images; classification labels of normal and glaucomatous | Sivaswamy et al. [9] | Online |
| HRF | 15 (+15 DR) | 15 | 45 | Segmentation masks of field of view (FOV), blood vessels, and OD; classification labels of normal, DR, and glaucomatous | Budai et al. [10] | Online |
| SiMES | 482 | 168 | 650 | Classification labels of normal and glaucomatous | Foong et al. [11] | Private |
| SCES | 1630 | 46 | 1676 | Classification labels of normal and glaucomatous | Sng et al. [12] | Private |
| SiNDI | 5670 | 113 | 5783 | Classification labels of normal and glaucomatous | Pan et al. [13] | Private |
| Network | Depth | Size (MB) | Parameters (×10⁶) | Image Input Size |
|---|---|---|---|---|
| AlexNet | 8 | 227 | 61.0 | 227 × 227 |
| GoogleNet | 22 | 27 | 7.0 | 224 × 224 |
| InceptionV3 | 48 | 89 | 23.9 | 299 × 299 |
| XceptionNet | 71 | 85 | 22.9 | 299 × 299 |
| ResNet-101 | 101 | 167 | 44.6 | 224 × 224 |
| ShuffleNet | 50 | 6.3 | 1.4 | 224 × 224 |
| SqueezeNet | 18 | 4.6 | 1.24 | 227 × 227 |
| MobileNet | 53 | 13 | 3.5 | 224 × 224 |
| InceptionResNet | 164 | 209 | 55.9 | 299 × 299 |
| DenseNet | 201 | 77 | 20.0 | 224 × 224 |
| NasNet-Large | * | 360 | 88.9 | 331 × 331 |
| Dataset | No. of Images | Camera | FOV | Resolution | Ethnicity | Focus |
|---|---|---|---|---|---|---|
| REFUGE | 1200 | Train: Zeiss Visucam 500; validation/testing: Canon CR-2 | – | Train: JPEG 2124 × 2056; validation/testing: JPEG 1634 × 1634 | Chinese | Centered on the macula with OD visible |
| ACRIMA | 705 | Topcon TRC | 35° | JPEG 2048 × 1536 | Spanish | OD |
| ORIGA | 650 | – | – | JPEG 3072 × 2048 | Malay | OD |
| RIM-ONE | 131 | Kowa WX 3D | 34° | JPEG 2144 × 1424 | Spanish | OD |
| DRISHTI-GS1 | 101 | NM/FA | 30° | PNG 2896 × 1944 | Indian | OD |
| Total | 2787 | – | – | – | – | – |
| Method | Accuracy | Dice Coefficient | IoU |
|---|---|---|---|
| DeepLabv3+ with ResNet18 | 99.70% | 90.95% | 83.56% |
| DeepLabv3+ with ResNet50 | 99.64% | 88.78% | 80.26% |
| DeepLabv3+ with XceptionNet | 99.71% | 91.39% | 84.48% |
| DeepLabv3+ with InceptionResNet | 99.72% | 91.29% | 84.21% |
| DeepLabv3+ with MobileNet | 99.70% | 91.73% | 84.89% |
| Pretrained Deep CNN | REFUGE ACC (%) | REFUGE AUC (%) | RIM-ONE ACC (%) | RIM-ONE AUC (%) | ACRIMA ACC (%) | ACRIMA AUC (%) | ORIGA ACC (%) | ORIGA AUC (%) | DRISHTI-GS1 ACC (%) | DRISHTI-GS1 AUC (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| AlexNet | 90.00 | 81.69 | 74.87 | 69.21 | 96.23 | 99.85 | 68.42 | 59.62 | 70.00 | 81.48 |
| GoogleNet | 92.50 | 88.69 | 94.74 | 91.03 | 91.51 | 96.49 | 71.79 | 77.03 | 70.00 | 79.89 |
| InceptionV3 | 90.25 | 87.26 | 71.05 | 76.92 | 93.87 | 99.36 | 75.38 | 76.86 | 70.00 | 65.08 |
| XceptionNet | 89.00 | 85.29 | 81.58 | 88.14 | 98.11 | 99.80 | 77.44 | 81.68 | 66.67 | 66.67 |
| ResNet-50 | 93.50 | 92.97 | 92.11 | 98.08 | 95.75 | 99.56 | 75.90 | 80.19 | 73.33 | 78.31 |
| SqueezeNet | 91.00 | 89.58 | 97.37 | 100 | 95.75 | 98.84 | 78.46 | 79.17 | 56.67 | 76.19 |
| ShuffleNet | 93.25 | 94.09 | 92.11 | 97.44 | 96.23 | 99.75 | 72.31 | 80.04 | 86.67 | 78.84 |
| MobileNet | 94.50 | 93.04 | 92.11 | 99.36 | 98.58 | 99.96 | 80.51 | 81.54 | 76.67 | 73.02 |
| DenseNet | 93.00 | 94.64 | 94.74 | 99.04 | 99.53 | 99.98 | 77.44 | 73.92 | 73.33 | 78.31 |
| InceptionResNet | 92.25 | 92.08 | 68.42 | 61.86 | 96.23 | 98.84 | 80.00 | 81.31 | 70.00 | 68.78 |
| NasNet-Large | 92.75 | 90.44 | 86.84 | 95.19 | 96.23 | 99.85 | 73.85 | 77.72 | 70.00 | 69.31 |
| Deep Feature Descriptor | REFUGE ACC (%) | REFUGE AUC (%) | RIM-ONE ACC (%) | RIM-ONE AUC (%) | ACRIMA ACC (%) | ACRIMA AUC (%) | ORIGA ACC (%) | ORIGA AUC (%) | DRISHTI-GS1 ACC (%) | DRISHTI-GS1 AUC (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| AlexNet | 93.00 | 88.07 | 86.84 | 92.63 | 91.51 | 95.47 | 76.41 | 80.46 | 80.00 | 79.89 |
| GoogleNet | 93.75 | 87.81 | 84.21 | 90.38 | 90.57 | 90.57 | 74.87 | 75.23 | 70.00 | 72.49 |
| InceptionV3 | 92.25 | 90.31 | 63.16 | 74.04 | 95.28 | 95.28 | 78.97 | 81.39 | 73.33 | 70.90 |
| XceptionNet | 93.25 | 90.26 | 78.95 | 81.09 | 91.04 | 94.90 | 75.90 | 76.12 | 73.33 | 68.78 |
| ResNet-50 | 94.75 | 90.66 | 81.58 | 91.67 | 93.40 | 98.39 | 75.38 | 80.12 | 70.00 | 62.96 |
| SqueezeNet | 92.00 | 84.83 | 81.58 | 96.15 | 94.81 | 98.45 | 78.46 | 80.06 | 80.00 | 78.84 |
| ShuffleNet | 93.75 | 92.24 | 89.47 | 97.44 | 92.45 | 97.99 | 74.36 | 77.56 | 86.67 | 83.07 |
| MobileNet | 92.75 | 89.25 | 86.84 | 94.23 | 92.92 | 96.48 | 77.44 | 81.30 | 76.67 | 85.71 |
| DenseNet | 93.50 | 93.94 | 84.21 | 89.74 | 96.23 | 98.75 | 78.46 | 81.92 | 70.00 | 77.25 |
| InceptionResNet | 92.00 | 88.78 | 78.95 | 91.99 | 92.45 | 96.63 | 78.46 | 82.06 | 83.33 | 91.53 |
| NasNet-Large | 93.25 | 90.81 | 86.84 | 93.59 | 91.04 | 95.92 | 74.87 | 79.23 | 83.33 | 80.42 |
| Ensemble Learner | REFUGE ACC (%) | REFUGE AUC (%) | RIM-ONE ACC (%) | RIM-ONE AUC (%) | ACRIMA ACC (%) | ACRIMA AUC (%) | ORIGA ACC (%) | ORIGA AUC (%) | DRISHTI-GS1 ACC (%) | DRISHTI-GS1 AUC (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| Ensemble of P1 (E(P1)) | 95.59 | 95.10 | 97.37 | 100 | 99.53 | 99.98 | 83.59 | 88.86 | 83.33 | 85.19 |
| Ensemble of P2 (E(P2)) | 95.75 | 94.32 | 92.11 | 99.04 | 96.23 | 99.01 | 80.00 | 85.26 | 90.00 | 92.06 |
| Dataset | Method | Reported Performance |
|---|---|---|
| RIM-ONE | Maheshwari et al. (2016) [19] | ACC = 81.32% |
| RIM-ONE | Gómez-Valverde et al. (2019) [22] | Sen = 87.01%, Spe = 89.01%, AUC = 94% |
| RIM-ONE | Diaz-Pinto et al. (2019) [6] | ACC = 71.21%, AUC = 85.75% |
| RIM-ONE | Proposed E(P1) | ACC = 97.37%, AUC = 100% |
| RIM-ONE | Proposed E(P2) | ACC = 92.11%, AUC = 99.04% |
| DRISHTI-GS1 | Chakravarty et al. (2016) [15] | ACC = 76.77%, AUC = 78% |
| DRISHTI-GS1 | Orlando et al. (2017) [23] | AUC = 76.26% |
| DRISHTI-GS1 | Diaz-Pinto et al. (2019) [6] | ACC = 75.25%, AUC = 80.41% |
| DRISHTI-GS1 | Proposed E(P1) | ACC = 83.33%, AUC = 85.19% |
| DRISHTI-GS1 | Proposed E(P2) | ACC = 90.00%, AUC = 92.06% |
| ORIGA | Guo et al. (2018) [20] | ACC = 76.90%, AUC = 83.10% |
| ORIGA | Bajwa et al. (2019) [21] | Sen = 71.17%, AUC = 87.40% |
| ORIGA | Proposed E(P1) | ACC = 83.59%, AUC = 88.86% |
| ORIGA | Proposed E(P2) | ACC = 80.00%, AUC = 85.26% |
| ACRIMA | Diaz-Pinto et al. (2019) [6] | ACC = 70.21%, AUC = 76.78% |
| ACRIMA | Proposed E(P1) | ACC = 99.53%, AUC = 99.98% |
| ACRIMA | Proposed E(P2) | ACC = 96.23%, AUC = 99.01% |
| REFUGE | Orlando et al. (2020) [5] | Sen = 97.52%, AUC = 98.85% |
| REFUGE | Proposed E(P1) | ACC = 95.59%, AUC = 95.10% |
| REFUGE | Proposed E(P2) | ACC = 95.75%, AUC = 94.32% |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).