
Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction

1 Department of CSE, Vignan’s Foundation for Science Technology and Research, Guntur 522213, India
2 Department of IT, Vignan’s Foundation for Science Technology and Research, Guntur 522213, India
3 Faculty of Computer Science, University of Northern British Columbia, Prince George, BC V2N 4Z9, Canada
4 Department of Computer and Electronics Systems Engineering, Hankuk University of Foreign Studies, Yongin-si 17035, Korea
5 School of Information Technology and Engineering, The Vellore Institute of Technology (VIT), Vellore 632014, India
6 Department of Computer Science, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Korea
* Author to whom correspondence should be addressed.
Electronics 2020, 9(6), 914; https://doi.org/10.3390/electronics9060914
Received: 3 May 2020 / Revised: 25 May 2020 / Accepted: 28 May 2020 / Published: 30 May 2020
(This article belongs to the Special Issue Computational Intelligence in Healthcare)
Diabetic Retinopathy (DR) is one of the major causes of visual impairment and blindness across the world, and is usually found in patients who have suffered from diabetes for a long period. The major focus of this work is to derive an optimal representation of retinal images that further helps to improve the performance of DR recognition models. To extract this representation, features extracted from multiple pre-trained ConvNet models are blended using the proposed multi-modal fusion module. These final representations are used to train a Deep Neural Network (DNN) for DR identification and severity level prediction. As each ConvNet extracts different features, fusing them using 1D pooling and cross pooling leads to a better representation than features extracted from a single ConvNet. Experimental studies on the benchmark Kaggle APTOS 2019 contest dataset reveal that the model trained on the proposed blended feature representations is superior to existing methods. In addition, we notice that cross-average-pooling-based fusion of features from Xception and VGG16 is the most appropriate for DR recognition. With the proposed model, we achieve an accuracy of 97.41% and a kappa statistic of 94.82% for DR identification, and an accuracy of 81.7% and a kappa statistic of 71.1% for severity level prediction. Another interesting observation is that a DNN with dropout at the input layer converges more quickly when trained using blended features, compared to the same model trained using uni-modal deep features.
Keywords: diabetic retinopathy (DR); pre-trained deep ConvNet; uni-modal deep features; multi-modal deep features; transfer learning; 1D pooling; cross pooling
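
The sketch below illustrates the blended-feature idea described in the abstract: global-average-pooled features from two pre-trained ConvNets (Xception and VGG16) are brought to a common width via 1D average pooling and then fused by element-wise averaging, before a DNN with dropout at its input layer is trained on the result. This is a minimal sketch assuming a TensorFlow/Keras setup; the pool size, the exact cross-pooling operator, and the classifier head are assumptions for illustration, not the paper's verified implementation.

    # Minimal sketch of blended multi-modal deep features (assumed setup).
    import tensorflow as tf
    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import Xception, VGG16

    # Pre-trained backbones used as fixed feature extractors,
    # with global average pooling on the final feature maps.
    xcep = Xception(weights="imagenet", include_top=False, pooling="avg")  # 2048-d
    vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")      # 512-d

    def blended_features(imgs_xcep, imgs_vgg):
        """Fuse two uni-modal feature vectors via 1D pooling + cross pooling.

        imgs_xcep / imgs_vgg: image batches already preprocessed with each
        backbone's own preprocess_input function (assumption).
        """
        f1 = xcep.predict(imgs_xcep)                 # (N, 2048)
        f2 = vgg.predict(imgs_vgg)                   # (N, 512)
        # 1D average pooling (assumed pool size 4): shrink 2048 -> 512
        # so the two modality vectors have matching widths.
        f1 = f1.reshape(-1, 512, 4).mean(axis=2)     # (N, 512)
        # Cross average pooling (assumed: element-wise mean across modalities).
        return (f1 + f2) / 2.0                       # (N, 512)

    # DNN head with dropout applied at the input layer, as noted in the abstract.
    inp = layers.Input(shape=(512,))
    h = layers.Dropout(0.5)(inp)                     # dropout on blended features
    h = layers.Dense(256, activation="relu")(h)
    out = layers.Dense(5, activation="softmax")(h)   # 5 DR severity levels
    dnn = Model(inp, out)
    dnn.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])

Training then reduces to calling dnn.fit on the fused vectors returned by blended_features, which is why convergence behavior can be compared directly against the same head trained on a single backbone's features.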
MDPI and ACS Style

Bodapati, J.D.; Naralasetti, V.; Shareef, S.N.; Hakak, S.; Bilal, M.; Maddikunta, P.K.R.; Jo, O. Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction. Electronics 2020, 9, 914.
