Classification of Retinitis Pigmentosa Stages Based on Machine Learning by Fusion of Image Features of VF and MfERG Maps
Abstract
1. Introduction
- Firstly, in this study, we present a novel method for automatic staging of RP by fusing handcrafted features extracted from VF grayscale maps and mfERG P1 wave amplitude maps and using them as input data to machine-learning algorithms such as Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbor (KNN), and Discriminant Analysis (DA). To the best of our knowledge, this is the first study to use grayscale maps and P1 wave amplitude maps for the staging of RP.
- Secondly, we determine the best handcrafted features to optimize the performance of machine-learning models by performing feature selection with the RF algorithm.
- Thirdly, this paper contributes to the field of ophthalmology by validating the effectiveness of grayscale maps and P1 wave amplitude maps in the staging of RP disease.
- Finally, our study shows that the automatic detection of RP stages can offer important support to researchers in managing the disease and determining optimal treatment.
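The fusion idea described in the contributions above can be sketched in a few lines. The feature extractor here is a hypothetical stand-in (simple image statistics) for the handcrafted GLCM and color descriptors used in the study; the function names are illustrative, not the authors' implementation:

```python
import numpy as np

def extract_features(image):
    """Hypothetical stand-in for the handcrafted descriptors
    (GLCM texture and color-moment statistics) described in the text."""
    return np.array([image.mean(), image.std(), image.max() - image.min()])

def fuse_features(vf_map, mferg_map):
    """Early fusion: concatenate the two modality-specific feature vectors
    (VF grayscale map and mfERG P1 amplitude map) into one sample."""
    return np.concatenate([extract_features(vf_map), extract_features(mferg_map)])

# toy example with random "maps" standing in for the two modalities
rng = np.random.default_rng(0)
vf, mferg = rng.random((24, 24)), rng.random((24, 24))
fused = fuse_features(vf, mferg)
print(fused.shape)  # one fused feature vector per eye
```

The fused vector is then what a classifier such as SVM, DT, KNN, or DA would receive as a single training sample.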
Related Works
2. Materials and Methods
2.1. Patients Database
2.2. Preprocessing
2.3. Handcrafted Feature Extraction
2.4. Feature Selection Based on Random Forest (RF)
2.5. Machine-Learning Models with All Features and Selected Features
- Support Vector Machine (SVM) is a widely used machine-learning algorithm that classifies both linearly and nonlinearly separable datasets by constructing an optimal hyperplane that maximizes the margin between classes. Establishing the basic structure of the SVM requires choosing the regularization parameter C, the kernel function, the kernel degree, and the coding method. Accordingly, grid search (GS) was applied to determine the optimal hyperparameters: C was set to 1000, the kernel function to Gaussian (RBF), the kernel degree to 2.2, and the coding method to one-vs.-one.
- Decision Tree (DT) is a machine-learning classifier applicable to both regression and classification problems, operating on a divide-and-conquer strategy. It consists of a tree structure in which each internal node represents a decision rule and each leaf node corresponds to a class label. Classification starts at the root node, where samples are recursively split according to their feature values until they reach a terminal leaf node. Determining suitable hyperparameters is important for the classification performance of the DT model. Here, the optimal configuration found by GS set the minimum number of observations per leaf node to 1, the maximum number of decision splits to 32, and the splitting criterion parameter to 2.
- K-Nearest Neighbor (KNN) is a distance-based supervised learning algorithm used for classification and regression tasks. Unlike many other algorithms, KNN has no explicit training phase; instead, it stores all training samples and classifies new data based on similarity measures. Classification involves computing the distance between a new data point and all training instances and assigning the class label by majority vote among the k nearest neighbors. The number of neighbors k and the distance metric are two key hyperparameters that strongly affect KNN performance, so choosing them well is critical. In this study, GS set the k parameter of the KNN model to 2 and the distance metric to Cityblock.
- Discriminant Analysis (DA) is a statistical and machine-learning model applied to data visualization, classification, and dimensionality reduction tasks. It aims to separate the classes in a dataset by finding a discriminant function that maximizes the separation between them. DA can be grouped into Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA), both widely used in classification problems. LDA assumes that the classes share the same covariance matrix, yielding linear decision boundaries, while QDA allows each class its own covariance matrix, yielding quadratic decision boundaries. In this study, GS was applied to the discriminant function type and the amount of regularization to optimize the hyperparameters of the DA model. As a result, QDA was selected as the discriminant function, and the optimal gamma value was set to 0.
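The four hyperparameter searches above can be sketched with scikit-learn's `GridSearchCV`. The parameter names and grids below are illustrative analogues of the settings reported in the text, not the authors' exact configuration (e.g. QDA's regularization is `reg_param` here, and `"manhattan"` is the same metric as Cityblock); the data are synthetic:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.datasets import make_classification

# synthetic 3-class stand-in for the fused VF + mfERG feature set
X, y = make_classification(n_samples=124, n_features=9, n_informative=6,
                           n_classes=3, random_state=0)

# illustrative grids around the values reported in the text
grids = {
    "SVM": (SVC(kernel="rbf"), {"C": [1, 100, 1000]}),
    "DT": (DecisionTreeClassifier(random_state=0),
           {"min_samples_leaf": [1, 2, 4]}),
    "KNN": (KNeighborsClassifier(metric="manhattan"),  # manhattan == Cityblock
            {"n_neighbors": [2, 3, 5]}),
    "QDA": (QuadraticDiscriminantAnalysis(), {"reg_param": [0.0, 0.1]}),
}

best = {}
for name, (model, grid) in grids.items():
    search = GridSearchCV(model, grid, cv=5).fit(X, y)
    best[name] = search.best_params_
print(best)
```

Each search exhaustively evaluates every grid point with 5-fold cross-validation and keeps the best-scoring configuration per model.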
2.6. Performance Evaluation Metrics
- The confusion matrix enables a comprehensive summary of a classifier’s predictions by comparing predicted to actual class labels. It is made up of four basic components. These are true positives (TPs), representing correctly classified positive examples; true negatives (TNs), representing correctly classified negative examples; false positives (FPs), where negative examples are misclassified as positive; and false negatives (FNs), where positive examples are misclassified as negative.
- Accuracy is a metric that represents the ratio of correctly classified samples to the total number of samples. It is expressed as Equation (2) below:
- Precision, also known as positive predictive value, measures the proportion of correctly predicted positive samples among all samples classified as positive. It is calculated by the formula given in Equation (3):
- Recall, also named sensitivity, measures the model’s ability to correctly identify all positive samples. It is calculated by the formula in Equation (4):
- The F1 score represents the harmonic mean of precision and recall and provides a balanced measure when both false positives and false negatives need to be minimized. The formula for this metric is presented in Equation (5):
- Specificity, also known as true negative rate, determines the ability of the classifier to correctly identify negative examples. Equation (6) defines the formula for specificity:
- The ROC curve is a graph that plots the true positive rate (TPR) against the false positive rate (FPR). The AUC, a summary indicator of this curve, measures the classifier's success in distinguishing between classes. The formula for AUC is given in Equation (7) [51]:
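The count-based metrics in Equations (2) through (6) follow directly from the four confusion-matrix components; a minimal sketch for a single class, with illustrative counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Per-class metrics from confusion-matrix counts (Equations (2)-(6))."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)            # positive predictive value
    recall = tp / (tp + fn)               # sensitivity / TPR
    specificity = tn / (tn + fp)          # true negative rate
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "specificity": specificity}

# illustrative counts, not values from the study
print(classification_metrics(tp=40, tn=50, fp=5, fn=5))
```

In the multi-class setting of this study, such per-class values would be averaged across the three RP stages.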
3. Results
3.1. Experimental Setup
3.2. Evaluation of Feature Selection with RF
3.3. Results of Classification Models with All Features
3.4. Results of Machine-Learning Models with Features Selected by the RF Algorithm
3.5. Comparative Analysis of Performance Evaluation of Classification Models
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
RP | Retinitis Pigmentosa |
VA | Visual acuity |
VF | Visual field |
OCT | Optical coherence tomography |
MfERG | Multifocal electroretinography |
CNN | Convolutional Neural Networks |
GLCM | Gray Level Co-occurrence Matrix |
SVM | Support Vector Machine |
DT | Decision Tree |
KNN | K-Nearest Neighbor |
DA | Discriminant Analysis |
ES | Early stage |
MS | Mid-stage |
AS | Advanced stage |
QDA | Quadratic Discriminant Analysis |
References
- Grossniklaus, H.E.; Geisert, E.E.; Nickerson, J.M. Introduction to the Retina. In Progress in Molecular Biology and Translational Science; Elsevier: Amsterdam, The Netherlands, 2015; Volume 134, pp. 383–396. ISBN 978-0-12-801059-4. [Google Scholar]
- Badar, M.; Haris, M.; Fatima, A. Application of Deep Learning for Retinal Image Analysis: A Review. Comput. Sci. Rev. 2020, 35, 100203. [Google Scholar] [CrossRef]
- Qi, X.-Y.; Mi, C.-H.; Cao, D.-R.; Chen, X.-Q.; Zhang, P. Retinitis Pigmentosa and Stem Cell Therapy. Int. J. Ophthalmol. 2024, 17, 1363–1369. [Google Scholar] [CrossRef]
- Huang, Z.-Y.; Liang, L.-N.; Li, Y.-M.; Xu, K.; Li, X.-Y. Genetic, Environmental and Other Risk Factors for Progression of Retinitis Pigmentosa. Int. J. Ophthalmol. 2022, 15, 828–837. [Google Scholar] [CrossRef] [PubMed]
- Hamel, C. Retinitis Pigmentosa. Orphanet J. Rare Dis. 2006, 1, 40. [Google Scholar] [CrossRef] [PubMed]
- Wu, K.Y.; Kulbay, M.; Toameh, D.; Xu, A.Q.; Kalevar, A.; Tran, S.D. Retinitis Pigmentosa: Novel Therapeutic Targets and Drug Development. Pharmaceutics 2023, 15, 685. [Google Scholar] [CrossRef] [PubMed]
- Dias, M.F.; Joo, K.; Kemp, J.A.; Fialho, S.L.; Da Silva Cunha, A.; Woo, S.J.; Kwon, Y.J. Molecular Genetics and Emerging Therapies for Retinitis Pigmentosa: Basic Research and Clinical Perspectives. Prog. Retin. Eye Res. 2018, 63, 107–131. [Google Scholar] [CrossRef]
- Iftikhar, M.; Lemus, M.; Usmani, B.; Campochiaro, P.A.; Sahel, J.A.; Scholl, H.P.N.; Shah, S.M.A. Classification of Disease Severity in Retinitis Pigmentosa. Br. J. Ophthalmol. 2019, 103, 1595–1599. [Google Scholar] [CrossRef]
- Oner, A.; Kahraman, N.S. A New Classification for Retinitis Pigmentosa Including Multifocal Electroretinography to Evaluate the Disease Severity. Open J. Ophthalmol. 2023, 13, 37–47. [Google Scholar] [CrossRef]
- Wong, S.H.; Plant, G.T. How to Interpret Visual Fields. Pract. Neurol. 2015, 15, 374–381. [Google Scholar] [CrossRef]
- Nguyen, X.-T.-A.; Moekotte, L.; Plomp, A.S.; Bergen, A.A.; Van Genderen, M.M.; Boon, C.J.F. Retinitis Pigmentosa: Current Clinical Management and Emerging Therapies. IJMS 2023, 24, 7481. [Google Scholar] [CrossRef]
- Hoffmann, M.B.; Bach, M.; Kondo, M.; Li, S.; Walker, S.; Holopigian, K.; Viswanathan, S.; Robson, A.G. ISCEV Standard for Clinical Multifocal Electroretinography (mfERG) (2021 Update). Doc. Ophthalmol. 2021, 142, 5–16. [Google Scholar] [CrossRef] [PubMed]
- Oner, A.; Kahraman, N.S. Suprachoroidal Umbilical Cord Derived Mesenchymal Stem Cell Implantation for the Treatment of Retinitis Pigmentosa in Pediatric Patients. Am. J. Stem Cell Res. 2023, 5, 1–7. [Google Scholar]
- Kahraman, N.S.; Oner, A. Umbilical Cord Derived Mesenchymal Stem Cell Implantation in Retinitis Pigmentosa: A 6-Month Follow-up Results of a Phase 3 Trial. Int. J. Ophthalmol. 2020, 13, 1423–1429. [Google Scholar] [CrossRef]
- Suzuki, K. Overview of Deep Learning in Medical Imaging. Radiol. Phys. Technol. 2017, 10, 257–273. [Google Scholar] [CrossRef]
- Lahmiri, S.; Boukadoum, M. Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images. J. Med. Eng. 2013, 2013, 1–13. [Google Scholar] [CrossRef] [PubMed]
- Dimitrovski, I.; Kocev, D.; Kitanovski, I.; Loskovska, S.; Džeroski, S. Improved Medical Image Modality Classification Using a Combination of Visual and Textual Features. Comput. Med. Imaging Graph. 2015, 39, 14–26. [Google Scholar] [CrossRef]
- Kommineni, J.; Mandala, S.; Sunar, M.S.; Chakravarthy, P.M. Accurate Computing of Facial Expression Recognition Using a Hybrid Feature Extraction Technique. J. Supercomput. 2021, 77, 5019–5044. [Google Scholar] [CrossRef]
- Nanni, L.; Brahnam, S.; Ghidoni, S.; Lumini, A. Bioimage Classification with Handcrafted and Learned Features. IEEE/ACM Trans. Comput. Biol. Bioinf. 2019, 16, 874–885. [Google Scholar] [CrossRef]
- Bansal, P.; Garg, R.; Soni, P. Detection of Melanoma in Dermoscopic Images by Integrating Features Extracted Using Handcrafted and Deep Learning Models. Comput. Ind. Eng. 2022, 168, 108060. [Google Scholar] [CrossRef]
- Saraswat, M.; Arya, K.V. Feature Selection and Classification of Leukocytes Using Random Forest. Med. Biol. Eng. Comput. 2014, 52, 1041–1052. [Google Scholar] [CrossRef]
- Li, X.; Li, L.; Jiang, Y.; Wang, H.; Qiao, X.; Feng, T.; Luo, H.; Zhao, Y. Vision-Language Models in Medical Image Analysis: From Simple Fusion to General Large Models. Inf. Fusion. 2025, 118, 102995. [Google Scholar] [CrossRef]
- Hardas, M.; Mathur, S.; Bhaskar, A.; Kalla, M. Retinal Fundus Image Classification for Diabetic Retinopathy Using SVM Predictions. Phys. Eng. Sci. Med. 2022, 45, 781–791. [Google Scholar] [CrossRef]
- Tăbăcaru, G.; Moldovanu, S.; Răducan, E.; Barbu, M. A Robust Machine Learning Model for Diabetic Retinopathy Classification. J. Imaging 2023, 10, 8. [Google Scholar] [CrossRef]
- Ramasamy, L.K.; Padinjappurathu, S.G.; Kadry, S.; Damaševičius, R. Detection of Diabetic Retinopathy Using a Fusion of Textural and Ridgelet Features of Retinal Images and Sequential Minimal Optimization Classifier. PeerJ Comput. Sci. 2021, 7, e456. [Google Scholar] [CrossRef] [PubMed]
- Raveenthini, M.; Lavanya, R. Multiocular Disease Detection Using a Generic Framework Based on Handcrafted and Deep Learned Feature Analysis. Intell. Syst. Appl. 2023, 17, 200184. [Google Scholar] [CrossRef]
- Aljohani, A.; Aburasain, R.Y. A Hybrid Framework for Glaucoma Detection through Federated Machine Learning and Deep Learning Models. BMC Med. Inform. Decis. Mak. 2024, 24, 115. [Google Scholar] [CrossRef]
- Ameh Joseph, A.; Abdullahi, M.; Junaidu, S.B.; Hassan Ibrahim, H.; Chiroma, H. Improved Multi-Classification of Breast Cancer Histopathological Images Using Handcrafted Features and Deep Neural Network (Dense Layer). Intell. Syst. Appl. 2022, 14, 200066. [Google Scholar] [CrossRef]
- Al-Jabbar, M.; Alshahrani, M.; Senan, E.M.; Ahmed, I.A. Analyzing Histological Images Using Hybrid Techniques for Early Detection of Multi-Class Breast Cancer Based on Fusion Features of CNN and Handcrafted. Diagnostics 2023, 13, 1753. [Google Scholar] [CrossRef]
- Prince, R.; Niu, Z.; Khan, Z.Y.; Emmanuel, M.; Patrick, N. COVID-19 Detection from Chest X-Ray Images Using CLAHE-YCrCb, LBP, and Machine Learning Algorithms. BMC Bioinform. 2024, 25, 28. [Google Scholar] [CrossRef]
- Alinsaif, S. COVID-19 Image Classification: A Comparative Performance Analysis of Hand-Crafted vs. Deep Features. Computation 2024, 12, 66. [Google Scholar] [CrossRef]
- Rashid, R.; Aslam, W.; Mehmood, A.; Vargas, D.L.R.; Diez, I.D.L.T.; Ashraf, I. A Detectability Analysis of Retinitis Pigmetosa Using Novel SE-ResNet Based Deep Learning Model and Color Fundus Images. IEEE Access 2024, 12, 28297–28309. [Google Scholar] [CrossRef]
- Liu, Y.; Xie, H.; Zhao, X.; Tang, J.; Yu, Z.; Wu, Z.; Tian, R.; Chen, Y.; Chen, M.; Ntentakis, D.P.; et al. Automated Detection of Nine Infantile Fundus Diseases and Conditions in Retinal Images Using a Deep Learning System. EPMA J. 2024, 15, 39–51. [Google Scholar] [CrossRef] [PubMed]
- Wang, X.; Li, H.; Zheng, H.; Sun, G.; Wang, W.; Yi, Z.; Xu, A.; He, L.; Wang, H.; Jia, W.; et al. Automatic Detection of 30 Fundus Diseases Using Ultra-Widefield Fluorescein Angiography with Deep Experts Aggregation. Ophthalmol. Ther. 2024, 13, 1125–1144. [Google Scholar] [CrossRef] [PubMed]
- Alharbi, M. Multi-Classification of Eye Disease Based on Fundus Images Using Hybrid Squeeze Net and LRCN Model. Multimed. Tools Appl. 2024, 83, 69197–69226. [Google Scholar] [CrossRef]
- Arsalan, M.; Haider, A.; Park, C.; Hong, J.S.; Park, K.R. Multiscale Triplet Spatial Information Fusion-Based Deep Learning Method to Detect Retinal Pigment Signs with Fundus Images. Eng. Appl. Artif. Intell. 2024, 133, 108353. [Google Scholar] [CrossRef]
- Nagasato, D.; Sogawa, T.; Tanabe, M.; Tabuchi, H.; Numa, S.; Oishi, A.; Ohashi Ikeda, H.; Tsujikawa, A.; Maeda, T.; Takahashi, M.; et al. Estimation of Visual Function Using Deep Learning From Ultra-Widefield Fundus Images of Eyes With Retinitis Pigmentosa. JAMA Ophthalmol. 2023, 141, 305. [Google Scholar] [CrossRef]
- Wang, Y.-Z.; Juroch, K.; Birch, D.G. Deep Learning-Assisted Measurements of Photoreceptor Ellipsoid Zone Area and Outer Segment Volume as Biomarkers for Retinitis Pigmentosa. Bioengineering 2023, 10, 1394. [Google Scholar] [CrossRef]
- Chen, T.-C.; Lim, W.S.; Wang, V.Y.; Ko, M.-L.; Chiu, S.-I.; Huang, Y.-S.; Lai, F.; Yang, C.-M.; Hu, F.-R.; Jang, J.-S.R.; et al. Artificial Intelligence–Assisted Early Detection of Retinitis Pigmentosa—The Most Common Inherited Retinal Degeneration. J. Digit. Imaging 2021, 34, 948–958. [Google Scholar] [CrossRef]
- Wang, Y.-Z.; Wu, W.; Birch, D.G. A Hybrid Model Composed of Two Convolutional Neural Networks (CNNs) for Automatic Retinal Layer Segmentation of OCT Images in Retinitis Pigmentosa (RP). Trans. Vis. Sci. Tech. 2021, 10, 9. [Google Scholar] [CrossRef]
- Khaing, T.T.; Okamoto, T.; Ye, C.; Mannan, M.A.; Miura, G.; Yokouchi, H.; Nakano, K.; Aimmanee, P.; Makhanov, S.S.; Haneishi, H. Automatic Measurement of Choroidal Thickness and Vasculature in Optical Coherence Tomography Images of Eyes with Retinitis Pigmentosa. Artif. Life Robot. 2022, 27, 70–79. [Google Scholar] [CrossRef]
- Masumoto, H.; Tabuchi, H.; Nakakura, S.; Ohsugi, H.; Enno, H.; Ishitobi, N.; Ohsugi, E.; Mitamura, Y. Accuracy of a Deep Convolutional Neural Network in Detection of Retinitis Pigmentosa on Ultrawide-Field Images. PeerJ 2019, 7, e6900. [Google Scholar] [CrossRef] [PubMed]
- Iadanza, E.; Goretti, F.; Sorelli, M.; Melillo, P.; Pecchia, L.; Simonelli, F.; Gherardelli, M. Automatic Detection of Genetic Diseases in Pediatric Age Using Pupillometry. IEEE Access 2020, 8, 34949–34961. [Google Scholar] [CrossRef]
- Giambene, B.; Verdina, T.; Pennino, M.; Fabbrucci, M.; Cavallini, G.M.; Menchini, U. Multifocal Electroretinographic Responses in Sector Retinitis Pigmentosa. Int. Ophthalmol. 2020, 40, 703–708. [Google Scholar] [CrossRef] [PubMed]
- Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst., Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
- Paramastri, G.A.; Pramunendar, R.A. Glaucoma Detection Through Fundus Images Using Radial Basis Function With Color and GLCM Feature Extraction. In Proceedings of the 2021 International Seminar on Application for Technology of Information and Communication (iSemantic), Semarang, Indonesia, 18 September 2021; IEEE: New York, NY, USA, 2021; pp. 222–227. [Google Scholar]
- Mutlag, W.K.; Ali, S.K.; Aydam, Z.M.; Taher, B.H. Feature Extraction Methods: A Review. J. Phys. Conf. Ser. 2020, 1591, 012028. [Google Scholar] [CrossRef]
- Iranzad, R.; Liu, X. A Review of Random Forest-Based Feature Selection Methods for Data Science Education and Applications. Int. J. Data Sci. Anal. 2024, 16, 132–145. [Google Scholar] [CrossRef]
- Ali, R.; Nipu, S.M.A.; Khan, S.A. A Decision Support System for Classifying Supplier Selection Criteria Using Machine Learning and Random Forest Approach. Decis. Anal. J. 2023, 7, 100238. [Google Scholar] [CrossRef]
- Fei, H.; Fan, Z.; Wang, C.; Zhang, N.; Wang, T.; Chen, R.; Bai, T. Cotton Classification Method at the County Scale Based on Multi-Features and Random Forest Feature Selection Algorithm and Classifier. Remote Sens. 2022, 14, 829. [Google Scholar] [CrossRef]
- Moldovanu, S.; Obreja, C.-D.; Biswas, K.C.; Moraru, L. Towards Accurate Diagnosis of Skin Lesions Using Feedforward Back Propagation Neural Networks. Diagnostics 2021, 11, 936. [Google Scholar] [CrossRef]
- Althubiti, S.A.; Paul, S.; Mohanty, R.; Mohanty, S.N.; Alenezi, F.; Polat, K. Ensemble Learning Framework with GLCM Texture Extraction for Early Detection of Lung Cancer on CT Images. Comput. Math. Methods Med. 2022, 2022, 1–14. [Google Scholar] [CrossRef]
- Gupta, S.; Thakur, S.; Gupta, A. Comparative Study of Different Machine Learning Models for Automatic Diabetic Retinopathy Detection Using Fundus Image. Multimed. Tools Appl. 2023, 83, 34291–34322. [Google Scholar] [CrossRef]
- Tamim, N.; Elshrkawey, M.; Nassar, H. Accurate Diagnosis of Diabetic Retinopathy and Glaucoma Using Retinal Fundus Images Based on Hybrid Features and Genetic Algorithm. Appl. Sci. 2021, 11, 6178. [Google Scholar] [CrossRef]
- Singh, L.K.; Khanna, M.; Pooja. A Novel Multimodality Based Dual Fusion Integrated Approach for Efficient and Early Prediction of Glaucoma. Biomed. Signal Process. Control 2022, 73, 103468. [Google Scholar] [CrossRef]
- Nadeem, M.W.; Goh, H.G.; Hussain, M.; Liew, S.-Y.; Andonovic, I.; Khan, M.A. Deep Learning for Diabetic Retinopathy Analysis: A Review, Research Challenges, and Future Directions. Sensors 2022, 22, 6780. [Google Scholar] [CrossRef]
- Li, X.; Jiang, Y.; Zhang, J.; Li, M.; Luo, H.; Yin, S. Lesion-Attention Pyramid Network for Diabetic Retinopathy Grading. Artif. Intell. Med. 2022, 126, 102259. [Google Scholar] [CrossRef] [PubMed]
- Zedan, M.; Zulkifley, M.; Ibrahim, A.; Moubark, A.; Kamari, N.; Abdani, S. Automated Glaucoma Screening and Diagnosis Based on Retinal Fundus Images Using Deep Learning Approaches: A Comprehensive Review. Diagnostics 2023, 13, 2180. [Google Scholar] [CrossRef]
- Jeong, Y.; Hong, Y.-J.; Han, J.-H. Review of Machine Learning Applications Using Retinal Fundus Images. Diagnostics 2022, 12, 134. [Google Scholar] [CrossRef]
Class of Patients | Total Subjects (Eyes) | Male/Female | Age (Mean ± SD) |
---|---|---|---|
ES | 32 (16 RE/16 LE) | 16/16 | 38.65 ± 17.64 |
MS | 44 (19 RE/25 LE) | 24/20 | 35.70 ± 14.45 |
AS | 48 (32 RE/16 LE) | 33/15 | 32.60 ± 11.41 |
Name of Features | Formulas | Descriptions |
---|---|---|
GLCM homogeneity | | Homogeneity measures the intensity similarity of neighboring pixels in the GLCM. |
GLCM contrast | | Contrast measures local intensity variations in grayscale images. |
GLCM correlation | | Correlation describes the relationship between the gray levels of neighboring pixels in an image. |
GLCM energy | | Energy is the sum of the squared elements of the GLCM. |
Mean | | The mean is the average value of each channel in the RGB and HSV color spaces. |
Standard deviation | | The standard deviation describes the spread of the color distribution in each channel of the RGB and HSV color spaces. |
Skewness | | Skewness measures the asymmetry of the intensity distribution of each channel in the RGB and HSV color spaces. |
Energy | | Energy is the sum of the squared channel values in the RGB and HSV color spaces. |
Entropy | | Entropy measures the irregularity of RGB and HSV images. |
Kurtosis | | Kurtosis measures the peakedness of the color distribution in RGB and HSV relative to a normal distribution. |
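As a concrete illustration, the four GLCM descriptors in the table can be computed from a co-occurrence matrix built for a single pixel offset. This is a simplified sketch of Haralick's definitions (one offset, coarse quantization), not the study's exact implementation:

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Build a normalized gray-level co-occurrence matrix for one offset
    (0 degrees, distance 1 by default) and derive four texture features."""
    img = np.asarray(img, dtype=float)
    # quantize intensities to `levels` gray levels
    q = ((img - img.min()) / (img.max() - img.min() + 1e-12)
         * (levels - 1)).round().astype(int)
    glcm = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    p = glcm / glcm.sum()                       # joint probability matrix
    i, j = np.indices(p.shape)
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()
    contrast = (p * (i - j) ** 2).sum()
    energy = (p ** 2).sum()                      # sum of squared elements
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (s_i * s_j)
    return {"homogeneity": homogeneity, "contrast": contrast,
            "energy": energy, "correlation": correlation}

rng = np.random.default_rng(1)
print(glcm_features(rng.integers(0, 256, (32, 32))))
```

In practice the study's multi-angle variants (e.g. the 90° features in the selection table) would repeat this with different offsets; libraries such as scikit-image provide equivalent routines.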
Ranking of Features | Imaging Modality | Names of Features | Importance Score |
---|---|---|---|
1 | MfERG | RGB Entropy_R | 0.6118 |
2 | VF | GLCM Contrast_90° | 0.5290 |
3 | MfERG | RGB Std_R | 0.5218 |
4 | VF | GLCM Homogeneity_90° | 0.5208 |
5 | MfERG | RGB Energy_R | 0.4966 |
6 | VF | Histogram Kurtosis | 0.4868 |
7 | VF | GLCM Energy_90° | 0.4744 |
8 | MfERG | RGB Kurtosis_R | 0.4404 |
9 | VF | GLCM Correlation_90° | 0.4254 |
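Importance rankings of the kind shown above can be reproduced in spirit with a random forest's impurity-based importances. The snippet below uses synthetic data and illustrative feature names; note that scikit-learn's scores sum to 1, so their scale differs from the table's values:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# synthetic stand-in for the 9-feature fused VF + mfERG dataset
X, y = make_classification(n_samples=124, n_features=9, n_informative=5,
                           random_state=0)
names = [f"feature_{k}" for k in range(9)]  # illustrative names only

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(names, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.4f}")
```

Features at the bottom of such a ranking are then dropped, and the classifiers are retrained on the top-k subset, as in Sections 3.3 and 3.4.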
Models | Accuracy | Precision | Recall | F1 Score | Specificity |
---|---|---|---|---|---|
SVM | 0.8710 | 0.8677 | 0.8839 | 0.8758 | 0.9328 |
DT | 0.8468 | 0.8504 | 0.8562 | 0.8533 | 0.9233 |
KNN | 0.8548 | 0.8542 | 0.8591 | 0.8566 | 0.9249 |
QDA | 0.7742 | 0.7623 | 0.7987 | 0.7801 | 0.8836 |
Number of Features | Models | Accuracy | Precision | Recall | F1 Score | Specificity |
---|---|---|---|---|---|---|
5 | SVM | 0.9355 | 0.9362 | 0.9402 | 0.9382 | 0.9670 |
DT | 0.8952 | 0.8943 | 0.8923 | 0.8933 | 0.9471 | |
KNN | 0.9274 | 0.9154 | 0.9388 | 0.9269 | 0.9621 | |
QDA | 0.9194 | 0.9186 | 0.9293 | 0.9239 | 0.9573 | |
6 | SVM | 0.9435 | 0.9419 | 0.9423 | 0.9421 | 0.9715 |
DT | 0.9032 | 0.8990 | 0.8990 | 0.8990 | 0.9517 | |
KNN | 0.9435 | 0.9454 | 0.9427 | 0.9440 | 0.9715 | |
QDA | 0.9355 | 0.9343 | 0.9323 | 0.9333 | 0.9679 | |
7 | SVM | 0.9839 | 0.9826 | 0.9855 | 0.9841 | 0.9917 |
DT | 0.9194 | 0.9217 | 0.9211 | 0.9214 | 0.9600 | |
KNN | 0.9677 | 0.9681 | 0.9678 | 0.9680 | 0.9839 | |
QDA | 0.9516 | 0.9467 | 0.9574 | 0.9520 | 0.9746 | |
8 | SVM | 0.9516 | 0.9501 | 0.9574 | 0.9537 | 0.9743 |
DT | 0.8952 | 0.8955 | 0.8933 | 0.8944 | 0.9476 | |
KNN | 0.9435 | 0.9426 | 0.9424 | 0.9425 | 0.9717 | |
QDA | 0.9355 | 0.9378 | 0.9356 | 0.9367 | 0.9671 | |
9 | SVM | 0.9435 | 0.9426 | 0.9506 | 0.9466 | 0.9700 |
DT | 0.8871 | 0.8810 | 0.8875 | 0.8842 | 0.9422 | |
KNN | 0.9274 | 0.9211 | 0.9315 | 0.9262 | 0.9624 | |
QDA | 0.9355 | 0.9372 | 0.9370 | 0.9371 | 0.9669 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Karaman, B.; Güven, A.; Öner, A.; Kahraman, N.S. Classification of Retinitis Pigmentosa Stages Based on Machine Learning by Fusion of Image Features of VF and MfERG Maps. Electronics 2025, 14, 1867. https://doi.org/10.3390/electronics14091867