From Evaluation to Prediction: Analysis of Diabetic Autonomic Neuropathy Using Sudoscan and Artificial Intelligence
Abstract
1. Introduction
2. Materials and Methods
2.1. Participating Subjects
2.2. Sudoscan Device
2.3. Statistical Analysis
2.4. Machine Learning Algorithms for Prediction
- Linear SVC (Linear Support Vector Classification) [29]—a classifier of the SVM family that uses a linear kernel for classification tasks.
- Linear Discriminant Analysis (LDA) [30]—derives class conditional densities according to Bayes’ rule and gives linear decision boundaries.
- Calibrated Classifier CV (cross-validation) [31]—applies cross-validation to calibrate a base classifier so that it produces probabilistic predictions.
- Ridge Classifier CV (cross-validation) [32]—a classification model with built-in cross-validation for selecting the best value of the regularization parameter.
- Ridge Classifier [28]—ridge regression model tailored for classification.
- Passive Aggressive Classifier [33]—an online learning algorithm that is fast and scales to large training problems.
- SGD Classifier (Stochastic Gradient Descent Classifier) [34]—stochastic gradient descent-based linear classifier.
- Perceptron [35]—an online learning algorithm based on a simple artificial neuron model.
- Logistic Regression [25]—a statistical model that uses the logistic function to model a binary dependent variable.
- LGBM Classifier (Light Gradient Boosting Machine) [36]—a learning technique that uses tree-based learning algorithms.
- Extra Trees Classifier (Extremely Randomized Trees Classifier) [37]—a learning method that combines the outcomes of numerous decision trees, randomly drawn.
- Bernoulli NB (Bernoulli Naive Bayes) [38]—Naive Bayes classifier for binary/boolean features.
- Decision Tree Classifier [39]—classifier that uses a tree-like model of decisions and their possible consequences.
- Nearest Centroid [40]—classifier that assigns to each sample the label of the closest centroid.
- Extra Tree Classifier [41]—single decision tree that is part of the Extremely Randomized Trees ensemble.
- XGB Classifier (Extreme Gradient Boosting) [41]—a fast and efficient gradient-boosted decision tree implementation.
- Random Forest Classifier [26]—a learning technique that uses averaging to increase predictive accuracy after fitting several decision tree classifiers on different subsamples.
- Ada Boost Classifier [42]—a learning method that combines multiple weak classifiers to create a strong one.
- Bagging Classifier [43]—a learning method that fits several model iterations on random subsamples of the dataset and then averages the predictions.
- Nu SVC (Nu Support Vector Classification) [44]—a variant of SVM that uses a parameter nu to control the number of support vectors.
- SVC (Support Vector Classification) [45]—another variant of SVM that employs linear or non-linear classification based on various kernel functions.
- Gaussian NB (Gaussian Naive Bayes) [38]—a probabilistic ML algorithm used for many classification functions, based on the Bayes theorem.
- K-Neighbors Classifier (K-Nearest Neighbors Classifier) [46]—a non-parametric method that uses the nearest neighbors’ majority vote for classification purposes.
- Quadratic Discriminant Analysis (QDA) [47]—a classifier similar to LDA that permits each class to have its own covariance matrix.
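The comparison of the classifiers listed above can be sketched with scikit-learn by fitting each model on the same split and tabulating its test metrics. This is a minimal illustrative sketch, not the study's code: the dataset below is synthetic stand-in data (the cohort data are not public), and the model selection and parameters are assumptions.

```python
# Benchmark several of the listed scikit-learn classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

# Synthetic stand-in: 135 samples, 11 features, roughly 58%/42% class balance
X, y = make_classification(n_samples=135, n_features=11, n_informative=6,
                           weights=[0.58], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

models = {
    "Linear SVC": LinearSVC(max_iter=10000),
    "Ridge Classifier": RidgeClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Extra Trees": ExtraTreesClassifier(random_state=0),
    "Gaussian NB": GaussianNB(),
    "K-Neighbors": KNeighborsClassifier(),
}

# Fit every model on the same split and collect its test-set metrics
results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    results[name] = (accuracy_score(y_test, y_pred),
                     balanced_accuracy_score(y_test, y_pred),
                     f1_score(y_test, y_pred))

# Rank by accuracy, mirroring the paper's comparison table
for name, (acc, bal, f1) in sorted(results.items(), key=lambda kv: -kv[1][0]):
    print(f"{name:<22} acc={acc:.2f} bal_acc={bal:.2f} f1={f1:.2f}")
```

The lazypredict package cited in the paper [31] automates exactly this loop over a larger model zoo; the sketch above shows the underlying mechanics.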
3. Results
3.1. Statistical Analysis
3.2. Prediction Using Machine Learning Methods
- The first classification model used is a linear one, Logistic Regression; Random Forest, a model that combines multiple decision trees to improve classification performance, was applied next. Classifier performance was evaluated using the confusion matrix, roc_auc_score, and accuracy_score. The confusion matrix gives a complete overview of classifier performance by comparing predictions with actual values; accuracy_score quantifies the percentage of correct predictions; and roc_auc_score measures the model's ability to distinguish positive from negative cases across different decision thresholds.
- Preparation of the data for classification: all collected values (‘Age’, ‘BMI’, ‘Avg feet’, ‘Avg hands’, ‘DM age’, ‘HbA1c’, ‘Cholesterol’, ‘Triglyceride’, ‘SBP’, ‘DBP’, ‘Creatinine’) are assigned to X, and the column ‘Hands’ (the average of the ESC values measured for the left and right hand, binarized: 0 for the class at risk or possible risk of DAN, 78 samples; 1 for the class at no risk of DAN, 57 samples) is assigned to the variable Y.
- Data are split into the training set (80%) and the test set (20%), using the train_test_split function.
- The two classifiers, Logistic Regression and Random Forest, are initialized with the specified parameters.
- The models are trained on the training dataset using fit().
- Predictions are then made on the test dataset using predict().
- In the last step, the models are evaluated.
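The steps above can be sketched end to end with scikit-learn. This is a hedged sketch rather than the authors' code: the data are simulated to match only the shapes described in the paper (78 at-risk/possible-risk vs. 57 no-risk samples, 11 predictors), and the untuned model parameters are assumptions.

```python
# End-to-end sketch of the described pipeline on simulated stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
# X: 11 predictors ('Age', 'BMI', ..., 'Creatinine'); Y: binary DAN-risk label
X = np.vstack([rng.normal(0.0, 1.0, size=(78, 11)),   # class 0: at/possible risk
               rng.normal(1.0, 1.0, size=(57, 11))])  # class 1: no risk
y = np.array([0] * 78 + [1] * 57)

# Split into 80% training / 20% test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Initialize, train, predict, and evaluate both classifiers
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=0)):
    model.fit(X_train, y_train)                 # train on the training set
    y_pred = model.predict(X_test)              # predict on the test set
    proba = model.predict_proba(X_test)[:, 1]   # scores for ROC AUC
    print(type(model).__name__,
          "accuracy:", round(accuracy_score(y_test, y_pred), 3),
          "ROC AUC:", round(roc_auc_score(y_test, proba), 3))
    print(confusion_matrix(y_test, y_pred))
```

With simulated inputs the printed scores are arbitrary; on the real cohort this loop would reproduce the accuracy, ROC AUC, and confusion-matrix figures reported in the Results.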
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Diabetic Neuropathy. Nat. Rev. Dis. Primers 2019, 5, 42. [CrossRef]
- Rogers, L.C.; Frykberg, R.G.; Armstrong, D.G.; Boulton, A.J.M.; Edmonds, M.; Van, G.H.; Hartemann, A.; Game, F.; Jeffcoate, W.; Jirkovska, A.; et al. The Charcot Foot in Diabetes. Diabetes Care 2011, 34, 2123–2129. [Google Scholar] [CrossRef] [PubMed]
- Zhou, Q.; Qian, Z.; Wu, J.; Liu, J.; Ren, L.; Ren, L. Early Diagnosis of Diabetic Peripheral Neuropathy Based on Infrared Thermal Imaging Technology. Diabetes Metab. Res. Rev. 2021, 37, e3429. [Google Scholar] [CrossRef] [PubMed]
- Castillo-Morquecho, R.; Guevara, E.; Ramirez-GarciaLuna, J.L.; Martínez-Jiménez, M.A.; Medina-Rangel, M.G.; Kolosovas-Machuca, E.S. Digital Infrared Thermography and Machine Learning for Diabetic Foot Assessment: Thermal Patterns and Classification. J. Diabetes Metab. Disord. 2024. [Google Scholar] [CrossRef]
- Ilo, A.; Romsi, P.; Mäkelä, J. Infrared Thermography and Vascular Disorders in Diabetic Feet. J. Diabetes Sci. Technol. 2020, 14, 28–36. [Google Scholar] [CrossRef]
- Lamotte, G.; Sandroni, P. Updates on the Diagnosis and Treatment of Peripheral Autonomic Neuropathies. Curr. Neurol. Neurosci. Rep. 2022, 22, 823–837. [Google Scholar] [CrossRef] [PubMed]
- Duff, M.; Demidova, O.; Blackburn, S.; Shubrook, J. Cutaneous Manifestations of Diabetes Mellitus. Clin. Diabetes 2015, 33, 40–48. [Google Scholar] [CrossRef] [PubMed]
- Hashmi, F.; Nester, C.; Wright, C.; Newton, V.; Lam, S. Characterising the Biophysical Properties of Normal and Hyperkeratotic Foot Skin. J. Foot Ankle Res. 2015, 8, 35. [Google Scholar] [CrossRef]
- Casellini, C.M.; Parson, H.K.; Richardson, M.S.; Nevoret, M.L.; Vinik, A.I. Sudoscan, a Noninvasive Tool for Detecting Diabetic Small Fiber Neuropathy and Autonomic Dysfunction. Diabetes Technol. Ther. 2013, 15, 948–953. [Google Scholar] [CrossRef]
- Gordon Smith, A.; Lessard, M.; Reyna, S.; Doudova, M.; Robinson Singleton, J. The Diagnostic Utility of Sudoscan for Distal Symmetric Peripheral Neuropathy. J. Diabetes Complicat. 2014, 28, 511–516. [Google Scholar] [CrossRef]
- Zhao, Y.; Bao, J.-J.; Ye, L.-F.; Zhou, L. Consistency Analysis Between SUDOSCAN Examinations and Electromyography Results in Patients with Diabetes. Diabetes Metab. Syndr. Obes. Targets Ther. 2022, 15, 3397–3402. [Google Scholar] [CrossRef]
- Selvarajah, D.; Cash, T.; Davies, J.; Sankar, A.; Rao, G.; Grieg, M.; Pallai, S.; Gandhi, R.; Wilkinson, I.D.; Tesfaye, S. SUDOSCAN: A Simple, Rapid, and Objective Method with Potential for Screening for Diabetic Peripheral Neuropathy. PLoS ONE 2015, 10, e0138224. [Google Scholar] [CrossRef]
- Solatidehkordi, Z.; Dhou, S. Detecting Diabetic Autonomic Neuropathy from Electronic Health Records Using Machine Learning. In Proceedings of the 2022 IEEE International Conference on E-health Networking, Application & Services (HealthCom), Genoa, Italy, 17–19 October 2022; pp. 205–209. [Google Scholar]
- Ellahham, S. Artificial Intelligence: The Future for Diabetes Care. Am. J. Med. 2020, 133, 895–900. [Google Scholar] [CrossRef] [PubMed]
- Dankwa-Mullan, I.; Rivo, M.; Sepulveda, M.; Park, Y.; Snowdon, J.; Rhee, K. Transforming Diabetes Care Through Artificial Intelligence: The Future Is Here. Popul. Health Manag. 2019, 22, 229–242. [Google Scholar] [CrossRef] [PubMed]
- Gosak, L.; Martinović, K.; Lorber, M.; Stiglic, G. Artificial Intelligence Based Prediction Models for Individuals at Risk of Multiple Diabetic Complications: A Systematic Review of the Literature. J. Nurs. Manag. 2022, 30, 3765–3776. [Google Scholar] [CrossRef]
- Dagliati, A.; Marini, S.; Sacchi, L.; Cogni, G.; Teliti, M.; Tibollo, V.; De Cata, P.; Chiovato, L.; Bellazzi, R. Machine Learning Methods to Predict Diabetes Complications. J. Diabetes Sci. Technol. 2018, 12, 295–302. [Google Scholar] [CrossRef] [PubMed]
- Salahouddin, T.; Petropoulos, I.N.; Ferdousi, M.; Ponirakis, G.; Asghar, O.; Alam, U.; Kamran, S.; Mahfoud, Z.R.; Efron, N.; Malik, R.A.; et al. Artificial Intelligence–Based Classification of Diabetic Peripheral Neuropathy From Corneal Confocal Microscopy Images. Diabetes Care 2021, 44, e151–e153. [Google Scholar] [CrossRef]
- Williams, B.M.; Borroni, D.; Liu, R.; Zhao, Y.; Zhang, J.; Lim, J.; Ma, B.; Romano, V.; Qi, H.; Ferdousi, M.; et al. An Artificial Intelligence-Based Deep Learning Algorithm for the Diagnosis of Diabetic Neuropathy Using Corneal Confocal Microscopy: A Development and Validation Study. Diabetologia 2020, 63, 419–430. [Google Scholar] [CrossRef]
- Alam, U.; Anson, M.; Meng, Y.; Preston, F.; Kirthi, V.; Jackson, T.L.; Nderitu, P.; Cuthbertson, D.J.; Malik, R.A.; Zheng, Y.; et al. Artificial Intelligence and Corneal Confocal Microscopy: The Start of a Beautiful Relationship. J. Clin. Med. 2022, 11, 6199. [Google Scholar] [CrossRef]
- SUDOSCAN. Available online: https://www.sudoscan.com/ (accessed on 1 July 2024).
- Lefaucheur, J.-P. Measurement of Electrochemical Conductance of Penile Skin Using Sudoscan®: A New Tool to Assess Neurogenic Impotence. Neurophysiol. Clin. Neurophysiol. 2017, 47, 253–260. [Google Scholar] [CrossRef]
- Chiu, L.-T.; Lin, Y.-L.; Wang, C.-H.; Hwu, C.-M.; Liou, H.-H.; Hsu, B.-G. Electrochemical Skin Conductance by Sudoscan in Non-Dialysis Chronic Kidney Disease Patients. J. Clin. Med. 2023, 13, 187. [Google Scholar] [CrossRef] [PubMed]
- Zhu, X.; Mao, F.; Liu, S.; Zheng, H.; Lu, B.; Li, Y. Association of SUDOSCAN Values with Vibration Perception Threshold in Chinese Patients with Type 2 Diabetes Mellitus. Int. J. Endocrinol. 2017, 2017, 8435252. [Google Scholar] [CrossRef]
- LaValley, M.P. Logistic Regression. Circulation 2008, 117, 2395–2399. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, Y.; Zhang, J. New Machine Learning Algorithm: Random Forest. In Information Computing and Applications; Liu, B., Ma, M., Chang, J., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7473, pp. 246–252. ISBN 978-3-642-34061-1. [Google Scholar]
- Baralis, E.; Chiusano, S.; Garza, P. A Lazy Approach to Associative Classification. IEEE Trans. Knowl. Data Eng. 2008, 20, 156–171. [Google Scholar] [CrossRef]
- The Python Package Index (Pypi). Available online: https://pypi.org/project/lazypredict/ (accessed on 1 July 2024).
- Zhang, C.; Shao, X.; Li, D. Knowledge-Based Support Vector Classification Based on C-SVC. Procedia Comput. Sci. 2013, 17, 1083–1090. [Google Scholar] [CrossRef]
- Izenman, A.J. Linear Discriminant Analysis. In Modern Multivariate Statistical Techniques; Springer Texts in Statistics; Springer: New York, NY, USA, 2013; pp. 237–280. ISBN 978-0-387-78188-4. [Google Scholar]
- Cohen, I.; Goldszmidt, M. Properties and Benefits of Calibrated Classifiers. In Knowledge Discovery in Databases: PKDD 2004; Boulicaut, J.-F., Esposito, F., Giannotti, F., Pedreschi, D., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3202, pp. 125–136. ISBN 978-3-540-23108-0. [Google Scholar]
- Hazarika, B.B.; Gupta, D.; Borah, P. An Intuitionistic Fuzzy Kernel Ridge Regression Classifier for Binary Classification. Appl. Soft Comput. 2021, 112, 107816. [Google Scholar] [CrossRef]
- Lu, J.; Zhao, P.; Hoi, S.C.H. Online Passive-Aggressive Active Learning. Mach. Learn. 2016, 103, 141–183. [Google Scholar] [CrossRef]
- Deepa, N.; Prabadevi, B.; Maddikunta, P.K.; Gadekallu, T.R.; Baker, T.; Khan, M.A.; Tariq, U. An AI-Based Intelligent System for Healthcare Analysis Using Ridge-Adaline Stochastic Gradient Descent Classifier. J. Supercomput. 2021, 77, 1998–2017. [Google Scholar] [CrossRef]
- Gallant, S.I. Perceptron-Based Learning Algorithms. IEEE Trans. Neural Netw. 1990, 1, 179–191. [Google Scholar] [CrossRef]
- Fan, J.; Ma, X.; Wu, L.; Zhang, F.; Yu, X.; Zeng, W. Light Gradient Boosting Machine: An Efficient Soft Computing Model for Estimating Daily Reference Evapotranspiration with Local and External Meteorological Data. Agric. Water Manag. 2019, 225, 105758. [Google Scholar] [CrossRef]
- Geurts, P.; Ernst, D.; Wehenkel, L. Extremely Randomized Trees. Mach. Learn. 2006, 63, 3–42. [Google Scholar] [CrossRef]
- Vikramkumar, B.; Vijaykumar, T. Bayes and Naive Bayes Classifier. arXiv 2014. [Google Scholar] [CrossRef]
- Azar, A.T.; El-Metwally, S.M. Decision Tree Classifiers for Automated Medical Diagnosis. Neural Comput. Appl. 2013, 23, 2387–2403. [Google Scholar] [CrossRef]
- Thulasidas, M. Nearest Centroid: A Bridge between Statistics and Machine Learning. In Proceedings of the 2020 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), Takamatsu, Japan, 8–11 December 2020; pp. 9–16. [Google Scholar]
- Bhati, B.S.; Rai, C.S. Ensemble Based Approach for Intrusion Detection Using Extra Tree Classifier. In Intelligent Computing in Engineering; Solanki, V.K., Hoang, M.K., Lu, Z., Pattnaik, P.K., Eds.; Advances in Intelligent Systems and Computing; Springer: Singapore, 2020; Volume 1125, pp. 213–220. ISBN 9789811527791. [Google Scholar]
- An, T.-K.; Kim, M.-H. A New Diverse AdaBoost Classifier. In Proceedings of the 2010 International Conference on Artificial Intelligence and Computational Intelligence, Sanya, China, 23–24 October 2010; pp. 359–363. [Google Scholar]
- Skurichina, M.; Duin, R.P.W. Bagging for Linear Classifiers. Pattern Recognit. 1998, 31, 909–930. [Google Scholar] [CrossRef]
- Gu, B.; Sheng, V.S. A Robust Regularization Path Algorithm for ν-Support Vector Classification. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 1241–1248. [Google Scholar] [CrossRef]
- Chang, C.-C.; Lin, C.-J. Training v -Support Vector Classifiers: Theory and Algorithms. Neural Comput. 2001, 13, 2119–2147. [Google Scholar] [CrossRef]
- Peterson, L. K-Nearest Neighbor. Scholarpedia 2009, 4, 1883. [Google Scholar] [CrossRef]
- Tharwat, A. Linear vs. Quadratic Discriminant Analysis Classifier: A Tutorial. Int. J. Appl. Pattern Recognit. 2016, 3, 145–180. [Google Scholar] [CrossRef]
- Tesfaye, S.; Chaturvedi, N.; Eaton, S.E.M.; Ward, J.D.; Manes, C.; Ionescu-Tirgoviste, C.; Witte, D.R.; Fuller, J.H. Vascular Risk Factors and Diabetic Neuropathy. N. Engl. J. Med. 2005, 352, 341–350. [Google Scholar] [CrossRef] [PubMed]
- Müller, G.; Parfentyeva, E.; Olschewsky, J.; Bornstein, S.R.; Schwarz, P.E.H. Assessment of Small Fiber Neuropathy to Predict Future Risk of Type 2 Diabetes. Prim. Care Diabetes 2013, 7, 269–273. [Google Scholar] [CrossRef]
- Hosseini Sarkhosh, S.M.; Esteghamati, A.; Hemmatabadi, M.; Daraei, M. Predicting Diabetic Nephropathy in Type 2 Diabetic Patients Using Machine Learning Algorithms. J. Diabetes Metab. Disord. 2022, 21, 1433–1441. [Google Scholar] [CrossRef] [PubMed]
Variable | Mean (std) | 25th Percentile | 75th Percentile |
---|---|---|---|
Age | 60.9 (11.47) | 55.0 | 69.0 |
BMI | 31.73 (6.15) | 27.5 | 35.0 |
Left foot | 62.8 (15.36) | 50.0 | 75.0 |
Right foot | 61.4 (15.78) | 49.0 | 74.0 |
Left hand | 57.5 (14.73) | 48.0 | 69.0 |
Right hand | 57.8 (14.38) | 49.0 | 69.0 |
Neuro risk | 60.0 (13.19) | 50.2 | 71.3 |
Cardio risk | 37.3 (11.37) | 30.0 | 43.0 |
Nephro risk | 61.5 (18.65) | 49.0 | 71.0 |
DM age | 11.02 (8.63) | 4.0 | 16.0 |
HbA1C | 8.73 (2.00) | 7.2 | 9.9 |
Cholesterol | 202.9 (48.76) | 172.7 | 233.3 |
Triglyceride | 159.0 (81.2) | 103.0 | 185.0 |
SBP | 143.0 (20.13) | 129.0 | 156.3 |
DBP | 80.0 (12.06) | 70.0 | 87.3 |
Creatinine | 0.9 (0.25) | 0.6 | 1.0 |
Percentage of Type 1 Diabetes | 12.21% |
ESC Feet | ESC Hands | ||||||
---|---|---|---|---|---|---|---|
Weight Status | DAN | No DAN | Possible DAN | DAN | No DAN | Possible DAN |
No treatment | Normal weight | 2 | 16 | 2 | 1 | 15 | 4 |
Overweight | 7 | 21 | 5 | 4 | 17 | 12 |
Obese | 30 | 29 | 23 | 15 | 25 | 42 |
Neurotrophic treatment | Normal weight | 1 | 1 | 0 | 1 | 1 | 0 |
Overweight | 1 | 10 | 1 | 2 | 6 | 4 |
Obese | 4 | 16 | 3 | 1 | 13 | 9 |
ESC Feet | ESC Hands | ||||||
---|---|---|---|---|---|---|---|
CHO * Values | DAN | No DAN | Possible DAN | DAN | No DAN | Possible DAN |
No treatment | <180 mg/dL | 1 | 31 | 5 | 5 | 23 | 9 |
180–200 mg/dL | 4 | 9 | 3 | 1 | 9 | 6 |
>200 mg/dL | 32 | 26 | 24 | 31 | 25 | 26 |
Neurotrophic treatment | <180 mg/dL | 0 | 14 | 0 | 2 | 9 | 3 |
180–200 mg/dL | 1 | 3 | 0 | 1 | 3 | 0 |
>200 mg/dL | 5 | 10 | 4 | 6 | 8 | 5 |
ESC Feet | ESC Hands | ||||||
---|---|---|---|---|---|---|---|
TG * Values | DAN | No DAN | Possible DAN | DAN | No DAN | Possible DAN |
No treatment | <180 mg/dL | 8 | 34 | 11 | 7 | 31 | 15 |
180–200 mg/dL | 1 | 6 | 5 | 1 | 3 | 8 |
>200 mg/dL | 26 | 26 | 16 | 29 | 23 | 18 |
Neurotrophic treatment | <180 mg/dL | 1 | 11 | 0 | 1 | 8 | 3 |
180–200 mg/dL | 1 | 4 | 0 | 0 | 5 | 0 |
>200 mg/dL | 4 | 12 | 4 | 8 | 7 | 5 |
Hands | Feet | |||||
---|---|---|---|---|---|---|
r | p | 95% CI | r | p | 95% CI | |
Age | −0.14 | 0.073 | [−0.65, −0.23] | −0.09 | 0.224 | [−0.66, −0.24] |
BMI | −0.20 | 0.009 | [−0.35, −0.05] | −0.31 | <0.001 | [−0.45, −0.17] |
Neuro risk | −0.89 | <0.001 | [−0.95, −0.83] | −0.89 | <0.001 | [−0.95, −0.83] |
Cardio risk | −0.08 | 0.305 | [−0.23, 0.07] | −0.23 | 0.003 | [−0.37, −0.09] |
Nephro risk | −0.24 | 0.001 | [−0.38, −0.10] | −0.30 | <0.001 | [−0.44, −0.16] |
DM age | −0.23 | 0.003 | [−0.50, −0.16] | −0.15 | 0.047 | [−0.52, −0.18] |
HbA1C | 0.04 | 0.613 | [−0.13, 0.21] | −0.02 | 0.809 | [−0.19, 0.15] |
Cholesterol | −0.31 | <0.001 | [−0.15, 0.19] | −0.48 | <0.001 | [−0.46, 0.16] |
Triglyceride | −0.30 | 0.001 | [−0.34, −0.04] | −0.26 | 0.001 | [−0.37, −0.07] |
SBP | −0.04 | 0.573 | [−0.21, 0.13] | −0.17 | 0.028 | [−0.32, −0.02] |
DBP | −0.01 | 0.947 | [−0.18, 0.16] | −0.04 | 0.588 | [−0.21, 0.13] |
Creatinine | −0.02 | 0.747 | [−0.19, 0.15] | 0.04 | 0.568 | [−0.13, 0.21] |
Algorithm | Accuracy | ROC AUC | F1 Score | Sensitivity | Specificity | Confusion Matrix |
---|---|---|---|---|---|---|
Logistic Regression | 92.59% | 92.50% | 92.31% | 91.67% | 93.33% | [[14, 1], [1, 11]] |
Random Forest | 96.30% | 95.83% | 96.00% | 91.67% | 100% | [[15, 0], [1, 11]] |
Algorithm | Mean Accuracy | Mean ROC AUC |
---|---|---|
Logistic Regression | 95.6% | 99.75% |
Random Forest | 99.23% | 100% |
Model | Accuracy | Balanced Accuracy | ROC AUC | F1 Score |
---|---|---|---|---|
Linear SVC | 0.96 | 0.97 | 0.97 | 0.96 |
Linear Discriminant Analysis | 0.96 | 0.97 | 0.97 | 0.96 |
Calibrated Classifier CV | 0.96 | 0.97 | 0.97 | 0.96 |
Ridge Classifier CV | 0.96 | 0.97 | 0.97 | 0.96 |
Ridge Classifier | 0.96 | 0.97 | 0.97 | 0.96 |
Passive Aggressive Classifier | 0.96 | 0.97 | 0.97 | 0.96 |
SGD Classifier | 0.93 | 0.95 | 0.95 | 0.93 |
Perceptron | 0.93 | 0.95 | 0.95 | 0.93 |
Logistic Regression | 0.93 | 0.95 | 0.95 | 0.93 |
LGBM Classifier | 0.96 | 0.94 | 0.94 | 0.96 |
Extra Trees Classifier | 0.96 | 0.94 | 0.94 | 0.96 |
Bernoulli NB | 0.89 | 0.92 | 0.92 | 0.89 |
Decision Tree Classifier | 0.89 | 0.92 | 0.92 | 0.89 |
Nearest Centroid | 0.89 | 0.91 | 0.91 | 0.89 |
Extra Tree Classifier | 0.93 | 0.91 | 0.91 | 0.93 |
XGB Classifier | 0.93 | 0.91 | 0.91 | 0.93 |
Random Forest Classifier | 0.93 | 0.91 | 0.91 | 0.93 |
Ada Boost Classifier | 0.93 | 0.91 | 0.91 | 0.93 |
Bagging Classifier | 0.93 | 0.91 | 0.91 | 0.93 |
Nu SVC | 0.93 | 0.88 | 0.88 | 0.92 |
SVC | 0.93 | 0.88 | 0.88 | 0.92 |
Gaussian NB | 0.81 | 0.87 | 0.87 | 0.82 |
K-Neighbors Classifier | 0.89 | 0.85 | 0.85 | 0.89 |
Quadratic Discriminant Analysis | 0.78 | 0.77 | 0.77 | 0.78 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Toderean, R.; Cobuz, M.; Dimian, M.; Cobuz, C. From Evaluation to Prediction: Analysis of Diabetic Autonomic Neuropathy Using Sudoscan and Artificial Intelligence. Appl. Sci. 2024, 14, 7406. https://doi.org/10.3390/app14167406