SGD-Based Cascade Scheme for Higher Degrees Wiener Polynomial Approximation of Large Biomedical Datasets
Abstract
1. Introduction
- Ensuring the highest possible approximation/classification accuracy via the selected method of intelligent analysis;
- Providing high generalization properties of the model based on such an analysis;
- Guaranteeing the high speed of the intelligent analysis method, particularly in the training mode.
- We designed a new ensemble scheme for higher-degree Wiener polynomial approximation using SGD regressors, which provides high performance during the analysis of large datasets in the biomedical engineering area;
- We chose the optimal parameters of the designed ensemble (the loss function of the SGD algorithm, the Wiener polynomial degree, and the number of cascade levels), which help us obtain higher prediction accuracy with strong generalization properties and decrease the training time;
- We show the higher prediction accuracy and speed of the proposed ensemble scheme when solving the heart rate prediction task on large datasets, compared with existing methods.
2. Materials and Methods
2.1. Wiener Polynomial
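Equation (1), referenced throughout the algorithms below, is not reproduced in this extraction. The Wiener (also known as Kolmogorov–Gabor) polynomial in n inputs is commonly written in the following form, which we assume matches the paper's (1):

```latex
Y(x_1,\dots,x_n) = a_0
  + \sum_{i=1}^{n} a_i x_i
  + \sum_{i=1}^{n}\sum_{j=i}^{n} a_{ij}\, x_i x_j
  + \sum_{i=1}^{n}\sum_{j=i}^{n}\sum_{k=j}^{n} a_{ijk}\, x_i x_j x_k
  + \dots
```

Its degree is the highest order of the retained interaction terms; the number of coefficients grows combinatorially with both the degree and the number of inputs.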
2.2. SGD
2.3. Proposed Ensemble Scheme Using Wiener Polynomial and SGD
2.3.1. Training Algorithm for the Proposed Scheme
- We perform a non-linear expansion of the inputs for datasample1 based on (1). Then, we train the SGD of the first node of the ensemble (SGD_1);
- We apply datasample2 on the previously trained node (SGD_1) from step 1. We add the predicted output as a new independent feature to datasample2. We perform procedure (1) and train the SGD of the second node of the ensemble (SGD_2);
- We perform steps 1 and 2 for datasample3 in the application mode. We apply (1) to datasample3, extended by one independent variable as a result of step 2, and train the SGD of the third node of the ensemble (SGD_3);
- …
- We sequentially perform all the previous steps in the application mode to train the last node of the ensemble. Next, we apply (1) to the expanded datasampleN and perform the SGD training procedure of the last node of the ensemble (SGD_N).
2.3.2. An Application Algorithm for the Proposed Scheme
- We perform a non-linear expansion of the inputs for a test sample or one data vector based on (1) and apply it to the first node of the ensemble (SGD_1);
- We add the predicted output from SGD_1 as a new independent feature, then perform procedure (1) and apply it to the second node of the ensemble (SGD_2);
- We add the predicted output from SGD_2 as a new independent feature, then perform procedure (1) and apply it to the third node of the ensemble (SGD_3);
- …
- We perform similar operations with all the other ensemble nodes until we reach the last one. The prediction result of the last node of the ensemble will be the sought value.
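The chained application mode described above can be sketched as follows. This is a minimal illustration under stated assumptions: each node is taken to be a pair of a fitted PolynomialFeatures expansion and a fitted SGDRegressor, and the toy data and node construction are purely illustrative.

```python
# Sketch of the cascade application mode: the test vector is passed
# through the nodes in sequence, carrying one extra feature along.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import PolynomialFeatures


def cascade_predict(nodes, X):
    """Node 1 sees the expanded inputs; every later node sees the inputs
    plus the previous node's prediction as one extra independent feature.
    The prediction of the last node is the sought value."""
    pred = None
    for i, (poly, sgd) in enumerate(nodes):
        Xi = X if i == 0 else np.hstack([X, pred[:, None]])
        pred = sgd.predict(poly.transform(Xi))
    return pred


# Build two toy fitted nodes to demonstrate the chaining.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X[:, 0] + 0.5 * X[:, 1]

nodes = []
for level in range(2):
    Xi = X if level == 0 else np.hstack([X, cascade_predict(nodes, X)[:, None]])
    poly = PolynomialFeatures(degree=2, include_bias=False)
    sgd = SGDRegressor(max_iter=1000, tol=1e-3, random_state=0)
    sgd.fit(poly.fit_transform(Xi), y)
    nodes.append((poly, sgd))

print(cascade_predict(nodes, X).shape)  # (200,)
```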
- Ensuring high approximation accuracy due to the Wiener polynomial applied at each level of the ensemble;
- Ensuring high performance due to the use of SGD regressors as weak predictors;
- The possibility of a high-degree Wiener polynomial approximation in an implicit form.
3. Modeling and Results
3.1. Dataset Descriptions
3.2. Performance Indicators
- Maximum residual error (ME):
- Median absolute error (MedAE):
- Mean absolute error (MAE):
- Mean square error (MSE):
- Mean absolute percentage error (MAPE):
- Root mean square error (RMSE):
- Coefficient of determination (R2):
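The formulas for these indicators are not reproduced in this extraction, but all seven can be computed with scikit-learn; a small illustrative example (RMSE is derived as the square root of MSE, since older scikit-learn versions lack a dedicated function):

```python
# Computing the seven performance indicators on a toy example.
import numpy as np
from sklearn.metrics import (max_error, median_absolute_error,
                             mean_absolute_error, mean_squared_error,
                             mean_absolute_percentage_error, r2_score)

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 3.0, 8.0])

scores = {
    "ME":    max_error(y_true, y_pred),                      # maximum residual
    "MedAE": median_absolute_error(y_true, y_pred),
    "MAE":   mean_absolute_error(y_true, y_pred),
    "MSE":   mean_squared_error(y_true, y_pred),
    "MAPE":  mean_absolute_percentage_error(y_true, y_pred),
    "RMSE":  np.sqrt(mean_squared_error(y_true, y_pred)),    # sqrt of MSE
    "R2":    r2_score(y_true, y_pred),
}
for name, value in scores.items():
    print(f"{name}: {value:.3f}")
```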
3.3. Investigating the Impact of Loss Function on the Prediction Accuracy of the SGD Algorithm
- Epsilon insensitive;
- Huber;
- Squared epsilon insensitive;
- Squared loss.
3.4. Investigating the Impact of Wiener Polynomial Degree on the Prediction Accuracy and Training Time of the SGD Algorithm
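One reason the Wiener polynomial degree strongly affects training time is the combinatorial growth of the expanded feature set. A quick illustration, using PolynomialFeatures as a stand-in for expansion (1) and assuming the 18 input attributes of the heart rate dataset (its 19 listed attributes minus the HR target):

```python
# Number of expanded features per polynomial degree for 18 inputs.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

n_inputs = 18
X = np.zeros((1, n_inputs))
n_features = {}
for degree in (1, 2, 3, 4):
    n_features[degree] = PolynomialFeatures(
        degree=degree, include_bias=False).fit_transform(X).shape[1]
    print(f"degree {degree}: {n_features[degree]} expanded features")
```

Each additional degree multiplies the number of coefficients the SGD node must fit, which is why a quadratic expansion is a practical compromise for large datasets.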
3.5. Investigating the Impact of Cascade Level on the Prediction Accuracy of the Proposed Scheme
3.6. Results of the Application of the Cascading Scheme Using Ito Decomposition and SGD
4. Comparison and Discussion
4.1. Comparison with Existing Methods
4.2. Limitations of the Proposed Approach
4.3. Possibilities for Future Research
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Garza-Ulloa, J. Applied Biomedical Engineering Using Artificial Intelligence and Cognitive Models; Academic Press: London, UK, 2022; ISBN 978-0-12-820934-9.
2. Tsmots, I.; Skorokhoda, O. Methods and VLSI-Structures for Neural Element Implementation. In Proceedings of the 2010 VIth International Conference on Perspective Technologies and Methods in MEMS Design, Lviv, Ukraine, 20–23 April 2010; p. 135.
3. Teslyuk, V.; Beregovskyi, V.; Denysyuk, P.; Teslyuk, T.; Lozynskyi, A. Development and Implementation of the Technical Accident Prevention Subsystem for the Smart Home System. Int. J. Intell. Syst. Appl. 2018, 10, 1–8.
4. Radutniy, R.; Nechyporenko, A.; Alekseeva, V.; Titova, G.; Bibik, D.; Gargin, V.V. Automated Measurement of Bone Thickness on SCT Sections and Other Images. In Proceedings of the 2020 IEEE Third International Conference on Data Stream Mining & Processing (DSMP), Lviv, Ukraine, 21–25 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 222–226.
5. Nechyporenko, A.S.; Radutny, R.; Alekseeva, V.V.; Titova, G.; Gargin, V.V. Complex Automatic Determination of Morphological Parameters for Bone Tissue in Human Paranasal Sinuses. Open Bioinform. J. 2021, 14, 130–137.
6. Babichev, S.; Škvor, J. Technique of Gene Expression Profiles Extraction Based on the Complex Use of Clustering and Classification Methods. Diagnostics 2020, 10, 584.
7. Mochurad, L.; Yatskiv, M. Simulation of a Human Operator's Response to Stressors under Production Conditions. In Proceedings of the 3rd International Conference on Informatics and Data-Driven Medicine, Växjö, Sweden, 19–21 November 2020; CEUR-WS 2753, pp. 156–169.
8. Chumachenko, D.; Chumachenko, T.; Meniailov, I.; Pyrohov, P.; Kuzin, I.; Rodyna, R. On-Line Data Processing, Simulation and Forecasting of the Coronavirus Disease (COVID-19) Propagation in Ukraine Based on Machine Learning Approach. In Proceedings of the Data Stream Mining and Processing, Lviv, Ukraine, 21–25 August 2020; Springer: Cham, Switzerland, 2020; pp. 372–382.
9. Krak, I.; Barmak, O.; Manziuk, E. Using Visual Analytics to Develop Human and Machine-centric Models: A Review of Approaches and Proposed Information Technology. Comput. Intell. 2020, 38, 921–946.
10. Bisikalo, O.; Chernenko, D.; Danylchuk, O.; Kovtun, V.; Romanenko, V. Information Technology for TTF Optimization of an Information System for Critical Use That Operates in Aggressive Cyber-Physical Space. In Proceedings of the 2020 IEEE International Conference on Problems of Infocommunications. Science and Technology (PIC S&T), Kharkiv, Ukraine, 6–9 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 323–329.
11. Bisikalo, O.V.; Kovtun, V.V.; Kovtun, O.V.; Romanenko, V.B. Research of Safety and Survivability Models of the Information System for Critical Use. In Proceedings of the 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT), Kyiv, Ukraine, 14–18 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 7–12.
12. Park, C.; Took, C.C.; Seong, J.-K. Machine Learning in Biomedical Engineering. Biomed. Eng. Lett. 2018, 8, 139–155.
13. Singh, Y.; Tiwari, M. A Novel Hybrid Approach for Detection of Type-2 Diabetes in Women Using Lasso Regression and Artificial Neural Network. Int. J. Intell. Syst. Appl. 2022, 14, 11–20.
14. Polatgil, M. Investigation of the Effect of Normalization Methods on ANFIS Success: Forestfire and Diabets Datasets. Int. J. Inf. Technol. Comput. Sci. 2022, 14, 1–8.
15. Korystin, O.; Nataliia, S.; Mitina, O. Risk Forecasting of Data Confidentiality Breach Using Linear Regression Algorithm. Int. J. Comput. Netw. Inf. Secur. 2022, 14, 1–13.
16. Tepla, T. Biocompatible Materials Selection via New Supervised Learning Methods; LAP LAMBERT Academic Publishing: Chisinau, Moldova, 2019; ISBN 978-613-9-44384-0.
17. Hu, Z.; Ivashchenko, M.; Lyushenko, L.; Klyushnyk, D. Artificial Neural Network Training Criterion Formulation Using Error Continuous Domain. Int. J. Mod. Educ. Comput. Sci. 2021, 13, 13–22.
18. Hu, Z.; Bodyanskiy, Y.V.; Kulishova, N.Y.; Tyshchenko, O.K. A Multidimensional Extended Neo-Fuzzy Neuron for Facial Expression Recognition. Int. J. Intell. Syst. Appl. 2017, 9, 29–36.
19. Hu, Z.; Tereykovski, I.A.; Tereykovska, L.O.; Pogorelov, V.V. Determination of Structural Parameters of Multilayer Perceptron Designed to Estimate Parameters of Technical Systems. Int. J. Intell. Syst. Appl. 2017, 9, 57–62.
20. Babenko, V.; Panchyshyn, A.; Zomchak, L.; Nehrey, M.; Artym-Drohomyretska, Z.; Lahotskyi, T. Classical Machine Learning Methods in Economics Research: Macro and Micro Level Examples. WSEAS Trans. Bus. Econ. 2021, 18, 209–217.
21. Izonin, I.; Trostianchyn, A.; Duriagina, Z.; Tkachenko, R.; Tepla, T.; Lotoshynska, N. The Combined Use of the Wiener Polynomial and SVM for Material Classification Task in Medical Implants Production. Int. J. Intell. Syst. Appl. 2018, 10, 40–47.
22. Pandey, H.; Goyal, R.; Virmani, D.; Gupta, C. Ensem_SLDR: Classification of Cybercrime Using Ensemble Learning Technique. Int. J. Comput. Netw. Inf. Secur. 2021, 14, 81–90.
23. Maduranga, M.W.P.; Abeysekera, R. TreeLoc: An Ensemble Learning-Based Approach for Range Based Indoor Localization. Int. J. Wirel. Microw. Technol. 2021, 11, 18–25.
24. Khan, Z.M. Hybrid Ensemble Learning Technique for Software Defect Prediction. Int. J. Mod. Educ. Comput. Sci. 2020, 12, 1–10.
25. Kotsovsky, V.; Geche, F.; Batyuk, A. On the Computational Complexity of Learning Bithreshold Neural Units and Networks. In Proceedings of the Lecture Notes in Computational Intelligence and Decision Making, Salisnyj Port, Ukraine, 21–25 May 2019; Springer: Cham, Switzerland, 2019; pp. 189–202.
26. Garza-Ulloa, J. Machine Learning Models Applied to Biomedical Engineering. In Applied Biomedical Engineering Using Artificial Intelligence and Cognitive Models; Elsevier: Amsterdam, The Netherlands, 2022; pp. 175–334. ISBN 978-0-12-820718-5.
27. Sajedi, H.; Masoumi, E. Construction of High-Accuracy Ensemble of Classifiers. Int. J. Inf. Technol. Comput. Sci. 2014, 6, 1–10.
28. Wu, J.; Chen, S.; Zhou, W.; Wang, N.; Fan, Z. Evaluation of Feature Selection Methods Using Bagging and Boosting Ensemble Techniques on High Throughput Biological Data. In Proceedings of the 2020 10th International Conference on Biomedical Engineering and Technology, Tokyo, Japan, 15 September 2020; ACM: New York, NY, USA, 2020; pp. 170–175.
29. Ababor Abafogi, A. Boosting Afaan Oromo Named Entity Recognition with Multiple Methods. Int. J. Inf. Eng. Electron. Bus. 2021, 13, 51–59.
30. Mateo, J.; Rius-Peris, J.M.; Maraña-Pérez, A.I.; Valiente-Armero, A.; Torres, A.M. Extreme Gradient Boosting Machine Learning Method for Predicting Medical Treatment in Patients with Acute Bronchiolitis. Biocybern. Biomed. Eng. 2021, 41, 792–801.
31. Abuhaiba, I.S.I.; Dawoud, H.M. Combining Different Approaches to Improve Arabic Text Documents Classification. Int. J. Intell. Syst. Appl. 2017, 9, 39–52.
32. Rahman, T.; Chowdhury, M.; Khandakar, A.; Mahbub, Z.B.; Hossain, M.S.A.; Alhatou, A.; Abdalla, E.; Muthiyal, S.; Islam, K.F.; Kashem, S.B.A.; et al. BIO-CXRNET: A Robust Multimodal Stacking Machine Learning Technique for Mortality Risk Prediction of COVID-19 Patients Using Chest X-Ray Images and Clinical Data. arXiv 2022, arXiv:2206.07595.
33. Izonin, I.; Greguš, M.L.; Tkachenko, R.; Logoyda, M.; Mishchuk, O.; Kynash, Y. SGD-Based Wiener Polynomial Approximation for Missing Data Recovery in Air Pollution Monitoring Dataset. In Proceedings of the Advances in Computational Intelligence, Gran Canaria, Spain, 12–14 June 2019; Rojas, I., Joya, G., Catala, A., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 781–793.
34. Group Method of Data Handling (GMDH) for Deep Learning, Data Mining Algorithms Optimization, Fuzzy Models Analysis, Forecasting Neural Networks and Modeling Software Systems. Available online: http://www.gmdh.net/ (accessed on 11 October 2022).
35. Lytvynenko, V.; Wojcik, W.; Fefelov, A.; Lurie, I.; Savina, N.; Voronenko, M.; Boskin, O.; Smailova, S. Hybrid Methods of GMDH-Neural Networks Synthesis and Training for Solving Problems of Time Series Forecasting. In Lecture Notes in Computational Intelligence and Decision Making; Lytvynenko, V., Babichev, S., Wójcik, W., Vynokurova, O., Vyshemyrskaya, S., Radetskaya, S., Eds.; Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2020; Volume 1020, pp. 513–531. ISBN 978-3-030-26473-4.
36. Open'ko, P.; Kobzev, V.; Larin, V.; Drannyk, P.; Tkachev, V.; Uhrynovych, O. The Problem Solution of the Surface-to-Air Missile Systems Electronic Equipment Durability Prediction When Implementing the Strategy of Condition-Based Maintenance and Repair Using the Group Method of Data Handling. Sci. Pap. Social Dev. Secur. 2021, 11, 90–97.
37. Ivakhnenko, A.G.; Ivakhnenko, G.A.; Savchenko, E.; Wunsch, D. Problems of Further Development of GMDH Algorithms: Part 2. In Mathematical Theory of Pattern Recognition; MAIK "Nauka/Interperiodica": Sankt Petersburg, Russia, 2002.
38. Salamh, M.; Wang, L. Second-Order Least Squares Method for Dynamic Panel Data Models with Application. J. Risk Financ. Manag. 2021, 14, 410.
39. Lake, R.W.; Shaeri, S.; Senevirathna, S. Limitations of Parametric Group Method of Data Handling and Empirical Improvements for the Application of Rainfall Modelling; Research Square: Durham, NC, USA, 2022.
40. Gatto, M.; Marcuzzi, F. Unbiased Least-Squares Modelling. Mathematics 2020, 8, 982.
41. Ighalo, J.O.; Adeniyi, A.G.; Marques, G. Application of Linear Regression Algorithm and Stochastic Gradient Descent in a Machine-Learning Environment for Predicting Biomass Higher Heating Value. Biofuels Bioprod. Biorefin. 2020, 14, 1286–1295.
42. Piltan, F.; Bayat, R.; Mehara, S.; Meigolinedjad, J. GDO Artificial Intelligence-Based Switching PID Baseline Feedback Linearization Method: Controlled PUMA Workspace. Int. J. Inf. Eng. Electron. Bus. 2012, 4, 17–26.
43. Hu, Z.; Odarchenko, R.; Gnatyuk, S.; Zaliskyi, M.; Chaplits, A.; Bondar, S.; Borovik, V. Statistical Techniques for Detecting Cyberattacks on Computer Networks Based on an Analysis of Abnormal Traffic Behavior. Int. J. Comput. Netw. Inf. Secur. 2021, 12, 19–27.
44. Heart Rate Prediction to Monitor Stress Level. Available online: https://www.kaggle.com/vinayakshanawad/heart-rate-prediction-to-monitor-stress-level (accessed on 19 June 2022).
45. Izonin, I.; Tkachenko, R. An Approach towards the Response Surface Linearization via ANN-Based Cascade Scheme for Regression Modeling in Healthcare. Procedia Comput. Sci. 2022, 198, 724–729.
46. Theerthagiri, P. Predictive Analysis of Cardiovascular Disease Using Gradient Boosting Based Learning and Recursive Feature Elimination Technique. Intell. Syst. Appl. 2022, 16, 200121.
47. Kundu, M.; Nashiry, M.A.; Dipongkor, A.K.; Sarmin Sumi, S.; Hossain, M.A. An Optimized Machine Learning Approach for Predicting Parkinson's Disease. Int. J. Mod. Educ. Comput. Sci. 2021, 13, 68–74.
Attribute Title | Mean Value | Std | Min Value | Max Value |
---|---|---|---|---|
Mean of RR intervals (MEAN_RR) | 845.914 | 124.485 | 547.595 | 1322.01 |
Median of RR intervals (MEDIAN_RR) | 841.156 | 132.003 | 517.51 | 1653.12 |
Standard deviation of RR intervals (SDRR) | 109.26 | 76.8158 | 27.2406 | 563.48 |
Root mean square of successive RR interval differences (RMSSD) | 14.9808 | 4.12688 | 5.53346 | 26.6232 |
Standard deviation of successive RR interval differences (SDSD) | 14.9801 | 4.12688 | 5.53336 | 26.623 |
Ratio of SDRR/RMSSD | 7.38995 | 5.12581 | 2.66038 | 54.3399 |
Percentage of successive RR intervals that differ by more than 25 ms (pNN25) | 9.84384 | 8.20845 | 0 | 39.4 |
Percentage of successive RR intervals that differ by more than 50 ms (pNN50) | 0.86997 | 0.9921 | 0 | 5.4 |
Kurtosis of distribution of successive RR intervals (KURT) | 0.52599 | 1.78593 | −1.8947 | 62.6724 |
Skew of distribution of successive RR intervals (SKEW) | 0.044 | 0.69987 | −2.1363 | 6.56471 |
Mean of relative RR intervals (MEAN_REL_RR) | −0.001 | 0.00016 | −0.0012 | 0.00123 |
Median of relative RR intervals (MEDIAN_REL_RR) | −0.0005 | 0.00087 | −0.0044 | 0.0021 |
Standard deviation of relative RR intervals (SDRR_REL_RR) | 0.01859 | 0.00547 | 0.00899 | 0.03654 |
Root mean square of successive relative RR interval differences (RMSSD_REL_RR) | 0.00972 | 0.00392 | 0.00322 | 0.02695 |
Standard deviation of successive relative RR interval differences (SDSD_REL_RR) | 0.00972 | 0.00392 | 0.00322 | 0.02695 |
Ratio of SDRR/RMSSD for relative RR interval differences (SDRR_RMSSD_REL_RR) | 2.005 | 0.37551 | 1.18126 | 3.70231 |
Kurtosis of distribution of relative RR intervals (KURT_REL_RR) | 0.52599 | 1.78593 | −1.8947 | 62.6724 |
Skew of distribution of relative RR intervals (SKEW_REL_RR) | 0.044 | 0.69987 | −2.1363 | 6.56471 |
Heart rate of the patient at the time of data recorded (HR) | 74.0103 | 10.3811 | 48.7372 | 113.727 |
Loss Function | ME | MedAE | MAE | MSE | MAPE | RMSE | R2 | Training Time, s |
---|---|---|---|---|---|---|---|---|
Training mode | ||||||||
Huber | 21.168 | 2.291 | 2.810 | 13.908 | 0.037 | 3.729 | 0.869 | 6.61 |
Epsilon insensitive | 15.192 | 0.749 | 1.176 | 3.745 | 0.016 | 1.935 | 0.965 | 5.06 |
Squared error | 10.454 | 0.816 | 1.159 | 2.811 | 0.016 | 1.677 | 0.974 | 5.03 |
Squared epsilon insensitive | 9.995 | 0.808 | 1.146 | 2.705 | 0.016 | 1.645 | 0.975 | 5.05 |
Test mode | ||||||||
Huber | 20.973 | 2.300 | 2.821 | 14.008 | 0.037 | 3.743 | 0.870 | - |
Epsilon insensitive | 15.150 | 0.753 | 1.181 | 3.745 | 0.016 | 1.935 | 0.965 | - |
Squared error | 10.456 | 0.821 | 1.164 | 2.840 | 0.016 | 1.685 | 0.974 | - |
Squared epsilon insensitive | 9.998 | 0.814 | 1.151 | 2.736 | 0.016 | 1.654 | 0.975 | - |
Method | ME | MedAE | MAE | MSE | MAPE | RMSE | R2 | Training Time, s |
---|---|---|---|---|---|---|---|---|
Training mode | ||||||||
SGD algorithm | 9.995 | 0.808 | 1.146 | 2.705 | 0.016 | 1.645 | 0.975 | 5.05 |
SGD algorithm + 2nd degree of Wiener polynomial | 5.265 | 0.276 | 0.428 | 0.452 | 0.006 | 0.672 | 0.996 | 15.81 |
Test mode | ||||||||
SGD algorithm | 10.038 | 0.814 | 1.151 | 2.741 | 0.016 | 1.656 | 0.975 | - |
SGD algorithm + 2nd degree of Wiener polynomial | 5.250 | 0.276 | 0.428 | 0.454 | 0.006 | 0.674 | 0.996 | - |
Level Number of the Proposed Ensemble | ME | MedAE | MAE | MSE | MAPE | RMSE | R2 | Training Time, s |
---|---|---|---|---|---|---|---|---|
Training mode | ||||||||
1 | 5.265 | 0.276 | 0.428 | 0.452 | 0.006 | 0.672 | 0.996 | 15.81 |
2 | 4.189 | 0.225 | 0.313 | 0.207 | 0.004 | 0.455 | 0.998 | 4.08 |
3 | 8.155 | 0.228 | 0.303 | 0.199 | 0.004 | 0.446 | 0.998 | 4.09 |
4 | 8.019 | 0.236 | 0.318 | 0.211 | 0.004 | 0.459 | 0.998 | 5.29 |
Test mode | ||||||||
1 | 5.250 | 0.276 | 0.428 | 0.454 | 0.006 | 0.674 | 0.996 | - |
2 | 4.266 | 0.227 | 0.317 | 0.213 | 0.004 | 0.462 | 0.998 | - |
3 | 10.739 | 0.228 | 0.304 | 0.198 | 0.004 | 0.445 | 0.998 | - |
4 | 13.032 | 0.240 | 0.323 | 0.224 | 0.004 | 0.474 | 0.998 | - |
Optimal Parameters | Mode | ME | MedAE | MAE | MSE | MAPE | RMSE | R2 | Training Time, s |
---|---|---|---|---|---|---|---|---|---|
MinMaxScaler(); quadratic Wiener polynomial; SGD with squared epsilon insensitive loss function; 3 levels of the proposed ensemble scheme | Training | 8.155 | 0.228 | 0.303 | 0.199 | 0.004 | 0.446 | 0.998 | 4.09 |
 | Test | 10.739 | 0.228 | 0.304 | 0.198 | 0.004 | 0.445 | 0.998 | - |
Method (Test Mode) | ME | MedAE | MAE | MSE | MAPE | RMSE | R2 | Training Time, s |
---|---|---|---|---|---|---|---|---|
Proposed method | 10.739 | 0.228 | 0.304 | 0.198 | 0.004 | 0.445 | 0.998 | 4.089 |
Gradient Boosting Regressor [46] | 6.412 | 0.227 | 0.343 | 0.264 | 0.005 | 0.514 | 0.998 | 169.547 |
SGD algorithm + 2nd degree of Wiener polynomial [33] | 5.250 | 0.276 | 0.428 | 0.454 | 0.006 | 0.674 | 0.996 | 15.810 |
SGD algorithm [41] | 9.998 | 0.814 | 1.151 | 2.736 | 0.016 | 1.654 | 0.975 | 5.047 |
AdaBoost Regressor [47] | 5.160 | 1.518 | 1.585 | 3.359 | 0.022 | 1.833 | 0.969 | 66.531 |
Izonin, I.; Tkachenko, R.; Holoven, R.; Yemets, K.; Havryliuk, M.; Shandilya, S.K. SGD-Based Cascade Scheme for Higher Degrees Wiener Polynomial Approximation of Large Biomedical Datasets. Mach. Learn. Knowl. Extr. 2022, 4, 1088-1106. https://doi.org/10.3390/make4040055