An Efficient Machine Learning Approach for Diagnosing Parkinson's Disease by Utilizing Voice Features

Abstract: Parkinson's disease (PD) is a neurodegenerative disease that impacts the neural, physiological, and behavioral systems of the brain; the mild variations seen in its initial phases make precise diagnosis difficult. A hallmark symptom of this disease is slowness of movement, known as 'bradykinesia'. Symptoms typically appear in middle age, and their severity increases as one gets older. One of the earliest signs of PD is a speech disorder. This research evaluates the effectiveness of supervised classification algorithms, such as the support vector machine (SVM), naïve Bayes, k-nearest neighbor (KNN), and artificial neural network (ANN), for this disease, where the proposed diagnosis method consists of feature selection based on the filter method, the wrapper method, and classification processes. Since only a few clinical test features would be required for the diagnosis, such a method might reduce the time and expense associated with PD screening. The suggested strategy was compared with previously proposed PD diagnostic techniques and well-known classifiers. The experimental outcomes show that the accuracy of SVM is 87.17%, naïve Bayes is 74.11%, ANN is 96.7%, and KNN is 87.17%; the ANN thus achieves the highest accuracy. The obtained results were compared with those of previous studies, and the proposed work was observed to offer comparable or better results.


Introduction
Parkinson's disease, commonly associated with tremor, is caused by a reduction in dopamine levels in the brain, which damages a person's motor functions and physical functioning. It is one of the world's most common neurodegenerative diseases. Intermittent neurological signs and symptoms result from these lesions, which worsen as the disease progresses [1]. Because aging causes changes in our brains, such as the loss of synaptic connections and changes in neurotransmitters and neurohormones, this condition is more frequent among the elderly. With the passage of time, the neurons in a person's body begin to die and cannot be replaced. The consequences of neurological problems and the falling dopamine levels in the patient's body appear gradually, making them difficult to detect until the patient's condition requires medical treatment [2].

Machine Learning-Based Detection of Parkinson's Disease
Over the past few decades, researchers have looked at a new way of detecting this disease through ML techniques, a subset of artificial intelligence (AI). Clinical personnel might better recognize these disease patients by combining traditional diagnostic indications with ML.
As walking is the most common activity in every person's day-to-day life, it has been linked to physical as well as neurological disorders. This disease, for example, has been identifiable using gait (mobility) data. Gait analysis approaches offer advantages such as being non-intrusive and having the potential to be extensively used in residential settings [8]. A few researchers have attempted to apply ML methods to make the procedure autonomous and feasible to perform offline [9].
Furthermore, persons with the subject disease in its early stages might experience speech problems [10]. These include dysphonia (impaired production of normal vocal sounds), monotone speech (a reduced range of vocal variation), and hypophonia (reduced voice volume) [7,11]. Information from human aural emissions might be detected and evaluated using a computing unit [12,13].

Research Problem and Motivation
Early detection of PD is a crucial challenge. Even if their health later deteriorates, people can enhance their quality of life if they receive an early diagnosis. Another issue is that the diagnosis of PD requires a number of steps, including gathering a thorough neurological history from the patient and examining their motor abilities in various environments.
The majority of recent studies deal with a homogeneous dataset (text, speech, video, or image). Problems with dataset modification and multi-data handling procedures have been highlighted in the suggested study. The effectiveness of disease prediction is limited when only a single dataset is examined. More real-time solutions are made possible by the use of machine learning-based techniques for multivariate data processing. The multi-variate vocal data analysis (MVDA) is driven to provide multiple dataset attribute-based Parkinson's disease identification utilizing machine learning approaches. This study examines the potential for improving multi-variate and multimodal data processing, which aids in raising the disease detection rate. The existing research simultaneously concentrates on various ML-based techniques, such as support vector machines, naïve Bayes, KNN, and artificial neural networks, to evaluate Parkinson's data based on voice features. The MVDA employs extensive datasets and machine learning approaches to improve disease identification based on these works. The incorporation of numerous patients' multivariate acoustic characteristics in the proposed MVDA is encouraged. The subjective disease has been diagnosed with the help of the proposed machine learning techniques under the MVDA system.

Contribution
This research article covers the machine learning techniques implemented in the auditory analysis of speech to diagnose this disease. The benefits and shortcomings of these algorithms in detecting the disease are thoroughly contrasted, and potential drawbacks of existing comparative studies are explored. The accuracy of ANN in speech analysis for diagnosis is the finest among the different classifiers; however, the model must still be enhanced and adapted to the difficulties that may arise from the data. Using the naïve Bayes classifier with suitable pre-processing might result in greater average accuracy. The main contributions of this paper are as follows:
a. To identify which machine learning algorithms, such as SVM, KNN, naïve Bayes, and ANN, offer the most accurate classification and diagnosis of Parkinson's disease.
b. To develop statistical evaluations for the diagnosis of Parkinson's disease in order to identify the frequency at which the best training and test results will be acquired, and consequently to assist in upcoming literature-based research.
c. The proposed system uses an ANN classifier to attain the maximum classification accuracy when compared to the approaches used in earlier research.
d. In order to improve the prediction of PD, a comprehensive methodology was employed to explore the effectiveness and efficiency of various feature selection approaches.
e. The proposed model is examined with four machine learning methods, i.e., SVM, naïve Bayes, KNN, and ANN, as well as against earlier and more recent studies on PD detection.

Structure of Proposed Work
The structure of the study is as follows: Section 2 describes the related research survey. Section 3 discusses the methodology used to achieve the proposed objective. Section 4 defines the materials and methods. Section 5 examines the experiment and results. Section 6 discusses the comparative study and discussion. Finally, Section 7 concludes the proposed work.

Related Works
In order to distinguish PD cases from healthy controls, a variety of modern machine learning algorithms, including support vector machines, artificial neural networks, logistic regression, naïve Bayes, etc., have been successfully used. In this study, numerous databases, including Web of Science, Elsevier, MDPI, Scopus, Science Direct, IEEE Xplore, Springer, and Google Scholar, were utilized to survey relevant papers on Parkinson's disease.
In a survey by [14], the authors used KNN, SVM, and discrimination-function-based (DBF) classifiers for the diagnosis of PD. In their study, they used several parameters such as jitter, fundamental frequency, pitch, shimmer, and other statistical measures. The best accuracy among these classifiers was obtained from KNN with a 93.83% accuracy rate and it also provided good performance in other parameters, such as sensitivity, specificity, and error rate.
The authors in [15] used a convolutional neural network classifier applied to speech classification datasets. The accuracy reached throughout the training phase, which was over 77%, makes the results optimistic. In accordance with the works mentioned above, [16] examined a variety of classifiers to identify individuals who were likely to have Parkinson's disease. They used 40 participants for their investigation, including 20 PD patients and 20 healthy controls. According to the experimental findings, the naïve Bayes classifier has a detection accuracy of 65%, with a sensitivity rate of 63.6% and a specificity rate of 66.6%. In [17], the authors used three types of classifiers based on KNN, SVM, and the multilayer perceptron (MLP) to diagnose Parkinson's disease. Among all these ML classifiers, SVM using an RBF kernel outperformed the others with an overall classification accuracy rate of 85.294%.
A summary of the most recent deep learning methods for audio signal processing is given in another work by [18]. The works examined include convolutional neural networks as well as long short-term memory architectures and audio-specific neural network models. Similar to the previous studies, [19] detected PD using naïve Bayes and other machine learning approaches. In their method, relevant features were extracted from the voice signals of PD patients and healthy control subjects using signal processing techniques. The naïve Bayes algorithm shows a 69.24% detection accuracy and a 96.02% precision rate for the 22 voice characteristics. In [20], the authors suggested a technique for detecting Parkinson's disease using SVM on shifted delta cepstral (SDC) and single frequency filtering cepstral coefficient (SFFCC) features extracted from the speech signals of PD patients and healthy controls. Compared to the standard MFCC + SDC features, the SDC + SFFCC features yielded a performance increase of 9%. The conventional SVM on SDC + SFFCC features achieved a 73.33% detection accuracy with a 73.32% F1-score. In addition to the naïve Bayes classifier, several additional supervised methods, including but not restricted to well-known deep learning methods, have been suggested to identify PD patients among healthy controls.
In a survey conducted by [21], the authors examined two less widely used decision forests, i.e., SysFor and ForestPA, along with the most widely used random forest classifier, as Parkinson's detectors. In their study, compared to SysFor and ForestPA, random forest showed the best average detection accuracy of 93.58% on incrementally grown trees. For the purpose of classifying Parkinson's disease through sets of acoustic vocal (voice) characteristics, the authors of [22] suggested two frameworks based on CNNs. Both frameworks combine different feature sets, although they do so in different ways. While the second framework supplies feature sets to parallel input layers that are directly connected to convolution layers, the first framework combines several feature sets before passing them as inputs to a nine-layered CNN.
AI is assisting physicians in better diagnosing and treating diseases such as postoperative hypotension, and more advanced future models may have even more widespread medical uses. Machine learning represents an evolutionary step in the creation of therapeutic pathways and adherence. The real benefit of machine learning, however, is that it enables provider organizations to use information about the patient population from their own systems of record to create therapeutic pathways that are unique to their procedures, clientele, and physicians [23].
The vocal biomarkers and the description of the Aachen aphasia database, which contains recordings and transcriptions of therapy sessions, were covered in [24]. The authors also discussed how the biomarkers and the database could be used to build a recognition system that automatically maps pathological speech to aphasia type and severity.
In [25], the authors examined the suggested technique using a dataset of 288 audio files from 96 patients, including 48 healthy controls and 48 participants with cognitive impairment. The suggested method outperformed techniques based on manual transcription and speech annotation, with classification results that were comparable to those of the most advanced neuropsychological screening tests and an accuracy rate of 90.57%.
In [26], the authors intended to enlighten on the early indicators of major depressive relapse, which were discreetly measured using remote measurement technologies (RMT).
RMT has the potential to alter how depression and other long-term disorders are evaluated and handled if it is found to be acceptable to patients and other important stakeholders and capable of providing clinically meaningful information predicting future deterioration.
It can be seen from the reviews above that all the research that has been carried out is only restricted to a small number of datasets. The above previous works inspired us to try a new methodology. In this study, we experimented with several feature selection methods before comparing the results with various machine learning classifiers. Table 1 illustrates the review of ML techniques used to diagnose major symptoms of PD i.e., speech recording, handwriting pattern, and gait features, where data were collected from the UCI machine learning repository, the University of Oxford (UO), and other resources for 20 studies.

Proposed Work
The proposed ML model uses the SVM, naïve Bayes, KNN, and ANN algorithms at its core. These algorithms are widely used in the literature since they are easy to use and only need a small number of parameters to be tuned. There are several processes involved in developing a model to detect PD from voice recordings. In the first phase, relevant features are extracted from the dataset for better understanding. In the second phase, machine learning techniques are applied to classify healthy as well as PD patients based on acoustic features, with the outputs presented as visual representations in graphs and accuracy score tables. Finally, in the third phase, all the trained machine learning classifier models are compared to determine the best accuracy score. The complete technical process of the proposed work is represented in Figure 1. The proposed methodology is shown to be better than the other methodologies with respect to computational cost, since a few voice features were used instead of heavy feature extraction processes such as MRI, motion sensors, or handwriting assessments. Additionally, the performances of different popular classifiers were evaluated, and the best classifier for the PD diagnosis problem was found to be ANN.


Feature Selection
Due to the many available features, feature selection is a frequent approach used to minimize the dimension of data in machine learning based on voice analysis. As demonstrated in Figure 2, all feature selection algorithms have the same aim of reducing redundancy and increasing relevance, which improves the accuracy of the disease's diagnosis. Prior to supplying the data to the classifier, a variety of feature selection strategies were used.
The filter-based strategies take into account the importance of the characteristics. As a result, they are stable and scalable and have a low level of complexity [47,48]. The major drawback of this method is that, especially when the data arrive in a stream, it may overlook certain useful aspects [49]. Both univariate and multivariate filter-based techniques are possible [50]. The univariate approaches analyze attributes according to statistically based criteria such as information gain (IG) [51][52][53]. Multivariate approaches calculate feature dependence before ranking the features. In addition, principal component analysis (PCA) is a widely utilized statistical technique for data analysis. By choosing a collection of features that accurately reflects the entire data set, PCA can minimize the size of the data set. Since PCA is a conversion technique, the principal components of the initial variables are the components with the largest variance; the remaining principal components are arranged in descending order of variance [54]. For filter techniques, the whole procedure takes place in the pre-processing stage, independent of the model. Filter methods primarily consider the data's distribution, correlation, and internal relationships. As a result, filter techniques have the advantage of being simple and quick to compute, which is why they are commonly used in the diagnosis of this disease. One popular filtering method is minimum redundancy maximum relevance (mRMR), which selects characteristics that are mutually far apart but have a strong "correlation" with the classification variable.
The wrapper method decides whether to keep or reject a feature depending on the change in a classifier's performance [55]. The wrapper method takes a specific classifier into account and provides a well-tailored subset. As a result, wrapper methods have a lower chance of getting stuck at a local maximum. Due to its large gain in performance, the wrapper approach is popular in ML diagnostics. However, it has drawbacks such as being prone to overfitting and being computationally costly. Wrapper-based feature selection techniques use a classifier to build ML models with different predictor variables and select the variable subset that leads to the best model.
In contrast, filter-based methods are statistical techniques, independent of any learning algorithm, used to compute the correlation between the predictor and independent variables. The predictor variables are scored according to their relevance to the target variable, and the variables with higher scores are then used to build the ML model. Therefore, this research uses a filter-based feature selection method to identify the most relevant features for improved PD detection.
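As a concrete illustration of the filter-plus-PCA pipeline described above, the sketch below ranks features by mutual information (an information-gain-style filter criterion, independent of any classifier) and then projects the selected features onto principal components ordered by decreasing variance. The data here are synthetic stand-ins for the acoustic feature matrix; the shapes and feature counts are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch: filter-based selection (mutual information)
# followed by PCA dimensionality reduction, on synthetic stand-in data.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(195, 22))     # stand-in: 195 recordings, 22 voice features
y = rng.integers(0, 2, size=195)   # stand-in labels: 0 = healthy, 1 = PD

# Filter step: score each feature by mutual information with the class
# label and keep the 10 most relevant features.
selector = SelectKBest(mutual_info_classif, k=10)
X_sel = selector.fit_transform(X, y)

# PCA step: project the selected features onto 5 principal components,
# which sklearn returns in descending order of explained variance.
pca = PCA(n_components=5)
X_pca = pca.fit_transform(X_sel)

print(X_sel.shape)   # (195, 10)
print(X_pca.shape)   # (195, 5)
```

Because the filter step never consults a classifier, it runs entirely in pre-processing, matching the filter-method property discussed above.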

Dataset
The dataset of recorded speech signals was obtained from Max Little of the University of Oxford [56,57]. Table 2 contains the details of the dataset. This dataset contains an assortment of acoustic speech measures across 195 voice recordings, of which 147 belong to persons with Parkinson's disease. Each attribute in the dataset characterizes an individual voice measure, and each tuple represents one of the voice recordings made by these people. The objective of the dataset is to differentiate healthy persons from unhealthy ones using the "status" column, which is set to 0 for healthy persons and 1 for those having the disease.
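Loading and preparing this dataset might be sketched as follows. A tiny in-memory sample stands in for the actual `parkinsons.data` file (the sample values are illustrative); the column names follow the UCI description, where "name" identifies the recording and "status" is the class label.

```python
# Illustrative sketch of preparing the Oxford/UCI Parkinson's voice dataset.
# A small in-memory sample stands in for the real parkinsons.data file.
import io
import pandas as pd

sample = io.StringIO(
    "name,MDVP:Fo(Hz),MDVP:Jitter(%),MDVP:Shimmer,status\n"
    "phon_R01_S01_1,119.992,0.00784,0.04374,1\n"
    "phon_R01_S07_1,197.076,0.00168,0.01098,0\n"
)
df = pd.read_csv(sample)

# Separate the acoustic features from the class label, dropping the
# non-numeric recording identifier so only voice measures remain.
X = df.drop(columns=["name", "status"])
y = df["status"]
print(X.shape, y.tolist())   # (2, 3) [1, 0]
```

With the full file, `X` would hold the 195 rows of voice measures and `y` the corresponding 0/1 status labels.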

Parkinson's Disease Diagnosis Based on Voice Analysis and Machine Learning
Some studies have concentrated on the acoustic level, or the fluctuations in fundamental frequency (F0) caused by vocal activity. The effects of power spectral analysis of F0 phonation in persons with sensorineural hearing loss and this disease have been examined in [58][59][60]. The rhythm of F0 was distinctive in the incidence and amplitude of these diseases. Further, the studies demonstrated that F0 analysis can be a useful tool for the neurological diseases under investigation. The autocorrelation function approach was used to find the fundamental frequencies of speech transmissions. According to this concept, Parkinsonian dysprosody is frequently described as a simple neuro-motor disorder.
The understanding and generation of pitch characteristics in a group of patients were examined to confirm this idea. Among conventional medications, L-DOPA is a very effective treatment in the early stages of PD [61]. In [62], the authors use deep learning to categorize patients' speech data as "severe" or "not severe". The evaluation measure employed in this study was the unified Parkinson's disease rating scale (UPDRS). The motor UPDRS examines the patient's motor ability on a 0–108 scale, while the total UPDRS provides a range of scores from 0 to 176.

Classification of Parkinson's Disease with ML Classifier
In this technique, we use an ML classifier to classify the disease. First, we select the patient health status as the target variable and count the number of patients in each class. We then visualize the data graphically after assessing the health status of the patients. The dataset was divided into two parts: 80% was used for training and 20% for testing. In Figure 3, a score of 0 represents the healthy persons in the sample, whose count is 48, and 1 represents the patients with Parkinson's disease, whose count is 147.
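The 80/20 split described above can be sketched with scikit-learn's `train_test_split`. The feature values here are synthetic stand-ins; only the class counts (48 healthy, 147 PD) mirror the dataset, and stratification is an added assumption that keeps the healthy/PD ratio equal in both partitions.

```python
# Sketch of the 80/20 train/test split on a synthetic stand-in for the
# 195-recording dataset (48 healthy, 147 PD).
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(195, 22))
y = np.array([0] * 48 + [1] * 147)   # 0 = healthy (48), 1 = PD (147)

# stratify=y keeps the class proportions the same in train and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42
)
print(len(X_train), len(X_test))   # 156 39
```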

Building of Machine Learning Techniques with Classifier Evaluation Metrics
By using different types of classifiers, it becomes easy to detect the disease. Classification sensitivity, Matthews's correlation coefficient (MCC), accuracy, specificity, F-score (F-measure), and other measurement parameters are used to distinguish it. Each of these measurement criteria includes a formula for calculating it and determining which classifier is the most qualitatively appropriate for the analysis. It is requisite to focus on the confusion matrix before developing these criteria [63]. The confusion matrix of the multi-class classifier is shown in Figure 4.
F1-Score: It represents the accuracy of a model on a given dataset and is also known as the F-Score, as shown in Equation (1):
F1 = 2 × (Precision × Recall) / (Precision + Recall)  (1)
MCC: It is utilized for model evaluation to assess the quality of binary and multi-class classifications, as shown in Equation (2). It is based on true negatives (TN), true positives (TP), false negatives (FN), and false positives (FP):
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))  (2)
MCC lies between −1 and 1, where (−1) indicates contradiction between prediction and observation, (0) means no better than random prediction, and (1) denotes a perfect classifier (accurate prediction).

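The evaluation measures above can be computed directly with scikit-learn; the sketch below does so on a small, made-up set of predictions (the labels are illustrative only) and recovers the confusion-matrix counts that Equations (1) and (2) are built from.

```python
# Computing accuracy, F1-score, MCC, and the confusion-matrix counts
# with scikit-learn, on a small illustrative set of predictions.
from sklearn.metrics import (accuracy_score, f1_score,
                             matthews_corrcoef, confusion_matrix)

y_true = [1, 1, 1, 0, 0, 1, 0, 1]   # made-up ground truth (1 = PD)
y_pred = [1, 1, 0, 0, 0, 1, 1, 1]   # made-up classifier output

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
mcc = matthews_corrcoef(y_true, y_pred)
tn, fp, fn, tp = (int(v) for v in confusion_matrix(y_true, y_pred).ravel())

print(acc, round(f1, 3), round(mcc, 3), (tn, fp, fn, tp))
# 0.75 0.8 0.467 (2, 1, 1, 4)
```

Substituting TP = 4, TN = 2, FP = 1, FN = 1 into Equations (1) and (2) by hand gives the same values, which is a quick sanity check on the formulas.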

Experiments and Results
The proposed work is implemented in Python 3.7: JupyterLab. Here we detail the experimental setup and the results of the four machine learning classification methods.

SVM-Classifier
SVM is one of the most prevalent classifier models because it provides accurate as well as highly robust results. The fundamental goal of SVM is to classify the training data by separating the classes while executing a multiple-class learning activity. It allows for the best classification performance on training data and accurately classifies patterns from the data [64]. The training procedure uses a sequential minimization strategy, and classification accuracy is shown to be higher in SVM due to its greater generalization ability [65]. The linear SVM is calculated using the following Equation (3):
y(w^T x + b) ≥ 1  (3)
where x represents the data, y represents the class label, w represents the weight vector orthogonal to the decision hyperplane, b represents the offset of the hyperplane, and T denotes the transpose operator [66].
In this study, we use the sklearn library in the SVM-classifier module for the classification of the given dataset. Table 3 represents the results that are generated by using the SVM classifier ( Figure 5). Figure 6 represents the confusion matrix with the true positive, true negative, false positive, and false negative value of a PD person by using the SVM classifier.
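The SVM classification step can be sketched with sklearn's `SVC`, as used in the paper's pipeline. The data below are synthetic stand-ins for the extracted voice features, and the feature standardization is an added assumption (SVMs are sensitive to feature scale), so the printed accuracy is not the paper's reported 87.17%.

```python
# Sketch of the SVM classification step with sklearn's SVC, on synthetic
# stand-in voice features (two roughly separable classes).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 22)),    # stand-in "healthy"
               rng.normal(1.5, 1.0, (95, 22))])    # stand-in "PD"
y = np.array([0] * 100 + [1] * 95)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Standardize features before fitting the margin-based classifier.
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="linear").fit(scaler.transform(X_tr), y_tr)
acc = accuracy_score(y_te, clf.predict(scaler.transform(X_te)))
print(round(acc, 3))
```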

Naive Bayes Classifier
Another essential category of ML method is the naïve Bayes classifier technique. It provides effective classification and learning, and a majority of results are acquired through the naïve Bayes method [67]. Naïve Bayes, based on Bayes' theorem, determines the likelihood of an event occurring depending on the event's circumstances. For instance, variations in the voice are common in people with the disease; hence, these symptoms are linked to the prediction for the diagnosis of this disease. The naïve variant simplifies the original Bayes theorem by assuming conditional independence between features, which gives a mechanism for determining the probability of a target occurrence. To estimate the likelihood of the medical condition, the data comprise numerous speech signal variants. The sklearn Gaussian naïve Bayes algorithm provides the classifier module for executing the naïve Bayes categorization. The result of the classifier is shown in Table 4 and the graphical representation is illustrated in Figure 7.

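The Gaussian naïve Bayes step can be sketched with sklearn's `GaussianNB`, which models each feature as an independent Gaussian per class, exactly the conditional-independence assumption described above. The features below are synthetic stand-ins, so the printed accuracy is illustrative rather than the paper's reported 74.11%.

```python
# Sketch of the Gaussian naive Bayes step with sklearn's GaussianNB,
# on synthetic stand-in voice features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (100, 10)),    # stand-in "healthy"
               rng.normal(1.0, 1.2, (95, 10))])    # stand-in "PD"
y = np.array([0] * 100 + [1] * 95)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# GaussianNB fits one Gaussian per feature per class and applies
# Bayes' theorem under the naive independence assumption.
nb = GaussianNB().fit(X_tr, y_tr)
acc = accuracy_score(y_te, nb.predict(X_te))
print(round(acc, 3))
```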

Artificial Neural Network
ANN is the foundation of deep neural networks and is loosely modeled on how the human brain works. In general, there are significant distinctions between the human brain and an ANN. The brain has a vast number of parallel neurons, whereas a machine only has a finite number of processors. Additionally, biological neurons are simpler and slower than computer processors. Another major disparity between computer systems and the brain is the ability to process information on a larger scale. Neurons are connected by synapses into networks that operate together [64,68]. In this article, the main aim is to classify the functionality of ANN techniques in the early detection of this disease, which is built on the subsequent phases:
i. Identifying the responsibility and function of ANN in the detection of this disease.
ii. Making observations on the labels and features of the datasets.
iii. Grouping the types of the studied disease centered on their symptoms.
iv. Examining the accurate outcomes.
These outcomes can be further used in the medical sector as direction for developers considering ANN deployment to enhance the public health response to the studied disease [69].
In the artificial neural network experiment, the dataset was split into two parts: a training dataset (80%) and a test dataset (20%). The classification results of the artificial neural network were very high; its average accuracy score of 96.7% was the highest among all the classification methods, as shown in Table 5, with the graphical representation shown in Figure 8.
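The ANN experiment above (80/20 split, accuracy on the held-out test set) can be sketched with a small multilayer perceptron. The hidden-layer sizes, optimizer settings, and synthetic data are illustrative assumptions; the paper does not specify its network architecture here.

```python
# Hedged sketch of the ANN experiment: 80%/20% split, small MLP classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (48, 22)),
               rng.normal(1.0, 1.0, (147, 22))])
y = np.array([0] * 48 + [1] * 147)

# 80% training / 20% test, as in the paper's ANN experiment.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=2)
sc = StandardScaler().fit(X_tr)                   # NNs train better on scaled inputs
ann = MLPClassifier(hidden_layer_sizes=(16, 8),   # assumed, small architecture
                    max_iter=1000, random_state=2)
ann.fit(sc.transform(X_tr), y_tr)
ann_acc = ann.score(sc.transform(X_te), y_te)
```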


K-Nearest Neighbor
The KNN technique is computationally costly when presented with a huge training dataset, yet it is used extensively in pattern recognition. KNN is based on the concept of learning by analogy, which is utilized to categorize a sample by its nearest neighbors. This is accomplished by comparing the provided test tuple with closely similar training tuples. Accordingly, "n" attributes are utilized to describe the training tuples, so that each tuple corresponds to a distinct point in n-dimensional space. Given an unlabeled tuple, the KNN classifier's task is to search the pattern space for the k training tuples closest to it [64]. This study aims to identify the accuracy rate of detecting the subject disease. To distinguish affected patients from healthy persons, the KNN algorithm is used. In terms of accuracy, the experimental data reveal that the ANN classifier outperformed the KNN classifier on average. The results of the KNN classifier are shown in Table 6 with the accuracy rates of the training and test datasets, F1-score, and MCC, and illustrated in Figure 9.
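The nearest-neighbor search described above can be sketched as follows. The value k=5 and the synthetic data are assumptions for illustration; the paper does not state the k it used.

```python
# Hedged sketch: KNN classifies a test tuple by its k nearest training tuples
# in n-dimensional feature space.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (48, 22)),
               rng.normal(1.0, 1.0, (147, 22))])
y = np.array([0] * 48 + [1] * 147)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=3)
sc = StandardScaler().fit(X_tr)             # distances are scale-sensitive
knn = KNeighborsClassifier(n_neighbors=5)   # k=5 is an assumed, common choice
knn.fit(sc.transform(X_tr), y_tr)           # "training" just stores the tuples
knn_acc = knn.score(sc.transform(X_te), y_te)
```

Because KNN defers all work to query time, prediction cost grows with the training set, which is the expense the text refers to.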


Summary of Evaluation Results
The performance of all the classifier models used in the experiment for predicting the disease is summarized in Table 7. The artificial neural network classifier achieves the highest accuracy rate, followed by SVM and KNN, with naïve Bayes last. Figure 10 shows the graphical representation of the results obtained by these four ML classifiers across the various parameters. Table 7 shows that SVM attained average accuracies of 88.46% and 87.17% for the training and test datasets respectively, an F1-score of 66.19%, an MCC of 56.59%, and sensitivity and specificity of 62.5% and 93.54%, respectively. In addition, naïve Bayes achieved average training and test accuracies, F1-score, MCC, sensitivity, and specificity of 76.23%, 74.11%, 86.74%, 66.56%, 84%, and 79.76%, respectively.

Figure 9. Results obtained by KNN.
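The metrics reported in Table 7 can all be derived from a classifier's confusion matrix. The sketch below shows how, using small made-up label vectors (the paper's own values come from its test split, not from these numbers):

```python
# Hedged sketch: computing accuracy, F1, MCC, sensitivity, and specificity
# (the Table 7 metrics) from predicted vs. true labels.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             matthews_corrcoef, confusion_matrix)

# Illustrative labels only: 1 = PD, 0 = healthy control.
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
mcc = matthews_corrcoef(y_true, y_pred)
sensitivity = tp / (tp + fn)   # true-positive rate (recall for the PD class)
specificity = tn / (tn + fp)   # true-negative rate
```

Here tp=5, fn=1, tn=3, fp=1, so accuracy is 0.80, sensitivity 5/6, and specificity 0.75; MCC penalizes both error types symmetrically, which matters on class-imbalanced medical data.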



Comparative Study and Discussion
This section examines the comparative result analysis of the proposed technique against other conventional machine learning techniques. The comparison of the proposed study with previously published research is shown in Table 8. As per the comparative analysis, the proposed model (using four machine learning algorithms) obtains better results than all other experimental machine learning models and the existing state of the art. In the proposed study, the best result was achieved by ANN with 96.7% accuracy, which is higher than the other experimental algorithms. The authors of [49] collected speech datasets from 20 PD and 20 HC subjects using high-quality recording equipment and used KNN and SVM to analyze them in order to detect PD. The KNN and SVM classifiers performed with accuracy rates of 59.52% (LOSO) and 68.45% (LOSO), respectively. The authors of [50] used various decision-tree-based algorithms such as C4.5, C5.0, random forest, and CART. They experimented on the records of 40 individuals, where 50% were affected with the subjective disease and 50% were HC. In that study, the highest average model accuracy attained was 66.5%. ANN was used by [51] to identify PD. The dataset was obtained from the University of California, Irvine's machine learning repository. A total of 45 attributes were chosen as input values with one outcome for the categorization, using the MATLAB tool. With an accuracy of 94.93%, their suggested model was able to differentiate healthy individuals from PD subjects. In [52], the authors used random forest, SVM, MLP, and KNN classifiers to distinguish PD patients from HC. The results obtained in that study were 78.4% and 82.2% for the SVM and KNN classifiers, respectively. In a study by [53], the authors compared patients with PD (PWP) and healthy controls (HC) based on a variety of speech samples. In their study, human factor cepstral coefficients (HFCC) were applied.
The extracted HFCCs were used to generate an average voice print for each voice recording. For the classification, SVM was used with a variety of kernels, including RBF, polynomial, linear, and MLP. The SVM's linear kernel achieved the highest accuracy, 87.5%.
In addition to the comparisons mentioned above, the performance of the proposed methodology is compared with related ML methods for PD analysis in various scenarios and with various types of evaluated PD datasets. As Table 8 shows, the proposed technique outperformed the other comparable ML-based contributions for diagnosing PD.

Conclusions
Automated ML techniques can classify PD from HC and predict the outcome using non-invasive speech biomarkers as features. With noisy and high-dimensional data, our study compares the performance of multiple machine learning classifiers for disease detection. Clinical-level accuracy is feasible with careful feature selection. In this paper, we compared ML classifiers: SVM with an accuracy of 87.17%, the naïve Bayes classifier with an accuracy of 74.11%, ANN with an accuracy of 96.7%, and KNN with an accuracy of 87.17%. We used these techniques to distinguish between affected patients and healthy people. The disease is diagnosed using human speech signals. The acquired results demonstrate how feature selection techniques work well with ML classifiers, especially when working with voice data, from which a large number of phonetic characteristics can be extracted. The proposed early diagnosis approach makes it possible to detect PD with high accuracy in its early stages, so that the subjective disease's severe symptoms can be prevented. Many categorization algorithms are being used in the medical imaging area to obtain the best level of accuracy. This research may be applied with different machine learning methods and datasets to improve classifier performance and reach the maximum accuracy score. To improve the accuracy of the models created, future efforts will make use of the already existing recordings and add to the number of existing attributes. Various record-processing software packages available online may also be used to compare the collected data.

Data Availability Statement: Data in this research paper will be shared upon request made to the first author.

Conflicts of Interest:
The authors declare no conflict of interest.