Article

Stimulus-Evoked Brain Signals for Parkinson’s Detection: A Comprehensive Benchmark Performance Analysis on Cross-Stimulation and Channel-Wise Experiments

by Krishna Patel 1, Rajendra Gad 1, Marissa Lourdes de Ataide 1, Narayan Vetrekar 1, Teresa Ferreira 2 and Raghavendra Ramachandra 3,*

1 School of Physical and Applied Sciences, Goa University, Taleigao Plateau 403206, Goa, India
2 Neurology Department, Goa Medical College and Hospital, Bambolim 403202, Goa, India
3 Department of Information Security and Communication Technology, Norwegian University of Science and Technology (NTNU), 7491 Gjøvik, Norway
* Author to whom correspondence should be addressed.
Bioengineering 2025, 12(11), 1185; https://doi.org/10.3390/bioengineering12111185
Submission received: 25 July 2025 / Revised: 15 October 2025 / Accepted: 22 October 2025 / Published: 30 October 2025

Abstract

Parkinson’s disease (PD) is a progressive neurodegenerative disorder that affects both motor and cognitive functions, often resulting in misdiagnosis during its early stages. The condition severely impacts daily living, diminishing an individual’s ability to work and carry out routine tasks independently. Consequently, the development of automated methods for reliable PD detection has gained growing research interest. Among the available approaches, Electroencephalography (EEG) has emerged as a promising non-invasive and cost-effective tool. Nevertheless, most existing studies have predominantly focused on resting-state EEG, which constrains the generalizability and robustness of the proposed detection models. This study introduces a cross-stimulation evaluation framework to assess the impact of varying stimulus conditions on Parkinson’s disease detection algorithms and conducts a channel-wise analysis to identify the most discriminative brain regions for accurate diagnosis. To support this research, we present the newly introduced Parkinson’s disease EEG (ParEEG) database, comprising 203,520 EEG samples from 60 subjects recorded under Resting-State Visual Evoked Potential (RSVEP) and Steady-State Visually Evoked Potential (SSVEP) stimuli. We evaluate the performance of individual EEG channels using two handcrafted and two deep learning-based methods, employing a 10-fold cross-validation strategy to ensure statistical reliability and establish benchmark results. Experimental results show that CRC and LSTM consistently achieved high accuracies (95–100%) with low variability (standard deviation < 2%). The analysis indicates that EEG channels in the frontal, fronto-central, and central–parietal regions consistently yield higher classification accuracy in Parkinson’s disease detection. Our findings offer valuable insights into channel-specific neural alterations for better interpretability in PD, and the cross-stimulation evaluation enhances the generalizability of EEG-based PD detection for practical diagnostic purposes.

1. Introduction

Parkinson’s disease (PD) is the second most common neurodegenerative disorder, affecting approximately 2–3% of the population over 65 years of age, with a slightly higher prevalence in men than in women [1]; this gap may vary by geographic/ethnic population [2]. PD is a complex, multifactorial syndrome characterized primarily by progressive motor impairments such as bradykinesia, tremor, rigidity, and postural instability [3,4]. These motor deficits arise largely from dopaminergic neuronal loss in the substantia nigra pars compacta, leading to dopamine depletion in the striatum. However, dopamine dysfunction also contributes to non-motor impairments [5], including deficits in executive function, working memory, attention, and visuospatial processing [6]. Importantly, PD pathology extends beyond dopaminergic pathways, as cholinergic, serotonergic, and noradrenergic systems are also affected, contributing to the diverse clinical manifestations of the disease [7]. Clinical diagnosis during early stages is often challenging, with considerable risk of misdiagnosis [8]. Advanced imaging techniques such as SPECT or cardiac MIBG scintigraphy can aid diagnosis but remain costly and less accessible. Alternatively, Electroencephalography (EEG) has attracted growing interest as a low-cost, non-invasive neurophysiological tool for early PD detection. While EEG offers high temporal resolution and affordability, it may be influenced by artifacts arising from involuntary motor symptoms and the stress of EEG cap placement, which must be carefully considered in both research and clinical applications.
In light of the above, EEG offers a non-invasive, affordable, and widely available means of capturing brain activity with high temporal resolution. While traditionally used in epilepsy and Alzheimer’s detection, recent studies have demonstrated its potential in identifying PD-related neural abnormalities [9,10]. However, EEG analysis is challenging due to its low signal-to-noise ratio and stochastic nature, motivating the integration of advanced computational methods, particularly machine learning and deep learning, for reliable PD detection [11,12]. Most PD detection studies rely on resting-state EEG signals, recorded with eyes open or closed in a controlled, noise-free environment [13], ensuring that the EEG signals accurately reflect the brain’s resting-state condition without external interference. However, beyond conventional resting-state analysis, researchers have begun exploring dynamic EEG responses under various stimulus-based conditions to better understand PD-related brain activity. Some studies have utilized cross-image presentations [14,15] and photic stimulation [16], where subjects are exposed to light flashes at different frequencies to elicit brain responses. Other investigations have examined reinforcement learning tasks [17], cognitive tasks [18], and hearing tasks [19], using voice recordings [20] and visual stimuli with foreground flickering [21] to analyze PD-related neural mechanisms. Additionally, emotional stimuli have been employed to study EEG responses to different emotional states such as happiness, fear, anger, disgust, sadness, surprise, and meditation [22,23]. Importantly, Parkinson’s disease is also associated with visual processing impairments, including reduced contrast sensitivity, abnormal eye movements, and delayed cortical responses to visual stimuli. A meta-analysis of visual evoked potential (VEP) studies reported significantly prolonged P100 latencies in PD patients compared to healthy controls, highlighting visual pathway dysfunction in the disease [24]. These findings provide strong motivation for using visually evoked paradigms, such as RSVEP and SSVEP, for PD detection.
Despite these recent advances, existing studies on Parkinson’s disease (PD) detection predominantly focus on analyzing behavioral and neural responses under specific stimulation or single stimulation conditions. Most of these studies are confined to intra-stimulation (within-stimulation) experiments, wherein classification models are trained and tested on EEG data obtained under the same type of stimulus. In contrast, this study explores the generalizability of PD detection algorithms through cross-stimulation experiments, utilizing EEG signals recorded under both Resting-State Visual Evoked Potential (RSVEP) and Steady-State Visually Evoked Potential (SSVEP) paradigms. This approach aims to evaluate the robustness and adaptability of PD detection methods across varying stimulus conditions.
Recent studies have also explored minimal-lead and single-channel EEG approaches for Parkinson’s detection. For instance, a recent work demonstrated that a single-channel forehead sensor combined with auditory task paradigms could differentiate PD patients and even predict F-DOPA PET outcomes, highlighting the feasibility of single-channel PD detection [25]. Similarly, several channel-selection and regional analysis studies have reported that a subset of frontal and central electrodes provides high discriminative power, suggesting that not all EEG channels are equally informative [26]. However, these approaches are either task-specific or limited to resting-state conditions. In contrast, our work provides a systematic channel-wise benchmarking under visually evoked paradigms (RSVEP and SSVEP), along with a cross-stimulation framework to test generalizability across conditions. This approach enables the identification of the most discriminative individual channels for PD detection. Such a targeted approach offers two key advantages: (a) it highlights channels with the highest classification relevance, and (b) it reduces computational complexity by limiting analysis to the most informative channels. Moreover, this analysis facilitates the examination of region-specific brain responses to different stimulation paradigms, contributing to a more localized interpretation of Parkinson’s disease-related neural dysfunction.
To facilitate this research, we introduce a newly constructed Parkinson’s Disease EEG (ParEEG) database, comprising 203,520 EEG samples collected from 60 subjects (30 healthy controls and 30 Parkinson’s disease patients). The significance of the ParEEG database lies in its recordings of brain responses across different emotional states, elicited using Resting-State Visual Evoked Potential (RSVEP) and Steady-State Visual Evoked Potential (SSVEP) stimuli. To the best of our knowledge, cross-stimulus evaluation of PD detection using RSVEP and SSVEP has not been extensively explored in prior research. In this study, we conduct an extensive experimental analysis based on the average classification accuracy achieved by the PD detection algorithms, highlighting the effectiveness of single-channel cross-stimulation analysis. This approach demonstrates the potential for improved generalizability and diagnostic performance in PD detection across varying stimulus conditions. The primary contributions of this research include the following:
  • Investigate a cross-stimulation evaluation framework to assess the generalizability and robustness of Parkinson’s disease detection algorithms across varying stimulus conditions, addressing a key shortcoming of prior work that primarily relies on intra-stimulus (within-stimulus) evaluations.
  • Conduct a channel-wise performance analysis, evaluating classification accuracy at individual EEG channels to identify the most discriminative brain regions for PD detection across different stimulus conditions.
  • Introduce the newly constructed ParEEG database, comprising 203,520 EEG samples from 60 subjects (30 healthy controls and 30 individuals with Parkinson’s disease), capturing EEG responses to diverse emotional states induced by Resting-State Visual Evoked Potential (RSVEP) and Steady-State Visually Evoked Potential (SSVEP) stimuli. The ParEEG dataset will be made publicly available for research purposes to support reproducible research.
  • Present a comprehensive experimental analysis of PD detection algorithms within a cross-stimulation evaluation framework, benchmarking classification accuracy at the individual EEG channel level. The evaluation includes two handcrafted feature-based methods and two deep learning-based approaches, enabling an in-depth comparative assessment in handling variability across stimulus conditions.
The subsequent sections of this paper are structured as follows: Section 2 provides a comprehensive overview of the Parkinson’s EEG database and outlines the methodology employed for PD detection. Section 3 presents detailed experimental benchmark results based on intra-stimulus and cross-stimulation evaluations for individual channels. Section 4 discusses the study and its findings, and Section 5 concludes by summarizing the key insights and outlining directions for future work.

2. Materials and Methods

2.1. Database Description

EEG data for this study were acquired using two distinct Data Collection Protocols, each incorporating audio–visual stimuli to assess neural responses. The sequences of stimuli used in these protocols are depicted in Figure 1. A Brain Products R-Net cap, equipped with 32 passive electrodes along with ground and reference electrodes, was used to capture EEG signals at a sampling rate of 250 Hz, which is frequently adopted in research studies relevant to PD detection [27,28].
The dataset, referred to as ParEEG, consists of recordings from 60 volunteers, including 30 PD patients and 30 healthy control subjects. The age of participants ranged from 40 to 80 years. In the PD group, there were 20 males and 10 females, while the healthy control group consisted of 15 males and 15 females. The healthy control group was carefully age-matched with the PD group to minimize the confounding influence of age-related EEG differences. None of the healthy controls reported or exhibited any neurological, psychiatric, or major systemic disorders, and their health status was confirmed through a structured questionnaire and neurologist-supervised screening. All Parkinson’s disease patients were recruited from Goa Medical College and Hospital, Goa, India, under the supervision of a medical specialist. Their diagnosis was clinically established by a neurologist using the UKPDS Brain Bank Criteria, ensuring a standardized and reliable evaluation process. Additionally, the impairment levels of PD patients were documented, with approximately 70% exhibiting minimal and 30% exhibiting moderate impairment, providing a clear characterization of disease severity within the cohort. Recordings were performed with participants instructed to avoid their regular medication on the day of EEG acquisition to minimize pharmacological effects. The visual and emotional stimuli used in the experiments were obtained from publicly available sources (https://shorturl.at/5a1Cv, accessed on 24 July 2025). The selection and sequencing of these stimuli were carried out under the supervision of an expert neurologist to ensure clinical relevance and appropriateness. To ensure data reliability, all EEG recordings were conducted in a controlled environment, with subjects seated comfortably to minimize movements and external disturbances. Visual stimuli were presented on a laptop screen positioned 76 cm from the participants, who were instructed to maintain a relaxed posture and limit eye blinking to reduce signal artifacts during EEG signal recordings. Further details of each Data Collection Protocol are outlined in the following subsections.

2.1.1. Data Collection Protocol 1 (DCP 1)

Data Collection Protocol 1 is designed to investigate the neural mechanisms underlying resting-state activity and visually evoked responses, particularly in relation to cognitive and emotional processing. This protocol follows the Resting-State Visually Evoked Potential (RSVEP) paradigm [29]. Data collection in this session begins with a one-minute baseline EEG recording during eye closure, capturing spontaneous neural oscillations primarily dominated by alpha rhythms (8–12 Hz), which are known to be associated with a relaxed but wakeful state. A one-minute baseline was chosen to maintain consistency across stimulus conditions and to minimize fatigue and discomfort in participants, especially those with PD. This is immediately followed by a ten-second eye-opening phase, during which the brain’s visual processing regions, particularly the primary visual cortex (V1) and associated occipital areas, become more engaged, leading to a suppression of alpha rhythms and an increase in beta and gamma activity. This cycle is repeated to ensure consistency in data acquisition [30].
To examine the influence of emotional stimuli on neural activity, participants are presented with one-minute video clips designed to elicit relaxation, amusement (comedy), and fear (horror). These stimuli are presented twice per category to ensure reliable neural responses. Emotional processing involves multiple brain regions, including the amygdala, prefrontal cortex, and limbic system, which interact with sensory processing areas to modulate EEG signals. For example, relaxing stimuli may enhance theta and alpha power, indicating a state of mental calmness, whereas horror stimuli may lead to increased beta and gamma activity, reflecting heightened alertness and emotional arousal. A schematic representation of the RSVEP sequence in Data Collection Protocol 1 is provided in Figure 1.

2.1.2. Data Collection Protocol 2 (DCP 2)

Data Collection Protocol 2 is based on Steady-State Visually Evoked Potentials (SSVEPs), a paradigm for studying sustained visual attention, cognitive load, and brain–computer interface applications. SSVEPs are oscillatory brain responses elicited when a subject views a flickering stimulus at a specific frequency, primarily activating the occipital cortex (V1, V2, and V3) [31]. These responses are widely used to assess visual processing efficiency and cognitive workload, both of which are affected in neurodegenerative conditions such as Parkinson’s disease (PD).
In this Data Collection Protocol, participants complete four one-minute sessions involving flickering visual stimuli. During the initial sessions, they are presented with alphanumeric flickering sequences (characters 7, L, and T) oscillating within the 5–7.5 Hz frequency range. These visual stimuli are well-established for eliciting gamma-band activity in the visual cortex, thereby enabling the assessment of neural synchronization deficits in Parkinson’s disease (PD). The alphanumeric sequences are generated using MATLAB 2021a, with each character preceded by a one-second auditory cue. This cue is incorporated to promote multisensory integration and to engage higher-order cognitive networks, particularly the dorsolateral prefrontal cortex (DLPFC).
To examine the interaction between cognitive processing and emotional states, the flickering sequence is subsequently paired with video clips from the relaxation, comedy, and horror categories. These emotionally evocative videos serve as task-irrelevant distractions, providing a basis for analyzing attentional modulation in both Parkinson’s disease (PD) patients and healthy control groups. The effectiveness of attentional control, influenced by the parietal and prefrontal cortices, is crucial for cognitive function and is often impaired in PD patients. By measuring EEG responses under these varying conditions, this Data Collection Protocol provides insights into visual processing efficiency, attentional focus, and cognitive flexibility in individuals with PD and healthy controls. A schematic representation of Data Collection Protocol 2 is shown in Figure 1.

2.2. Pre-Processing

The EEG signals were initially preprocessed using a bandpass filter with cut-off frequencies of 0.3 Hz and 100 Hz to retain the relevant frequency components. This was followed by a notch filter to eliminate 50 Hz power line interference. After preprocessing, the EEG recordings were segmented according to the different stimulus categories. Within each category, the continuous signals were further divided into 10 s intervals, resulting in EEG samples of size 32 × 2500, where 32 corresponds to the number of EEG channels and 2500 represents the number of data points at the given sampling rate.
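To make these pre-processing steps concrete, the sketch below applies the 0.3–100 Hz band-pass filter and the 50 Hz notch filter and then segments a continuous 32-channel recording into non-overlapping 10 s windows. It is a minimal illustration assuming a Python/SciPy environment with illustrative function names; the paper does not specify the original processing tooling or filter order.

import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 250  # sampling rate (Hz)

def preprocess(raw, fs=FS):
    """Filter one continuous recording of shape (32, n_samples)."""
    # 4th-order Butterworth band-pass, 0.3-100 Hz (the order is an assumption)
    b, a = butter(4, [0.3, 100.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw, axis=-1)
    # 50 Hz notch filter to suppress power-line interference
    bn, an = iirnotch(w0=50.0, Q=30.0, fs=fs)
    return filtfilt(bn, an, filtered, axis=-1)

def segment(signal, fs=FS, win_sec=10):
    """Split a (32, n_samples) recording into (n_windows, 32, 2500) segments."""
    win = fs * win_sec
    n_windows = signal.shape[-1] // win
    return np.stack([signal[:, i * win:(i + 1) * win] for i in range(n_windows)])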
In total, Data Collection Protocol 1 yielded 103,680 EEG samples, while Data Collection Protocol 2 produced 99,840 samples. A comprehensive breakdown of the sample distribution across different stimuli is provided in Table 1. To ensure clarity in referencing, unique notations were assigned to EEG recordings from the different stimulation conditions, as summarized in the same table. For example, the notation horror_{p1}^{1} designates the EEG samples recorded during the first presentation of the horror stimulus in Data Collection Protocol 1. A block diagram illustrating the EEG-based PD detection process, from EEG data acquisition and pre-processing to classification, is presented in Figure 2.

2.3. Classification Methods

This section outlines the classification methodology used to differentiate between healthy controls and individuals diagnosed with Parkinson’s disease (PD). The analysis is performed at the level of individual EEG channels to evaluate their discriminative power. Specifically, a channel-wise approach is adopted, wherein the classifier is trained using data from a single EEG channel and subsequently tested on data with the same channel structure. Additionally, for cross-stimulation-based PD detection, the training and testing datasets consist of EEG signals recorded under different stimulation conditions. For instance, the training set may include EEG samples from the resting-state eye-closed condition (rec_{p1}^{1}), while the testing set uses EEG samples from the Horror clip (horror_{p1}^{1}) in the cross-stimulation evaluation.
Signals corresponding to individual stimuli are extracted independently for each Data Collection Protocol. For instance, in Data Collection Protocol 1, EEG recordings are obtained for eight different stimuli: rec_{p1}^{1}, rec_{p1}^{2}, relax_{p1}^{1}, comedy_{p1}^{1}, horror_{p1}^{1}, relax_{p1}^{2}, comedy_{p1}^{2}, and horror_{p1}^{2}, following the sequence detailed in Table 1.
To train the classification model, let N represent the number of training samples, where each sample consists of a single-channel EEG segment from either a healthy control or a Parkinson’s disease (PD) subject. The training dataset can be expressed as
$$T_{\mathrm{train}} = \left\{ (X_i, Y_i) \;\middle|\; X_i \in \mathbb{R}^{1 \times 2500},\; i = 1, 2, \ldots, N \right\}$$
where X_i represents an individual EEG sample from a specific electrode and Y_i ∈ {0, 1} denotes the class label, with 0 for healthy controls and 1 for PD subjects.
For model evaluation, let M denote the number of testing samples. Each test sample undergoes classification based on the trained model, where the prediction function is given by
$$Y_{\mathrm{test}}(X) = \begin{cases} 0, & \text{if } X \in \text{Healthy Control} \\ 1, & \text{if } X \in \text{Parkinson's Disease} \end{cases}$$
This formulation ensures that the analysis is conducted at the individual channel level, allowing for an evaluation of which brain regions contribute most effectively to PD detection.
In this study, we employ four distinct algorithms for Parkinson’s disease detection, comprising two traditional handcrafted feature-based methods and two deep learning–based approaches. The selected methods include Support Vector Machine (SVM) [16], Collaborative Representation Classifier (CRC) [32], Long Short-Term Memory (LSTM) [23], and Convolutional Neural Network (CNN) [14]. A concise overview of each algorithm is provided in the following section.

2.3.1. Support Vector Machine (SVM)

In this study, a linear SVM is utilized as a binary classifier [16] to differentiate between healthy individuals and those diagnosed with Parkinson’s disease (PD). The linear kernel is selected to efficiently determine the optimal hyperplane that separates the two classes in the feature space while maintaining computational efficiency.
The decision function of the SVM is mathematically represented as
$$g(c) = h(d) = \operatorname{sign}\left\{ \alpha^{T} \cdot \psi(d) + \beta \right\}$$
where α denotes the weight vector that defines the orientation of the decision boundary, and ψ represents a transformation function that maps input features into a higher-dimensional space to enhance separability. The bias term β ensures optimal positioning of the decision boundary for maximum class distinction.
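For reference, a channel-wise linear SVM of this kind can be assembled as follows. This is a sketch assuming scikit-learn, applied directly to standardized single-channel segments; the handcrafted features and the exact implementation used in the paper are not specified, so both are assumptions here.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_channel_svm(X_train, y_train):
    """X_train: (N, 2500) single-channel EEG segments; y_train: 0/1 labels."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))  # linear kernel, as in the paper
    clf.fit(X_train, y_train)
    return clf

# Hypothetical usage:
# clf = train_channel_svm(X_train, y_train)
# accuracy = clf.score(X_test, y_test)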

2.3.2. Collaborative Representation Classifier (CRC)

CRC is an effective classification approach that applies collaborative representation principles combined with regularized least squares, making it well-suited for high-dimensional data such as EEG signals [32]. This method optimally reconstructs the test sample using a combination of training data, minimizing reconstruction error while ensuring generalization through regularization.
The CRC model is formulated as
$$\hat{\sigma} = \operatorname*{arg\,min}_{\sigma} \; \lVert p - \delta_{tr}\,\sigma \rVert_{2}^{2} + \alpha \lVert \sigma \rVert_{2}^{2}$$
where p represents the test sample requiring classification, δ_tr consists of the learned features from the training dataset, and σ is the coefficient vector that quantifies the contributions of different training samples in reconstructing the test sample. The regularization parameter α controls the trade-off between reconstruction accuracy and model complexity. In this study, ℓ2 regularization is employed, and optimal performance is achieved with α = 0.001.
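For concreteness, the following NumPy sketch implements a collaborative representation classifier of this form with α = 0.001, using the training segments directly as the dictionary and a class-wise reconstruction residual for the final decision. The residual-based decision rule is standard CRC practice and an assumption here, not a detail taken from the paper.

import numpy as np

class CRC:
    def __init__(self, alpha=0.001):
        self.alpha = alpha

    def fit(self, X_train, y_train):
        # Dictionary D holds the training samples as columns: shape (2500, N)
        self.D = np.asarray(X_train, dtype=float).T
        self.y = np.asarray(y_train)
        n = self.D.shape[1]
        # Precompute the regularized least-squares projection (D^T D + alpha*I)^{-1} D^T
        self.P = np.linalg.solve(self.D.T @ self.D + self.alpha * np.eye(n), self.D.T)
        return self

    def predict(self, X_test):
        preds = []
        for p in np.asarray(X_test, dtype=float):
            sigma = self.P @ p                        # coefficients over all training samples
            residuals = []
            for c in np.unique(self.y):
                idx = self.y == c
                recon = self.D[:, idx] @ sigma[idx]   # class-wise reconstruction of the test sample
                residuals.append(np.linalg.norm(p - recon))
            preds.append(np.unique(self.y)[int(np.argmin(residuals))])
        return np.array(preds)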

2.3.3. Long Short-Term Memory (LSTM)

For EEG signal classification, a bidirectional Long Short-Term Memory (BiLSTM) network is implemented to process sequential EEG data. The model’s input consists of single-channel EEG signals, represented as a time series of features. The network architecture begins with a sequence input layer, followed by a BiLSTM layer [23] with 70 hidden units. This bidirectional setup enables the model to extract both past and future temporal dependencies, which is crucial for EEG signal interpretation. The BiLSTM layer’s output is then fed into a fully connected layer, which maps the 70-dimensional feature space onto the two output classes. A softmax layer subsequently computes class probabilities, and the classification layer determines the final output.
The LSTM network is designed to process single-channel EEG signals extracted from each of the 32 electrodes independently. For each channel, the model is trained using the Stochastic Gradient Descent with Momentum (SGDM) optimizer, with an initial learning rate of 0.01, a maximum of 50 epochs, and a mini-batch size of 35, which provides a balance between convergence speed and generalization. To prevent exploding gradients during backpropagation through time, a gradient threshold of 1 is applied. The network architecture includes a sequence input layer with input size 1 (corresponding to single-channel data), followed by a bidirectional LSTM (BiLSTM) layer with 70 hidden units, which captures temporal dependencies in both forward and backward directions. The BiLSTM output is passed through a fully connected layer, followed by a softmax layer and a classification layer to perform binary classification between healthy and Parkinson’s EEG patterns.
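The architecture described above corresponds to a standard bidirectional LSTM sequence classifier; a PyTorch re-implementation sketch is given below for illustration. The paper appears to follow the MATLAB Deep Learning Toolbox layer structure, so this is an approximate translation, and the momentum value (0.9) and the cross-entropy loss are assumptions, since only SGDM is stated.

import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, hidden_size=70, num_classes=2):
        super().__init__()
        self.bilstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, num_classes)   # concatenated forward/backward states

    def forward(self, x):                  # x: (batch, 2500, 1) single-channel sequences
        _, (h_n, _) = self.bilstm(x)       # h_n: (2, batch, hidden_size)
        h = torch.cat([h_n[0], h_n[1]], dim=1)
        return self.fc(h)                  # class logits; softmax is applied inside the loss

# Training setup mirroring the reported hyper-parameters:
# model = BiLSTMClassifier()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # momentum value assumed
# loss_fn = nn.CrossEntropyLoss()
# ...inside the loop: torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # gradient threshold of 1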

2.3.4. One-Dimensional Convolutional Neural Network (1D-CNN)

A lightweight 1D-CNN model is developed for classifying EEG signals into PD and healthy categories [14]. The architecture comprises two convolutional layers for feature extraction. The first layer employs 16 filters of size 5, while the second layer uses 8 filters. To preserve spatial dimensions, ‘causal’ padding is applied. Following each convolutional layer, a ReLU activation function introduces nonlinearity, and a normalization layer stabilizes training.
The convolutional operation is mathematically defined as
$$\mathrm{Output}(j) = \mathrm{ReLU}\!\left( \sum_{l=0}^{L-1} \theta(l) \cdot \mathrm{Signal}(j - l) + \gamma \right)$$
where θ(l) represents the convolutional filter weights, Signal(j) is the input EEG signal, and γ denotes the bias term.
After the convolutional layers, a global average pooling (GAP) layer is employed to condense the extracted feature maps into compact representations, effectively reducing model complexity while retaining the most salient information. The resulting feature vectors are then passed to a fully connected layer with two output nodes, corresponding to the binary classification task of distinguishing between healthy controls and Parkinson’s disease patients. A softmax layer generates class probabilities, and a classification layer determines the final predictions. The training is conducted using the SGDM optimizer, with an initial learning rate of 0.01, 50 epochs, and a mini-batch size of 35. To prevent exploding gradients, gradient clipping is implemented, ensuring stability during training.
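A compact PyTorch sketch of this 1D-CNN is shown below for illustration. The kernel size of the second convolution (taken as 5) and the normalization type (batch normalization here) are assumptions, as the paper leaves them unspecified; the same SGDM settings as in the BiLSTM sketch would apply.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN1D(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 16, kernel_size=5)   # 16 filters of size 5
        self.norm1 = nn.BatchNorm1d(16)                # normalization layer (type assumed)
        self.conv2 = nn.Conv1d(16, 8, kernel_size=5)   # 8 filters (kernel size assumed)
        self.norm2 = nn.BatchNorm1d(8)
        self.fc = nn.Linear(8, num_classes)

    def forward(self, x):                              # x: (batch, 1, 2500)
        x = F.pad(x, (4, 0))                           # 'causal' padding: kernel_size - 1 zeros on the left
        x = self.norm1(F.relu(self.conv1(x)))
        x = F.pad(x, (4, 0))
        x = self.norm2(F.relu(self.conv2(x)))
        x = x.mean(dim=-1)                             # global average pooling over time
        return self.fc(x)                              # class logits; softmax applied in the loss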

2.4. Evaluation Method

This section presents the experimental evaluation method to obtain results using the newly developed ParEEG database, which consists of 203,520 EEG samples. The primary objective of these experiments is to evaluate Parkinson’s disease (PD) detection models, with a particular emphasis on channel-wise analysis and the effects of cross-stimulation evaluation on classification performance. Since EEG signals capture brain activity across multiple channels, each corresponding to electrical activity from different brain regions, a channel-wise approach provides valuable insights into the specific neural patterns associated with PD. Given that certain brain regions may exhibit more pronounced abnormalities than others, analyzing EEG channels individually helps identify the most discriminative ones, improving classification robustness and interpretability. This approach enables the assessment of regional variations in brain activity, enhances feature selection by focusing on highly informative channels, and offers the potential to optimize EEG setups by reducing the number of required electrodes. Notably, identifying the most relevant EEG channels can pave the way for single-channel EEG devices, making real-world implementation more feasible for clinical and wearable applications.
To achieve these objectives, we analyze the performance of four PD detection models: Support Vector Machine (SVM) [16], Collaborative Representation Classifier (CRC) [32], Long Short-Term Memory (LSTM) [23], and Convolutional Neural Network (CNN) [14]. Although the ParEEG dataset contains a large number of EEG segments (203,520), these are derived from a cohort of 60 subjects (30 PD and 30 healthy controls). To balance model complexity with the available cohort size and reduce the risk of overfitting, we employ lightweight deep learning architectures (CNN and LSTM). These models are configured with relatively few hidden layers, making them well-suited for small- to medium-sized datasets while still effectively capturing the spatial and temporal dynamics of EEG signals. To further reduce the risk of overfitting, classification was performed using 10-fold cross-validation at the subject level, ensuring that data from the same subject did not appear in both training and test sets. Each experiment is repeated 10 times with random splits, and the average accuracy and standard deviation are reported to ensure a reliable and statistically sound performance analysis.
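A subject-level cross-validation loop of this kind can be sketched as follows, assuming scikit-learn's GroupKFold with subject identifiers as the grouping variable so that no subject contributes segments to both the training and test folds; the function and variable names are illustrative rather than taken from the paper.

import numpy as np
from sklearn.model_selection import GroupKFold

def subject_level_cv(X, y, subject_ids, make_model, n_splits=10):
    """X: (N, 2500) single-channel segments; y: 0/1 labels; subject_ids: (N,) subject index."""
    accuracies = []
    for train_idx, test_idx in GroupKFold(n_splits=n_splits).split(X, y, groups=subject_ids):
        model = make_model()                          # factory returning a fresh classifier with fit/predict
        model.fit(X[train_idx], y[train_idx])
        acc = np.mean(model.predict(X[test_idx]) == y[test_idx])
        accuracies.append(acc)
    return float(np.mean(accuracies)), float(np.std(accuracies))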
For a systematic evaluation, the ParEEG database is divided into training and testing subsets. Specifically, 50% of the subjects from both the healthy and PD groups, along with their respective EEG recordings, are allocated to the training set, while the remaining 50% constitute the testing set. The performance of PD detection models is examined under three distinct evaluation scenarios to incorporate channel-wise classification analysis.
Evaluation 1—Within-Stimulation: This serves as the baseline experiment, in which the training and testing EEG samples are derived from the same stimulus but are recorded at different time instances within a given Data Collection Protocol. The channel-wise accuracy distribution is examined to determine the most informative EEG channels contributing to reliable Parkinson’s disease detection.
Evaluation 2—Cross-Stimulation: The training and testing EEG samples belong to different stimuli within the same Data Collection Protocol (either Data Collection Protocol 1 or Data Collection Protocol 2). Channel-wise performance variation is examined to assess how training and testing on different stimuli influences classification accuracy across individual EEG channels.
Evaluation 3—Cross-Stimulation: Training samples are taken from a specific stimulus (e.g., rec_{p1}^{1}) in Data Collection Protocol 1, while testing samples belong to a different stimulus (e.g., relax_{p2}^{1}) in Data Collection Protocol 2. This evaluation provides insight into how channel-wise PD detection generalizes across protocols with different stimulus conditions.
In all three evaluation scenarios, EEG samples are randomly selected in a non-overlapping manner, ensuring unbiased classification. The following subsections provide a detailed discussion of channel-wise results, highlighting key observations on the discriminative power of individual EEG channels in PD detection.
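To make a cross-stimulation split concrete, the sketch below assembles the training and testing sets for one such experiment. It assumes the segments are stored per stimulus in a dictionary keyed by the Table 1 notation, with subject identifiers kept alongside each segment; all names are hypothetical.

import numpy as np

def cross_stimulation_split(data, train_stim, test_stim, train_subjects, test_subjects):
    """data[stim] -> (X, y, subjects), with X of shape (n_segments, 2500) for one channel."""
    X_tr, y_tr, s_tr = data[train_stim]        # e.g. train_stim = "rec_p1_1"
    X_te, y_te, s_te = data[test_stim]         # e.g. test_stim = "horror_p2_1"
    tr = np.isin(s_tr, train_subjects)         # keep only the training subjects' segments
    te = np.isin(s_te, test_subjects)          # keep only the held-out subjects' segments
    return X_tr[tr], y_tr[tr], X_te[te], y_te[te]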

3. Results

This section presents the detailed experimental results based on the evaluation protocol described in Section 2.4. Further, detailed statistical results obtained for Evaluation 1, 2, and 3 are reported in the following subsections.

3.1. Evaluation 1

Table 2 and Table 3 report the classification accuracy obtained for Data Collection Protocol (DCP) 1 and 2, respectively. Figure 3 and Figure 4 present the channel-wise performance comparison across the four algorithms and illustrate the overall quantitative performance of the best-performing machine learning and deep learning algorithms, respectively, for DCP 1. Figure 5 and Figure 6 depict the corresponding results for DCP 2. Overall, outstanding performance is observed across all the algorithms, with no single PD detection algorithm being exceptionally superior to the others. While average classification accuracy varies across individual EEG channels, Table 2 and Table 3 reveal a remarkable 100% accuracy in Evaluation 1 for both DCP 1 and DCP 2 across several channels.

3.1.1. Observations Related to Evaluation 1 Based on DCP 1

The key observations from Evaluation 1 for EEG signals acquired under Data Collection Protocol 1 are summarized as follows:
  • The CRC and LSTM algorithms demonstrated exceptional performance, attributable to the robustness of CRC in handling EEG signal variability and the capacity of LSTMs to capture long-term temporal dependencies in Parkinson’s EEG data. CNN also showed consistently strong performance, whereas SVM yielded comparatively lower accuracy, likely due to its reliance on manually extracted features that may not adequately represent the complex, nonlinear characteristics of EEG signals.
  • Comparing the classification accuracy across different stimuli in DCP 1, the horror and comedy stimuli yielded comparatively better performance across most algorithms, suggesting that these stimuli may evoke stronger neural resonances, thereby enabling the models to more effectively differentiate Parkinson’s-affected EEG patterns from healthy ones.
  • The best-performing EEG channels across all algorithms include frontal (Fp1, F9, F7), fronto-central (Fc5, Fc1, Fc2), central–parietal (Cp2), and parietal (P8), achieving an average classification accuracy ranging from 80% to 95%. This could be attributed to Parkinson’s disease being associated with widespread alterations in EEG spectral power and functional connectivity, particularly affecting the frontal and parietal regions [33,34]. In the eye-closed resting state, healthy individuals typically exhibit dominant alpha rhythms in posterior regions, which are often reduced or disrupted in PD patients [35]. Such disruptions manifest as altered activity patterns in parietal and central–parietal channels. Furthermore, during relaxed wakefulness, frontal and fronto-central regions often exhibit significant changes in EEG power and coherence in individuals with PD [36], contributing to distinguishable patterns that can support effective classification. These findings suggest that channel-wise EEG analysis is a valuable approach for identifying informative features and optimizing electrode selection in the PD detection system.

3.1.2. Observations Related to Evaluation 1 Based on DCP 2

The key observations from Evaluation 1 for EEG signals acquired under Data Collection Protocol 2 are summarized as follows:
  • In DCP-2, the highest classification accuracy of 100% was achieved with the CRC and LSTM classification methods. Across all algorithms, classification accuracy was highest for the L7T_{p2}^{1} vs. L7T_{p2}^{2} evaluation, while it was relatively lower in the L7T_{p2}^{1} vs. L7T_{p2}^{3} and L7T_{p2}^{1} vs. L7T_{p2}^{4} evaluations. This suggests that the L7T flickering pattern induces strong and consistent neural responses, particularly in short-term sequential comparisons, thereby facilitating more accurate differentiation of PD signals from healthy controls.
  • The best-performing channels (F7, F9, Fc1, C3, P4, Cp2, Fc2, and Fp2) demonstrated high classification accuracy, spanning frontal, fronto-central, central, and parietal scalp regions. Their consistent performance highlights strong discriminative potential in Parkinson’s disease detection. The L7T flickering pattern, designed to elicit steady-state visual responses, may enhance neural activity effectively captured by fronto-central and parietal channels. The resulting stimulus-induced signal variations across these regions likely contribute to the enhanced classification performance observed under visual stimulation.

3.2. Evaluation 2

In the Within-Stimulation Evaluation, models are tested on the same stimulus recorded at different time points to capture intra-stimulus variations. However, this approach does not adequately assess the generalizability of algorithms across varying stimulus conditions. Evaluation 2 addresses this limitation by training and testing the PD detection models on different stimuli within the same Data Collection Protocol, thereby introducing EEG variability associated with distinct cognitive and emotional responses. This makes classification more challenging, providing a more rigorous test of model robustness under diverse neural conditions.
Table 4 and Table 5 report the classification accuracy obtained for DCP 1 and DCP 2, respectively. Figure 7 and Figure 8 present the channel-wise performance comparison across the four algorithms and illustrate the overall quantitative performance of the best-performing machine learning and deep learning algorithms, respectively, for DCP 1. Figure 9 and Figure 10 depict the corresponding results for DCP 2. Despite the increased complexity of this evaluation, the classification performance remains strong, demonstrating the robustness of the algorithms in handling EEG signal variations.

3.2.1. Observations Related to Evaluation 2 Based on DCP 1

The key observations from Evaluation 2 for EEG signals acquired under Data Collection Protocol 1 are summarized as follows:
  • An outstanding performance was observed with CRC and LSTM algorithms, attaining the highest average classification accuracy of 99–100%, indicating their superior ability to capture spatial and temporal dependencies in Parkinson’s EEG signals. In comparison, CNN also demonstrated strong performance in PD detection, while SVM yielded the lowest accuracy among the evaluated methods.
  • In Parkinson’s patients, emotional processing and cognitive engagement involve frontal and limbic system regions, which are often affected by neurodegeneration. This may influence EEG patterns differently across Horror and Comedy stimuli compared to the Relax stimulus, resulting in better classification accuracy. Such differences could be attributed to stronger neural activation in response to emotionally and cognitively engaging stimuli, leading to clearer differentiation between Parkinson’s and healthy EEG patterns. In contrast, the Relax stimulus might evoke weaker cortical responses, making classification more challenging.
  • The best-performing channels across all algorithms include F7, F9, Fc5, Fc2, C3, Cp2, P3, and Fp2. The frontal and fronto-central areas (F7, F9, Fc5, and Fc2) are linked to cognitive processing and attention, which are likely heightened during emotional stimuli such as Horror and Comedy, thereby contributing to outstanding classification accuracy.

3.2.2. Observations Related to Evaluation 2 Based on DCP 2

The key observations from Evaluation 2 for EEG signals acquired under Data Collection Protocol 2 are summarized as follows:
  • Again, CRC and LSTM consistently outperform the other models, reinforcing their robustness in classifying Parkinson’s EEG signals. Accuracy peaks in Relax vs. Horror, reaching 93–100% for CRC and 93–99% for LSTM, which suggests a strong neural contrast between these conditions.
  • The higher classification accuracy for Relax vs. Horror indicates that the horror stimulus elicits stronger neural activity compared to Relax, making the EEG patterns more distinguishable.
  • The best-performing channels are located in frontal (F7, F9), fronto-central (Fc1), central (C3), parietal (P4, Cp2), and occipital regions. These regions are linked to emotion processing, motor control, and sensory integration, all of which are affected in Parkinson’s, thereby explaining their contribution to high classification accuracy.

3.3. Evaluation 3

Table 6 summarizes the classification accuracy obtained from the cross-stimulation evaluation. Figure 11 presents the channel-wise performance comparison of all four algorithms, and Figure 12 illustrates the overall quantitative performance of the two handcrafted and two deep learning algorithms. The key observations from this evaluation are summarized as follows:
  • This evaluation introduces greater variability by training on rec_{p1}^{1} from Protocol 1 and testing on independent instances such as relax_{p2}^{1}, comedy_{p2}^{1}, horror_{p2}^{1}, and 7LT_{p2}^{1} from Protocol 2. The variation in recording conditions across protocols enables assessment of the models’ generalizability under different stimulus and temporal settings. As expected, CRC and LSTM achieve the highest accuracy, reaffirming their ability to capture spatial and temporal EEG patterns. CNN maintains consistent performance, whereas SVM lags behind, highlighting its sensitivity to cross-stimulation evaluation.
  • Among the test stimuli, rec_{p1}^{1,2} vs. horror_{p2}^{1} achieves the highest classification accuracy across all algorithms. This indicates that horror stimuli evoke distinct EEG responses that enhance PD classification relative to other conditions. The stronger emotional and cognitive engagement associated with horror likely leads to more pronounced neural differences between Parkinson’s and healthy subjects, thereby improving classification performance.
  • The most effective channels—F7, F9, Fc1, Fc2, Fc5, C4, Cp2, and P8—are primarily located in the frontal, fronto-central, and central regions, which are crucial for motor control, cognitive processing, and sensorimotor integration. These regions, often affected in Parkinson’s disease, consistently yield reliable classification performance across evaluations, even under varying stimulus and protocol conditions.
Figure 11. Illustrating the performance accuracy of the top four EEG channels across all four algorithms (Evaluation 3; only the four best-performing channels are presented for simplicity). (a) Channel 4 Accuracy Across Algorithms; (b) Channel 5 Accuracy Across Algorithms; (c) Channel 22 Accuracy Across Algorithms; (d) Channel 27 Accuracy Across Algorithms.
Figure 12. Illustrates the performance across 32 channels for PD detection (Evaluation 3; presenting the two best-performing algorithms: CRC and LSTM for simplicity).
Table 6. Average Classification accuracy of four PD detection algorithms for cross-stimulation Evaluation 3 using Data Collection Protocol 1 and Data Collection Protocol 2.
Ch | rec_{p1}^{1,2} vs. relax_{p2}^{1} (SVM / CRC / LSTM / CNN) | rec_{p1}^{1,2} vs. comedy_{p2}^{1} (SVM / CRC / LSTM / CNN) | rec_{p1}^{1,2} vs. horror_{p2}^{1} (SVM / CRC / LSTM / CNN) | rec_{p1}^{1,2} vs. 7LT_{p2}^{1} (SVM / CRC / LSTM / CNN)
1 | 80.6 ± 4.5 / 97.7 ± 2.8 / 95.8 ± 3.8 / 98.4 ± 1.1 | 79.9 ± 2.5 / 97.8 ± 2.1 / 97.3 ± 1.9 / 97.1 ± 1.6 | 81.5 ± 4.7 / 99.9 ± 0.2 / 98.3 ± 1.6 / 98.7 ± 1.1 | 79.0 ± 3.7 / 98.8 ± 1.3 / 98.4 ± 1.2 / 96.5 ± 0.9
2 | 72.2 ± 3.3 / 92.4 ± 3.5 / 97.8 ± 2.2 / 95.1 ± 3.3 | 71.2 ± 3.1 / 96.8 ± 2.1 / 98.8 ± 1.5 / 97.5 ± 1.3 | 72.4 ± 3.2 / 93.4 ± 3.0 / 97.7 ± 1.2 / 93.3 ± 2.0 | 70.5 ± 3.5 / 93.2 ± 3.2 / 98.7 ± 1.4 / 97.7 ± 1.5
3 | 79.8 ± 3.8 / 99.9 ± 0.2 / 99.7 ± 0.4 / 98.3 ± 1.3 | 78.5 ± 4.6 / 99.1 ± 1.2 / 98.0 ± 1.9 / 98.7 ± 0.7 | 80.6 ± 4.5 / 98.3 ± 1.5 / 99.8 ± 0.4 / 98.5 ± 1.4 | 78.6 ± 4.2 / 99.2 ± 0.8 / 98.3 ± 1.2 / 97.9 ± 1.7
4 | 86.0 ± 4.2 / 99.8 ± 0.3 / 99.8 ± 0.4 / 99.5 ± 0.4 | 87.8 ± 3.6 / 99.6 ± 1.1 / 98.9 ± 1.4 / 99.1 ± 0.8 | 87.0 ± 4.0 / 100.0 ± 0.0 / 99.8 ± 0.5 / 98.7 ± 0.8 | 83.6 ± 3.1 / 99.5 ± 0.4 / 99.7 ± 0.4 / 98.2 ± 1.5
5 | 90.1 ± 3.3 / 98.6 ± 1.5 / 98.7 ± 2.5 / 98.3 ± 2.4 | 89.8 ± 2.8 / 100.0 ± 0.0 / 98.1 ± 2.9 / 98.1 ± 3.5 | 90.7 ± 3.1 / 99.7 ± 0.4 / 99.3 ± 2.0 / 97.5 ± 2.2 | 87.6 ± 3.2 / 98.6 ± 1.4 / 98.9 ± 3.0 / 97.8 ± 2.9
6 | 83.4 ± 3.4 / 99.8 ± 0.3 / 99.9 ± 0.2 / 98.0 ± 2.0 | 81.2 ± 3.0 / 98.3 ± 1.4 / 98.8 ± 1.8 / 97.9 ± 1.0 | 83.7 ± 4.1 / 100.0 ± 0.0 / 99.9 ± 0.3 / 98.6 ± 0.9 | 83.3 ± 4.1 / 99.6 ± 0.5 / 99.9 ± 0.2 / 98.6 ± 1.0
7 | 84.4 ± 3.8 / 99.9 ± 0.2 / 98.7 ± 1.2 / 98.2 ± 1.7 | 82.8 ± 3.7 / 98.9 ± 1.1 / 98.8 ± 1.3 / 97.9 ± 1.0 | 85.3 ± 3.9 / 100.0 ± 0.0 / 98.9 ± 0.9 / 96.7 ± 1.8 | 83.0 ± 3.3 / 98.8 ± 1.6 / 98.2 ± 1.9 / 97.9 ± 0.7
8 | 81.9 ± 3.3 / 99.1 ± 1.2 / 99.4 ± 1.0 / 95.7 ± 1.7 | 83.2 ± 3.8 / 99.6 ± 1.1 / 98.4 ± 1.6 / 96.6 ± 2.2 | 83.2 ± 3.4 / 98.6 ± 1.4 / 99.5 ± 1.0 / 96.4 ± 2.2 | 77.6 ± 3.1 / 98.8 ± 0.9 / 99.7 ± 0.5 / 97.2 ± 1.8
9 | 74.0 ± 3.1 / 98.3 ± 1.9 / 99.4 ± 1.0 / 99.0 ± 0.9 | 73.2 ± 2.8 / 100.0 ± 0.0 / 97.3 ± 1.6 / 98.7 ± 1.3 | 74.4 ± 3.6 / 99.3 ± 0.8 / 98.4 ± 1.2 / 98.7 ± 1.1 | 73.3 ± 2.7 / 98.8 ± 1.2 / 99.2 ± 1.1 / 98.3 ± 1.2
10 | 58.9 ± 3.6 / 99.7 ± 0.7 / 99.9 ± 0.3 / 97.9 ± 1.3 | 59.5 ± 3.6 / 98.7 ± 1.4 / 98.9 ± 1.4 / 97.9 ± 1.0 | 58.8 ± 2.9 / 100.0 ± 0.0 / 100.0 ± 0.0 / 97.3 ± 2.4 | 59.2 ± 4.8 / 99.8 ± 0.5 / 99.8 ± 0.5 / 98.1 ± 1.5
11 | 66.7 ± 4.3 / 98.9 ± 0.9 / 99.6 ± 0.3 / 96.1 ± 2.2 | 68.5 ± 5.8 / 99.4 ± 0.4 / 99.0 ± 0.9 / 95.8 ± 1.4 | 66.9 ± 4.5 / 99.4 ± 0.8 / 99.8 ± 0.3 / 96.1 ± 3.1 | 67.0 ± 4.1 / 96.6 ± 2.4 / 97.9 ± 1.1 / 94.3 ± 1.1
12 | 75.8 ± 4.3 / 97.6 ± 2.5 / 100.0 ± 0.0 / 96.7 ± 1.8 | 76.3 ± 4.0 / 99.3 ± 0.4 / 99.6 ± 0.7 / 95.3 ± 2.2 | 77.4 ± 5.5 / 100.0 ± 0.0 / 99.8 ± 0.4 / 95.4 ± 2.7 | 74.1 ± 2.9 / 98.1 ± 2.0 / 99.9 ± 0.2 / 96.5 ± 1.5
13 | 69.6 ± 2.6 / 94.0 ± 3.5 / 99.1 ± 1.2 / 96.2 ± 2.4 | 68.7 ± 4.1 / 93.0 ± 4.2 / 94.1 ± 2.8 / 93.3 ± 4.6 | 69.7 ± 2.9 / 92.3 ± 3.9 / 98.9 ± 1.7 / 88.7 ± 4.0 | 68.4 ± 3.2 / 96.3 ± 3.5 / 98.3 ± 1.0 / 95.1 ± 2.6
14 | 80.0 ± 4.4 / 100.0 ± 0.0 / 99.4 ± 1.2 / 98.1 ± 2.0 | 79.6 ± 5.1 / 98.3 ± 1.8 / 97.2 ± 3.0 / 96.6 ± 2.0 | 80.0 ± 4.4 / 100.0 ± 0.0 / 99.6 ± 0.9 / 97.7 ± 1.4 | 80.0 ± 4.4 / 99.8 ± 0.5 / 99.6 ± 0.8 / 97.4 ± 2.4
15 | 77.1 ± 5.2 / 99.4 ± 0.8 / 98.9 ± 1.4 / 98.4 ± 1.2 | 79.7 ± 5.1 / 99.8 ± 0.3 / 98.7 ± 1.3 / 97.9 ± 1.1 | 77.2 ± 5.4 / 99.8 ± 0.4 / 97.9 ± 1.4 / 98.3 ± 1.0 | 76.8 ± 5.1 / 100.0 ± 0.0 / 99.1 ± 1.3 / 98.7 ± 1.0
16 | 77.7 ± 5.5 / 99.4 ± 0.7 / 99.9 ± 0.2 / 96.7 ± 1.9 | 76.4 ± 4.3 / 98.1 ± 1.6 / 96.7 ± 2.8 / 95.5 ± 2.0 | 78.6 ± 5.4 / 99.7 ± 0.5 / 98.3 ± 1.2 / 96.1 ± 4.1 | 76.0 ± 5.4 / 98.8 ± 1.6 / 99.7 ± 0.5 / 96.3 ± 1.5
17 | 74.8 ± 4.1 / 99.7 ± 0.7 / 99.7 ± 0.4 / 98.6 ± 1.5 | 76.5 ± 4.2 / 99.4 ± 1.1 / 98.7 ± 0.9 / 97.3 ± 1.0 | 75.5 ± 4.8 / 100.0 ± 0.0 / 99.2 ± 0.7 / 98.4 ± 1.4 | 73.9 ± 4.3 / 99.3 ± 0.7 / 99.1 ± 1.2 / 98.3 ± 1.2
18 | 73.5 ± 4.2 / 99.7 ± 0.4 / 99.7 ± 0.5 / 96.7 ± 2.5 | 73.1 ± 5.2 / 99.9 ± 0.2 / 98.5 ± 1.6 / 97.4 ± 1.6 | 74.5 ± 4.9 / 99.9 ± 0.2 / 100.0 ± 0.0 / 97.4 ± 2.0 | 71.8 ± 4.4 / 99.7 ± 0.4 / 98.9 ± 1.0 / 97.7 ± 0.9
19 | 76.4 ± 9.6 / 97.1 ± 2.6 / 96.7 ± 1.9 / 96.2 ± 2.4 | 75.8 ± 8.0 / 99.7 ± 0.3 / 96.1 ± 2.0 / 98.2 ± 1.2 | 76.8 ± 9.3 / 99.3 ± 0.8 / 99.1 ± 0.9 / 96.1 ± 2.4 | 75.2 ± 8.3 / 96.9 ± 2.4 / 97.3 ± 1.6 / 96.1 ± 2.5
20 | 85.7 ± 2.5 / 98.4 ± 1.5 / 99.2 ± 0.6 / 96.6 ± 2.6 | 85.2 ± 2.8 / 99.6 ± 0.4 / 98.8 ± 2.0 / 95.7 ± 1.7 | 86.7 ± 3.1 / 99.6 ± 0.5 / 99.8 ± 0.5 / 96.4 ± 1.7 | 84.9 ± 2.2 / 99.4 ± 0.5 / 99.9 ± 0.2 / 96.2 ± 1.7
21 | 77.2 ± 8.1 / 95.2 ± 3.1 / 99.6 ± 1.0 / 98.4 ± 1.4 | 80.9 ± 7.5 / 100.0 ± 0.0 / 99.4 ± 1.5 / 98.1 ± 1.1 | 79.0 ± 8.8 / 97.6 ± 1.6 / 99.1 ± 1.0 / 97.9 ± 0.8 | 76.2 ± 7.3 / 95.8 ± 1.5 / 99.7 ± 0.4 / 96.6 ± 1.8
22 | 87.8 ± 2.7 / 98.9 ± 1.7 / 99.8 ± 0.4 / 97.2 ± 2.6 | 86.0 ± 3.7 / 99.9 ± 0.2 / 99.4 ± 0.8 / 97.8 ± 1.2 | 88.7 ± 3.8 / 99.9 ± 0.4 / 99.7 ± 0.7 / 96.7 ± 2.5 | 84.6 ± 3.7 / 98.6 ± 1.5 / 99.8 ± 0.3 / 96.8 ± 1.0
23 | 63.8 ± 2.9 / 98.8 ± 1.8 / 97.2 ± 2.4 / 97.8 ± 2.4 | 67.5 ± 4.2 / 93.4 ± 3.0 / 91.6 ± 2.3 / 94.9 ± 1.2 | 65.3 ± 3.3 / 96.7 ± 3.3 / 94.6 ± 3.2 / 98.5 ± 1.1 | 64.0 ± 2.6 / 95.1 ± 2.4 / 94.5 ± 3.5 / 95.8 ± 2.6
24 | 77.0 ± 4.6 / 97.3 ± 1.4 / 100.0 ± 0.0 / 98.8 ± 0.7 | 79.3 ± 4.4 / 99.6 ± 0.5 / 98.3 ± 1.5 / 98.9 ± 1.0 | 78.1 ± 4.7 / 99.3 ± 0.8 / 99.9 ± 0.2 / 98.6 ± 1.3 | 76.6 ± 4.1 / 99.4 ± 0.7 / 99.9 ± 0.2 / 98.8 ± 0.8
25 | 84.5 ± 4.4 / 97.5 ± 3.0 / 99.4 ± 1.0 / 96.2 ± 2.7 | 84.2 ± 4.8 / 99.6 ± 0.4 / 98.4 ± 2.5 / 98.4 ± 1.1 | 86.0 ± 3.8 / 99.4 ± 0.6 / 99.3 ± 1.1 / 97.3 ± 2.1 | 82.3 ± 4.0 / 98.2 ± 2.4 / 99.3 ± 1.0 / 96.3 ± 2.7
26 | 69.0 ± 3.5 / 95.4 ± 2.3 / 98.5 ± 1.3 / 97.7 ± 1.8 | 72.7 ± 6.0 / 97.8 ± 1.7 / 98.8 ± 1.5 / 97.4 ± 1.5 | 70.3 ± 4.4 / 97.6 ± 1.6 / 99.1 ± 0.9 / 97.5 ± 1.9 | 68.9 ± 3.3 / 96.2 ± 1.4 / 99.0 ± 0.7 / 97.3 ± 1.5
27 | 85.3 ± 1.9 / 97.1 ± 2.1 / 98.8 ± 0.9 / 95.6 ± 2.2 | 85.3 ± 2.3 / 99.9 ± 0.2 / 98.9 ± 1.8 / 97.0 ± 1.6 | 87.0 ± 1.9 / 98.7 ± 1.4 / 99.9 ± 0.3 / 97.9 ± 1.5 | 83.5 ± 1.9 / 97.5 ± 1.5 / 98.5 ± 1.3 / 95.9 ± 2.8
28 | 79.6 ± 3.4 / 96.6 ± 1.9 / 98.4 ± 1.8 / 97.6 ± 1.5 | 82.6 ± 3.9 / 98.1 ± 1.6 / 98.9 ± 1.5 / 97.3 ± 1.6 | 80.4 ± 3.7 / 97.7 ± 1.7 / 99.4 ± 1.0 / 97.2 ± 2.6 | 78.3 ± 3.4 / 97.9 ± 1.4 / 98.8 ± 2.1 / 98.3 ± 1.9
29 | 73.6 ± 3.4 / 96.6 ± 2.1 / 97.6 ± 1.4 / 97.3 ± 1.3 | 75.0 ± 5.9 / 97.9 ± 1.7 / 97.9 ± 1.3 / 97.4 ± 1.2 | 73.6 ± 4.0 / 97.7 ± 1.7 / 99.1 ± 1.1 / 96.6 ± 1.4 | 73.0 ± 3.6 / 96.0 ± 2.3 / 95.7 ± 2.4 / 96.4 ± 2.1
30 | 73.8 ± 4.1 / 96.6 ± 2.6 / 99.1 ± 1.0 / 97.8 ± 1.5 | 74.7 ± 2.8 / 98.8 ± 1.4 / 99.2 ± 1.3 / 97.2 ± 1.5 | 75.8 ± 4.6 / 97.6 ± 1.6 / 99.7 ± 0.5 / 95.7 ± 3.9 | 72.7 ± 3.2 / 97.3 ± 2.1 / 99.1 ± 0.7 / 98.7 ± 0.9
31 | 79.6 ± 2.7 / 97.9 ± 3.3 / 96.9 ± 2.0 / 98.0 ± 1.1 | 81.0 ± 5.1 / 100.0 ± 0.0 / 96.1 ± 3.4 / 98.4 ± 1.2 | 80.6 ± 3.3 / 99.2 ± 1.4 / 99.1 ± 1.1 / 97.7 ± 1.2 | 77.6 ± 2.7 / 98.9 ± 1.4 / 96.2 ± 2.5 / 97.1 ± 1.3
32 | 80.7 ± 4.6 / 99.8 ± 0.3 / 98.6 ± 0.9 / 98.9 ± 1.0 | 82.7 ± 4.1 / 99.7 ± 1.1 / 99.2 ± 2.0 / 98.7 ± 0.7 | 82.8 ± 4.0 / 100.0 ± 0.0 / 98.3 ± 3.4 / 98.1 ± 1.2 | 80.1 ± 4.4 / 99.4 ± 0.6 / 99.4 ± 0.8 / 98.8 ± 0.7

4. Discussion

EEG has emerged as a powerful non-invasive biomarker for Parkinson’s disease (PD) detection, with recent studies employing resting-state, visually evoked potential, and task-induced paradigms to explore PD-related brain dysfunction [37]. However, beyond conventional resting-state analysis, researchers have increasingly explored dynamic EEG responses under diverse stimulus-based conditions to better capture PD-related neural changes [14,15,16,22,23]. In this context, our study introduces a novel hybrid stimulus-driven EEG Data Collection Protocol, which integrates two complementary paradigms: Resting-State Visually Evoked Potential (RSVEP) and Steady-State Visually Evoked Potential (SSVEP). Data Collection Protocol 1 (RSVEP-based) involves baseline recordings (eye-closed/open) and affective video stimuli (relaxation, comedy, and horror), while Data Collection Protocol 2 (SSVEP-based) features flickering alphanumeric sequences coupled with emotional video distractions. These Data Collection Protocols were designed to enrich the temporal and cognitive diversity of evoked EEG responses across 32 channels. By capturing both spontaneous and stimulus-driven neural activity, our approach provides a multidimensional view of PD-related brain alterations and offers a significant advancement over prior studies limited to either resting-state or single-frequency SSVEP paradigms.
To further enhance interpretability and clinical relevance, we performed a detailed single-channel analysis to evaluate the contribution of individual EEG electrodes to Parkinson’s disease (PD) detection. Each of the 32 channels was assessed independently across all experiments using both handcrafted and deep learning classifiers. This analysis enabled the identification of brain regions most strongly associated with PD-related abnormalities. Notably, channels located in the frontal (F7, F9), fronto-central (FC2), and central–parietal (CP2) regions consistently achieved the highest classification accuracies, often reaching 95–100% with both the CRC and LSTM classifier models across a variety of stimulus conditions and evaluation setups. These results are in strong agreement with prior studies [38,39], which have highlighted frontal and central regions as key sites of dysfunction in PD, particularly with respect to motor planning, execution, and cognitive control. Motor symptoms such as bradykinesia, rigidity, and tremor are strongly linked to disrupted beta-band activity and impaired fronto-central network dynamics, which are reflected in the high discriminative power of these regions [40]. In addition, non-motor features of PD, including cognitive impairment and visual processing deficits, are associated with altered oscillatory activity in frontal and parietal networks [41]. This aligns with our observation that stimulus-driven responses, particularly those involving visual evoked activity, reveal distinct differences between PD patients and healthy controls.
Channel-wise evaluation offers a more targeted and interpretable perspective; these benchmarking evaluation results can facilitate the design of more efficient and clinically viable EEG configurations, potentially reducing the number of electrodes required without compromising diagnostic performance. Building upon this channel-specific analysis, we presented an extensive three-tier evaluation framework to rigorously assess the robustness and generalizability of our classification models and presented the benchmark results. Traditional EEG-based PD detection studies often rely on intra-stimulus (within-stimulus) evaluation, where training and testing data are derived from the same type of stimulus, thereby limiting the ability to assess model performance under real-world variability [42,43,44]. In contrast, our framework progresses through three increasingly challenging evaluation protocols. The first level involves within-stimulus classification, where training and testing are performed on the same stimulus but at different time instances—serving as a baseline. The second level introduces cross-stimulus learning within the same Data Collection Protocol, training on one stimulus and testing on another (e.g., horror vs. comedy). The third and most rigorous level performs inter-protocol cross-stimulus evaluation, where models trained on Data Collection Protocol 1 are tested on different stimuli from Data Collection Protocol 2, simulating a realistic generalization challenge.
Across all three evaluations, our results consistently demonstrate the superior performance with CRC and LSTM classifiers, with accuracies frequently exceeding 95%. This multi-stage evaluation strategy allowed us to investigate not only the spatial relevance of individual EEG channels but also the temporal and contextual stability of PD-related neural features under diverse cognitive and emotional stimuli. Importantly, it highlights the potential of emotionally rich stimuli (such as horror and comedy) to enhance discriminability between PD and healthy controls, reinforcing the value of stimulus-based EEG paradigms. Overall, this robust framework goes beyond existing methods by accounting for both spatial (channel-specific) and contextual (stimulus-dependent) variability, offering a more reliable pathway toward the development of clinically deployable EEG-based PD diagnostic tools.
Although the present study demonstrates that a few central and frontal electrodes consistently yield high discriminative power for PD detection, translating these findings into a practical single-channel or reduced-channel diagnostic device presents challenges. First, EEG signals have inherently low signal-to-noise ratio, and reliance on a single electrode may increase susceptibility to artifacts (e.g., muscle, ocular, or environmental noise). Second, achieving consistent electrode placement is critical, as small deviations can alter recorded signals and compromise reproducibility. Third, extensive validation across larger, more diverse populations is required to establish the clinical reliability of such minimal configurations. Therefore, while our results suggest the feasibility of reduced-channel PD detection, further research and technological development are necessary before such systems can be deployed in clinical practice.
This study provides encouraging results for EEG-based PD detection, and several avenues can further extend its scope. While the classification of PD versus healthy controls is primarily exploratory rather than a clinical diagnostic application, this study establishes a baseline proof-of-concept framework. First, the resting-state baseline duration was limited to 60 s to minimize fatigue and maintain consistency across conditions; however, longer recordings (e.g., 3–5 min) may capture additional stable resting-state features and should be explored in future work. Second, the EEG acquisition was performed at 250 Hz, which adequately covers PD-related changes in the delta–beta range, but higher sampling rates (≥512 Hz) could enable investigation of gamma-band activity and finer temporal dynamics. Third, although participants refrained from medication on the day of recording to minimize pharmacological effects, dopaminergic medications (e.g., carbidopa–levodopa) may still influence beta-band EEG activity; this can be systematically examined with signal-level analyses, and future studies could incorporate longer washout periods to better isolate disease-specific neural signals. Finally, although all patients were clinically diagnosed using the UKPDS Brain Bank Criteria, future studies will incorporate detailed UPDRS motor subscores to examine EEG alterations across multiple symptom dimensions, consider the potential presence of atypical Parkinsonism (e.g., drug-induced and vascular), and adopt the updated MDS Clinical Diagnostic Criteria to align with evolving standards and enhance comparability across studies.
Future work will focus on expanding the ParEEG database to include additional Parkinsonian syndromes such as PSP and MSA, systematically evaluating medication effects, incorporating UPDRS subscores, and performing longitudinal data collection to enhance clinical relevance. These steps aim to build upon the baseline results presented in this study.
Finally, future research could explore a broader range of stimuli, including motor tasks and cognitive challenges, as PD patients commonly exhibit bradykinesia, tremors, and cognitive impairments. Analyzing EEG responses to such tasks may provide deeper insights into disease-specific neural dysfunctions and further enhance detection accuracy.

5. Conclusions and Future Work

With the global rise in Parkinson's disease (PD) prevalence, the need for accurate and early detection methods is receiving increasing attention, and Electroencephalography (EEG) has emerged as a valuable tool for detecting and monitoring the disease. This study investigated a cross-stimulation evaluation framework for EEG-based PD detection and examined the impact of different brain regions and stimulus conditions on classification performance. To achieve this, the analysis employed two handcrafted classifiers, Collaborative Representation-based Classification (CRC) and Support Vector Machine (SVM), and two deep learning models, Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN). The evaluations were performed on the Parkinson's disease EEG (ParEEG) database, comprising 203,520 EEG samples from 60 subjects: 30 healthy controls and 30 individuals diagnosed with Parkinson's disease. The experimental results demonstrated that CRC and LSTM consistently outperformed the other classification models across different evaluation protocols, highlighting their robustness in detecting PD-affected EEG signals.
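For completeness, a minimal NumPy sketch of the CRC decision rule (in the spirit of Zhang et al. [32]) is given below; the regularization value, normalization, and function names are illustrative assumptions rather than the exact configuration used in this work.

```python
# Minimal sketch of Collaborative Representation-based Classification (CRC).
# X_train is a (n_features, n_samples) matrix whose columns are training samples;
# lam is an assumed ridge parameter, not the value used in the paper.
import numpy as np

def crc_fit(X_train, y_train, lam=1e-3):
    D = X_train / (np.linalg.norm(X_train, axis=0, keepdims=True) + 1e-12)  # l2-normalise columns
    P = np.linalg.inv(D.T @ D + lam * np.eye(D.shape[1])) @ D.T             # precomputed projection
    return D, P, np.asarray(y_train)

def crc_predict(sample, model):
    D, P, labels = model
    sample = sample / (np.linalg.norm(sample) + 1e-12)
    alpha = P @ sample                                   # collaborative coefficients over all classes
    residuals = {}
    for c in np.unique(labels):
        idx = labels == c
        recon = D[:, idx] @ alpha[idx]                   # class-specific reconstruction
        residuals[c] = np.linalg.norm(sample - recon) / (np.linalg.norm(alpha[idx]) + 1e-12)
    return min(residuals, key=residuals.get)             # class with the smallest regularised residual
```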
Notably, channels located in the frontal (F7, F9), fronto-central (FC2), and centro-parietal (CP2) regions consistently achieved the highest classification accuracies, often in the 95–100% range with both the CRC and LSTM classifiers across a variety of stimulus conditions and evaluation setups. These findings suggest that future research may not require full-brain EEG recordings, as focusing on these regions could be sufficient for accurate PD classification. Additionally, multi-pattern evaluation through within-stimulus and cross-stimulus classification revealed that emotional stimuli influenced classification performance, with the Horror and Comedy stimuli yielding the highest accuracy. The high classification accuracy, particularly for emotionally engaging stimuli, underscores the potential of stimulus-based EEG analysis for PD detection. In the current work, the stimuli were exclusively visual, focusing on RSVEP, SSVEP, and emotional video paradigms. Future studies will incorporate motor tasks (e.g., finger tapping and gait-related EEG paradigms) and cognitive challenges (e.g., working memory tasks), which may provide complementary insights into PD-related neural dysfunction and improve the sensitivity of EEG-based biomarkers.
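As an indicative example of a per-channel sequence model, the following Keras sketch builds a small LSTM that maps a single-channel EEG segment to a PD-versus-control probability; the layer sizes and training settings are assumptions and do not reproduce the architecture benchmarked in this study.

```python
# Indicative per-channel LSTM classifier (TensorFlow/Keras); architecture and
# hyperparameters are assumptions, not the configuration benchmarked in the paper.
import tensorflow as tf
from tensorflow.keras import layers

def build_lstm(timesteps):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, 1)),   # one EEG channel as a univariate sequence
        layers.LSTM(64),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability of PD vs. healthy control
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# X: (n_segments, timesteps, 1) array of segments from a single channel (e.g., FC2); y: 0/1 labels
# model = build_lstm(timesteps=X.shape[1])
# model.fit(X, y, epochs=30, batch_size=32, validation_split=0.1)
```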
Given these promising outcomes, the proposed cross-stimulation evaluation framework benchmarked across individual channels holds significant potential as a reliable tool for aiding clinicians in PD diagnosis. The strong classification performance across multiple evaluation settings further highlights the feasibility of EEG-based biomarkers and deep learning techniques in advancing PD detection.

Author Contributions

Conceptualization, K.P., N.V., R.G., T.F. and R.R.; Methodology, K.P., N.V. and M.L.d.A.; Validation, N.V., R.G., R.R. and T.F.; Data curation, K.P., M.L.d.A. and T.F.; Writing – original draft, K.P. and M.L.d.A.; Writing – review & editing, R.G., N.V., R.R. and T.F.; Supervision, N.V. and R.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Visvesvaraya Ph.D. Scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India.

Institutional Review Board Statement

The study was conducted in accordance with institutional ethical guidelines and approved by the Institutional Human Ethics Committee of Goa University (Approval Code: GU-DRDRM/IHEC-Cert/2023/169) and the Institutional Ethics Committee of Goa Medical College and Hospital (Approval Code: GMCIEC/2024/338).

Informed Consent Statement

Written informed consent was obtained from all participants prior to EEG data collection. All procedures were explained to participants, and confidentiality of the collected data was ensured.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors sincerely thank the Department of Neurology, Goa Medical College and Hospital, for their support during EEG data acquisition. The authors also express their sincere gratitude to all the healthy volunteers from Goa University and various regions of Goa who participated in this study, and to the Vetrekar family for their kind support and assistance during the data collection process.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Philipe de Souza Ferreira, L.; André da Silva, R.; Marques Mesquita da Costa, M.; Moraes de Paiva Roda, V.; Vizcaino, S.; Janisset, N.R.L.L.; Ramos Vieira, R.; Marcos Sanches, J.; Maria Soares Junior, J.; de Jesus Simões, M. Sex differences in Parkinson’s Disease: An emerging health question. Clinics 2022, 77, 100121.
  2. Zirra, A.; Rao, S.C.; Bestwick, J.; Rajalingam, R.; Marras, C.; Blauwendraat, C.; Mata, I.F.; Noyce, A.J. Gender differences in the prevalence of Parkinson’s disease. Mov. Disord. Clin. Pract. 2023, 10, 86–93.
  3. Marsili, L.; Rizzo, G.; Colosimo, C. Diagnostic criteria for Parkinson’s disease: From James Parkinson to the concept of prodromal disease. Front. Neurol. 2018, 9, 156.
  4. Postuma, R.B.; Berg, D.; Stern, M.; Poewe, W.; Olanow, C.W.; Oertel, W.; Obeso, J.; Marek, K.; Litvan, I.; Lang, A.E.; et al. MDS clinical diagnostic criteria for Parkinson’s disease. Mov. Disord. 2015, 30, 1591–1601.
  5. Narayanan, N.S.; Rodnitzky, R.L.; Uc, E.Y. Prefrontal dopamine signaling and cognitive symptoms of Parkinson’s disease. Rev. Neurosci. 2013, 24, 267–278.
  6. Ye, Z. Mapping neuromodulatory systems in Parkinson’s disease: Lessons learned beyond dopamine. Curr. Med. 2022, 1, 15.
  7. Buddhala, C.; Loftin, S.K.; Kuley, B.M.; Cairns, N.J.; Campbell, M.C.; Perlmutter, J.S.; Kotzbauer, P.T. Dopaminergic, serotonergic, and noradrenergic deficits in Parkinson disease. Ann. Clin. Transl. Neurol. 2015, 2, 949–959.
  8. Beach, T.G.; Adler, C.H. Importance of low diagnostic accuracy for early Parkinson’s disease. Mov. Disord. 2018, 33, 1551–1554.
  9. Sheng, J.; Wang, B.; Zhang, Q.; Liu, Q.; Ma, Y.; Liu, W.; Shao, M.; Chen, B. A novel joint HCPMMP method for automatically classifying Alzheimer’s and different stage MCI patients. Behav. Brain Res. 2019, 365, 210–221.
  10. Raghavendra, U.; Acharya, U.R.; Adeli, H. Artificial intelligence techniques for automated diagnosis of neurological disorders. Eur. Neurol. 2019, 82, 41–64.
  11. Bigdely-Shamlo, N.; Mullen, T.; Kothe, C.; Su, K.M.; Robbins, K.A. The PREP pipeline: Standardized preprocessing for large-scale EEG analysis. Front. Neuroinform. 2015, 9, 16.
  12. Cole, S.; Voytek, B. Cycle-by-cycle analysis of neural oscillations. J. Neurophysiol. 2019, 122, 849–861.
  13. Gimenez-Aparisi, G.; Guijarro-Estelles, E.; Chornet-Lurbe, A.; Diaz-Roman, M.; Hao, D.; Li, G.; Ye-Lin, Y. Early detection of Parkinson’s disease based on beta dynamic features and beta-gamma coupling from non-invasive resting state EEG: Influence of the eyes. Biomed. Signal Process. Control 2025, 107, 107868.
  14. Victor Paul M, A.; Shankar, S. Deep learning-based method for detecting Parkinson using 1D convolutional neural networks and improved jellyfish algorithms. Int. J. Electr. Comput. Eng. Syst. 2024, 15, 515–522.
  15. Loh, H.W.; Ooi, C.P.; Palmer, E.; Barua, P.D.; Dogan, S.; Tuncer, T.; Baygin, M.; Acharya, U.R. GaborPDNet: Gabor transformation and deep neural network for Parkinson’s disease detection using EEG signals. Electronics 2021, 10, 1740.
  16. de Oliveira, A.P.S.; de Santana, M.A.; Andrade, M.K.S.; Gomes, J.C.; Rodrigues, M.C.A.; dos Santos, W.P. Early diagnosis of Parkinson’s disease using EEG, machine learning and partial directed coherence. Res. Biomed. Eng. 2020, 36, 311–331.
  17. Ezazi, Y.; Ghaderyan, P. Textural feature of EEG signals as a new biomarker of reward processing in Parkinson’s disease detection. Biocybern. Biomed. Eng. 2022, 42, 950–962.
  18. Hassin-Baer, S.; Cohen, O.S.; Israeli-Korn, S.; Yahalom, G.; Benizri, S.; Sand, D.; Issachar, G.; Geva, A.B.; Shani-Hershkovich, R.; Peremen, Z. Identification of an early-stage Parkinson’s disease neuromarker using event-related potentials, brain network analytics and machine-learning. PLoS ONE 2022, 17, e0261947.
  19. Shah, S.A.A.; Zhang, L.; Bais, A. Dynamical system based compact deep hybrid network for classification of Parkinson disease related EEG signals. Neural Netw. 2020, 130, 75–84.
  20. Wang, X.; Huang, J.; Chatzakou, M.; Medijainen, K.; Taba, P.; Toomela, A.; Nomm, S.; Ruzhansky, M. A light-weight CNN model for efficient Parkinson’s disease diagnostics. In Proceedings of the 2023 IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS), L’Aquila, Italy, 22–24 June 2023; pp. 616–621.
  21. Vanegas, M.I.; Ghilardi, M.F.; Kelly, S.P.; Blangero, A. Machine learning for EEG-based biomarkers in Parkinson’s disease. In Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 3–6 December 2018; IEEE: New York, NY, USA, 2018.
  22. Aslam, A.R.; Altaf, M.A.B. An on-chip processor for chronic neurological disorders assistance using negative affectivity classification. IEEE Trans. Biomed. Circuits Syst. 2020, 14, 838–851.
  23. Dar, M.N.; Akram, M.U.; Yuvaraj, R.; Gul Khawaja, S.; Murugappan, M. EEG-based emotion charting for Parkinson’s disease patients using Convolutional Recurrent Neural Networks and cross dataset learning. Comput. Biol. Med. 2022, 144, 105327.
  24. He, S.B.; Liu, C.Y.; Chen, L.D.; Ye, Z.N.; Zhang, Y.P.; Tang, W.G.; Wang, B.D.; Gao, X. Meta-analysis of visual evoked potential and Parkinson’s disease. Park. Dis. 2018, 2018, 3201308.
  25. Molcho, L.; Maimon, N.B.; Hezi, N.; Zeimer, T.; Intrator, N.; Gurevich, T. Evaluation of Parkinson’s disease early diagnosis using single-channel EEG features and auditory cognitive assessment. Front. Neurol. 2023, 14, 1273458.
  26. Wu, H.; Qi, J.; Purwanto, E.; Zhu, X.; Yang, P.; Chen, J. Multi-scale feature and multi-channel selection toward Parkinson’s disease diagnosis with EEG. arXiv 2024, arXiv:382338819.
  27. Maitín, A.M.; García-Tejedor, A.J.; Muñoz, J.P.R. Machine learning approaches for detecting Parkinson’s disease from EEG analysis: A systematic review. Appl. Sci. 2020, 10, 8662.
  28. Belyaev, M.; Murugappan, M.; Velichko, A.; Korzun, D. Entropy-based machine learning model for fast diagnosis and monitoring of Parkinson’s disease. Sensors 2023, 23, 8609.
  29. Guo, D.; Guo, F.; Zhang, Y.; Li, F.; Xia, Y.; Xu, P.; Yao, D. Periodic visual stimulation induces resting-state brain network reconfiguration. Front. Comput. Neurosci. 2018, 12, 21.
  30. Petro, N.M.; Ott, L.R.; Penhale, S.H.; Rempe, M.P.; Embury, C.M.; Picci, G.; Wang, Y.P.; Stephen, J.M.; Calhoun, V.D.; Wilson, T.W. Eyes-closed versus eyes-open differences in spontaneous neural dynamics during development. Neuroimage 2022, 258, 119337.
  31. İşcan, Z.; Nikulin, V.V. Steady state visual evoked potential (SSVEP) based brain-computer interface (BCI) performance under different perturbations. PLoS ONE 2018, 13, e0191673.
  32. Zhang, L.; Yang, M.; Feng, X. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 471–478.
  33. Dubbelink, K.T.O.; Hillebrand, A.; Twisk, J.W.; Deijen, J.B.; Stoffers, D.; Scherder, E.J.; Stam, C.J. Predicting dementia in Parkinson disease by combining neurophysiologic and cognitive markers. Neurology 2014, 82, 263–270.
  34. Babiloni, C.; De Pandis, M.F.; Vecchio, F.; Buffo, P.; Sorpresi, F.; Frisoni, G.B.; Rossini, P.M. Cortical sources of resting state EEG rhythms in Parkinson’s disease related dementia and Alzheimer’s disease. Clin. Neurophysiol. 2011, 122, 2355–2364.
  35. Caviness, J.N.; Utianski, R.L.; Hentz, J.G.; Beach, T.G.; Dugger, B.N.; Shill, H.A.; Adler, C.H. Spectral EEG abnormalities in Parkinson’s disease without dementia. Clin. Neurophysiol. 2007, 118, 2510–2515.
  36. Bosboom, J.L.; Stoffers, D.; Wolters, E.C.; Stam, C.J.; Berendse, H.W. MEG resting state functional connectivity in Parkinson’s disease related dementia. J. Neural Transm. 2006, 113, 593–598.
  37. Shaban, M.; Amara, A.W. Resting-state electroencephalography based deep-learning for the detection of Parkinson’s disease. PLoS ONE 2022, 17, e0263159.
  38. Geraedts, V.J.; Boon, L.I.; Marinus, J.; Gouw, A.A.; van Hilten, J.J.; Stam, C.J.; Tannemaat, M.R.; Contarino, M.F. Clinical correlates of quantitative EEG in Parkinson disease. Neurology 2018, 91, 871–883.
  39. Gérard, M.; Bayot, M.; Derambure, P.; Dujardin, K.; Defebvre, L.; Betrouni, N.; Delval, A. EEG-based functional connectivity and executive control in patients with Parkinson’s disease and freezing of gait. Clin. Neurophysiol. 2022, 137, 207–215.
  40. Little, S.; Pogosyan, A.; Kuhn, A.A.; Brown, P. β band stability over time correlates with Parkinsonian rigidity and bradykinesia. Exp. Neurol. 2012, 236, 383–388.
  41. Morita, A.; Kamei, S.; Mizutani, T. Relationship between slowing of the EEG and cognitive impairment in Parkinson disease. J. Clin. Neurophysiol. 2011, 28, 384–387.
  42. Aljalal, M.; Aldosari, S.A.; AlSharabi, K.; Abdurraqeeb, A.M.; Alturki, F.A. Parkinson’s disease detection from resting-state EEG signals using common spatial pattern, entropy, and machine learning techniques. Diagnostics 2022, 12, 1033.
  43. Siuly, S.; Khare, S.K.; Kabir, E.; Sadiq, M.T.; Wang, H. An efficient Parkinson’s disease detection framework: Leveraging time-frequency representation and AlexNet Convolutional Neural Network. Comput. Biol. Med. 2024, 174, 108462.
  44. Qiu, L.; Li, J.; Zhong, L.; Feng, W.; Zhou, C.; Pan, J. A novel EEG-based Parkinson’s disease detection model using multiscale convolutional Prototype Networks. IEEE Trans. Instrum. Meas. 2024, 73, 1–14.
Figure 1. Temporal representation of audio–visual stimuli for EEG acquisition across Data Collection Protocol 1 (DCP 1) and Data Collection Protocol 2 (DCP 2).
Figure 2. Block diagram illustrating the process of Parkinson’s disease detection using brain wave signals.
Figure 3. Performance accuracy of the top four EEG channels across all four algorithms for DCP 1 (Evaluation 1; only the four best-performing channels are presented for simplicity). (a) Channel 4 Accuracy Across Algorithms; (b) Channel 5 Accuracy Across Algorithms; (c) Channel 22 Accuracy Across Algorithms; (d) Channel 27 Accuracy Across Algorithms.
Figure 4. Performance across 32 channels for PD detection for DCP 1 (Evaluation 1; presenting the two best-performing algorithms, CRC and LSTM, for simplicity).
Figure 5. Performance accuracy of the top four EEG channels across all four algorithms for DCP 2 (Evaluation 1; only the four best-performing channels are presented for simplicity).
Figure 6. Performance across 32 channels for PD detection for DCP 2 (Evaluation 1; presenting the two best-performing algorithms, CRC and LSTM, for simplicity).
Figure 7. Performance accuracy of the top four EEG channels across all four algorithms for DCP 1 (Evaluation 2; only the four best-performing channels are presented for simplicity).
Figure 8. Performance across 32 channels for PD detection for DCP 1 (Evaluation 2; presenting the two best-performing algorithms, CRC and LSTM, for simplicity).
Figure 9. Performance accuracy of the top four EEG channels across all four algorithms for DCP 2 (Evaluation 2; only the four best-performing channels are presented for simplicity).
Figure 10. Performance across 32 channels for PD detection for DCP 2 (Evaluation 2; presenting the two best-performing algorithms, CRC and LSTM, for simplicity).
Table 1. Comprehensive overview of EEG recordings across the two Data Collection Protocols in the ParEEG database. Each stimulus was recorded from 60 subjects (30 HC and 30 PD); samples per subject are given as (32 channels × segments per subject).

Data Collection Protocol 1 (total samples: 103,680 = 1728 samples/subject × 60 subjects)
- Resting State Eye Close: rec_p1^1 and rec_p1^2, 192 samples/subject each (32 × 6)
- Relax State: relax_p1^1 and relax_p1^2, 192 samples/subject each (32 × 6)
- Comedy State: comedy_p1^1 and comedy_p1^2, 224 samples/subject each (32 × 7)
- Horror State: horror_p1^1 and horror_p1^2, 256 samples/subject each (32 × 8)

Data Collection Protocol 2 (total samples: 99,840 = 1664 samples/subject × 60 subjects)
- Alphanumeric Flickering: 7LT_p2^1, 7LT_p2^2, 7LT_p2^3, and 7LT_p2^4, 192 samples/subject each (32 × 6)
- Relax State: relax_p2^1, 192 samples/subject (32 × 6)
- Comedy State: comedy_p2^1, 256 samples/subject (32 × 8)
- Horror State: horror_p2^1, 448 samples/subject (32 × 14)
Table 2. Average classification accuracy of four PD detection algorithms for within-stimulation Evaluation 1 using Data Collection Protocol 1. Rows correspond to channels (Ch) 1–32; for each comparison (rec_p1^1 vs. rec_p1^2, relax_p1^1 vs. relax_p1^2, comedy_p1^1 vs. comedy_p1^2, horror_p1^1 vs. horror_p1^2), accuracy is reported as mean ± standard deviation (%) for SVM, CRC, LSTM, and CNN.
179.0 ± 6.297.8 ± 2.099.3 ± 1.196.6 ± 1.478.1 ± 4.297.6 ± 1.798.8 ± 1.293.7 ± 2.478.6 ± 4.1298.5 ± 1.898.2 ± 1.195.7 ± 2.778.9 ± 4.098.6 ± 1.598.3 ± 1.895.6 ± 2.9
269.1 ± 4.297.4 ± 1.598.9 ± 0.896.5 ± 1.168.3 ± 3.995.1 ± 2.898.8 ± 1.696.2 ± 1.868.7 ± 3.6893.8 ± 3.199.7 ± 0.796.6 ± 2.470.1 ± 4.096.1 ± 2.7100.0 ± 0.295.7 ± 1.9
378.9 ± 3.098.9 ± 1.497.8 ± 1.697.8 ± 1.080.3 ± 3.997.8 ± 2.097.8 ± 2.593.3 ± 3.080.3 ± 3.9998.0 ± 1.595.7 ± 6.393.3 ± 7.779.7 ± 4.697.9 ± 1.394.8 ± 8.893.1 ± 7.3
483.1 ± 4.799.1 ± 1.399.7 ± 0.297.0 ± 1.384.4 ± 5.199.1 ± 1.198.3 ± 1.897.0 ± 2.185.1 ± 4.59100.0 ± 0.297.0 ± 1.297.1 ± 2.185.7 ± 3.998.1 ± 1.996.1 ± 1.396.1 ± 1.9
583.8 ± 6.698.4 ± 1.497.4 ± 3.097.1 ± 3.290.8 ± 1.899.7 ± 0.397.4 ± 3.295.3 ± 3.791.0 ± 1.61100.0 ± 0.097.6 ± 2.997.2 ± 2.890.3 ± 1.999.8 ± 0.598.3 ± 2.797.3 ± 1.7
679.5 ± 4.596.8 ± 2.599.6 ± 0.796.8 ± 1.882.0 ± 3.999.1 ± 0.499.5 ± 0.798.1 ± 1.382.8 ± 4.3199.9 ± 0.298.7 ± 1.595.7 ± 3.283.2 ± 4.599.1 ± 1.398.3 ± 1.797.6 ± 2.1
776.6 ± 6.198.2 ± 1.799.1 ± 1.095.5 ± 1.578.1 ± 4.597.0 ± 2.499.6 ± 0.595.4 ± 1.978.5 ± 4.6097.7 ± 1.398.8 ± 1.395.1 ± 1.678.8 ± 3.499.5 ± 0.999.0 ± 1.095.2 ± 2.7
877.6 ± 4.696.8 ± 1.997.7 ± 1.995.2 ± 2.677.7 ± 4.796.5 ± 1.997.8 ± 0.993.2 ± 2.178.2 ± 4.1999.1 ± 0.799.0 ± 1.092.3 ± 2.579.0 ± 3.597.3 ± 2.599.2 ± 1.195.2 ± 3.4
967.6 ± 4.199.2 ± 0.798.7 ± 1.296.5 ± 1.274.3 ± 3.4100.0 ± 0.199.7 ± 0.498.0 ± 1.374.7 ± 3.58100.0 ± 0.099.8 ± 0.496.7 ± 1.674.7 ± 3.6100.0 ± 0.099.2 ± 0.996.9 ± 2.9
1056.9 ± 4.395.5 ± 2.998.5 ± 1.596.4 ± 2.159.0 ± 3.699.1 ± 1.399.6 ± 0.795.2 ± 1.759.0 ± 3.8299.8 ± 0.399.8 ± 0.592.8 ± 2.958.9 ± 4.4100.0 ± 0.099.8 ± 0.396.8 ± 2.2
1168.2 ± 3.795.5 ± 1.899.3 ± 0.694.8 ± 2.067.8 ± 3.398.1 ± 1.397.6 ± 2.095.5 ± 1.568.3 ± 3.6899.4 ± 1.098.7 ± 1.494.8 ± 2.668.2 ± 4.0100.0 ± 0.299.5 ± 0.695.8 ± 2.2
1275.2 ± 5.597.5 ± 2.199.0 ± 1.093.3 ± 2.376.5 ± 4.396.7 ± 1.899.4 ± 0.496.7 ± 2.076.8 ± 4.4699.3 ± 0.4100.0 ± 0.094.1 ± 2.677.1 ± 4.499.6 ± 0.999.4 ± 0.695.2 ± 1.7
1369.7 ± 4.096.1 ± 4.398.3 ± 0.892.8 ± 4.369.3 ± 3.496.8 ± 2.399.3 ± 0.795.7 ± 2.169.4 ± 3.1196.5 ± 1.899.3 ± 1.191.5 ± 1.668.7 ± 3.696.7 ± 1.299.5 ± 0.894.7 ± 1.6
1478.0 ± 3.297.1 ± 1.998.0 ± 1.795.3 ± 1.379.0 ± 4.198.8 ± 1.699.0 ± 1.496.2 ± 1.479.6 ± 4.2399.8 ± 1.899.2 ± 1.395.6 ± 1.579.1 ± 5.099.3 ± 0.898.7 ± 1.097.1 ± 1.7
1574.7 ± 6.2100.0 ± 0.099.2 ± 0.997.2 ± 1.977.5 ± 5.698.0 ± 1.797.0 ± 5.796.3 ± 1.277.6 ± 6.1798.0 ± 1.799.0 ± 1.397.5 ± 1.176.6 ± 6.398.0 ± 1.796.7 ± 5.894.8 ± 1.9
1673.2 ± 4.897.8 ± 1.699.6 ± 0.994.6 ± 2.172.2 ± 4.797.6 ± 1.097.3 ± 4.794.8 ± 2.373.3 ± 4.5697.8 ± 2.697.9 ± 4.394.7 ± 1.774.3 ± 3.799.6 ± 1.098.3 ± 4.196.7 ± 2.2
1772.3 ± 5.297.7 ± 1.997.4 ± 1.694.5 ± 2.273.3 ± 4.999.1 ± 1.098.1 ± 1.597.3 ± 1.573.1 ± 4.9299.0 ± 1.698.3 ± 1.396.5 ± 1.773.1 ± 4.998.2 ± 1.997.6 ± 1.694.6 ± 1.9
1870.5 ± 4.896.3 ± 1.9100.0 ± 0.093.8 ± 2.868.5 ± 4.298.5 ± 0.998.8 ± 1.595.8 ± 2.269.7 ± 4.4498.7 ± 2.299.7 ± 0.695.4 ± 2.069.3 ± 4.499.6 ± 0.4100.0 ± 0.096.6 ± 1.9
1972.0 ± 9.295.5 ± 3.699.1 ± 0.792.1 ± 2.373.3 ± 9.196.8 ± 2.694.2 ± 6.296.9 ± 1.874.6 ± 8.9397.8 ± 1.998.0 ± 1.597.1 ± 1.674.1 ± 8.598.4 ± 1.297.0 ± 1.996.0 ± 2.0
2081.6 ± 6.395.8 ± 2.999.4 ± 1.693.7 ± 1.184.7 ± 3.797.7 ± 1.898.1 ± 4.498.6 ± 1.485.3 ± 4.2798.2 ± 2.499.5 ± 0.794.4 ± 2.685.0 ± 2.899.1 ± 1.998.8 ± 0.996.7 ± 2.2
2168.0 ± 6.196.3 ± 2.999.5 ± 1.193.6 ± 2.675.2 ± 5.796.1 ± 2.698.6 ± 1.196.1 ± 1.576.2 ± 5.1697.8 ± 1.999.3 ± 0.795.0 ± 2.477.5 ± 4.696.3 ± 2.497.0 ± 2.195.4 ± 1.8
2278.6 ± 5.297.8 ± 1.599.9 ± 0.295.4 ± 2.081.1 ± 4.598.5 ± 1.699.6 ± 0.396.8 ± 1.382.9 ± 4.5798.7 ± 1.299.5 ± 0.595.5 ± 2.483.0 ± 4.699.8 ± 0.398.7 ± 1.196.8 ± 1.5
2358.7 ± 3.6100.0 ± 0.199.4 ± 0.397.2 ± 1.464.1 ± 3.092.3 ± 3.191.8 ± 4.089.5 ± 3.765.0 ± 2.4093.4 ± 2.894.8 ± 1.491.0 ± 3.766.1 ± 2.592.3 ± 2.992.6 ± 3.192.3 ± 3.3
2469.8 ± 5.398.6 ± 1.399.7 ± 0.596.5 ± 1.177.4 ± 3.494.7 ± 2.397.0 ± 1.897.0 ± 2.777.7 ± 3.8696.8 ± 2.194.9 ± 2.395.6 ± 1.877.7 ± 3.996.8 ± 1.298.5 ± 1.496.2 ± 1.8
2580.6 ± 3.398.2 ± 1.699.2 ± 1.295.7 ± 1.279.6 ± 4.496.9 ± 1.397.8 ± 2.296.2 ± 1.678.7 ± 4.0599.2 ± 1.398.8 ± 1.495.8 ± 2.278.8 ± 4.799.3 ± 0.799.1 ± 1.196.6 ± 1.7
2672.4 ± 3.096.0 ± 2.899.2 ± 1.895.8 ± 1.769.0 ± 1.996.7 ± 1.299.5 ± 0.995.6 ± 2.068.9 ± 1.5798.0 ± 1.799.2 ± 1.194.9 ± 2.569.3 ± 3.198.0 ± 1.798.7 ± 1.196.6 ± 2.0
2777.4 ± 5.897.0 ± 2.2100.0 ± 0.096.1 ± 2.283.9 ± 4.196.8 ± 2.797.7 ± 1.995.5 ± 1.385.1 ± 3.9997.2 ± 1.998.1 ± 1.494.9 ± 2.086.0 ± 3.197.9 ± 2.498.2 ± 1.497.3 ± 1.8
2879.2 ± 4.396.5 ± 2.799.4 ± 1.193.8 ± 2.677.0 ± 5.096.2 ± 2.097.0 ± 3.895.0 ± 1.979.0 ± 5.1997.3 ± 1.999.0 ± 1.397.0 ± 1.480.5 ± 4.697.8 ± 1.699.4 ± 1.197.0 ± 1.5
2968.2 ± 4.597.4 ± 1.898.8 ± 1.195.4 ± 1.772.3 ± 4.496.2 ± 2.195.2 ± 5.893.1 ± 2.873.0 ± 3.9997.5 ± 2.096.5 ± 4.194.6 ± 1.473.1 ± 4.197.8 ± 1.698.4 ± 1.993.2 ± 2.1
3070.5 ± 3.895.0 ± 3.998.7 ± 1.194.7 ± 1.474.1 ± 4.896.3 ± 2.098.1 ± 1.294.6 ± 2.274.5 ± 4.6996.8 ± 2.397.7 ± 2.796.5 ± 1.975.9 ± 4.496.8 ± 1.695.8 ± 2.597.0 ± 2.6
3175.2 ± 4.998.7 ± 1.798.0 ± 1.297.8 ± 1.977.2 ± 4.298.7 ± 1.797.6 ± 1.195.2 ± 2.178.3 ± 4.4998.7 ± 1.798.1 ± 1.694.5 ± 2.878.6 ± 4.599.6 ± 0.795.5 ± 5.397.0 ± 1.1
3281.0 ± 5.299.3 ± 1.199.1 ± 0.597.8 ± 1.081.3 ± 4.898.4 ± 1.799.1 ± 0.796.1 ± 2.481.5 ± 5.3098.3 ± 1.798.1 ± 1.396.7 ± 1.881.0 ± 5.497.8 ± 2.397.9 ± 1.696.2 ± 1.9
Table 3. Average classification accuracy of four PD detection algorithms for within-stimulation Evaluation 1 using Data Collection Protocol 2. Rows correspond to channels (Ch) 1–32; for each comparison (7LT_p2^1 vs. 7LT_p2^2, 7LT_p2^1 vs. 7LT_p2^3, 7LT_p2^1 vs. 7LT_p2^4), accuracy is reported as mean ± standard deviation (%) for SVM, CRC, LSTM, and CNN.
182.0 ± 4.898.5 ± 1.796.5 ± 2.193.6 ± 2.380.3 ± 2.998.1 ± 1.997.6 ± 1.393.6 ± 2.583.5 ± 3.298.5 ± 1.796.8 ± 1.494.2 ± 2.5
272.0 ± 3.392.8 ± 3.098.5 ± 2.491.9 ± 4.170.3 ± 3.096.5 ± 2.699.5 ± 1.395.2 ± 1.872.3 ± 3.099.8 ± 0.498.4 ± 1.495.8 ± 2.1
380.2 ± 5.399.3 ± 0.799.8 ± 0.496.2 ± 2.377.7 ± 4.498.5 ± 1.199.2 ± 0.896.8 ± 1.377.8 ± 4.699.7 ± 0.599.0 ± 1.398.1 ± 0.6
488.5 ± 2.9100.0 ± 0.099.3 ± 1.796.5 ± 1.888.0 ± 3.499.8 ± 0.598.7 ± 3.097.1 ± 1.788.7 ± 3.799.7 ± 0.797.2 ± 2.997.4 ± 2.3
588.7 ± 3.6100.0 ± 0.098.3 ± 2.796.2 ± 3.087.5 ± 3.199.9 ± 0.496.9 ± 4.594.9 ± 3.788.2 ± 2.9100.0 ± 0.096.3 ± 4.695.1 ± 3.9
681.4 ± 3.799.8 ± 0.398.6 ± 4.096.1 ± 1.879.5 ± 3.799.2 ± 0.599.1 ± 1.395.6 ± 3.080.2 ± 3.199.7 ± 0.397.7 ± 4.497.1 ± 1.5
784.9 ± 3.4100.0 ± 0.299.1 ± 0.796.1 ± 2.282.0 ± 3.099.0 ± 0.899.2 ± 0.697.3 ± 1.483.1 ± 3.499.7 ± 0.499.8 ± 0.497.5 ± 1.3
883.3 ± 4.199.2 ± 1.397.0 ± 5.194.2 ± 5.982.3 ± 3.297.5 ± 1.097.5 ± 5.894.8 ± 6.383.6 ± 2.7100.0 ± 0.097.3 ± 5.593.6 ± 5.9
975.2 ± 3.899.6 ± 0.698.4 ± 1.696.5 ± 2.473.9 ± 3.299.6 ± 0.698.5 ± 1.296.2 ± 2.374.0 ± 3.399.5 ± 0.699.1 ± 1.595.9 ± 2.4
1060.7 ± 3.4100.0 ± 0.0100.0 ± 0.096.3 ± 1.359.7 ± 4.399.3 ± 0.599.7 ± 0.596.7 ± 1.459.6 ± 4.599.8 ± 0.499.3 ± 0.995.8 ± 1.9
1165.0 ± 2.899.9 ± 0.299.6 ± 0.492.7 ± 2.765.1 ± 2.899.3 ± 0.599.4 ± 0.893.5 ± 3.065.7 ± 2.399.9 ± 0.298.3 ± 1.593.3 ± 2.0
1275.3 ± 7.099.5 ± 0.798.6 ± 1.194.2 ± 4.172.4 ± 4.999.2 ± 0.598.8 ± 0.696.4 ± 1.774.1 ± 4.999.6 ± 0.399.5 ± 0.794.0 ± 1.8
1369.6 ± 2.892.2 ± 3.696.8 ± 4.889.2 ± 5.567.8 ± 2.990.6 ± 3.595.1 ± 4.991.7 ± 4.269.1 ± 3.792.2 ± 4.893.8 ± 4.093.5 ± 3.3
1480.0 ± 4.4100.0 ± 0.0100.0 ± 0.297.2 ± 1.180.0 ± 4.498.8 ± 1.098.5 ± 1.595.3 ± 1.480.0 ± 4.499.1 ± 1.098.3 ± 1.595.8 ± 2.7
1578.0 ± 5.9100.0 ± 0.097.6 ± 4.996.3 ± 2.278.2 ± 6.299.0 ± 1.197.4 ± 3.896.2 ± 2.080.3 ± 6.6100.0 ± 0.2100.0 ± 0.096.7 ± 2.2
1673.6 ± 5.099.6 ± 0.698.8 ± 0.893.8 ± 2.971.3 ± 2.898.6 ± 1.096.4 ± 4.695.2 ± 2.171.5 ± 3.097.7 ± 1.597.8 ± 2.095.0 ± 2.7
1776.0 ± 4.6100.0 ± 0.0100.0 ± 0.097.5 ± 2.176.3 ± 5.199.8 ± 0.497.1 ± 4.797.8 ± 1.277.0 ± 4.9100.0 ± 0.099.4 ± 0.794.1 ± 3.0
1870.1 ± 5.1100.0 ± 0.0100.0 ± 0.295.5 ± 2.667.9 ± 4.399.9 ± 0.299.0 ± 1.397.3 ± 1.668.2 ± 4.699.8 ± 0.499.5 ± 0.795.6 ± 2.5
1967.8 ± 8.799.6 ± 0.296.7 ± 4.394.6 ± 1.866.8 ± 6.899.8 ± 0.598.5 ± 1.597.6 ± 1.667.2 ± 7.499.8 ± 0.497.8 ± 4.396.4 ± 1.8
2083.0 ± 5.6100.0 ± 0.099.8 ± 0.596.9 ± 1.780.8 ± 3.499.6 ± 0.498.9 ± 3.098.1 ± 1.281.8 ± 3.999.7 ± 0.497.5 ± 4.695.8 ± 2.6
2185.3 ± 5.198.6 ± 1.896.3 ± 4.091.6 ± 4.985.6 ± 5.099.9 ± 0.499.0 ± 3.095.8 ± 3.786.3 ± 7.099.9 ± 0.499.0 ± 3.095.1 ± 3.9
2283.5 ± 7.3100.0 ± 0.299.9 ± 0.296.6 ± 1.681.5 ± 4.699.3 ± 1.198.7 ± 1.196.2 ± 1.583.1 ± 3.899.2 ± 0.799.3 ± 0.795.3 ± 2.8
2367.2 ± 5.198.8 ± 2.597.5 ± 2.895.2 ± 2.171.1 ± 4.693.8 ± 3.394.5 ± 3.296.1 ± 2.173.0 ± 4.293.7 ± 2.995.5 ± 3.296.3 ± 3.3
2481.3 ± 4.2100.0 ± 0.0100.0 ± 0.096.0 ± 2.181.1 ± 3.399.9 ± 0.499.6 ± 0.596.8 ± 1.681.2 ± 4.399.7 ± 0.697.9 ± 1.796.6 ± 2.2
2584.2 ± 3.399.7 ± 0.499.2 ± 0.895.9 ± 2.782.1 ± 4.499.6 ± 0.599.6 ± 0.397.5 ± 1.386.0 ± 3.099.8 ± 0.499.7 ± 0.595.3 ± 2.8
2667.5 ± 4.797.8 ± 1.599.2 ± 0.995.8 ± 2.468.1 ± 5.997.5 ± 1.598.7 ± 1.096.9 ± 2.371.5 ± 4.896.5 ± 1.396.5 ± 1.893.3 ± 3.1
2786.0 ± 2.198.8 ± 1.599.7 ± 0.596.0 ± 2.185.3 ± 2.399.9 ± 0.499.1 ± 0.896.0 ± 1.885.3 ± 2.3100.0 ± 0.299.1 ± 1.496.0 ± 1.2
2882.0 ± 3.897.8 ± 1.996.1 ± 3.095.7 ± 3.483.6 ± 4.498.0 ± 1.798.5 ± 2.995.8 ± 3.686.3 ± 4.297.4 ± 1.397.0 ± 2.596.2 ± 2.6
2976.0 ± 4.798.0 ± 1.697.2 ± 2.695.3 ± 1.777.4 ± 6.298.0 ± 1.798.1 ± 1.695.6 ± 1.878.9 ± 5.997.7 ± 1.596.4 ± 2.795.5 ± 1.6
3078.7 ± 4.898.2 ± 1.696.8 ± 5.197.4 ± 1.477.5 ± 3.797.7 ± 1.499.5 ± 0.897.7 ± 1.279.2 ± 3.697.6 ± 1.398.7 ± 1.396.3 ± 2.2
3182.7 ± 3.6100.0 ± 0.297.6 ± 2.696.5 ± 2.084.5 ± 4.399.8 ± 0.398.0 ± 2.896.1 ± 2.285.1 ± 4.1100.0 ± 0.297.7 ± 2.796.7 ± 1.5
3283.2 ± 3.9100.0 ± 0.099.7 ± 0.597.0 ± 2.084.1 ± 4.099.7 ± 0.399.6 ± 0.595.5 ± 1.885.3 ± 3.399.6 ± 0.598.7 ± 1.196.0 ± 2.3
Table 4. Average classification accuracy of four PD detection algorithms for cross-stimulation Evaluation 2 using Data Collection Protocol 1. Rows correspond to channels (Ch) 1–32; for each comparison (rec_p1^{1,2} vs. relax_p1^{1,2}, rec_p1^{1,2} vs. comedy_p1^{1,2}, rec_p1^{1,2} vs. horror_p1^{1,2}), accuracy is reported as mean ± standard deviation (%) for SVM, CRC, LSTM, and CNN.
179.2 ± 5.198.1 ± 1.999.0 ± 0.697.5 ± 1.479.8 ± 5.098.1 ± 1.796.9 ± 6.697.4 ± 0.979.6 ± 4.897.9 ± 1.797.1 ± 5.097.6 ± 1.6
269.0 ± 4.096.6 ± 1.499.4 ± 0.497.8 ± 0.869.0 ± 4.096.1 ± 2.099.3 ± 0.697.8 ± 0.969.6 ± 3.797.0 ± 1.199.7 ± 0.498.4 ± 0.9
380.3 ± 3.197.6 ± 1.997.6 ± 1.896.3 ± 1.680.3 ± 3.198.2 ± 1.298.2 ± 1.797.6 ± 1.480.8 ± 3.498.1 ± 1.098.8 ± 0.997.8 ± 1.3
484.0 ± 4.998.5 ± 1.498.5 ± 0.798.5 ± 1.284.6 ± 4.599.8 ± 0.698.7 ± 1.098.7 ± 0.685.0 ± 4.298.7 ± 1.497.9 ± 1.298.3 ± 1.1
588.6 ± 3.899.8 ± 0.298.1 ± 2.997.8 ± 2.789.7 ± 2.9100.0 ± 0.298.4 ± 2.598.5 ± 2.389.9 ± 3.099.6 ± 0.698.3 ± 2.498.3 ± 2.5
683.0 ± 3.199.0 ± 0.999.8 ± 0.398.4 ± 0.483.5 ± 3.299.5 ± 0.699.6 ± 0.598.6 ± 0.983.5 ± 3.699.3 ± 0.798.9 ± 1.299.1 ± 0.7
778.3 ± 4.397.9 ± 1.899.7 ± 0.498.2 ± 1.479.1 ± 4.197.1 ± 1.999.1 ± 0.697.8 ± 1.279.8 ± 3.798.4 ± 0.999.2 ± 0.798.4 ± 1.0
877.6 ± 4.795.0 ± 1.197.6 ± 1.395.0 ± 0.977.9 ± 4.597.5 ± 1.098.6 ± 0.996.5 ± 0.778.3 ± 4.196.9 ± 0.998.7 ± 0.697.0 ± 2.0
972.0 ± 3.099.8 ± 0.299.3 ± 0.798.6 ± 0.872.4 ± 3.0100.0 ± 0.099.6 ± 0.698.8 ± 0.773.3 ± 3.599.5 ± 0.799.1 ± 0.898.6 ± 0.8
1058.4 ± 3.797.6 ± 1.899.3 ± 0.897.8 ± 1.058.7 ± 4.098.8 ± 1.299.7 ± 0.598.2 ± 1.458.8 ± 4.199.2 ± 0.5100.0 ± 0.298.4 ± 1.1
1168.2 ± 3.397.8 ± 0.897.5 ± 1.496.8 ± 1.468.3 ± 3.898.8 ± 0.899.0 ± 0.896.2 ± 2.068.5 ± 3.999.0 ± 1.099.2 ± 0.596.7 ± 1.7
1276.2 ± 4.897.0 ± 1.299.4 ± 0.797.1 ± 2.076.5 ± 5.097.7 ± 0.999.1 ± 0.996.3 ± 2.477.0 ± 5.096.8 ± 1.299.4 ± 0.896.8 ± 1.5
1370.2 ± 3.696.7 ± 3.099.6 ± 0.496.5 ± 2.469.9 ± 3.597.6 ± 1.899.6 ± 0.495.4 ± 1.669.6 ± 3.597.2 ± 1.599.5 ± 0.596.2 ± 1.5
1479.0 ± 3.997.7 ± 1.198.0 ± 1.197.9 ± 1.379.5 ± 3.898.9 ± 1.198.3 ± 0.797.9 ± 1.679.2 ± 4.199.0 ± 1.499.0 ± 0.998.4 ± 1.0
1576.9 ± 6.199.1 ± 0.898.7 ± 1.198.4 ± 0.677.3 ± 5.699.0 ± 0.899.4 ± 0.798.4 ± 0.877.4 ± 5.898.6 ± 1.099.0 ± 1.098.2 ± 1.0
1674.1 ± 5.398.6 ± 0.699.2 ± 0.696.1 ± 2.374.3 ± 5.498.0 ± 0.999.4 ± 1.196.0 ± 2.475.4 ± 5.699.5 ± 0.599.5 ± 0.396.4 ± 1.4
1773.0 ± 4.999.0 ± 1.098.9 ± 1.098.7 ± 0.773.2 ± 4.999.3 ± 0.899.3 ± 0.596.9 ± 1.573.5 ± 4.799.0 ± 0.898.8 ± 1.098.1 ± 1.0
1869.9 ± 4.698.3 ± 1.099.5 ± 0.798.1 ± 1.170.2 ± 4.598.8 ± 0.999.7 ± 0.397.3 ± 1.070.6 ± 4.999.8 ± 0.2100.0 ± 0.096.4 ± 1.6
1973.1 ± 9.997.6 ± 1.398.6 ± 0.997.4 ± 1.173.5 ± 10.097.4 ± 1.898.4 ± 1.296.6 ± 1.474.0 ± 9.697.5 ± 1.197.8 ± 0.997.2 ± 1.5
2085.6 ± 3.898.6 ± 1.199.6 ± 0.696.4 ± 1.585.8 ± 4.498.5 ± 1.499.0 ± 2.197.3 ± 1.586.0 ± 4.099.2 ± 0.499.7 ± 0.596.3 ± 1.1
2172.5 ± 6.796.6 ± 1.899.0 ± 1.097.1 ± 1.573.5 ± 6.496.9 ± 1.698.8 ± 0.597.4 ± 1.874.2 ± 6.395.0 ± 2.498.2 ± 0.996.9 ± 1.5
2282.3 ± 3.798.6 ± 0.899.6 ± 0.497.1 ± 1.982.5 ± 4.598.2 ± 1.399.5 ± 0.497.5 ± 1.883.1 ± 4.698.5 ± 1.1100.0 ± 0.197.5 ± 1.3
2363.0 ± 3.796.2 ± 1.696.1 ± 1.895.1 ± 1.663.5 ± 3.096.0 ± 1.294.7 ± 0.994.8 ± 1.864.3 ± 2.693.5 ± 2.994.1 ± 2.495.8 ± 2.0
2475.6 ± 4.197.1 ± 0.998.9 ± 0.598.2 ± 0.876.5 ± 4.497.3 ± 0.899.5 ± 0.698.1 ± 1.176.8 ± 4.197.2 ± 1.598.3 ± 1.698.2 ± 0.7
2581.0 ± 3.698.1 ± 0.698.6 ± 1.496.3 ± 1.381.0 ± 3.599.5 ± 0.799.2 ± 1.097.0 ± 2.081.3 ± 4.098.8 ± 1.799.6 ± 0.896.3 ± 2.4
2670.8 ± 2.096.8 ± 1.399.0 ± 0.897.0 ± 1.371.0 ± 2.197.0 ± 1.298.3 ± 1.597.6 ± 1.070.0 ± 2.097.0 ± 1.499.1 ± 0.697.3 ± 1.1
2782.9 ± 3.898.0 ± 1.399.3 ± 0.797.1 ± 0.783.8 ± 3.898.1 ± 1.699.4 ± 0.696.4 ± 1.584.2 ± 3.398.4 ± 1.099.2 ± 0.897.6 ± 0.9
2878.6 ± 4.997.1 ± 1.898.6 ± 1.797.1 ± 1.679.2 ± 4.497.5 ± 1.799.1 ± 1.497.5 ± 1.478.7 ± 4.397.9 ± 1.699.5 ± 1.197.3 ± 1.5
2971.5 ± 4.097.4 ± 2.197.5 ± 2.095.9 ± 0.872.4 ± 4.197.4 ± 2.097.7 ± 1.596.4 ± 1.472.8 ± 3.997.0 ± 2.296.9 ± 1.796.4 ± 1.2
3073.3 ± 4.397.4 ± 2.099.1 ± 0.797.1 ± 1.373.8 ± 4.597.5 ± 2.098.9 ± 0.998.3 ± 1.274.4 ± 4.497.2 ± 2.098.4 ± 1.497.2 ± 1.6
3177.0 ± 4.398.8 ± 1.398.2 ± 0.997.6 ± 1.377.3 ± 4.098.6 ± 1.698.0 ± 1.298.4 ± 1.477.7 ± 4.198.5 ± 2.197.4 ± 1.498.6 ± 0.7
3281.0 ± 5.398.5 ± 1.897.0 ± 4.998.6 ± 0.781.5 ± 5.298.4 ± 1.898.1 ± 1.198.2 ± 1.481.1 ± 5.498.1 ± 1.995.7 ± 5.697.6 ± 1.0
Table 5. Average classification accuracy of four PD detection algorithms for cross-stimulation Evaluation 2 using Data Collection Protocol 2. Rows correspond to channels (Ch) 1–32; for each comparison (relax_p1^1 vs. comedy_p2^1, relax_p1^1 vs. horror_p2^1, relax_p1^1 vs. 7LT_p2^1), accuracy is reported as mean ± standard deviation (%) for SVM, CRC, LSTM, and CNN.
182.8 ± 2.797.8 ± 1.595.3 ± 4.494.9 ± 1.782.4 ± 4.799.8 ± 0.595.9 ± 4.196.3 ± 1.679.5 ± 4.298.9 ± 1.097.6 ± 1.594.9 ± 2.0
270.8 ± 3.497.3 ± 1.498.7 ± 1.595.7 ± 1.872.0 ± 3.293.8 ± 2.998.2 ± 2.290.0 ± 3.170.4 ± 3.794.6 ± 3.199.4 ± 1.395.1 ± 3.1
378.4 ± 4.399.6 ± 0.496.4 ± 8.794.7 ± 7.681.1 ± 5.298.0 ± 1.797.2 ± 6.595.4 ± 6.277.7 ± 4.899.2 ± 1.095.8 ± 6.794.6 ± 7.0
488.2 ± 3.2100.0 ± 0.095.6 ± 6.294.7 ± 5.788.6 ± 2.5100.0 ± 0.097.1 ± 5.593.6 ± 6.385.4 ± 4.399.6 ± 0.595.1 ± 6.995.3 ± 5.1
588.3 ± 2.8100.0 ± 0.095.4 ± 3.796.0 ± 4.090.0 ± 2.7100.0 ± 0.097.9 ± 3.096.3 ± 2.885.7 ± 2.999.7 ± 0.496.7 ± 4.294.9 ± 4.5
679.3 ± 3.898.4 ± 0.999.3 ± 1.196.5 ± 1.281.3 ± 3.6100.0 ± 0.099.8 ± 0.597.2 ± 1.580.1 ± 4.099.7 ± 0.399.7 ± 0.897.3 ± 1.0
784.0 ± 2.998.2 ± 0.995.7 ± 8.194.9 ± 7.086.0 ± 3.4100.0 ± 0.096.6 ± 6.695.3 ± 4.582.7 ± 2.798.6 ± 1.993.3 ± 8.094.3 ± 6.0
883.3 ± 3.199.4 ± 0.897.2 ± 5.995.0 ± 5.984.2 ± 3.698.2 ± 2.396.4 ± 5.793.6 ± 6.278.4 ± 2.898.9 ± 0.996.8 ± 6.893.9 ± 6.7
974.0 ± 3.499.2 ± 0.897.8 ± 1.696.1 ± 1.575.3 ± 3.999.8 ± 0.398.4 ± 1.697.3 ± 2.274.0 ± 3.499.7 ± 0.399.5 ± 1.097.2 ± 1.4
1060.7 ± 3.198.3 ± 0.998.7 ± 1.696.8 ± 1.561.3 ± 2.8100.0 ± 0.099.8 ± 0.495.8 ± 2.161.1 ± 3.399.8 ± 0.399.9 ± 0.296.9 ± 1.3
1166.1 ± 4.899.3 ± 0.599.1 ± 0.795.8 ± 2.066.0 ± 4.799.4 ± 0.799.7 ± 0.696.6 ± 1.465.4 ± 3.397.7 ± 1.797.2 ± 2.092.8 ± 1.4
1274.3 ± 4.599.2 ± 0.599.3 ± 1.298.1 ± 1.777.6 ± 5.8100.0 ± 0.099.3 ± 0.996.1 ± 1.873.7 ± 2.699.7 ± 0.399.7 ± 0.596.8 ± 1.5
1369.2 ± 4.191.4 ± 3.594.7 ± 4.192.3 ± 2.970.7 ± 3.192.1 ± 3.597.3 ± 3.287.5 ± 3.069.6 ± 3.294.3 ± 3.197.4 ± 2.694.4 ± 1.9
1480.0 ± 4.498.3 ± 1.898.0 ± 2.294.8 ± 2.980.0 ± 4.4100.0 ± 0.099.8 ± 0.596.5 ± 1.679.7 ± 4.099.9 ± 0.299.2 ± 2.397.8 ± 1.7
1579.6 ± 6.099.3 ± 0.898.5 ± 1.497.0 ± 2.578.3 ± 5.998.9 ± 1.195.3 ± 6.095.8 ± 2.576.4 ± 6.1100.0 ± 0.096.7 ± 6.797.4 ± 0.7
1672.0 ± 3.297.5 ± 1.297.1 ± 3.796.9 ± 1.974.4 ± 4.799.6 ± 0.798.5 ± 1.596.4 ± 2.571.1 ± 3.498.7 ± 1.997.1 ± 1.795.1 ± 1.0
1776.9 ± 5.399.6 ± 0.798.3 ± 0.997.0 ± 1.776.2 ± 5.199.7 ± 0.598.7 ± 1.596.8 ± 1.774.7 ± 5.7100.0 ± 0.099.3 ± 1.596.4 ± 1.8
1870.1 ± 5.199.7 ± 0.399.7 ± 0.796.8 ± 2.471.8 ± 5.099.7 ± 0.598.9 ± 1.495.4 ± 2.369.4 ± 5.099.7 ± 0.598.4 ± 1.196.1 ± 1.5
1968.7 ± 7.899.7 ± 0.396.1 ± 6.397.4 ± 2.069.1 ± 9.299.7 ± 0.693.9 ± 5.496.9 ± 1.568.7 ± 8.298.6 ± 0.793.7 ± 7.096.5 ± 2.0
2082.0 ± 2.799.3 ± 0.499.3 ± 0.997.1 ± 1.284.7 ± 3.3100.0 ± 0.097.8 ± 3.897.2 ± 1.982.1 ± 3.199.8 ± 0.495.8 ± 6.296.4 ± 1.7
2186.2 ± 5.7100.0 ± 0.098.6 ± 3.094.2 ± 4.387.0 ± 3.798.9 ± 1.597.6 ± 3.894.4 ± 3.882.5 ± 2.798.6 ± 1.297.2 ± 5.893.9 ± 6.0
2285.1 ± 2.999.8 ± 0.498.0 ± 3.595.1 ± 2.488.3 ± 3.6100.0 ± 0.099.0 ± 1.197.3 ± 1.683.3 ± 4.299.4 ± 0.895.7 ± 5.196.8 ± 2.7
2374.2 ± 5.393.2 ± 2.993.2 ± 5.592.8 ± 5.371.3 ± 8.199.8 ± 0.595.9 ± 6.493.9 ± 5.670.0 ± 7.495.1 ± 2.194.4 ± 6.493.7 ± 5.4
2482.8 ± 4.499.7 ± 0.498.3 ± 1.597.1 ± 1.781.9 ± 4.599.9 ± 0.299.9 ± 0.295.6 ± 1.280.2 ± 4.999.6 ± 0.4100.0 ± 0.097.1 ± 1.9
2584.6 ± 5.799.3 ± 0.499.8 ± 0.496.6 ± 1.886.1 ± 5.0100.0 ± 0.099.9 ± 0.296.4 ± 1.782.7 ± 5.499.6 ± 0.599.9 ± 0.296.4 ± 1.7
2672.1 ± 7.397.9 ± 1.798.4 ± 1.396.7 ± 2.071.1 ± 7.798.0 ± 1.798.7 ± 1.596.3 ± 1.969.8 ± 4.697.1 ± 0.998.3 ± 1.295.9 ± 1.2
2785.3 ± 2.399.9 ± 0.298.7 ± 1.696.3 ± 1.386.8 ± 1.898.8 ± 1.599.7 ± 0.595.4 ± 2.183.3 ± 2.298.4 ± 1.697.6 ± 2.096.1 ± 2.5
2885.8 ± 4.498.3 ± 1.797.4 ± 4.695.6 ± 5.282.7 ± 3.397.8 ± 1.995.6 ± 4.293.8 ± 6.180.7 ± 3.497.8 ± 1.595.9 ± 4.295.3 ± 4.1
2978.8 ± 5.897.6 ± 1.496.1 ± 4.095.9 ± 1.976.9 ± 4.898.0 ± 1.797.4 ± 3.095.8 ± 1.775.5 ± 3.696.4 ± 2.190.4 ± 6.594.4 ± 1.7
3078.1 ± 3.297.9 ± 1.799.3 ± 1.598.1 ± 1.679.1 ± 4.997.9 ± 1.799.4 ± 0.597.6 ± 1.875.8 ± 3.497.9 ± 1.397.2 ± 3.597.3 ± 1.5
3185.7 ± 2.3100.0 ± 0.096.4 ± 6.396.6 ± 2.083.9 ± 2.9100.0 ± 0.097.1 ± 5.997.2 ± 1.980.9 ± 2.899.0 ± 1.095.1 ± 6.595.9 ± 3.2
3285.5 ± 2.7100.0 ± 0.096.3 ± 4.198.2 ± 1.185.4 ± 3.3100.0 ± 0.097.9 ± 2.097.4 ± 1.880.9 ± 3.799.6 ± 0.598.4 ± 1.497.1 ± 1.2