Brain–Computer Interface: The HOL–SSA Decomposition and Two-Phase Classification on the HGD EEG Data

An efficient processing approach is essential for increasing identification accuracy since the electroencephalogram (EEG) signals produced by the Brain–Computer Interface (BCI) apparatus are nonlinear, nonstationary, and time-varying. The interpretation of scalp EEG recordings can be hampered by nonbrain contributions to electroencephalographic (EEG) signals, referred to as artifacts. Common disturbances in the capture of EEG signals include electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG) and other artifacts, which have a significant impact on the extraction of meaningful information. This study suggests integrating the Singular Spectrum Analysis (SSA) and Independent Component Analysis (ICA) methods to preprocess the EEG data. The key objective of our research was to employ Higher-Order Linear-Moment-based SSA (HOL–SSA) to decompose EEG signals into multivariate components, followed by extracting source signals using Online Recursive ICA (ORICA). This approach effectively improves artifact rejection. Experimental results using the motor imagery High-Gamma Dataset validate our method’s ability to identify and remove artifacts such as EOG, ECG, and EMG from EEG data, while preserving essential brain activity.


Introduction
EEG is a technique for detecting electrical activity in the brain. Since the electrodes are often positioned along the scalp, it is noninvasive. The most frequent use of the technology is to diagnose epilepsy from aberrant EEG readings [1]. Additionally, it may be utilized to detect brain death, coma, encephalopathies, sleep disorders, and the depth of anesthesia. EEG was once considered the gold standard for identifying tumors, strokes, and other focal brain illnesses, but its use for these purposes has decreased owing to the advancement of good structural imaging methods such as computerized tomography and magnetic resonance imaging. EEG continues to be a vital research and diagnostic tool despite its poor spatial resolution [2]: CT, PET, and MRI cannot really compete with its millisecond-range temporal resolution, and EEG is often tolerant of subject mobility, unlike the majority of other neuroimaging methods. The remainder of this paper is organized as follows: Section 2 surveys the related literature, Section 3 presents the experimentation, the results obtained, and their analysis, and, finally, Section 4 concludes the entire work and its benefits, with directions toward future enhancement.

Literature Survey
The authors in [10] introduced a new method based on the Singular Spectrum Analysis (SSA) technique for classifying brain activity from EEG signals, applied to a benchmark dataset for epileptic study. The results from the SSA-based approach were compared with those from the discrete wavelet transform, and the comparison found that SSA captures both stationary and nonstationary EEG features more effectively than wavelet transforms. The automated removal of EOG artifacts from EEG signals was presented by the authors in [11]. They employed Circulant Singular Spectrum Analysis (CiSSA) to decompose the EOG-contaminated EEG signals into intrinsic mode functions (IMFs). Subsequently, the artifact signal components were identified using kurtosis and energy values, and their removal was executed by means of a four-level discrete wavelet transform (DWT). The proposed approach was evaluated on synthetic and real EEG data, revealing its effectiveness in eliminating EOG artifacts while retaining low-frequency EEG information.
In their study [12], the authors introduced a novel and effective technique for the removal of muscle artifacts from EEG signals. The method, named SSA-CCA (Singular Spectrum Analysis-Canonical Correlation Analysis), combines Singular Spectrum Analysis (SSA) and Canonical Correlation Analysis (CCA). Unlike conventional single-channel decomposition methods, such as ensemble empirical mode decomposition (EEMD), the SSA algorithm employed in this approach draws on principles of multivariate statistics. This enables the proposed method to harness the benefits of both SSA and cross-channel information. The efficacy of SSA-CCA is assessed using both semi-simulated and real EEG data. The results of the evaluation reveal that the introduced method surpasses existing techniques, namely, EEMD-CCA, and even the classic approach of CCA, particularly when dealing with multichannel scenarios. This innovative SSA-CCA approach thus presents a promising advancement in the domain of EEG artifact removal.
As the successful elimination of EOG artifacts remains a significant obstacle in EEG research, the authors proposed a novel approach, termed EEMD-based ICA (EICA) [13]. This method combines ensemble empirical mode decomposition (EEMD) with ICA algorithms to enhance the removal of EOG artifacts from multichannel EEG signals. However, when conducting a comparative analysis, the authors found that the Singular Spectrum Analysis (SSA) method exhibits superior performance. SSA showcases the highest improvement in signal-to-noise ratio, coupled with a reduction in root mean square error and correlation coefficient after the removal of EOG artifacts. This robust performance of SSA underscores its ability to more effectively eliminate blink artifacts from multichannel EEG signals, while minimizing the impact of error. As a result, SSA emerges as a promising solution for addressing the challenge of EOG artifact removal in the realm of multichannel EEG signal analysis.
One emerging approach that has gained attention in recent years is the two-phase classification approach, which involves a sequential classification process aimed at enhancing accuracy, efficiency, and noise reduction. This review highlights the merits of the two-phase classification approach in comparison to other classification methods commonly used in EEG signal processing.
The seminal work by the authors in [14] discusses the conceptual framework and practical implementation of a two-stage classification approach as compared to single-stage classifiers. By leveraging multiple stages, the proposed methodology enables the model to first capture high-level patterns and subsequently refine predictions in the second stage. Empirical evidence presented in this article underscores the improved accuracy, generalization, and adaptability of the two-stage classifier across diverse datasets.
In the comparative study, the authors systematically assess the performance of single-stage classifiers against a two-stage classifier using multiple datasets [15]. The article meticulously outlines the benefits of the two-stage approach, which include superior feature extraction and hierarchical decision-making. The experimental results clearly illustrate that the two-stage classifier consistently outperforms single-stage alternatives, emphasizing the efficacy of its intricate decision pipeline.
Focusing on the complexities posed by intricate datasets, the article [16] by the authors elucidates the merits of employing a two-stage classification strategy. Through an in-depth examination of real-world scenarios, the authors demonstrate the limitations of single-stage classifiers and how the two-stage approach is better suited to handle such challenges. By effectively segmenting the decision-making process, the proposed methodology showcases remarkable performance improvements, establishing its relevance in intricate data analysis.
The authors in [17] have presented a case study that highlights the tangible benefits of adopting a two-stage classification model in practical applications. Drawing from a specific domain, they outline the shortcomings of using single-stage classifiers and present evidence of the two-stage model's remarkable success. Similarly, the authors in [18] introduced a dual-stage classification approach. In the initial stage, they employed LDA classifiers to distinguish between various pair-wise MI tasks. Following this, a naive Bayes classifier was employed to forecast the ultimate task executed by the user. This prediction is based on the weighted results of the LDA classifiers. The conducted experiments indicated that the proposed method surpassed the top-performing entry in BCI competition IV by a margin of 3.5%.
Through careful analysis and extensive experimentation, this work underscores the superiority of the two-stage classification approach, reinforcing its viability in real-world scenarios.
The proposed study contributes an adaptive two-phase classification technique for MI events, showcasing improved accuracy and consistency in BCI performance. The study by the authors in [19] presents a method for epileptic seizure detection in EEG signals, leveraging nonlinear features and a deep learning model. Both studies highlight the significance of innovative classification methodologies in distinct domains, with the first emphasizing enhanced performance in BCI and the second demonstrating exceptional accuracy in epileptic seizure detection using advanced feature extraction and DL techniques.
The proposed study in this research and the study in [20] address classification challenges in distinct domains utilizing advanced methodologies. In Study 1, the emphasis is on MI event classification using a two-phase approach, with ANN and adaptive SVM classifiers. The adaptive technique aims to improve BCI performance by maintaining consistency, reducing training time, and handling non-stationarities. Study 2, on the other hand, focuses on epileptic seizure detection in EEG signals, employing a comprehensive CADS. It incorporates TQWT decomposition, extraction of various features, and a CNN-RNN DL model for classification. Both studies demonstrate significant improvements over existing approaches. Moreover, the proposed model can be efficiently used for other applications, such as medical image segmentation in brain data studies.

Singular Spectrum Analysis (SSA)
Singular Spectrum Analysis (SSA) is a powerful technique for handling time series data [21]. It can handle nonlinear and nonstationary time series, and it has shown great promise in the analysis of electroencephalography (EEG) signals [22]. It is a data-driven technique that identifies the alpha, beta, gamma, and other rhythms associated with different brain activities. The processing steps of SSA are: (1) embedding, (2) singular value decomposition, (3) grouping, and (4) reconstruction.
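As a minimal illustration of the four steps above, the following NumPy sketch (the function name and the elementary one-component-per-group strategy are illustrative, not the paper's implementation) embeds a series into a trajectory matrix, applies the SVD, and reconstructs each elementary component by diagonal averaging:

```python
import numpy as np

def ssa(x, window):
    """Basic SSA sketch: (1) embed, (2) SVD, (3) group, (4) reconstruct.

    Returns one reconstructed series per singular value; actual grouping
    (e.g., summing trend or oscillatory components) is left to the caller.
    """
    n = len(x)
    k = n - window + 1
    # (1) Embedding: trajectory (Hankel) matrix, one lagged copy per column
    X = np.column_stack([x[i:i + window] for i in range(k)])
    # (2) Singular Value Decomposition of the trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        # (3) Grouping: here each elementary matrix forms its own group
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # (4) Reconstruction: anti-diagonal (Hankel) averaging
        comp = np.array([Xi[::-1].diagonal(j - window + 1).mean()
                         for j in range(n)])
        comps.append(comp)
    return np.array(comps)
```

Because the SVD is exact and diagonal averaging is linear, the elementary components sum back to the original series; denoising then amounts to discarding the components attributed to noise.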
The proposed contribution is an HOL-SVD-based decomposition within SSA, rather than the conventional SVD. HOL-SSA is a linear combination of the Higher-Order Singular Value Decomposition (HOSVD), and it proved to be more robust than the existing higher-order and lower-order statistics of SSA. Both HOSVD and SVD are matrix factorization techniques that handle higher-dimensional or multidimensional data. HOSVD can handle nonlinear data and can capture the complete spatial and temporal features of EEG data simultaneously, making it useful for analyzing data with complex spatiotemporal patterns, whereas SVD does not capture the full spatiotemporal patterns in EEG data. HOSVD can also handle missing data in the tensor through tensor completion, whereas SVD requires a complete matrix for analysis. However, the choice of method will depend on the specific application and the characteristics of the data being analyzed.

HOL-SSA
Multiple approaches to SSA have been proposed for decomposition. Here, it is proposed to use a novel Higher-Order L-moment Singular Value Decomposition-based SSA (HOL-SSA), a linear combination of the Higher-Order Singular Value Decomposition (HOSVD), which has proved more robust than the existing higher-order and lower-order statistics of SSA. The recommended HOL-SSA technique is utilized to decompose the single-channel signals into multivariate data, which are subsequently used to recover the source signals using the Online Recursive ICA (ORICA) approach.

HOSVD
Most frequently, the multidimensional SVD is associated with the extraction of relevant information from a multiway cluster; Multilinear Singular Value Decomposition is another term for it. The relevant data are sampled in several dimensions using multidimensional digital signal processing. Single-dimensional sampling involves selecting points along a continuous line and recording their values in a data stream. In contrast, multidimensional sampling selects the data using a matrix based on the dataset's sample vectors. Tucker compression, a method for reducing the amount of multidimensional data, is mostly implemented using the HOSVD.
For a tensor R of order O and size s_1 × s_2 × … × s_O, the HOSVD is defined as

R = C_R ×_1 P^(1) ×_2 P^(2) ⋯ ×_O P^(O)

where C_R is the core tensor and P^(m) are the matrices of m-mode singular vectors of R, with m = 1, 2, …, O.

After the matrices of m-mode singular vectors P^(m) have been computed, the core tensor C_R can be computed as

C_R = R ×_1 P^(1)T ×_2 P^(2)T ⋯ ×_O P^(O)T

The number of nonzero diagonal elements in Σ^(m), the matrix of m-mode singular values, defines the m-rank. The HOSVD of an order-O tensor R is computationally complex, comparable in difficulty to O matrix SVDs.
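The definition above can be sketched in a few lines of NumPy (the helper names are illustrative, not from the paper): each factor matrix P^(m) is taken from the SVD of the m-mode unfolding of R, and the core tensor is obtained by projecting R onto the transposed factors.

```python
import numpy as np

def unfold(T, mode):
    """m-mode unfolding: the chosen mode becomes the rows, the rest are flattened."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """m-mode product T x_m M: multiply matrix M along the given mode of T."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """HOSVD: returns core C and factors P(m) with T = C x_1 P(1) ... x_O P(O)."""
    # one SVD per mode, as costly as O matrix SVDs
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
               for m in range(T.ndim)]
    core = T
    for m, P in enumerate(factors):
        core = mode_product(core, P.T, m)   # project onto P(m)^T
    return core, factors
```

Multiplying the core back by every factor reproduces the original tensor exactly, mirroring the two defining equations above.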

Truncated HOSVD
An efficient and approximate solution is to compute only the greatest m-mode singular values. The low m-rank approximation R' of the tensor R, obtained after determining the matrices of dominant m-mode singular vectors and retaining the dominating m-ranks, is referred to as the truncated HOSVD [23]. The HOSVD has been employed in several applications across a wide range of signal processing domains. Using the truncated HOSVD as a preprocessing step for dimensionality reduction in multilinear signal processing methods is extremely promising, since the computational complexity may be greatly decreased.
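The truncation can be sketched by keeping only the dominant m-mode singular vectors in each mode (an illustrative helper, not the paper's code):

```python
import numpy as np

def truncated_hosvd(T, ranks):
    """Low multilinear-rank approximation R' of T keeping the dominant m-ranks."""
    core, factors = T, []
    for m in range(T.ndim):
        Tm = np.moveaxis(T, m, 0).reshape(T.shape[m], -1)
        # keep only the ranks[m] dominant m-mode singular vectors
        P = np.linalg.svd(Tm, full_matrices=False)[0][:, :ranks[m]]
        factors.append(P)
        core = np.moveaxis(
            np.tensordot(P.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    # expand back: R' = core x_1 P(1) ... x_O P(O)
    for m, P in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(P, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core
```

When the tensor truly has low multilinear rank, the truncated reconstruction is exact; otherwise it is the dimensionality-reduced approximation described above.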

L-Moment
The L-moment analysis is a statistical method used to analyze probability distributions. As the HOSVD method decomposes the EEG signal into its spatial, spectral, and temporal components, the L-moment provides the distribution information of each component. The approach supports the identification of patterns in the signal that would not be apparent using traditional signal processing techniques, thus providing more accurate and reliable results [24].
In statistical theory, using cumulants and joint cumulants for univariate and multivariate distributions is one well-established method for Higher-Order Statistics.These are extended in time series analysis to higher-order spectra, such as the bispectrum and trispectrum.
L-moments, which are linear statistics (linear combinations of order statistics) and thus more reliable than HOS, can be used as an alternative to HOS and higher moments. L-moments are a series of statistics used to condense a probability distribution's form. The L-scale, L-skewness, and L-kurtosis are linear combinations of order statistics (L-statistics) that are comparable to traditional moments and may be used to derive quantities similar to the standard deviation, skewness, and kurtosis, respectively, where the L-mean is identical to the conventional mean. Standardized moments are equivalent to standardized L-moments, also known as L-moment ratios. A theoretical distribution has a collection of population L-moments, similar to conventional moments. For a sample taken from the population, sample L-moments are established and utilized as estimators of the population L-moments.
The nth population L-moment for a random variable Z is

λ_n = n^(−1) Σ_{k=0}^{n−1} (−1)^k C(n−1, k) E[Z_{n−k:n}]

where E stands for the expected value, C(n−1, k) is the binomial coefficient, and Z_{k:N} represents the kth order statistic (kth least value) in an independent sample of size N from the distribution of Z.
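Sample L-moments can be estimated from the order statistics via probability-weighted moments; the sketch below is an illustrative helper following Hosking's standard estimators, not the paper's code, and returns the first four sample L-moments:

```python
import numpy as np

def l_moments(x, nmom=4):
    """Sample L-moments via probability-weighted moments (PWMs).

    Returns [l1, l2, l3, l4]: L-mean, L-scale, and the unscaled third and
    fourth L-moments (divide by l2 to obtain L-skewness and L-kurtosis).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # b_r estimates E[X F(X)^r] from the order statistics
    b = np.zeros(nmom)
    b[0] = x.mean()
    for r in range(1, nmom):
        i = np.arange(r + 1, n + 1)                      # 1-based ranks r+1..n
        w = np.prod([(i - 1 - k) / (n - 1 - k) for k in range(r)], axis=0)
        b[r] = np.mean(np.concatenate([np.zeros(r), w * x[r:]]))
    # shifted-Legendre combinations of the b_r give the L-moments
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return np.array([l1, l2, l3, l4])
```

For a symmetric sample the third L-moment vanishes, just as the conventional skewness does, while the L-mean equals the ordinary mean.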
The recommended HOL-SSA technique is utilized to decompose the single-channel signals into multivariate data, which are subsequently used to recover the source signals using the Online Recursive ICA (ORICA) [25] approach (Algorithm 1).
Step 2: Map the signal vector to a matrix. In the embedding stage, the time series s of length l is mapped into a tensor R: s is segmented using a nonoverlapping window of size i, and an [l/i] × i matrix M is obtained from s.

L refers to the last slab of the tensor. The matrix M is converted to the tensor R by considering each slab of the tensor as a windowed version of M. Because the application of SSA to real data does not exploit the inherent nonstationarity, and may therefore fail in actual data decomposition, tensor-based SSA is a robust solution to this problem.
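The nonoverlapping-window segmentation of step 2 can be sketched as follows (an illustrative helper; trailing samples that do not fill a whole window are dropped):

```python
import numpy as np

def embed_to_matrix(s, i):
    """Step-2 sketch: segment a series s of length l with a nonoverlapping
    window of size i into an (l // i) x i matrix M."""
    s = np.asarray(s)
    n = (len(s) // i) * i          # keep only whole windows
    return s[:n].reshape(-1, i)
```

Stacking windowed versions of M as slabs then yields the tensor R on which the HOSVD of the next step operates.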
Step 3: Decompose the signal using HOSVD. The truncated HOSVD of the converted tensor R of order O, with the dominant m-ranks for m = 1, 2, …, O, is computed.
Step 4: Determine the linear moments of the HOSVD. The nth population L-moment of the tensor, computed over the O order statistics of a decomposed sample from the distribution of the core tensor C_R, is

λ_n = n^(−1) Σ_{k=0}^{n−1} (−1)^k C(n−1, k) E[C_{R, n−k:O}]

where E is the expected value. Here, Y represents the total number of groups, z refers to the subgroups of eigenvalues, and M_z denotes the sum of matrices within group z, so that the grouped decomposition is M = Σ_{z=1}^{Y} M_z. Secondly, each matrix of the grouped decomposition is Hankelized, after which the Hankel matrix is transformed into a new series of length l. The diagonal averaging applied to the resultant matrix produces a reconstructed series. Thus, the initial series s_1, …, s_l is decomposed into a sum of r reconstructed subseries:

s_t = Σ_{j=1}^{r} s̃_t^(j), t = 1, …, l
This decomposition is the main result of the HOL-SSA algorithm. The decomposition is meaningful if each reconstructed subseries can be categorized as a single periodic component or as noise. Accordingly, the online recursive ICA technique is used here for component separation, as indicated in the step that follows.
Step 6: Apply ORICA to the multivariate data matrix; for each iteration, the whitening matrix and the demixing matrix are computed. In order to reverse the mixing, the inverse matrix of the reconstructed subseries is built. The independent components are produced by applying the ORICA rule after applying the Sherman-Morrison matrix inversion method.
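The Sherman-Morrison formula referred to in this step updates an existing inverse after a rank-one change, which is what keeps each recursive demixing update cheap in an online setting. A minimal sketch of the update itself (illustrative, not the ORICA implementation):

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Given A^{-1}, return (A + u v^T)^{-1} without recomputing a full inverse.

    Cost is O(n^2) per update instead of the O(n^3) of a fresh inversion,
    which is why recursive/online ICA schemes rely on it.
    """
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)
```

Each new sample contributes a rank-one correction to the (de)mixing estimate, so the inverse can be propagated sample by sample with this identity.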
S^(−1) refers to the demixing matrix of the r reconstructed subseries.
Step 7: Output the mapped sources of interest into original signal form.

Time Complexity:
The characteristics of the denoised EEG data are then extracted using the Common Spatial Pattern (CSP) technique. In line with this, a two-phase classification strategy has also been suggested and tested on the four-class motor imagery EEG data. Cross-comparison tests also demonstrated that the suggested two-phase classification approach, comprising an Artificial Neural Network and an Adaptive Support Vector Machine, achieves greater classification accuracy than the existing single-stage and two-stage classification approaches [26].
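A minimal CSP sketch (illustrative; this is the common whitening-based variant, not necessarily the exact implementation used in the paper) computes spatial filters from the average class covariances:

```python
import numpy as np

def csp_filters(X1, X2):
    """CSP sketch: X1, X2 have shape (trials, channels, samples), one per
    MI class. Returns the filter matrix W whose rows project the signals;
    the first rows maximize class-1 variance, the last maximize class-2."""
    def avg_cov(X):
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]   # trace-normalized
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # whitening transform of the composite covariance C1 + C2
    d, U = np.linalg.eigh(C1 + C2)
    P = (U / np.sqrt(d)).T                     # P (C1 + C2) P^T = I
    # eigendecomposition of the whitened class-1 covariance
    vals, B = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(vals)[::-1]             # most discriminative first
    return B[:, order].T @ P
```

The log-variance of a few top and bottom filtered channels is the usual CSP feature vector handed to the downstream classifiers.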

Dataset Description
The suggested model is assessed using the HGD, a different dataset, to confirm its resilience to data fluctuations. The HGD contains four classes (left hand, right hand, both feet, and rest) and more trials than the BCI-IV 2a. Fourteen individuals provided the HGD, which was gathered in a controlled environment. Just 21 of the 128 channels used to acquire the data, which had a sampling frequency of 500 Hz, were associated with MI.
The HGD data were downsampled from 500 Hz to 250 Hz to improve data quality. In addition, the channels were reduced from 128 to 21 in order to discard redundant information: electrodes that are not linked to the motor imagery region were left out. As the database description states, only the 21 sensors with the letter C in their name were chosen, since they cover the motor cortex.
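The downsampling and channel-selection steps above can be sketched as follows (an illustrative helper, not the paper's code; `decimate` applies an anti-aliasing filter before subsampling):

```python
import numpy as np
from scipy.signal import decimate

def preprocess_hgd(data, ch_names, fs=500, target_fs=250):
    """Sketch of the HGD preprocessing described above: keep only
    motor-cortex channels (names containing 'C') and downsample
    from 500 Hz to 250 Hz. `data` is (channels, samples)."""
    keep = [i for i, name in enumerate(ch_names) if 'C' in name]
    data = data[keep]
    q = fs // target_fs                 # decimation factor = 2
    return decimate(data, q, axis=1), [ch_names[i] for i in keep]
```

On a real HGD recording the channel list would come from the dataset metadata; the selection rule shown is exactly the "letter C in the name" criterion quoted above.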

Performance Analysis
The analysis of artifact removal on the HGD motor imagery signals using the proposed approach is discussed below, with reference to Figure 1. Note that executing ICA requires that bad channels be rejected first. The entire dataset should be cycled through in order to visually detect faulty channels, because some of them may be harmful only intermittently; in such cases, removing the erroneous data segment rather than the channel itself may be better. Plotting the channels' spectra is another approach to spotting problematic channels, and known bad channels can be rejected using the pop_select.m function. Moreover, as filtering might scatter the artifacts over clean data, necessitating additional data to be discarded after filtering, it may be desirable to remove data segments containing substantial artifacts, such as high spikes in the data, by visual examination before filtering.
After band-pass filtering of the signals, Figure 3 shows the channel data. Before filtering, it is also preferable to eliminate data segments having significant artifacts, such as large spikes in the data, by visual inspection. Problematic data segment deletion is seen in Figures 4 and 5.
Although epoched data can also be filtered, screening continuous EEG data before epoching or artifact removal is advised, since it reduces the introduction of filtering artifacts at epoch borders. It may also be beneficial to high-pass filter the data to eliminate linear trends; applying high-pass filtering at 1 Hz is recommended to generate signal decompositions of high quality.

Moreover, when large artifacts are removed, as seen in Figure 6, a "border" event replaces the deleted data. Any portion of the continuous data can be rejected or removed in the eegplot.m window; after portions of the data have been flagged for rejection, a new dataset is created. The components are listed in decreasing order of the EEG variance that each component accounts for. EEG datasets always contain eye artifacts, which frequently occupy the top spots in both their scalp topographies and the component array.
All of the component topoplots are shown in Figure 7. The scalp map for component 21, depicted in Figure 8, illustrates the existence and extent of artifacts in the EEG data. This component appears to carry a significant level of muscular artifact, and Figure 9 displays the corresponding activity spectrum. Ocular artifacts typically occupy the highest locations in their scalp topographies. Component 21, however, may not be identified as an eye artifact, since neither the ERP findings in Figure 10 nor the scalp map shows the significant far-frontal projection that characterizes eye artifacts.
Relatively, Figure 11 depicts the scalp map of component 1, which has fewer artifacts and more EEG signal. Figures 12 and 13, respectively, display its activity power spectrum and ERP map. Table 1 lists the artifacts present in each component. Following artifact removal, the pruned data are shown in Figures 14 and 15, where the artifact-free signals are depicted in red. Removing artifact regions that contain distinct artifacts therefore proves quite advantageous for generating pure independent components. The signals with the artifacts removed are then transmitted for feature extraction and classification.

The following comparison evaluates the classification performance of the artifact-free HGD motor imagery signals using the proposed ANN + A-SVM model against other approaches tested on the identical HGD motor imagery EEG signals. The classification performance is evaluated under different metrics: accuracy, precision, recall, K-score, F1-score, and misclassification rate. The accuracy reported is 95.24%, with an average K value of 0.94. The precision, recall, and F1-score evaluated for the four classes (Left, Right, Feet, and Rest) of all 14 subjects are reported in the analysis, as shown in Table 2. The average misclassification rate of 0.047 is better than that of the existing approaches. This performance analysis is graphically represented in Figure 16, and the classification performance is also represented through the confusion matrices in Figure 17. The confusion matrices are shown for four subjects, S4, S5, S13, and S14, for which the prediction values are found to be best. Table 3 shows the performance comparison between the proposed model and other models; in particular, the classification accuracy of every subject and the average classification accuracies obtained by DeepConvNet, EEGNet, CP-MixedNet, TS-SEFFNet, MBEEGNet, and MBShallowCovNet on the HGD dataset are summarized in Table 3. Our method achieves the highest average accuracy, 95.24%, among all the compared approaches except MBEEGNet, which reaches 95.30%. The comparison is graphically presented in Figure 18.
Table 4 shows the performance comparison between the proposed model and other models. The average classification accuracies from the BCI-IV 2a and HGD Motor Imagery datasets are summarized in the table. Evaluated on the two public datasets, the proposed model proved to perform better than the other models.

Conclusions
In this research article, a new method for removing artifacts from EEG signals has been put forward. The proposed HOL-SSA involves a Higher-Order Linear-Moment-based Singular Value Decomposition within SSA, in place of the conventional SVD, to decompose the EEG signals into multivariate components, from which the source signals are extracted using ORICA.

Step 5: Reconstruct the original signal to a multivariate data matrix. The matrices from step 4 are grouped into submatrices, as given below.

Figure 2. Original channel data of MI signals.


Figure 3. Channel data after filtering.


Figure 4. Rejection of bad data.


Figure 7. Topoplots of the independent components.


Figure 18. Comparison of classification performance chart.


Table 1. EEG and artifacts present in the observed signals.


Table 2. Classification performance on the HGD dataset using the proposed model.

Table 4. The comparison summary of classification performance among different models under different datasets.