An Emotion Assessment of Stroke Patients by Using Bispectrum Features of EEG Signals

Emotion assessment in stroke patients gives physiotherapists meaningful information for identifying the appropriate treatment method. This study aimed to classify the emotions of stroke patients by applying bispectrum features of electroencephalogram (EEG) signals. EEG signals from three groups of subjects, namely stroke patients with left brain damage (LBD), stroke patients with right brain damage (RBD), and normal controls (NC), were analyzed for six different emotional states. The estimated bispectra, mapped in contour plots, show different appearances of nonlinearity in the EEG signals for the different emotional states. Bispectrum features were extracted from the alpha (8–13 Hz), beta (13–30 Hz) and gamma (30–49 Hz) bands. The k-nearest neighbor (KNN) and probabilistic neural network (PNN) classifiers were used to classify the six emotions in the LBD, RBD and NC groups. The bispectrum features showed statistical significance for all three groups. The beta band was the best-performing individual EEG frequency sub-band for emotion classification, while the combination of the alpha to gamma bands provided the highest classification accuracy with both the KNN and PNN classifiers. The sadness emotion recorded the highest classification accuracy: 65.37% in the LBD, 71.48% in the RBD and 75.56% in the NC group.


Introduction
Stroke is one of the leading causes of death in Malaysia, with more than 40,000 survivors managing their health today [1]. Globally, there were 6.2 million deaths caused by stroke in 2017, with the highest stroke mortality rates reported in Eastern Europe, Africa, and Central Asia [2]. Stroke is caused by an insufficient supply of oxygen to the brain, which damages brain cells; this in turn impairs the brain functions served by the affected region.

For the indirect method, the bispectrum is estimated by first estimating the third-order cumulants of the random process x(k). The n-th-order moment is the expectation over the process multiplied by (n − 1) lagged versions of itself. Therefore, the third-order moment, m_{3x}, is:

m_{3x}(\tau_1, \tau_2) = E[x(k)\, x(k + \tau_1)\, x(k + \tau_2)],

where E[·] denotes the statistical expectation operation, and τ1 and τ2 are the lags of the moment sequence. For a zero-mean process, the third-order cumulant sequence, C_{3x}(τ1, τ2), is identical to its third-order moment sequence. It is calculated by taking an expectation over the process multiplied by two lagged versions of itself:

C_{3x}(\tau_1, \tau_2) = E[x(k)\, x(k + \tau_1)\, x(k + \tau_2)].

The bispectrum, B(f1, f2), is the 2D Fourier transform of the third-order cumulant function:

B(f_1, f_2) = \sum_{\tau_1 = -\infty}^{\infty} \sum_{\tau_2 = -\infty}^{\infty} C_{3x}(\tau_1, \tau_2)\, e^{-j(f_1 \tau_1 + f_2 \tau_2)},

for |f1| ≤ π, |f2| ≤ π, and |f1 + f2| ≤ π.
The bispectrum is a symmetric function, as shown in Figure 1. The shaded area is the non-redundant region of computation of the bispectrum, where f2 ≥ 0, f2 ≥ f1, f1 + f2 ≤ π, which is sufficient to describe the whole bispectrum [36].
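The indirect estimation described above (third-order cumulants followed by a 2D Fourier transform) can be sketched in a few lines. This is a simplified illustration, not the paper's exact implementation: the HOSA `bispeci` routine additionally segments, windows and averages the cumulant estimate, and the lag range and FFT size here are arbitrary choices.

```python
import numpy as np

def bispectrum_indirect(x, maxlag=32, nfft=128):
    """Estimate the bispectrum of a 1-D signal via the indirect method:
    third-order cumulants first, then a 2-D Fourier transform.
    A minimal sketch without the lag windowing of a full estimator."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # zero-mean: cumulant == moment
    n = len(x)
    lags = range(-maxlag, maxlag + 1)
    # third-order cumulant C3(t1, t2) = E[x(k) x(k+t1) x(k+t2)]
    c3 = np.zeros((2 * maxlag + 1, 2 * maxlag + 1))
    for i, t1 in enumerate(lags):
        for j, t2 in enumerate(lags):
            k = np.arange(n)
            ok = (k + t1 >= 0) & (k + t1 < n) & (k + t2 >= 0) & (k + t2 < n)
            k = k[ok]
            c3[i, j] = np.mean(x[k] * x[k + t1] * x[k + t2])
    # bispectrum B(f1, f2) = 2-D DFT of the cumulant sequence
    return np.fft.fftshift(np.fft.fft2(c3, s=(nfft, nfft)))

rng = np.random.default_rng(0)
sig = rng.standard_normal(768)            # one 6 s epoch at 128 Hz
B = bispectrum_indirect(sig)
print(B.shape)                            # (128, 128)
```

Because C3(τ1, τ2) is symmetric in its two lags, the resulting bispectrum matrix is symmetric about its diagonal, matching the symmetry regions of Figure 1.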

EEG Data
The EEG database used in this study was collected from stroke patients with left brain damage (LBD) or right brain damage (RBD), and from normal control (NC) subjects, at the Hospital Canselor Tuanku Muhriz (HCTM), Kuala Lumpur (formal approval was obtained from the UKM Medical Center and Ethics Committee for Human Research, reference no. UKM 1.5.3.5/244/FF-354-2012). The raw EEG signals of 15 subjects from each group (LBD, RBD, and NC) were used for the analysis. The background and neurophysiological characteristics of the subjects in the three groups are described in Table 1. All subjects passed the Mini-Mental State Examination (MMSE), conducted to exclude dementia, with scores of more than 24 out of a total of 30 points. The subjects also passed the Beck Depression Inventory (BDI) with scores of less than 18 points, to exclude subjects with psychological problems. The Edinburgh Handedness Inventory (EHI) was used to determine the handedness of the subjects, measured on a scale from −1 to 1 and interpreted as pure left-hander for a score of −1, mixed left-hander for −0.5, neutral for 0, mixed right-hander for 0.5 and pure right-hander for 1. From Table 1, the scores show that all subjects were right-handers. All subjects self-reported normal or corrected-to-normal vision (with spectacles or contact lenses) to ensure the proper perception of emotions from the audio-visual stimuli.

Table 1. Background and neurophysiological characteristics (mean ± std) of left brain damage (LBD), right brain damage (RBD), and normal control (NC) subjects.

The EEG data were collected using a 14-channel wireless EEG device, the Emotiv EPOC headset, with a built-in digital 5th-order Sinc filter. The electrode placement was based on the international standard 10-20 system, as shown in Figure 2. The EEG data were collected at a sampling frequency of 128 Hz.
One limitation of EEG is its poor spatial resolution compared to high-resolution brain imaging modalities, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) scans [37]. However, the Emotiv EPOC device has 14 electrodes with 2 references, providing appropriate spatial resolution while remaining practical in terms of time and cost for this study. Moreover, the EEG device provides high-temporal-resolution data that record neural activity changes within milliseconds, which is impossible for fMRI and PET scans.

To collect the emotional EEG data, an emotional elicitation protocol was designed to stimulate the emotional states of the subjects. The data collection protocol is shown in Figure 3. The stimuli used to evoke the emotions were audio-visual, in the form of video clips edited from the International Affective Picture System (IAPS) and the International Affective Digitized Sounds (IADS). Six emotional video clips were presented to stimulate six discrete emotions, namely anger (A), disgust (D), fear (F), happiness (H), sadness (S) and surprise (SU) [12].

Prior to the experiment, the subjects were asked to complete the MMSE, BDI, and EHI tests, and informed consent was obtained from them. Then, instruction about the experimental procedure was given to the subjects. The experiment started with a sample video clip, followed by six trials of video clips displayed continuously; emotional EEG signals were recorded during these six trials. After that, the EEG recording was stopped for self-assessment, where the subjects were asked about the emotions they felt or perceived from the video clips. The self-assessment time was at least 1 min and was subject-dependent; during this period, subjects were asked to relax and get ready for the next video, to avoid stimulus-order effects. The actual experiment began with the sadness emotion, and the same procedure was repeated for the happiness (H), fear (F), disgust (D), surprise (SU) and anger (A) emotions. There were a total of 42 video clips, including the sample clips. The duration of each video clip was 46 s to 1 min; therefore, the total duration of the data collection was between 90 and 120 min.

Preprocessing
A total of 36 trials of EEG signals were collected from each subject in all groups (LBD, RBD and NC). The collected EEG signals were preprocessed to remove the noise and artifacts that interfered with the raw signals. The preprocessing of the EEG signals was performed in MATLAB.
The artifacts due to eye blinks were filtered using a thresholding method, in which potentials higher than 80 µV or lower than −80 µV were offset from each raw EEG signal [32]. A 6th-order Butterworth bandpass filter with cut-off frequencies of 0.5 to 49 Hz was used to filter the EEG signals, extracting the delta to gamma frequency bands [32].
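The two preprocessing steps (amplitude thresholding, then Butterworth band-pass filtering) can be sketched as follows. This is an illustrative sketch, not the paper's MATLAB code: the exact offset procedure for out-of-range samples is not specified in the text, so simple clipping at ±80 µV is assumed here, and zero-phase filtering via `filtfilt` is an added convenience.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # sampling rate of the Emotiv EPOC headset, Hz

def preprocess(eeg, threshold=80.0, fs=FS):
    """Suppress eye-blink artifacts beyond +/-80 uV (clipping assumed),
    then band-pass 0.5-49 Hz with a Butterworth filter. A band-pass
    design doubles the prototype order, so order 3 here yields a
    6th-order filter overall."""
    cleaned = np.clip(eeg, -threshold, threshold)
    b, a = butter(3, [0.5, 49.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, cleaned)      # zero-phase filtering

rng = np.random.default_rng(1)
raw = 100.0 * rng.standard_normal(768)  # synthetic raw epoch, in uV
out = preprocess(raw)
print(out.shape)                        # (768,)
```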

Feature Extraction
The indirect method was used in this study to estimate the bispectrum, using the bispeci function in the MATLAB Higher Order Statistics Toolbox. The number of points used to form each fast Fourier transform (NFFT) was 1024. The bispectrum features were extracted using a Hanning window with 50% overlap. The preprocessed time-domain EEG data were segmented into six-second lengths for every channel; each segment, also known as an epoch, contains 768 data points. Three EEG frequency sub-bands were used for the analysis, namely the alpha (8–13 Hz), beta (13–30 Hz) and gamma (30–49 Hz) bands.
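The segmentation step above is straightforward to reproduce: at 128 Hz, a six-second epoch holds 128 × 6 = 768 samples. A minimal sketch (non-overlapping epochs assumed; the paper's exact trial windowing may differ):

```python
import numpy as np

FS = 128          # sampling frequency, Hz
EPOCH_SEC = 6     # epoch length used in the paper, s

def segment_epochs(channel, fs=FS, epoch_sec=EPOCH_SEC):
    """Split one EEG channel into non-overlapping 6 s epochs of
    fs * epoch_sec = 768 samples each, discarding any remainder."""
    epoch_len = fs * epoch_sec
    n_epochs = len(channel) // epoch_len
    return np.reshape(channel[:n_epochs * epoch_len], (n_epochs, epoch_len))

trial = np.arange(36 * 128.0)    # a 36 s single-channel trial
epochs = segment_epochs(trial)
print(epochs.shape)              # (6, 768)
```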
Bispectrum features were computed from the non-redundant region (Ω) of the bispectrum. The features extracted from each epoch were the variance (v), the sum of logarithmic amplitudes of the bispectrum (H1), the sum of logarithmic amplitudes of the diagonal elements of the bispectrum (H2), the first-order spectral moment of the diagonal elements of the bispectrum (H3), the second-order spectral moment of the diagonal elements of the bispectrum (H4) and the moment of the bispectrum (H5).
The variance, v, of the bispectrum was computed as:

v = \frac{1}{N} \sum_{i=1}^{N} (B_i - \mu)^2,

where N is the total number of bispectrum values in Ω, µ is the mean of the bispectrum in Ω, and B_i is the bispectrum series for i = 1, 2, 3, . . . , N.

The sum of logarithmic amplitudes of the bispectrum (H1):

H1 = \sum_{\Omega} \log(|B(f_1, f_2)|),

where Ω is the non-redundant region of the bispectrum, f1 and f2 are the frequency variables of the bispectrum, and B(f1, f2) is the bispectrum value at f1 and f2 in Ω.

The sum of logarithmic amplitudes of the diagonal elements of the bispectrum (H2):

H2 = \sum_{\Omega} \log(|B(f_m, f_m)|),

where B(fm, fm) is the diagonal element of the bispectrum in Ω.

The first-order spectral moment of the amplitudes of the diagonal elements of the bispectrum (H3):

H3 = \sum_{m=1}^{N} m \log(|B(f_m, f_m)|),

where B(fm, fm) is the diagonal element of the bispectrum in Ω, N is the total number of diagonal elements of the bispectrum in Ω, and m = 1, 2, 3, . . . , N.

The second-order spectral moment of the amplitudes of the diagonal elements of the bispectrum (H4):

H4 = \sum_{m=1}^{N} (m - H3)^2 \log(|B(f_m, f_m)|),

with the same notation as for H3.

The moment of the bispectrum (H5):

H5 = \sum_{\Omega} |B(f_1, f_2)|,

where f1 and f2 are the frequency variables of the bispectrum and B(f1, f2) is the bispectrum value at f1 and f2 in Ω.
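The six epoch-level features can be computed directly from the bispectrum values. The sketch below follows common higher-order-spectra feature definitions consistent with the descriptions in the text; the upper triangle of the matrix is used as a crude stand-in for Ω, and the expression for H5 in particular is an assumption, since the paper's exact formula is not reproducible from the extracted text.

```python
import numpy as np

def bispectrum_features(B2):
    """Six features from a square bispectrum matrix. The upper triangle
    approximates the non-redundant region Omega, and np.diag(B2) stands
    in for the diagonal elements B(f_m, f_m). Formulas follow standard
    HOS feature definitions; H5 (sum of amplitudes) is an assumption."""
    mag = np.abs(B2)
    omega = mag[np.triu_indices_from(mag)]       # stand-in for Omega
    diag = np.diag(mag)                          # |B(f_m, f_m)|
    N = len(diag)
    m = np.arange(1, N + 1)
    mu = omega.mean()
    v = np.mean((omega - mu) ** 2)               # variance
    H1 = np.sum(np.log(omega))                   # sum of log amplitudes
    H2 = np.sum(np.log(diag))                    # log amplitudes of diagonal
    H3 = np.sum(m * np.log(diag))                # 1st-order spectral moment
    H4 = np.sum((m - H3) ** 2 * np.log(diag))    # 2nd-order spectral moment
    H5 = np.sum(omega)                           # moment of bispectrum
    return {"v": v, "H1": H1, "H2": H2, "H3": H3, "H4": H4, "H5": H5}

rng = np.random.default_rng(3)
B2 = rng.random((16, 16)) + 1j * rng.random((16, 16)) + 1.0  # nonzero toy values
feats = bispectrum_features(B2)
print(sorted(feats))
```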

Statistical Analysis
One-way analysis of variance (ANOVA) was used to test for significant differences in the bispectrum features among the six emotion classes for LBD, RBD and NC, respectively; that is, to analyze statistically whether there were differences among the class means of the six emotions. ANOVA requires the assumptions that the observations of each feature are approximately normally distributed, that the observations are independent, and that the class variances are equal. The null hypothesis was: "All the emotion classes of the extracted feature have equal means." The null hypothesis was rejected, and a bispectrum feature was validated as statistically significant among the six emotional states, if the p-value was less than or equal to 0.05. When the null hypothesis was not rejected, all the emotion classes of the extracted feature were taken to have equal means; such a feature was therefore not suitable for emotion classification.
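The feature-screening rule above can be illustrated with a one-way ANOVA on synthetic data. The class means and sizes below are hypothetical placeholders, not the paper's data; the decision rule (keep the feature only if p ≤ 0.05) is the one described in the text.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical values of one bispectrum feature (e.g. H1) for the six
# emotion classes of one subject group -- synthetic, for illustration.
rng = np.random.default_rng(42)
class_means = (0.0, 0.1, 0.2, 0.0, 0.6, 0.1)
classes = [rng.normal(loc=mu, scale=1.0, size=90) for mu in class_means]

F, p = f_oneway(*classes)      # one-way ANOVA across the six classes
significant = p <= 0.05        # feature kept for classification only if True
print(round(F, 2), significant)
```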

Classification
Each feature used for classification has a total of 90 trials (6 trials × 15 subjects) with 84 feature vectors (14 channels × 6 windows) for each emotion. The k-nearest neighbor (KNN) and probabilistic neural network (PNN) classifiers were used to classify the six emotions in the three groups (LBD, RBD and NC). The KNN is one of the most widely applied classifiers due to its low complexity and fast decision-making. The KNN searches for the training samples nearest to the unknown sample, as measured by a distance metric. In this study, the Cityblock distance metric was implemented in the KNN classification [38].
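A minimal KNN with the Cityblock (L1) distance can be sketched as follows. The toy training points are illustrative only; k = 1 corresponds to the optimum k value reported later in this study.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=1):
    """k-nearest-neighbor classification with the Cityblock (L1)
    distance: find the k closest training samples and take a
    majority vote over their labels."""
    preds = []
    for x in X_test:
        d = np.sum(np.abs(X_train - x), axis=1)   # Cityblock distance
        nearest = y_train[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])     # majority vote
    return np.array(preds)

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y_train = np.array([0, 0, 1])
print(knn_predict(X_train, y_train, np.array([[4.5, 5.2]]), k=1))  # [1]
```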
The PNN uses the Parzen window for a nonparametric approximation of the probability density function (PDF) of each class and applies Bayes' rule to allocate new input data to the class with the highest probability according to its PDF [39]. The classifier parameter is the spread value, which is proportional to the standard deviation of the Parzen window in the PNN. A small spread value gives a narrow PDF, whereas a large spread value gives a wide PDF and makes the classifier less selective [40,41].
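The PNN decision rule amounts to a Gaussian Parzen-window density estimate per class, followed by an argmax. The sketch below illustrates this under that assumption, with toy one-dimensional data; spread = 0.4 matches the optimum value reported later in this study.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, spread=0.4):
    """Probabilistic neural network: estimate each class PDF with a
    Gaussian Parzen window of width `spread`, then assign the class
    whose estimated density at the test point is highest."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        pdfs = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = np.sum((Xc - x) ** 2, axis=1)          # squared distances
            pdfs.append(np.mean(np.exp(-d2 / (2 * spread ** 2))))
        preds.append(classes[int(np.argmax(pdfs))])
    return np.array(preds)

X_train = np.array([[0.0], [0.2], [3.0], [3.1]])
y_train = np.array([0, 0, 1, 1])
print(pnn_predict(X_train, y_train, np.array([[2.9]])))  # [1]
```

A larger spread widens each Gaussian kernel, smoothing the class PDFs together, which is why the classifier becomes less selective at large spread values.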
In this work, k values of 1 to 15 were tested for the KNN, and spread values of 0.1 to 1.5, in increments of 0.1, were used for the PNN to classify the features. The performance of the classifiers was validated through 10-fold cross validation, in which 90% of the data were used for training and 10% for testing.
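The 10-fold validation scheme above can be sketched as follows. The fold construction and the simple 1-NN predictor used to exercise it are illustrative; the synthetic two-cluster data set stands in for the real feature vectors.

```python
import numpy as np

def ten_fold_accuracy(X, y, predict_fn):
    """10-fold cross validation: each fold holds out ~10% of the data
    for testing and trains on the remaining ~90%; returns mean accuracy."""
    folds = np.array_split(np.arange(len(X)), 10)
    accs = []
    for i in range(10):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(10) if j != i])
        preds = predict_fn(X[train], y[train], X[test])
        accs.append(np.mean(preds == y[test]))
    return float(np.mean(accs))

def nn1(X_tr, y_tr, X_te):
    """Nearest neighbor (k=1) with Cityblock distance, for illustration."""
    return np.array([y_tr[np.argmin(np.sum(np.abs(X_tr - x), axis=1))]
                     for x in X_te])

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
perm = rng.permutation(100)        # shuffle so each fold mixes both classes
acc = ten_fold_accuracy(X[perm], y[perm], nn1)
print(acc > 0.9)                   # True: well-separated clusters
```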

Results and Discussions
Bispectrum features were extracted from the EEG signals of the three groups of subjects (LBD, RBD, and NC) for the analysis of six emotions, namely anger (A), disgust (D), fear (F), happiness (H), sadness (S), and surprise (SU). The contour plots of the estimated bispectrum for the anger emotion of one subject from the LBD group are shown in Figure 4 for the alpha, beta and gamma bands. The plots of the bispectrum magnitude show the relationship between the two bispectrum frequency variables, f1 and f2, for the anger emotion. In Figure 4, f1 (x-axis) and f2 (y-axis) are phase-coupled. Phase-coupled frequency variables indicate the presence of quadratic phase coupling (QPC) [31], which represents the underlying neuronal interaction of the emotional state at the frequencies f1 and f2; a higher magnitude indicates stronger QPC between the frequencies. The red color represents the greatest increase in bispectrum magnitude, while the blue color represents the greatest decrease.
The distribution of the bispectrum over the (f1, f2) plane differs in each frequency band. The alpha band in Figure 4a shows more bispectrum distribution at lower phase-coupled frequencies, between (0.04, 0.04) Hz and (0.1, 0.1) Hz, in the non-redundant region and the other symmetry regions, whereas the beta band in Figure 4b and the gamma band in Figure 4c show bispectrum distributions at higher phase-coupled frequencies: between (0.1, 0.1) Hz and (0.2, 0.2) Hz in the beta band, and between (0.3, 0.3) Hz and (0.4, 0.4) Hz in the gamma band. Moreover, the maximum bispectrum magnitude in the alpha band is the lowest among the three frequency bands; the beta band has a larger maximum magnitude than the alpha band, while the gamma band has the largest of all. Figures 5-7 show the bispectrum plots in the non-redundant region and one symmetry region for the six emotions of Subject #1 from the NC, LBD and RBD groups, respectively. From these figures, the different emotional states have different bispectrum distributions over the plane, each with different phase-coupled peaks and maximum magnitudes.
In past studies, the bispectrum has been shown to be useful for signal classification, as it exhibits distinctive distributions under different conditions, such as left-hand versus right-hand motor imagery [42]; the bispectrum provides an EEG feature able to distinguish these two conditions. Another study showed that the bispectrum feature differs before and during meditation [43]: the bispectrum exhibited a more phase-coupled distribution during meditation than before it, and the maximum bispectrum magnitude increased during meditation. In a non-human experiment, induced ischemic stroke in rats showed different bispectrum distributions in different states of ischemia [44], with the bispectrum distribution decreasing as the rats went from the normal state to the ischemic state. Consequently, the distinctive bispectrum patterns of the six emotional states presented in this study imply that the emotional states of each group are distinguishable by applying bispectrum analysis. The significant differences between the emotional states based on the bispectrum features are further validated by statistical analysis using ANOVA, as shown in Table 2. In the experiment, six types of bispectrum features were extracted from the preprocessed EEG data of the LBD, RBD and NC groups. The ANOVA test was performed on the extracted features with 45,354 degrees of freedom. The results are shown in Table 2 for the three frequency bands in LBD, RBD and NC, respectively. A p-value less than or equal to 0.05 indicates that the differences between some of the means of the emotional states are statistically significant. The significant bispectrum features imply that there is an interaction of neuronal subcomponents at different frequencies in different emotional states.
The shaded p-values show the features that are not statistically significant between the means of the emotion classes, as they are larger than 0.05. All the bispectrum features were statistically significant in LBD, RBD and NC except the second-order moment of the diagonal elements of the bispectrum (H4); thus, H4 was discarded in the classification. Moreover, from Table 2, the F values are highest for H1 and H3 in LBD, for H2 and H5 in RBD, and for v, H5 and H2 in NC. The highest overall F values are in the LBD group, while NC has comparably smaller values than both the LBD and RBD groups.

In the emotion classification, the features were trained with varying k values for the KNN and spread values for the PNN, and the classifiers were tested for all groups and frequency bands. Figures 8-10 show the classification performance for varying k values in the three individual EEG frequency sub-bands (alpha, beta, and gamma) and the combination of the three bands, using the Cityblock KNN classifier. From the figures, the average accuracy of the bispectrum features was similar across all k values tested for the alpha, beta, and gamma bands; however, a k value of 1 achieved the highest average accuracy when using the features from the combined alpha to gamma bands. Moreover, the combination of the alpha to gamma bands performs significantly better than the other frequency bands for all k values, as shown in Figures 8-10.
The beta band, on the other hand, is the single band that performs best among the three EEG sub-bands.

In Figures 11-13, the average accuracies of the emotion classification for varying spread values using the PNN classifier are plotted for LBD, RBD and NC, respectively. For most of the features, the accuracies are consistent at spread values between 0.1 and 0.6; the accuracy then gradually drops when the spread value exceeds 0.6 and declines further as the spread value increases. The spread value of 0.4 was chosen to classify the six emotional states, as it achieved the optimum accuracy for most of the features. Similarly, the combination of frequency bands has the highest average accuracy for all spread values, and the beta band is the best-performing individual frequency band among the three sub-bands in all groups.

Table 3 shows the average accuracy over all emotional states of the bispectrum features from the combination of all bands from alpha to gamma, using the KNN and PNN classifiers. For both classifiers, the optimum parameters for LBD, RBD and NC are found to be the same: the optimum k value in the KNN classification for all three groups is 1, and the optimum spread value in the PNN classification is 0.4. In Table 3, the KNN is observed to have a higher average accuracy than the PNN classifier for all three groups. Notably, the H3 feature has the highest average accuracy in all three groups using the KNN, and achieves the highest accuracy in the LBD group using the PNN, whereas the H1 feature obtains the highest average accuracy in RBD and NC using the PNN. According to the results obtained, the top three features are H3, H1, and H2 for all the groups, whereas the worst-performing bispectrum feature is the variance. The highest average classification accuracy is 65.40%, in the NC group using the KNN. Hence, the H3 feature is considered the most effective bispectrum feature in this study. The confusion matrices of the H3 feature in the KNN emotion classification are presented in Tables 4-6 for LBD, RBD and NC, respectively. From Table 4, the most accurately predicted class in LBD is happiness. In Table 5, the RBD group has the highest predicted values for the sadness and surprise emotions, whereas the NC group has the highest predicted value for the sadness emotion in Table 6. For the PNN classification, the confusion matrices are presented in Tables 7-9 for each subject group. Likewise, the PNN classification predicted the happiness emotion most accurately in the LBD group, as shown in Table 7.
In addition, the sadness emotion has the highest classification accuracy in the RBD and NC groups, as shown in Tables 8 and 9.

The classification rates using the KNN classifier for the individual emotions are shown in Figure 14. From the figure, the emotion with the highest accuracy in all groups was sadness, where the LBD group achieved 65.37%, the RBD group achieved 71.48% and the NC group achieved 75.56%. Meanwhile, the fear emotion recorded the lowest accuracy in all three groups, which was 53.52%, 57.96% and 60.74% for LBD, RBD and NC, respectively.

The accuracy of the individual emotional states classified using the PNN classifier is shown in Figure 15. The PNN classifier has the same highest-accuracy emotion as KNN, which was sadness: the LBD group has 57.41%, RBD has 62.59% and NC has 65.19% classification accuracy for the sadness emotion using PNN. In Figure 15, the lowest classification accuracy for the LBD and NC groups is the surprise emotion, at 50.93% and 47.22%, respectively. In the RBD group, on the other hand, the disgust emotion recorded the lowest classification accuracy of only 50.00%.

In this work, surprise and fear achieved lower recognition rates compared to the other emotions. Happiness was the most accurately recognized emotion, along with the facial expressions for anger, sadness and disgust [45,46]. According to past studies, there is no convincing evidence that the surprise and fear emotions can be accurately recognized [47][48][49].
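The per-emotion classification rates reported above relate to confusion matrices like Tables 4-9: the rate for each emotion is the diagonal entry divided by the row total. A minimal sketch, where the matrix values are hypothetical rather than the study's data:

```python
import numpy as np

# Hypothetical 6x6 confusion matrix (rows = true emotion, cols = predicted).
emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
cm = np.array([
    [30,  5,  4,  3,  5,  3],
    [ 4, 28,  6,  4,  4,  4],
    [ 5,  6, 27,  4,  4,  4],
    [ 3,  4,  4, 32,  4,  3],
    [ 2,  3,  3,  3, 37,  2],
    [ 4,  4,  5,  4,  4, 29],
])

# Per-emotion classification rate: correct predictions / total true samples.
per_class = cm.diagonal() / cm.sum(axis=1)
for name, acc in zip(emotions, per_class):
    print(f"{name:9s}: {acc:.2%}")
```

In this illustrative matrix, sadness has the highest per-class rate (37/50 = 74%), mirroring the pattern reported in the text.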
The emotional state with the highest classification accuracy in each group (LBD, RBD and NC) is the emotion most distinguishable from the other emotional states in that group. In the current results, all three groups, LBD, RBD and NC, show the highest classification accuracy for sadness. Meanwhile, the NC group exhibits the highest average accuracy for both classifiers, followed by the RBD group, with the LBD group trailing behind.
As a result, in this study, the LBD and RBD stroke patients recorded lower classification accuracies compared to the NC group, which suggests that the emotional states of the NC subjects are more distinguishable than those of the stroke patients. To validate the differences among the three groups, ANOVA was used to test for a statistical difference among the average accuracies obtained from the KNN classifier, and the resultant p-value was less than 0.05. Hence, the emotion classification accuracies for LBD, RBD and NC were significantly different, which implies that there are differences in the emotional experiences between the LBD, RBD and NC groups. From this work, the NC group was observed to have the highest emotion classification accuracy, followed by the RBD group, with the LBD group performing worst. Therefore, the NC group has the highest efficiency in EEG emotion classification using machine learning, while the LBD group has the lowest. This work extends past studies in which only second-order statistical measures, such as the power spectrum [50,51], a linear feature, were used. The power spectrum can only reveal amplitude information about the EEG signals; phase information, such as phase coupling in the signal, cannot be observed with the power spectrum. Furthermore, linear approaches ignore the nonlinear characteristics of the EEG signals; thus, the bispectrum was implemented in this study to detect and characterize the nonlinearities of EEG signals. Also, the bispectrum was able to provide distinctive information for the different emotional states, which was useful for emotion classification, achieving the highest accuracy of 75.56% using the H3 bispectrum feature.
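The one-way ANOVA check described above can be reproduced in outline as follows. The per-emotion accuracy values here are illustrative only: the sadness and fear entries match the figures quoted in the text, while the other four are hypothetical placeholders, so the resulting F and p values are not the study's.

```python
import numpy as np
from scipy.stats import f_oneway

# Per-emotion KNN accuracies (%) for each group, in the order:
# anger, disgust, fear, happiness, sadness, surprise.
# Fear and sadness values are from the text; the rest are hypothetical.
lbd = np.array([60.0, 58.0, 53.52, 62.0, 65.37, 55.0])
rbd = np.array([66.0, 63.0, 57.96, 68.0, 71.48, 60.0])
nc  = np.array([70.0, 67.0, 60.74, 72.0, 75.56, 64.0])

# One-way ANOVA across the three subject groups.
f_stat, p_value = f_oneway(lbd, rbd, nc)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 indicates a significant difference among the groups.
```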

Conclusions
The importance of emotion assessment of stroke patients stems from the need for information on the severity of emotional impairment symptoms. Therefore, an accurate emotion assessment approach is required to identify the symptoms of mood disorders in stroke patients. This work proposed the use of bispectrum features to classify the discrete emotions (anger, disgust, fear, happiness, sadness and surprise) of stroke patients and normal subjects. This study aims to develop an accurate emotion identification method, which can be used to recognize the current emotional state of stroke patients during diagnosis.
In this work, the bispectrum reveals the presence of quadratic phase coupling (QPC) in the EEG signals and exhibits different QPC relations in each emotional state. These differences in harmonic components and peaks were shown in the bispectrum contour plots and arise from the nonlinear interactions between neuronal populations in each emotional state. The proposed method of emotion classification using bispectrum features and the KNN classifier has shown its effectiveness on the combination of alpha to gamma frequency bands. In addition, the bispectrum feature H3 was able to provide an accuracy of 75.56% in the NC group. Moreover, the proposed method gave results comparable with some current studies in emotion classification. However, only six types of bispectrum features were implemented in this study, and more remain to be explored. Future work could also focus on optimizing the classification accuracy.
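The indirect bispectrum estimation described in the introduction, where the third-order cumulant C3x(t1, t2) = E[x(k) x(k + t1) x(k + t2)] is estimated first and then Fourier-transformed in two dimensions, can be sketched as below. This is a minimal illustration, not the study's implementation; the lag count, FFT size and the quadratically phase-coupled test signal are arbitrary choices.

```python
import numpy as np

def bispectrum_indirect(x, max_lag=16, nfft=64):
    """Indirect bispectrum estimate: third-order cumulants, then a 2-D FFT."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                 # cumulants assume a zero-mean process
    n = len(x)
    m = 2 * max_lag + 1
    c3 = np.zeros((m, m))
    for i, t1 in enumerate(range(-max_lag, max_lag + 1)):
        for j, t2 in enumerate(range(-max_lag, max_lag + 1)):
            # Sample average of x(k) * x(k + t1) * x(k + t2) over valid k.
            k_min = max(0, -t1, -t2)
            k_max = min(n, n - t1, n - t2)
            if k_max > k_min:
                k = np.arange(k_min, k_max)
                c3[i, j] = np.mean(x[k] * x[k + t1] * x[k + t2])
    # 2-D Fourier transform of the cumulant sequence gives the bispectrum.
    return np.fft.fft2(c3, s=(nfft, nfft))

# Test signal: three cosines with quadratic phase coupling (f3 = f1 + f2,
# phases add), which produces a bispectral peak near the bifrequency (f1, f2).
n = np.arange(1024)
f1, f2 = 8 / 64, 12 / 64
x = (np.cos(2 * np.pi * f1 * n)
     + np.cos(2 * np.pi * f2 * n)
     + np.cos(2 * np.pi * (f1 + f2) * n)
     + 0.1 * np.random.default_rng(0).normal(size=n.size))
B = bispectrum_indirect(x, max_lag=16, nfft=64)
```

The magnitude of B shows a peak near the bifrequency bin (8, 12), which is the QPC signature that the contour plots in this study visualize for the EEG signals.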
To conclude, bispectrum-based features are effective for analyzing the nonlinearity of EEG signals and are therefore useful for emotion assessment. The bispectrum feature was able to provide emotional information about stroke patients and hence can be used as a substitute for conventional observation-based or scoring methods.