Article

EEG and Deep Learning Based Brain Cognitive Function Classification

1 Southwestern Educational Society, Mayaguez, PR 00680, USA
2 Department of Electrical and Computer Engineering, University of Puerto Rico, Mayaguez, PR 00681-9000, USA
* Author to whom correspondence should be addressed.
Computers 2020, 9(4), 104; https://doi.org/10.3390/computers9040104
Submission received: 13 October 2020 / Revised: 1 December 2020 / Accepted: 14 December 2020 / Published: 21 December 2020
(This article belongs to the Special Issue Feature Paper in Computers)

Abstract

Electroencephalogram signals are used to assess neurodegenerative diseases and develop sophisticated brain machine interfaces for rehabilitation and gaming. Most of the applications use only motor imagery or evoked potentials. Here, a deep learning network based on a sensory motor paradigm (auditory, olfactory, movement, and motor-imagery) that employs a subject-agnostic Bidirectional Long Short-Term Memory (BLSTM) Network is developed to assess cognitive functions and identify their relationship with brain signal features, which are hypothesized to consistently indicate cognitive decline. Testing occurred with healthy subjects of age 20–40, 40–60, and >60, and mildly cognitively impaired subjects. Auditory and olfactory stimuli were presented to the subjects, and the subjects imagined and conducted movement of each arm, during which Electroencephalogram (EEG)/Electromyogram (EMG) signals were recorded. A deep BLSTM Neural Network is trained with Principal Component features from evoked signals and assesses their corresponding pathways. Wavelet analysis is used to decompose evoked signals and calculate the band power of the component frequency bands. This deep learning system performs better than conventional deep neural networks in detecting MCI. Most features studied peaked at the age range 40–60 and were lower for the MCI group than for any other group tested. Detection accuracy of left-hand motor imagery signals best indicated cognitive aging (p = 0.0012); here, the mean classification accuracy per age group declined from 91.93% to 81.64%, and is 69.53% for MCI subjects. Motor-imagery-evoked band power, particularly in the gamma bands, best indicated (p = 0.007) cognitive aging. Overall, the classification accuracy of the evoked potentials most effectively distinguished cognitive aging from MCI (p < 0.05), followed by gamma-band power.

1. Introduction

Brain Electroencephalogram (EEG) signals are widely used for constructing Brain Computer Interfaces (BCIs) with applications in motor rehabilitation, gaming, and diagnosing brain disorders [1,2,3,4,5,6]. Most BCIs use EEG signals collected from the user while he/she imagines hand or foot movement (motor imagery) [7,8]. Gamification and virtual environments are used nowadays to build motor imagery-based BCIs [9]. Other BCIs use Steady State Visual Evoked Potentials (SSVEP) [10,11,12,13], from which a larger number of control signals can be extracted to interface the user with a BCI application called PathSpeller [14,15,16,17]. EEG-based BCIs can also be built with very few channels for gathering brain electrical activity [18,19]. EEG is collected by placing electrodes over the subject's head using an electrode cap, with conductive gel to improve the contact between the electrodes and the scalp. This technology is noninvasive and carries minimal risk, which makes it affordable and comfortable to use [20]. Most current EEG-based applications use only one sensory motor function, such as motor imagery or visual imagery. Other cognitive functions such as audition (hearing) and olfaction (smell) have not been used in the medical diagnosis of disorders such as early-stage cognitive impairment. It has been shown that EEG responses to auditory signals received in the left and right ear are more discriminable than EEG responses to auditory signals received from the front and back of the head [21]. However, the discriminatory power of auditory and olfactory EEG signals has not been studied across age groups. Moreover, no classifier has been built or tested that can discriminate between auditory, olfactory, motor imagery, and motor movement EEG responses.
First, we have to determine how reliable brain electrical responses to auditory and olfactory stimuli are, and how well they can be discriminated from motor imagery or motor movement responses [7,22]. In this paper, we present a deep network to classify the brain's EEG responses to each of these stimuli, and we analyze how well the EEG responses to the four sensory motor stimuli can be discriminated in the age groups 20 to 40 years, 40 to 60 years, and above 60 years, as well as in a group of participants with mild memory dysfunction. Mild memory dysfunction is a symptom of Mild Cognitive Impairment (MCI) and is key to the early diagnosis of progressive brain disorders. Below is a literature review on detecting MCI using different neuroimaging techniques.

2. Literature Review

Progressive neural brain disorders such as Alzheimer's Disease (AD), a type of dementia, cause cognitive and functional deterioration in the aging brain; AD is a leading cause of death, estimated to reach 7.1 million deaths in 2025, with a 61% mortality rate if the patient is age 70 or older [23]. Brain structural and functional connectivity graphs show patterns of differences in these disorders [24]. Most of these disorders affect cognition and result in a long period of dependency on caregivers [25]. Cognitive functions such as memory, thinking, and behavior decline rapidly as the disease progresses. At an early stage, the cognitive decline is not significant and may not hinder daily activities. This stage is called Mild Cognitive Impairment (MCI), which is difficult to distinguish from aging-related cognitive dysfunction. It is important to detect MCI in the early stages, which is key to rapid intervention and clinical attention. Current clinical diagnosis consists of a battery of cognitive tests and brain imaging. The brain imaging methods include Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). Much research focuses on MCI detection using state-of-the-art time-frequency and deep learning techniques. Resting-state brain networks constructed from functional MRI (fMRI) images have been used to train an autoencoder network to distinguish between normal aging and MCI [23]. Other brain functional network and machine learning methods are found in [24,25]. Graph Convolutional Networks (GCNs) are trained with functional connectivity maps from fMRI images and then used to predict MCI [26,27]. A feature-based deep Convolutional Neural Network is used in [28] to distinguish between Early MCI (EMCI), Late MCI (LMCI), Normal Control (NC), and AD.
Tissue measurement differences from Diffusion Tensor Imaging (DTI) have been used to train a Recurrent Neural Network (RNN) [29] to identify prodromal MCI cases that are likely to develop into AD. Statistical t-test features extracted from MRI and projected onto a partial least squares subspace feed a multiclass classifier used to distinguish MCI from AD [30]. All of the above methods have used the integrated multi-modal brain image database of the Alzheimer's Disease Neuroimaging Initiative (ADNI) [31]. The accuracies for detecting early Alzheimer's from fMRI images are about 72%, with a Minimum Mean Square Error (MMSE) of about 28%. The GCN and CNN methods improve accuracy to about 84% and 94%, respectively, but they require large amounts of data and images for training and do not provide results in real time for early detection of MCI. While 2D and 3D brain imaging is widely used as a diagnostic method for MCI and AD, it is expensive, not readily available, static, lacks temporal resolution, and is not affordable for many patients.
Multi-channel Electroencephalography (EEG) is a non-invasive, rapid, cost-effective brain imaging method that records electrical brain activity with very high temporal resolution while the subject is at rest or performing tasks. Cognitive tasks and EEG Fourier-transform features (power spectral density, variance, fractal dimension, and Tsallis entropy) have been fused, and a kNN classifier used, to distinguish early and mild dementia from normal subjects [32]. Biomarkers based on changes in EEG amplitude [33] and EEG relative power [34] have been proposed for the early detection of AD. EEG brainwaves can be divided into alpha, theta, beta, gamma, and delta bands depending on the frequencies of their components. These brainwave subbands are projected onto 2D images and used to train a CNN classifier for early detection of AD in [23]. While Fourier-transform frequency domain features have been found useful for EEG-based classification of medical conditions, EEG is more amenable to time-frequency analysis, as the frequency components of these signals change considerably over time. Time-frequency feature extraction methods include the Short Time Fourier Transform (STFT) and the Continuous Wavelet Transform (CWT) [35]. Epoched time-frequency representations of EEG signals are used to train a deep CNN, which is then used to distinguish between MCI and AD subjects [36]. Both Fast Fourier Transform (FFT) and CWT features have been used to train a classifier for identifying dementia in [37]. All of the above EEG-based methods for early diagnosis are based on recorded resting-state EEG. Some methods instead use visual and memory tasks for early detection of MCI [38]; in one study, the subjects performed a visual memory task while EEG was recorded.
Event Related Potential (ERP) analysis of the EEG has been used as an early biomarker during a working memory task for early diagnosis of MCI [39]. MCI related to vascular dementia can also be diagnosed using EEG recorded during an oddball experiment [40]. ERP analysis of EEG recorded during a sound stimulus has been used to detect dementia in subjects with diabetes [41]. The literature above shows that EEG is a promising tool for early detection of MCI. However, these studies have not presented a method for discriminating EEG collected during audition, olfaction, motor movement, and motor imagery tasks, or shown how such a method can be used for early detection of MCI.
Early dementia implies impairment of cognitive functions such as sensory perception (visual, auditory, olfactory, tactile) and memory. Olfactory dysfunction in particular has links to early cognitive impairment and can be effectively used for early screening for MCI and for conditions that can progress to MCI [42]. It has been shown that olfactory and auditory functional biomarkers can serve as detectors in screening for cognitive impairment, as well as targets for interventions. Current healthcare equipment does not detect small changes in sensory motor functions, thus not allowing an early diagnosis of cognitive decline. Moreover, olfaction and audition are associated with age-related deterioration of cognitive ability. Hence, in this study, olfactory, auditory, motor movement, and motor imagery functions are considered in healthy subjects of different age groups in comparison to mildly demented subjects. Nowadays, EEG and EMG equipment is available at a very low price that can be afforded by patients without the means for costly brain imaging methods. In this paper, we present a deep Bidirectional Long Short-Term Memory Recurrent Neural Network architecture that integrates and evaluates EEG/EMG sensory motor responses to auditory, olfactory, and motor stimuli to detect early cognitive deterioration caused by Mild Cognitive Impairment (MCI) and to discriminate early MCI from normal aging. Section 3 presents the materials and methods, including a description of the age groups and the MCI group of human study subjects, the equipment used, the experimental procedure, data processing, and the deep learning architecture for classification. Section 4 presents and discusses the sensory motor EEG classification results, and Section 5 presents the conclusions.

3. Materials and Methods

This section describes the equipment used, subjects, data acquisition and preprocessing, and the Bidirectional Long Short Term Memory (BLSTM) network used for EEG classification.

3.1. Subjects

A total of 35 subjects participated in the experiments, with seven in each age group of 20 to 40, 40 to 60, and 60 and above. Eighteen subjects were female and 17 were male. Twenty-eight normal healthy subjects and seven subjects with mild memory dysfunction, an indication of MCI, were recruited. All the subjects come from the same demographic background and are educated and employed. Participation was voluntary, and Institutional Review Board (IRB) approval from the University of Puerto Rico, Mayaguez (UPRM) was obtained before conducting the experiments.

3.2. Data Acquisition

EEG data is recorded using the Ultracortex openBCI headset electrode system with 16 electrodes. One electrode is used to record Electromyogram (EMG) during hand movement from the left arm and right arm. The 16 electrode EEG cap records EEG from the following channels in the 10–20 system: Fp1, Fp2, F3, F4, F7, F8 from the frontal lobe; C3 and C4 from the motor area; T3, T4, T5, T6 from the temporal lobe; O1 and O2 in the occipital lobe; and P3 and P4 from the parietal lobe.
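For illustration, the channel layout above can be organized as a simple mapping. This is a hypothetical sketch of how the recorded data columns might be indexed, not part of the OpenBCI software; the channel names and regional grouping follow the text.

```python
# Hypothetical grouping of the 16 electrodes by brain region,
# following the 10-20 placement described in the text.
CHANNELS_BY_REGION = {
    "frontal":   ["Fp1", "Fp2", "F3", "F4", "F7", "F8"],
    "motor":     ["C3", "C4"],
    "temporal":  ["T3", "T4", "T5", "T6"],
    "occipital": ["O1", "O2"],
    "parietal":  ["P3", "P4"],
}

# Flat channel order, e.g. for indexing columns of the recorded data matrix.
CHANNEL_ORDER = [ch for chans in CHANNELS_BY_REGION.values() for ch in chans]
assert len(CHANNEL_ORDER) == 16
```

Such a mapping makes it easy to select, say, only the motor-area columns of the 16-column data matrix described in Section 3.4.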

3.3. Task Description

Each subject participated in a session with five trials per task, for four tasks; each trial lasts 60 s. The four tasks are: (1) EEG/EMG data collection during left-hand followed by right-hand movement; (2) EEG data collection during left-hand and right-hand motor imagery; (3) EEG data collection while presenting an auditory stimulus through headphones; and (4) EEG data collection while smelling four types of perfumes (lavender, eucalyptus, rosewater, and neem) presented on a sniff strip. EEG data is collected for the auditory, olfactory, and motor imagery functions, and EEG/EMG data for the motor movement functions. There is a break of 10 s between trials and a break of 2 min between tasks. For recording the auditory EEG response, the subject is presented with auditory stimuli through headphones in the left and right ear. The 16-channel EEG cap is worn by the subject and is connected to the Cyton biosensing board, which transmits the EEG signal wirelessly to the OpenBCI v5.0.1 software on the computer. The stimulus consists of sound beeps produced at random from eight directions (East, West, North, South, North-East, North-West, South-East, and South-West) using the Head Related Transfer Function (HRTF), a method for simulating the direction of sound arrival. The subject keeps the eyes closed and pays attention to the sound stimulus for the duration of the trials, while the 16-channel EEG signal is recorded by the OpenBCI GUI on the computer. The olfactory stimulus consists of four different odors (eucalyptus oil, lavender oil, neem oil, and rosewater) presented on strips of paper placed under the nose of the subject for the duration of the trial. The subject is asked to close the eyes and focus on the smell while the 16-channel EEG is recorded. The motor imagery experiment consists of imagining movement of the left arm away from the body while the EEG data is recorded.
This is followed by recording of the EEG during imagined movement of the right arm away from the body. For the hand movement task, the subject is asked to open and close the fist of the right and then the left arm while 15 EEG channels and 1 EMG channel are recorded. The P4 channel of the OpenBCI 16-channel board is replaced with the signal from the EMG electrode placed on the arm of the subject. The multi-modal EEG/EMG signals for each task are epoched in time intervals and organized into training, validation, and testing matrices.

3.4. Data Collection Procedure

Figure 1 shows the equipment used for data collection. The subject is asked to sit comfortably in a chair and is fitted with the Ultracortex headset, and the EMG electrode is placed on the left arm as shown in Figure 1a. During data collection, the subject keeps the eyes closed, which reduces noise in the EEG due to eye blinks. The subject opens and closes the left fist for 1 min, five times, with a break of 10 s between each minute. During this time, EEG and EMG data are collected using the 16-channel Cyton biosensing board (Figure 1d) and transmitted wirelessly to the computer through the dongle. The data collection is manually synchronized with the commencement of the task. The same procedure is repeated for right-hand movement. For the second task, motor imagery, the EMG electrode is removed and the P4 channel is reconnected to the Ultracortex and the Cyton board. The subject closes the eyes and imagines movement of the left hand away from the body for five trials, followed by imagined movement of the right hand for five trials. The subject is then fitted with the headphones as shown in Figure 1b; the subject closes the eyes, the sound stimulus is presented, and the subject is asked to focus on hearing the sound. The EEG data collection is started manually and is recorded for 5 min with a break of 10 s between trials. The final task, olfaction, consists of four subtasks, one for each of the sniff strips perfumed with the lavender, eucalyptus, rosewater, and neem scents. The subject is asked to close the eyes, a sniff strip is placed under the nose, and the subject is asked to focus on the smell. The EEG data is collected for five trials, with a break of 10 s between trials, for each of the four sniff strip presentations.
The data collected for each task is labeled with the subject's initials, the name of the task, and the trial number. The data is a matrix with 16 columns, with rows being the samples.

3.5. BLSTM-Recurrent Neural Network

A Recurrent Neural Network (RNN) uses a hidden state vector to represent context from prior inputs and outputs, which is considered along with the current input when generating an output. The input vector undergoes a series of transformations to produce a series of output vectors. Because they retain this temporal context, RNNs are well suited to analyzing time-series data [43]. Long Short-Term Memory (LSTM) networks are a type of RNN designed to mitigate the vanishing gradient problem, in which very small gradients prevent distant inputs from influencing learning. The basic unit of an LSTM network is a memory cell with an input gate, an output gate, and a forget gate, which control information flow through the cell. The cell itself then determines the fate of the information it holds, governed by an independent set of weights pertaining to the memory cell, which are adjusted by gradient descent and backpropagation. A Bidirectional LSTM (BLSTM) learns from data in both forward and backward directions, improving the accuracy and speed of the network. It uses two hidden layers running in opposite directions from the same input, so the network simultaneously gains information from past and future states [44]. The LSTM model is shown in Figure 2. The LSTM cell learns the input signal features according to Equation (1):
f_t = σ(W_fh h_{t−1} + W_fx x_t + b_f)
i_t = σ(W_ih h_{t−1} + W_ix x_t + b_i)
o_t = σ(W_oh h_{t−1} + W_ox x_t + b_o)
dc_t = tanh(W_ch h_{t−1} + W_cx x_t + b_c)
c_t = f_t ∘ c_{t−1} + i_t ∘ dc_t          (1)
where x_t is the input at time t, and c_t and h_t are the cell state and hidden state, respectively. W and b denote weights and biases, respectively, σ is the sigmoid function, and ∘ is the Hadamard product operator. dc_t is a candidate for updating c_t through the input gate. The input gate i_t decides whether to update the cell state c_t, the forget gate f_t decides what to keep and what to forget from the previous cell state, and the output gate o_t decides how much information is passed to the next cell [45]. The BLSTM model used in this work is shown in Figure 3. The input to the network is the three principal component features obtained from the Principal Component Analysis (PCA) transformation of the 16-channel EEG/EMG signal. The BLSTM layer consists of one forward and one backward layer of a Long Short-Term Memory (LSTM) network, with a total of 164 hidden units (nodes). This layer learns bidirectional long-term dependencies between time intervals of the data. These dependencies are useful because the data collected from the sensory motor experiments are continuous, and the network learns the complete time series at each interval. A Rectified Linear Unit (ReLU) layer, defined as y = max(0, x), serves as an activation layer that passes positive inputs unchanged and outputs 0 for negative inputs. This function does not saturate (network values do not reach an asymptote and stagnate within a range), and the difference between successive weights remains high whenever a neuron activates, which allows the network to converge in a shorter time. The next layer is a fully connected layer, in which every node is connected to every node of the preceding layer; this layer maps the learned features to class scores. The softmax layer limits the output to the 0–1 range, to be interpreted as a probability. These probabilities are then mapped to a categorical label by the classification layer. The classification label is an integer assigned to each of the five sensory motor function classes into which the input feature matrices are classified.
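As a concrete sketch, the cell update of Equation (1) can be written in NumPy as follows. The dimensions, random weight initialization, and the final hidden-state update h_t = o_t ∘ tanh(c_t) (the standard LSTM completion, not printed in Equation (1)) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell update following Equation (1).

    W holds weight matrices keyed by gate and source ('fh' = forget/hidden,
    'fx' = forget/input, etc.); b holds the bias vectors.
    """
    f_t  = sigmoid(W["fh"] @ h_prev + W["fx"] @ x_t + b["f"])   # forget gate
    i_t  = sigmoid(W["ih"] @ h_prev + W["ix"] @ x_t + b["i"])   # input gate
    o_t  = sigmoid(W["oh"] @ h_prev + W["ox"] @ x_t + b["o"])   # output gate
    dc_t = np.tanh(W["ch"] @ h_prev + W["cx"] @ x_t + b["c"])   # candidate
    c_t  = f_t * c_prev + i_t * dc_t   # elementwise * is the Hadamard product
    h_t  = o_t * np.tanh(c_t)          # standard hidden-state update
    return h_t, c_t

# Toy dimensions: 3 PCA features in, 4 hidden units.
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((4, 4 if k.endswith("h") else 3)) * 0.1
     for k in ("fh", "fx", "ih", "ix", "oh", "ox", "ch", "cx")}
b = {k: np.zeros(4) for k in "fioc"}
h, c = np.zeros(4), np.zeros(4)
h, c = lstm_step(rng.standard_normal(3), h, c, W, b)
```

A bidirectional layer runs two such cells over the sequence, one forward and one backward, and concatenates their hidden states at each time step.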

3.6. Training and Testing of BLSTM Network

A BLSTM model is trained to discriminate between normal aging-related changes in sensory motor function and MCI-related changes. The model is subject-agnostic: it can detect progressive changes in a single subject, and it can also assess the level of sensory motor function of a new subject, who may be a candidate for MCI diagnosis, against subjects with normal aging-related changes.
The EEG/EMG data matrix is first filtered to remove noise such as interference using a Butterworth bandpass filter with a 3–30 Hz passband. The dimensionality of the filtered data is then reduced using Principal Component Analysis (PCA). The three principal components constitute the feature matrix for each sensory motor task, which is used to train the BLSTM network.
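A minimal sketch of the PCA feature-extraction step, assuming a (samples × 16 channels) matrix; the bandpass-filtering step is omitted here, and the data and dimensions are illustrative.

```python
import numpy as np

def pca_reduce(eeg, n_components=3):
    """Project a (samples x 16 channels) EEG/EMG matrix onto its first
    principal components, as described for the BLSTM feature matrix."""
    centered = eeg - eeg.mean(axis=0)
    # Eigen-decomposition of the channel covariance matrix.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    top = eigvecs[:, ::-1][:, :n_components]      # leading components
    return centered @ top

# Toy example: 1 s of 16-channel data at 125 Hz.
rng = np.random.default_rng(0)
features = pca_reduce(rng.standard_normal((125, 16)))
print(features.shape)   # (125, 3)
```

By construction, the three output columns are mutually uncorrelated, which compacts the 16-channel signal into a small feature matrix for the network.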
The BLSTM is trained with the feature matrix for intra-subject (within-subject) as well as inter-subject classification of the sensory motor function categories. Three combinations of the five categories are formed in order to determine which functions show the best or worst discriminability among the different age groups and the MCI group. The first combination consists of the auditory and olfactory functions; the second consists of the auditory, olfactory, and motor imagery functions; the third tests all functions, with features from the auditory, olfactory, motor imagery, and motor movement functions.
The five trials of data are averaged into one 1-min trial in order to smooth out variations due to noise. The network is trained in batches of 20 epochs per class; each epoch corresponds to 1 s of data, with 125 samples per second. Hence, the sample vectors used for training comprise 20 epochs, totaling 2500 sample points, which corresponds to 26.67% of the data. The trained network weights and parameters are saved and loaded during testing. For intra-subject trained networks, testing is done with 54 batches per class, corresponding to 73.3% of the data. For inter-subject training, 80 epochs per class from four subjects, corresponding to 27.49% of the feature data, are used to train the network, and the remaining 72.5% of the feature data are used for testing; this corresponds to 211 epochs with 125 samples in each epoch. Ten-fold cross-validation is conducted to obtain the final classification accuracies.
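The epoching and train/test split described above can be sketched as follows. The dimensions are illustrative, and random data stands in for the PCA features; the 20-epoch training portion follows the text, while the remainder of a 60 s trial goes to testing.

```python
import numpy as np

def make_epochs(features, fs=125, epoch_s=1):
    """Segment a (samples x 3) PCA feature matrix into 1 s epochs."""
    n = fs * epoch_s
    n_epochs = features.shape[0] // n
    return features[:n_epochs * n].reshape(n_epochs, n, features.shape[1])

# One averaged 60 s trial at 125 Hz -> 60 one-second epochs.
# 20 epochs (20 x 125 = 2500 sample points) train the network, mirroring
# the 20-epoch training batches described in the text.
rng = np.random.default_rng(0)
epochs = make_epochs(rng.standard_normal((60 * 125, 3)))
train, test = epochs[:20], epochs[20:]
print(train.shape, test.shape)   # (20, 125, 3) (40, 125, 3)
```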

3.7. Brain Wave Band Power Analysis

The EEG channel data corresponding to the brain regions are decomposed into the brainwave bands Delta (0.5–4 Hz), Theta (4–8 Hz), Alpha (8–13 Hz), Beta (13–30 Hz), and Gamma (>30 Hz) using an eight-level wavelet decomposition with the Daubechies scaling and wavelet functions [46]. The band power in each of these frequency bands is then calculated.
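As a rough illustration of band-power computation, the sketch below estimates per-band power with an FFT periodogram. This is a stand-in for the paper's Daubechies wavelet decomposition (which would require a wavelet library such as PyWavelets); the 125 Hz sampling rate follows the text, and the gamma band is capped at the Nyquist frequency.

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 62.5)}   # 62.5 Hz = Nyquist at 125 Hz

def band_powers(signal, fs=125):
    """Approximate per-band power of a single-channel EEG segment
    from an FFT periodogram (a stand-in for the wavelet method)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# A 10 Hz test tone should put most of its power in the alpha band.
t = np.arange(0, 4, 1 / 125)
powers = band_powers(np.sin(2 * np.pi * 10 * t))
assert max(powers, key=powers.get) == "alpha"
```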

4. Results and Discussion

The BLSTM network utilized responses to sensory motor stimuli to produce an assessment of cognitive function. The model is employed in a study comprising healthy subjects of varying age and subjects suffering from Mild Cognitive Impairment (MCI), to determine its ability to detect and distinguish between cognitive aging and pathological cognitive deterioration. The aim was to identify the effect of cognitive deterioration on key signal features, such as the network's classification accuracy for the sensory motor potentials or rhythms (an indicator of the brain's ability to process the stimuli) and signal band power. The results of this study suggest that the BLSTM model is more effective than current models in evaluating cognitive decline; furthermore, most elements of sensory motor functionality tested were reliable indicators of cognitive aging as well as promising biomarkers of early dementia. It was hypothesized that these elements would indicate consistent cognitive decline as a function of age and would demonstrate lower sensory motor health for subjects suffering from MCI than for those in any other test group.
The BLSTM classification accuracy of evoked signals determines the extent to which the brain discriminates and processes the stimulus presented. Table 1 shows the discriminability of evoked responses to sensory motor stimuli (auditory, olfactory, motor imagery, and movement), as defined by network classification accuracy (%) of evoked signals, for the test groups studied. These results were obtained through intra-subject training and testing (data from the same subject were used for both training and testing). Table 2 shows the mean inter-subject (data from more than one subject used for both training and testing) classification accuracies of combined sensory motor potentials per age group. Table 1 and Table 2 suggest that the classification accuracy of the evoked potentials for all sensory motor stimuli peaks at the age range 40–60 and declines thereafter, as demonstrated by responses elicited by auditory stimuli, right motor imagery, and two of the four olfactory stimuli (neem and rosewater). This result is consistent with previous studies [47,48] concluding that key sensory motor functions such as audition, olfaction, and gait peak in one's 40s or 50s, with deficiencies becoming evident in one's 60s. According to Figure 4, which illustrates the aforementioned trends beginning at age 40, the classification accuracy of auditory responses best reflected this trend, with an R2 value of 0.63. In contrast, Figure 5 indicates that the mean classification accuracy of potentials related to movement and motor imagery increases as a function of age. However, Figure 6 shows that when the left and right sides are considered separately, movement- and motor imagery-related stimuli are reliable indicators of age-induced and pathological cognitive deterioration, with the exception of right-hand movement and olfactory (lavender) stimuli.
The results of this study show significant differences between the network classification accuracies of signals from healthy and demented subjects. Table 1 shows that the classification accuracies of all sensory motor responses obtained using intra-subject testing, except those pertaining to auditory and olfactory (eucalyptus) stimuli, were significantly lower for the test group comprising subjects with MCI than for any of the age groups studied. Table 3 shows the detection accuracies (%) of the evoked sensory motor potentials analyzed together for each group tested, with a comparison against subjects suffering from Mild Cognitive Impairment (MCI) in the age groups 40–60 and >60. Table 3, which shows the classification accuracies of several combined sensory motor rhythms, demonstrates not only the previously described age-related trend but also a sharp contrast between the mean network classification accuracy of these combined signals for the upper age groups (40–60 and >60) and that for the MCI subjects belonging to each of these age groups.
The results shown in Table 2 were acquired through inter-subject training and testing. The intra-subject and inter-subject results are comparable, demonstrating the subject-agnostic nature of the network. As a result, the network has minimal to no need for subject-specific information, resulting in a more robust and versatile system that can be trained with data from different subjects and can detect cognitive deterioration from new data pertaining to demented subjects.
Cognitive function is highly variable depending on a variety of factors, such as lifestyle, stress and anxiety levels, attention span, and pre-existing health conditions. For this study, a coefficient of variation of 50% or below was considered acceptable. Table 4 shows the standard deviation and coefficient of variation of the detection accuracies (%) of the evoked sensory motor potentials analyzed together for each group tested. According to Table 5, the test group comprising subjects with MCI displayed significantly higher variation in classification accuracy than any other test group. This can be accounted for by several disparities among the subjects in this group: the seven subjects pertained to different age groups (five to 40–60 and two to >60). In addition, while one subject in the 40–60 group experienced severe cognitive impairment and suffered from Bipolar Syndrome, a subject in the >60 group had recently acquired dementia and hence displayed only mild cognitive deterioration. Furthermore, due to logistical and ethical constraints, the MCI test group is heterogeneous, as many causes can lead to MCI, including medical history, demography, family history, environment, and health predispositions.
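The coefficient of variation used here is the sample standard deviation divided by the mean, expressed as a percentage. A minimal sketch, with illustrative (not measured) accuracy values:

```python
import numpy as np

def coeff_variation(accuracies):
    """Coefficient of variation (%) of a group's detection accuracies;
    the study treats values under 50% as acceptable variability."""
    a = np.asarray(accuracies, dtype=float)
    return 100.0 * a.std(ddof=1) / a.mean()

# Illustrative values for two hypothetical groups: a tight cluster of
# healthy-subject accuracies vs. a more scattered MCI group.
healthy = [91.0, 88.5, 90.2, 89.8]
mci = [69.0, 45.0, 80.0, 52.0]
print(coeff_variation(healthy), coeff_variation(mci))
```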
According to Figure 7, band power (dB/Hz) exhibits variation more frequently than network classification accuracy (%). This suggests that network classification accuracy is a more reliable indicator of cognitive deterioration than band power. Figure 8 indicates that the beta, alpha, theta, and gamma bands were the best indicators of cognitive aging and MCI.
Of the five frequency bands analyzed, the Gamma band proved the most reliable, as it exhibits the lowest levels of variation; previous work has shown that Gamma waves are closely linked with functional connectivity. Figure 8 also shows that the band power of Delta waves, which predominate during a state of deep relaxation, is disproportionately higher than that of the other bands [1].
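Band power of the kind compared in Figures 7 and 8 can be estimated by integrating the signal's power spectrum over each band's frequency range. The study decomposes the evoked signals with wavelets; the sketch below uses a plain FFT periodogram instead, and the band edges are the conventional EEG ranges, both of which are assumptions for illustration:

```python
import numpy as np

# Conventional EEG band edges in Hz (an assumption; the paper's wavelet
# decomposition may use slightly different ranges)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_powers(signal, fs):
    """Estimate power per EEG band from a single-channel periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Sanity check: a pure 40 Hz sine should concentrate power in Gamma
fs = 250
t = np.arange(0, 2, 1.0 / fs)
powers = band_powers(np.sin(2 * np.pi * 40 * t), fs)
```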
Table 6 shows the results of the Analysis of Variance (ANOVA) conducted on the relationship between the network classification accuracy of sensory motor evoked potentials and subject age. The ANOVA results indicate that the classification accuracies of the sensory motor rhythms reveal EEG differences between age groups, with p-values below 0.05 and F-values that, with the exception of the auditory stimulus, surpass the F-critical values. The sensory motor rhythms elicited by left-hand movement show the highest discriminability between test groups (p = 0.0012) because they contain both oscillatory activity and movement-related cortical potentials (MRCP), and are thus a more comprehensive indicator of cognitive health [7]. Table 7 shows the results of a Student’s t-Test on the difference in network classification accuracy of all sensory motor potentials between the >60 and MCI test groups. According to the t-Test, network classification accuracy is indeed effective at distinguishing age-related from pathological cognitive decline, as the p-value falls below 0.05 and the T-Statistic (2.25) exceeds the T-Critical value (2.23).
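The T-Statistic in Table 7 can be re-derived from the group summaries the table lists (means 74.14 and 56.31, variances 156.91 and 218.19, six observations per group) using the standard pooled two-sample formula; the tiny discrepancy from the reported 2.25418 comes from the rounding of the summary values. A plain-Python sketch:

```python
import math

def pooled_t(mean1, var1, n1, mean2, var2, n2):
    """Equal-variance two-sample t-statistic and its degrees of freedom."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / df
    t = (mean1 - mean2) / math.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))
    return t, df

# Summary statistics from Table 7 (>60 group vs. MCI group)
t, df = pooled_t(74.14, 156.91, 6, 56.31, 218.19, 6)
# t ≈ 2.255, df = 10; t exceeds the two-tail critical value of 2.228
```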
Table 8 shows the results of a Student’s t-test on the difference in band power (dB/Hz) of all sensory motor potentials between the >60 and MCI test groups. It indicates that band power is a less reliable indicator of cognitive aging and dementia, as fewer stimulus categories exhibit clear trends with age or significant differences between the MCI group and the upper age groups. Responses to auditory and olfactory (eucalyptus and rosewater) stimuli, left-hand movement, left-hand motor imagery, and right-hand motor imagery exhibit the aforementioned relationship with age, with a plateau at age group 40–60. Figure 9 shows that this trend, prevalent after the age of 40, is best exhibited by responses to motor imagery; it also appears in the mean band power of responses to left and right movement combined, as well as to left and right motor imagery combined.
In addition, the band power of signals elicited by olfactory stimuli was subject to the highest coefficients of variation and is therefore undesirable as an indicator of cognitive decline. Furthermore, the t-test results in Table 8 indicate that band power is not effective at distinguishing the cognitive deterioration caused by advanced age from that caused by dementia, because p > 0.05 and the T-Statistic (1.00) falls below the critical t-value (2.26). Interestingly, subjects with MCI also displayed differential activation of cerebral regions. Table 9 shows the cerebral regions where the Delta, Theta, Alpha, Beta, and Gamma bands were predominant for each test group; M stands for Motor Cortex, OT for Occipito-temporal, FT for Fronto-temporal, FM for Frontal and Motor, FP for Fronto-parietal, and OF for Occipito-frontal.
As shown in Tables 4 and 9, the Fronto-parietal region, which processes sensory motor stimuli [49], was predominantly activated more often in MCI subjects than in healthy subjects, suggesting that this region is important in the onset of cognitive deterioration. Additionally, the pairs of topographical plots in Figure 10a,b show that the brains of MCI subjects display significantly lower activity levels in key regions (frontal, occipital, parietal, temporal) than those of healthy subjects. Based on the classification accuracies of EEG from the sensory motor function tasks, the EEG features are less discriminable in elderly subjects and even less discriminable in subjects with MCI. Hence, sensory motor function tasks could also be incorporated into early MCI diagnosis from the fMRI modality.

5. Conclusions

This paper examined how discriminable EEG data are for sensory motor cognitive functions, namely audition, olfaction, motor imagery, and motor movement, in subjects of different age groups and in subjects with MCI. EEG collected from five sensory motor tasks proved effective for detecting cognitive aging and dementia, as well as for distinguishing between the two, through comprehensive and accurate assessment of sensory motor potentials. The BLSTM network requires only a limited amount of data to train, and is therefore faster and easier to train than other networks. Because it is subject-agnostic (independent of the subjects used to train it) and adaptable to new tasks such as memory and gaming, it can serve as a versatile tool for clinicians. The model may also be used to predict MCI progression, so that clinicians can provide timely preventive care for their patients. Responses to most of the stimuli employed in this study displayed a peak in network classification accuracy at age range 40–60 and declined thereafter, and classification accuracy of sensory motor potentials was significantly lower for MCI subjects. The best indicators of cognitive aging were the potentials elicited by left-hand movement and motor imagery. Mild Cognitive Impairment (MCI) patients also showed significantly more frequent predominant activity in the Fronto-parietal brain region. These findings suggest that the EEG-based BLSTM model can be used for preliminary screening of subjects for sensory motor function deterioration due to aging or MCI, and they can be further corroborated with additional imaging methods.

Author Contributions

Data curation, V.M.; methodology, S.S. and V.M.; supervision, V.M.; validation, S.S.; writing—original draft, S.S. and V.M.; writing—review and editing, S.S. and V.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the undergraduate students from the Electrical and Computer Engineering Department at the University of Puerto Rico at Mayaguez for their assistance in data collection and the recruited subjects for their participation in the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hassanien, A.E.; Azar, A.T. Brain-Computer Interfaces; Springer International Publishing: Cham, Switzerland, 2015; pp. 4–16. [Google Scholar]
  2. Shih, J.J.; Krusienski, D.J.; Wolpaw, J.R. Brain-Computer Interfaces in Medicine. Mayo Clin. Proc. 2012, 87, 268–279. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-Based Brain-Computer Interfaces Using Motor-Imagery: Techniques and Challenges. Sensors 2019, 19, 1423. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Guy, V.; Soriani, M.-H.; Bruno, M.; Papadopoulo, T.; Desnuelle, C.; Clerc, M. Brain computer interface with the P300 speller: Usability for disabled people with amyotrophic lateral sclerosis. Ann. Phys. Rehabil. Med. 2018, 61, 5–11. [Google Scholar] [CrossRef] [PubMed]
  5. Renton, A.I.; Mattingley, J.B.; Painter, D.R. Optimising non-invasive brain-computer interface systems for free communication between naïve human participants. Sci. Rep. 2019, 9, 1–18. [Google Scholar] [CrossRef]
  6. Mora-Cortes, A.; Manyakov, N.V.; Chumerin, N.; Van Hulle, M.M. Language Model Applications to Spelling with Brain-Computer Interfaces. Sensors 2014, 14, 5967–5993. [Google Scholar] [CrossRef] [Green Version]
  7. Lin, J.; Shihb, R. A Motor-Imagery BCI System Based on Deep Learning Networks and Its Applications. Evol. BCI Ther. 2018. [Google Scholar] [CrossRef] [Green Version]
  8. Kim, Y.; Ryu, J.; Kim, K.K.; Took, C.C.; Mandic, D.P.; Park, C. Motor Imagery Classification Using Mu and Beta Rhythms of EEG with Strong Uncorrelating Transform Based Complex Common Spatial Patterns. Comput. Intell. Neurosci. 2016, 2016, 1–13. [Google Scholar] [CrossRef] [Green Version]
  9. Škola, F.; Tinková, S.; Liarokapis, F. Progressive Training for Motor Imagery Brain-Computer Interfaces Using Gamification and Virtual Reality Embodiment. Front. Hum. Neurosci. 2019, 13, 1–16. [Google Scholar] [CrossRef]
  10. Iscan, Z.; Nikulin, V.V. Steady state visual evoked potential (SSVEP) based brain-computer interface (BCI) performance under different perturbations. PLoS ONE 2018, 13, e0191673. [Google Scholar] [CrossRef] [Green Version]
  11. Han, C.; Xu, G.; Xie, J.; Chen, C.; Zhang, S. Highly Interactive Brain–Computer Interface Based on Flicker-Free Steady-State Motion Visual Evoked Potential. Sci. Rep. 2018, 8, 1–13. [Google Scholar] [CrossRef]
  12. Choi, K.-M.; Park, S.; Im, C.-H. Comparison of Visual Stimuli for Steady-State Visual Evoked Potential-Based Brain-Computer Interfaces in Virtual Reality Environment in terms of Classification Accuracy and Visual Comfort. Comput. Intell. Neurosci. 2019, 2019, 9680697. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Guger, C.; Allison, B.; Grosswindhager, B.; Prückl, R.; Hintermüller, C.; Kapeller, C.; Bruckner, M.; Ekrausz, G.; Edlinger, G. How Many People Could Use an SSVEP BCI? Front. Behav. Neurosci. 2012, 6, 2–7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Tantiongloc, J.; Mesa, D.A.; Ma, R.; Kim, S.; Alzate, C.H.; Camacho, J.; Coleman, T.P. An information and control framework for optimizing user-complaint human computer interfaces. Proc. IEEE 2017, 105, 273–285. [Google Scholar] [CrossRef]
  15. Sridhar, S.; Manian, V. Assessment of Cognitive Aging Using an SSVEP-Based Brain–Computer Interface System. Big Data Cogn. Comput. 2019, 3, 29. [Google Scholar] [CrossRef]
  16. Rezeika, A.; Benda, M.; Stawicki, P.; Gembler, F.; Saboor, A.; Volosyak, I. Brain–Computer Interface Spellers: A Review. Brain Sci. 2018, 8, 57. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Chen, X.; Wang, Y.; Nakanishi, M.; Gao, X.; Jung, T.-P.; Gao, S. High-speed spelling with a noninvasive brain–computer interface. Proc. Natl. Acad. Sci. USA 2015, 112, E6058–E6067. [Google Scholar] [CrossRef] [Green Version]
  18. Camacho, J.; Manian, V. Real-time single channel EEG motor imagery based Brain Computer Interface. In Proceedings of the 2016 World Automation Congress (WAC), Rio Grande, Puerto Rico, 31 July–4 August 2016. [Google Scholar]
  19. Nguyen, T.-H.; Chung, W.-Y. A Single-Channel SSVEP-Based BCI Speller Using Deep Learning. IEEE Access 2018, 7, 1752–1763. [Google Scholar] [CrossRef]
  20. Camacho-Rosa, J.J. Motor imagery classification using single-channel EEG signals for brain computer interfaces. Master’s Thesis, University of Puerto Rico, San Juan, Puerto Rico, USA, 2018. [Google Scholar]
  21. Nambu, I.; Ebisawa, M.; Kogure, M.; Yano, S.; Hokari, H.; Wada, Y. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization. PLoS ONE 2013, 8, e57174. [Google Scholar] [CrossRef] [Green Version]
  22. Mulder, T. Motor imagery and action observation: Cognitive tools for rehabilitation. J. Neural Transm. 2007, 114, 1265–1278. [Google Scholar] [CrossRef] [Green Version]
  23. Pan, D.; Adni, F.A.D.N.I.; Huang, Y.; Zeng, A.; Jia, L.; Song, X. Early Diagnosis of Alzheimer’s Disease Based on Deep Learning and GWAS. Commun. Comput. Inf. Sci. 2019, 1072, 52–68. [Google Scholar] [CrossRef]
  24. Kam, T.-E.; Zhang, H.; Jiao, Z.; Shen, D. Deep Learning of Static and Dynamic Brain Functional Networks for Early MCI Detection. IEEE Trans. Med Imaging 2019, 39, 478–487. [Google Scholar] [CrossRef] [PubMed]
  25. Minhas, S.; Khanum, A.; Riaz, F.; Khan, S.A.; Alvi, A. Predicting Progression From Mild Cognitive Impairment to Alzheimer’s Disease Using Autoregressive Modelling of Longitudinal and Multimodal Biomarkers. IEEE J. Biomed. Health Inform. 2018, 22, 818–825. [Google Scholar] [CrossRef]
  26. Zhao, X.; Zhou, F.; Ou-Yang, L.; Wang, T.; Lei, B. Graph convolutional network analysis for mild cognitive impairment prediction. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 1598–1601. [Google Scholar] [CrossRef]
  27. Song, T.-A.; Chowdhury, S.R.; Yang, F.; Jacobs, H.; El Fakhri, G.; Li, Q.; Johnson, K.; Dutta, J. Graph convolutional neural networks for Alzheimer’s disease classification. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 414–417. [Google Scholar] [CrossRef]
  28. Yue, L.; Gong, X.; Chen, K.; Mao, M.; Li, J.; Nandi, A.K.; Li, M. Auto-detection of alzheimer’s disease using deep convolutional neural networks. In Proceedings of the 2018 14th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Huangshan, China, 28–30 July 2018; pp. 228–234. [Google Scholar] [CrossRef]
  29. Velazquez, M.; Anantharaman, R.; Velazquez, S.; Lee, Y. RNN-Based Alzheimer’s Disease Prediction from Prodromal Stage using Diffusion Tensor Imaging. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November 2019; pp. 1665–1672. [Google Scholar] [CrossRef]
  30. Jimenez-Mesa, C.; Illan, I.A.; Martin-Martin, A.; Castillo-Barnes, D.; Martinez-Murcia, F.J.; Ramirez, J.; Górriz, J.M. Optimized One vs One Approach in Multiclass Classification for Early Alzheimer’s Disease and Mild Cognitive Impairment Diagnosis. IEEE Access 2020, 8, 96981–96993. [Google Scholar] [CrossRef]
  31. ADNI Alzheimer’s Database. Available online: http://adni.loni.usc.edu/ (accessed on 15 December 2020).
  32. Sharma, N.; Kolekar, M.H.; Jha, K. Iterative Filtering Decomposition based Early Dementia Diagnosis using EEG with Cognitive Tests. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, XX, 1. [Google Scholar] [CrossRef]
  33. Al-Nuaimi, A.H.H.; Jammeh, E.; Sun, L.; Ifeachor, E. Changes in the EEG amplitude as a biomarker for early detection of Alzheimer’s disease. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 993–996. [Google Scholar] [CrossRef] [Green Version]
  34. Kim, D.; Kim, K. Detection of Early Stage Alzheimer’s Disease using EEG Relative Power with Deep Neural Network. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 352–355. [Google Scholar] [CrossRef]
  35. Manian, V. Auditory Source Localization by Time Frequency Analysis and Classification of Electroencephalogram Signals. Biomed. J. Sci. Tech. Res. 2019, 19. [Google Scholar] [CrossRef]
  36. Morabito, F.C.; Campolo, M.; Ieracitano, C.; Ebadi, J.M.; Bonanno, L.; Bramanti, A.; DeSalvo, S.; Mammone, N.; Bramanti, P. Deep convolutional neural networks for classification of mild cognitive impaired and Alzheimer’s disease patients from scalp EEG recordings. In Proceedings of the 2016 IEEE 2nd International Forum on Research and Technologies for Society and Industry Leveraging a better tomorrow (RTSI), Bologna, Italy, 7–9 September 2016; pp. 1–6. [Google Scholar] [CrossRef]
  37. Durongbhan, P.; Zhao, Y.; Chen, L.; Zis, P.; De Marco, M.; Unwin, Z.C.; Venneri, A.; He, X.; Li, S.; Zhao, Y.; et al. A Dementia Classification Framework Using Frequency and Time-Frequency Features Based on EEG Signals. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 826–835. [Google Scholar] [CrossRef] [Green Version]
  38. Macdonald, S.W.S.; Keller, C.J.C.; Brewster, P.W.H.; Dixon, R.A. Contrasting olfaction, vision, and audition as predictors of cognitive change and impairment in non-demented older adults. Neuropsychology 2018, 32, 450–460. [Google Scholar] [CrossRef]
  39. Mamani, G.Q.; Fraga, F.J.; Tavares, G.; Johns, E.; Phillips, N.D. EEG-based biomarkers on working memory tasks for early diagnosis of Alzheimer’s disease and mild cognitive impairment. In Proceedings of the 2017 IEEE Healthcare Innovations and Point of Care Technologies (HI-POCT 2017), Bethesda, MD, USA, 6–8 November 2017; pp. 237–240. [Google Scholar] [CrossRef]
  40. Wang, C.; Xu, J.; Zhao, S.; Lou, W. Identification of Early Vascular Dementia Patients With EEG Signal. IEEE Access 2019, 7, 68618–68627. [Google Scholar] [CrossRef]
  41. Wen, D.; Wei, Z.; Zhou, Y.; Bian, Z.; Yin, S. Classification of ERP Signals from Mild Cognitive Impairment Patients with Diabetes using Dual Input Encoder Convolutional Neural Network. In Proceedings of the 2019 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Tianjin, China, 14–16 June 2019; pp. 1–4. [Google Scholar] [CrossRef]
  42. Kirkpatrick, M.A.F.; Combest, W.; Newton, M.; Teske, Y.; Cavendish, J.; McGee, R.; Przychodzin, D. Combining olfaction and cognition measures to screen for mild cognitive impairment. Neuropsychiatr. Dis. Treat. 2006, 2, 565–570. [Google Scholar] [CrossRef] [Green Version]
  43. Che, Z.; Purushotham, S.; Cho, K.; Sontag, D.; Liu, Y. Recurrent Neural Networks for Multivariate Time Series with Missing Values. Sci. Rep. 2018, 8, 1–12. [Google Scholar] [CrossRef] [Green Version]
  44. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  45. Daoud, H.; Bayoumi, M.A. Efficient Epileptic Seizure Prediction Based on Deep Learning. IEEE Trans. Biomed. Circuits Syst. 2019, 13, 804–813. [Google Scholar] [CrossRef] [PubMed]
  46. Rowe, A.C.H.; Abbott, P. Daubechies wavelets and Mathematica. Comput. Phys. 1995, 9, 635. [Google Scholar] [CrossRef] [Green Version]
  47. Profant, O.; Jilek, M.; Bures, Z.; Vencovsky, V.; Kucharova, D.; Svobodova, V.; Korynta, J.; Syka, J. Functional Age-Related Changes Within the Human Auditory System Studied by Audiometric Examination. Front. Aging Neurosci. 2019, 10, 1–16. [Google Scholar] [CrossRef] [Green Version]
  48. Doty, R.L.; Kamath, V. The influences of age on olfaction: A review. Front. Psychol. 2014, 5, 20. [Google Scholar] [CrossRef] [Green Version]
  49. Kropotov, J.D. Quantitative EEG, Event-Related Potentials, and Neurotherapy; Academic Press: Cambridge, MA, USA, 2009. [Google Scholar]
Figure 1. Procedure for Data Collection, (a) EMG, (b) Auditory sound presentation, (c) olfactory sniff strip preparation, (d) Ultracortex EEG cap setup.
Figure 2. Basic LSTM Cell [19].
Figure 3. Bidirectional Long Short Term Memory Model.
Figure 4. Relationship between discriminability of evoked responses to sensory motor stimuli (auditory, olfactory, motor imagery, and movement) as defined by network classification accuracy (%) and subject age.
Figure 5. Discriminability of sensory motor potentials tested (auditory, olfactory, motor imagery, and movement) as defined by network classification accuracy (%) of the signals for test groups studied.
Figure 6. Discriminability of left and right movement and motor imagery as defined by network classification accuracy (%) of the signals for test groups studied.
Figure 7. Mean band power (dB/Hz) of Gamma, Beta, Alpha, Theta, and Delta bands found in the sensory motor potentials triggered in the test groups studied.
Figure 8. Mean band power (dB/Hz) of Gamma, Beta, Alpha, Theta, and Delta frequency bands elicited by auditory and olfactory stimuli, in addition to motor imagery and movement.
Figure 9. Relationship between mean band power (dB/Hz) of sensory motor potentials studied (auditory, olfactory, motor imagery, and movement) and subject age.
Figure 10. Voltage distribution (microvolts; µV) across scalp of healthy subject (left) and MCI subject (right) from (a) age group 40–60, (b) age group >60.
Table 1. Classification accuracy (%) of evoked potentials.

| Cognitive Function | Stimulus | 20 to 40 | 40 to 60 | >60 | MCI |
|---|---|---|---|---|---|
| Auditory | Auditory | 64.86 | 82.52 | 57.45 | 57.64 |
| Motor Imagery | Left | 82.31 | 82.64 | 79.63 | 76.86 |
| | Right | 72.67 | 94.21 | 70.53 | 51.85 |
| | Mean MI | 77.48 | 88.43 | 75.08 | 64.35 |
| Motor Movement | Left | 82.09 | 87.81 | 82.14 | 50.93 |
| | Right | 78.12 | 78.94 | 91.13 | 63.89 |
| | Mean Motor Movement | 80.10 | 83.37 | 86.64 | 57.41 |
| Olfactory | Neem | 69.55 | 88.43 | 62.70 | 36.12 |
| | Eucalyptus | 46.05 | 95.14 | 74.21 | 67.60 |
| | Lavender | 90.64 | 72.68 | 75.31 | 29.63 |
| | Rosewater | 71.50 | 78.71 | 61.11 | 20.37 |
| | Mean Olfactory | 69.43 | 78.18 | 68.33 | 38.43 |

Note: MI = Motor Imagery.
Table 2. Mean inter-subject classification accuracy (%).

| Cognitive Function | 20 to 40 | 40 to 60 | >60 |
|---|---|---|---|
| Auditory and Olfactory | 48.2 | 61.35 | 56.9 |
| Auditory and Motor | 56.56 | 78.115 | 60.66 |
| Olfactory and Motor | 60.3 | 58.825 | 55.45 |
| Auditory, Olfactory and Motor | 61.46 | 75.22 | 66.4 |
| Average | 56.63 | 68.3775 | 59.8525 |
Table 3. Detection accuracies (%) for each age group.

| Combined Functions | 20–40 | 40–60 | MCI (40–60) | >60 | MCI (>60) |
|---|---|---|---|---|---|
| Auditory and Olfactory | 74.9 | 76.11 | 25.26 | 68.21 | 52.41 |
| Auditory, Motor, and MI | 72.5 | 89.81 | 14.4 | 84.54 | 22.5 |
| Auditory, Olfactory, Motor, and MI | 86.88 | 91.93 | 73.33 | 81.64 | 65.73 |
Table 4. Cerebral regions with predominant brainwave frequencies for sensory motor functions.

| Cognitive Function | Stimulus | 20–40 | 40–60 | >60 | MCI |
|---|---|---|---|---|---|
| Auditory | Auditory | OT | FP & OF | FP | FM |
| Motor Imagery | Left | M | FP | FP | F&P |
| | Right | M | F | F | FP |
| Movement | Left | FM | FM | FM | M |
| | Right | M | OF | OF | FP |
| Olfactory | Neem | OT | OF | OF | FP |
| | Eucalyptus | FT&OF | OF | OF | FP |
| | Lavender | OT | OF | OF | FP |
| | Rosewater | OT | F | OF & FT | FP |
Table 5. Statistics of variation (St. Dev = standard deviation; CoV = coefficient of variation, %).

| Combined Functions | 20–40 St. Dev | 20–40 CoV | 40–60 St. Dev | 40–60 CoV | >60 St. Dev | >60 CoV | MCI St. Dev | MCI CoV |
|---|---|---|---|---|---|---|---|---|
| Auditory and Olfactory | 7.77 | 10.37 | 8.04 | 10.56 | 7.52 | 11.02 | 18.37 | 41.89 |
| Auditory, Motor, and MI | 14.50 | 20.00 | 5.50 | 6.12 | 5.5 | 6.12 | 5.38 | 11.17 |
| Auditory, Olfactory, Motor, and MI | 5.26 | 6.05 | 6.68 | 7.27 | 12.27 | 14.95 | 2.9 | 5.92 |
Table 6. Results of Analysis of Variance (ANOVA).

| Function | p-Value | F-Critical Value | F-Value |
|---|---|---|---|
| Auditory | 0.0076 | 4.19 | 3.39 |
| Left MI | 0.0021 | 4.23 | 11.71 |
| Right MI | 0.0028 | 4.26 | 5.49 |
| Left Movement | 0.0012 | 4.19 | 12.9 |
| Right Movement | 0.0083 | 4.23 | 14.29 |
| Olfactory | 0.0026 | 4.19 | 10.96 |
Table 7. Results of Student’s t-Test on classification accuracy (%).

| Statistic | >60 | MCI |
|---|---|---|
| Mean | 74.14 | 56.31 |
| Variance | 156.91 | 218.19 |
| Observations | 6 | 6 |

| Student’s t-Test | Value |
|---|---|
| Hypothesized Mean Difference | 0 |
| Degrees of Freedom | 10 |
| T-Statistic | 2.25418 |
| P(T ≤ t) two-tail | 0.047839 |
| T Critical two-tail | 2.228139 |
Table 8. Results of Student’s t-Test on band power (dB/Hz).

| Statistic | >60 | MCI |
|---|---|---|
| Mean | 1,430,635.8 | 708,396.33 |
| Variance | 4.53 × 10^12 | 1.50 × 10^11 |
| Observations | 9 | 9 |

| Student’s t-Test | Value |
|---|---|
| Hypothesized Mean Difference | 0 |
| Degrees of Freedom | 9 |
| T-Statistic | 1.0017 |
| P(T ≤ t) two-tail | 0.34 |
| T Critical two-tail | 2.26 |
Table 9. Cerebral regions with predominant brainwave frequencies.

| Frequency Band | 20–40 | 40–60 | >60 | MCI |
|---|---|---|---|---|
| Delta | M&FT | FP | OT | FP |
| Theta | OT | OF | FP | FP |
| Alpha | OT | OF | OF | FP |
| Beta | OP | FM&OF | FP | FP |
| Gamma | FT | OF | FP | FP |

Share and Cite

Sridhar, S.; Manian, V. EEG and Deep Learning Based Brain Cognitive Function Classification. Computers 2020, 9, 104. https://doi.org/10.3390/computers9040104
