Article

Eye State Identification Based on Discrete Wavelet Transforms

CITIC Research Center, University of A Coruña, Campus de Elviña, 15071 A Coruña, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(11), 5051; https://doi.org/10.3390/app11115051
Submission received: 5 May 2021 / Revised: 16 May 2021 / Accepted: 27 May 2021 / Published: 29 May 2021
(This article belongs to the Special Issue Advances in Information and Communication Technologies (ICT))


Abstract

We present a prototype to identify eye states from electroencephalography signals captured from one or two channels. The hardware is based on the integration of low-cost components, while the signal processing algorithms combine the discrete wavelet transform and linear discriminant analysis. We consider different parameters: nine different wavelets and two feature extraction strategies. A set of experiments performed in real scenarios allows us to compare the resulting configurations and determine one with high accuracy and a short response delay.

1. Introduction

During recent decades, eye gaze analysis and eye state recognition have constituted an active research field due to their direct implications in emerging areas such as clinical diagnosis or Human–Machine Interfaces (HMIs). The ocular state of users and their gaze movements can reveal important features of their cognitive condition, which can be crucial for health care purposes but also for the analysis of daily life activities. Hence, it has been studied and applied in several domains such as driver drowsiness detection [1,2,3], robot control [4], infant sleep–waking state identification [5] or seizure detection [6], among others [7,8].
Different techniques have been proposed for studying eye gaze and eye state, such as Videooculography (VOG), Electrooculography (EOG) and Electroencephalography (EEG). In VOG [9,10], several cameras record videos or pictures of the user's eyes and, by applying image processing and artificial vision algorithms, provide an accurate analysis of the user's eye state. In EOG [11,12,13,14,15], electrodes are placed on the user's skin near the eyes in order to capture the electrical signals produced by ocular activity. On the other hand, in the EEG technique [16,17], the electrical signals produced by the brain are measured using electrodes placed on the scalp of the user. The computational complexity associated with the algorithms employed in image-based methods, such as VOG, is considerably higher than that of EOG and EEG due to the costly process of analyzing and classifying multiple images [18]. The EOG method seems to be an interesting technique for building HMIs based on eye movements or blinking, but the placement of electrodes on the user's face might be uncomfortable and impractical in real applications [19]. Thus, the EEG technique is an attractive solution for developing new interfaces that, based on the eye state of the user, can analyze and infer their cognitive state (relaxed, stressed, asleep, etc.), which could be crucial information for the implementation of real applications.
EEG is a popular technique for neuroimaging and brain signal acquisition widely used in the study of brain disorders [20] and in Brain–Computer Interface (BCI) systems [21]. EEG has several advantages such as its high portability and temporal resolution, its relatively low cost, and its ease of use [22,23] when compared to other brain signal acquisition techniques such as Magnetoencephalography (MEG), Electrocorticography (ECoG) or functional Magnetic Resonance Imaging (fMRI). In particular, EEG-based eye state detection has been applied in several domains, such as clinical diagnosis and health care. In this regard, Naderi et al. [6] propose a technique based on EEG time series and Power Spectral Density (PSD) features and on the use of a Recurrent Neural Network (RNN) for their classification. Their technique distinguished a relaxed, open eye state from an epileptic seizure with an accuracy of 100%. In another study on the same dataset, Acharya et al. [24] propose the use of Convolutional Neural Networks (CNNs) for the development of a Computer-Aided Diagnosis (CAD) system that automatically detects seizures. Their technique achieves an accuracy, specificity, and sensitivity of 88.67%, 90.00%, and 95.00%, respectively. Moreover, EEG-based eye state detection has been successfully applied to automatic driver drowsiness detection. Yeo et al. [25] proposed using a Support Vector Machine (SVM) as the classification algorithm to identify and differentiate the EEG changes that occur between alert and drowsy states. Four main EEG rhythms (delta, theta, alpha and beta) were employed for extracting different frequency features, such as the dominant frequency, frequency variability, center of gravity frequency and the average power of the dominant peak. Their method reached a classification accuracy of 99.30% and was also able to predict the transition from alertness to drowsiness with an accuracy of over 90%. Furthermore, EEG eye state identification has been employed for interaction with BCIs. For instance, Kirkup et al. [26] present a home automation control system for rapid ON/OFF appliance switching. It computes a threshold on the alpha band to determine the user's eye state and control external devices.
Due to the wide variety of areas where EEG eye state detection can be applied, several methods have been presented to achieve higher classification accuracies. In this sense, Rösler and Suendermann [27] tested 42 classification algorithms in terms of their performance in predicting the eye state. For this purpose, a dataset containing the two possible ocular states was recorded using the 14 channels of the Emotiv EPOC headset. The reported results showed that standard classifiers such as naïve Bayes, Artificial Neural Networks (ANNs) or Logistic Regression (LR) offered poor classification accuracies, while instance-based algorithms such as IB1 or KStar offered significantly better results. The latter classifier achieved the best performance, with a classification accuracy of 97.30%. However, it took at least 20 min to classify the state of new instances. Moreover, the dataset included the data of only one subject, so the authors could not ensure that the obtained results are generalizable. Several works have presented new classification methods based on this dataset. For example, Wang et al. [17] proposed extracting channel standard deviations and averages as features for an Incremental Attribute Learning (IAL) algorithm, achieving an error rate of 27.45% for eye state classification. In a more recent study, Saghafi et al. [28] propose studying the maximum and minimum values of the EEG signals in order to detect any eye state change. Once this change has been detected, the last two seconds of the signal are low-pass filtered below 8 Hz and passed through Multivariate Empirical Mode Decomposition (MEMD) for feature extraction. These features are fed into a classification algorithm to confirm the eye state change. For this purpose, they tested ANNs, LR and SVM. Their proposed algorithm using LR as the classifier detected the eye state with an accuracy of 88.2% in less than 2 s. Hamilton et al. [29] proposed a new system based on eager learners (e.g., decision trees) in order to improve the classification time achieved by Rösler and Suendermann [27]. For this purpose, three ensemble learners were evaluated: a rotational forest that implements random forests as its base classifiers, a rotational forest that implements J48 trees as its base classifiers and is boosted by adaptive boosting, and an ensemble of the rotational random forest model with the KStar classifier. The results achieved in the study showed that the approach using J48 trees and adaptive boosting offered accurate classification rates within the time constraints of real-time classification.
Although the aforementioned papers show methods to detect eye states with high accuracy, they usually gather the brain activity using a large number of electrodes and voluminous EEG devices, which might be cumbersome and uncomfortable for real-life applications. In order to avoid these limitations, we present an EEG-based system that employs a reduced number of electrodes for capturing the brain signals. For this purpose, we extend our prototype presented in [16] to the case of two input channels in order to build a multi-dimensional feature set that improves the detection rates and reduces the response time of the system. We study and compare two algorithms with low computational complexity for eye state detection. For feature extraction, we employed the Discrete Wavelet Transform (DWT), which presents lower computational complexity than other widely known algorithms such as the Fast-Fourier Transform (FFT) [30]. For feature classification, we applied Linear Discriminant Analysis (LDA), a popular technique in BCI systems, which also presents low computational requirements [23,31].
The paper is organized as follows. Section 2 shows the theoretical background of DWT and features classifiers. Section 3 describes the proposed system. Section 4 defines the materials and methods employed in the experiments. Section 5 shows the obtained results. Finally, Section 6 analyzes these results and Section 7 presents the most relevant conclusions of this work.

2. Theoretical Background

Brain signals captured by EEG devices need to be analyzed and processed for their posterior classification and subsequent translation into a specific mental state. This task is carried out by a signal processing unit, whose two main stages are feature extraction and feature classification. The feature extraction process aims to find the most relevant values, called features, that best describe the original raw EEG data [32]. These features are sent to a classification algorithm, which is responsible for estimating the mental state of the user. In this section, we present DWT as the feature extraction technique and LDA as the classification algorithm employed throughout this work.

2.1. Wavelet Transform

Wavelet Transform (WT) is a mathematical technique particularly suitable for non-stationary signals due to its properties of time-frequency localization and multi-rate filtering, which means that a signal can be extracted at a particular time and frequency and can be differentiated at various frequencies [33].
Wavelets can be defined as small waves limited in time, with zero mean, finite energy over their time course and a band-limited spectrum, i.e., they are composed of a relatively limited range of frequencies [34,35]. Wavelet functions can be scaled in time and translated to any time point without changing their original shape. The WT breaks down the input signal into a set of time-scaled and time-translated versions of the same basic wavelet. The set of scaled and translated wavelets of a unique mother wavelet $\psi(t)$ is called a wavelet family, denoted as $\psi_{a,b}(t)$ and obtained as follows
$$\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{t-b}{a}\right), \qquad (1)$$
where $t$ denotes time, $a, b \in \mathbb{R}$ and $a \neq 0$. The wavelet function in (1) becomes wider when $|a|$ increases and is shifted in time when $b$ varies. Therefore, $a$ is called the scaling parameter, which determines the oscillatory frequency and the length of the wavelet, while $b$ is called the translation parameter.
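As a simple illustration of Equation (1), the following Python sketch (with hypothetical helper names, and the Haar function standing in for the mother wavelet) evaluates a scaled and translated wavelet on a time grid:

```python
import numpy as np

def haar(t):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def wavelet_family(t, a, b, mother=haar):
    """Scaled (a) and translated (b) version of the mother wavelet, Eq. (1)."""
    return mother((t - b) / a) / np.sqrt(abs(a))

t = np.linspace(-2, 4, 1200)
psi_wide = wavelet_family(t, a=2.0, b=0.0)    # stretched, lower-frequency version
psi_narrow = wavelet_family(t, a=0.5, b=1.0)  # compressed and shifted version
```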
There are two types of WT: Continuous Wavelet Transform (CWT) and DWT. The idea behind CWT is to scale and translate the basic wavelet shape and convolve it with the signal to be analyzed at continuous time and frequency increments. However, analyzing the signal at every time point and scale is time consuming. Moreover, the information provided by the CWT at close time points and scales is highly correlated and redundant [34]. DWT is a more efficient and computationally simpler algorithm for the wavelet analysis [36]. In this case, discrete a and b parameters based on powers of two (dyadic scales and translations) are usually employed.
The DWT algorithm based on multi-resolution analysis can be implemented as a simple recursive filtering scheme composed of a pair of digital filters, high-pass and low-pass, whose coefficients are determined by the wavelet shape used in the analysis. Figure 1 shows the scheme for the DWT-based multi-resolution analysis. The signal is decomposed into a set of Approximation (A) coefficients, which represent the output of the low-pass filter, and Detail (D) coefficients, which are the output of the high-pass filter. The features extracted from these wavelet coefficients at different levels can reveal the inner characteristics of the signal. Hence, both the selection of a proper mother wavelet and the number of decomposition levels are of critical importance for the analysis of signals using DWT [37].
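For readers who want to reproduce this decomposition in software, the sketch below uses the PyWavelets library (an assumption of this example, not necessarily the tooling used by the authors) to obtain the decomposition filters g(n) and h(n) and the approximation and detail coefficients of a four-level multi-resolution analysis:

```python
import numpy as np
import pywt  # PyWavelets

# The mother wavelet fixes the pair of decomposition filters h(n) and g(n)
wav = pywt.Wavelet("db8")
h, g = wav.dec_lo, wav.dec_hi        # low-pass and high-pass filter coefficients

# Toy signal standing in for one EEG channel
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)

# Four-level multi-resolution analysis (cascade of filtering + downsampling by 2)
A4, D4, D3, D2, D1 = pywt.wavedec(x, wavelet="db8", level=4)
```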
The DWT has been widely applied in EEG signal processing, particularly as a feature extraction method that feeds a classification algorithm for mental state recognition. For instance, it has been applied for the classification and analysis of Event-related Potential (ERP) signals [38,39], self-regulated Slow Cortical Potentials (SCPs) [40], single-sweep ERPs [41], among others. It has been also applied for Motor Imagery (MI) data classification [42,43] and for the characterization and classification of epileptic strokes through EEG recordings [33,44,45].

2.2. Linear Discriminant Analysis

The main goal of LDA is to project the original multidimensional data into a lower dimensional subspace with higher class separability [46,47]. For this reason, it is also widely used as a dimensionality reduction algorithm as well as a classifier. LDA assumes that all the classes are separable and that they follow a Gaussian distribution. Let us consider a binary classification problem with training samples $\mathcal{D} = \{(\mathbf{x}^{(n)}, y^{(n)}), (\mathbf{x}^{(n+1)}, y^{(n+1)}), \ldots, (\mathbf{x}^{(n+N-1)}, y^{(n+N-1)})\}$, where $\mathbf{x} \in \mathbb{R}^{d}$ is the input feature vector and $y \in \{-1, 1\}$ is the class label. LDA seeks a hyperplane in the feature space that separates both classes. In the case of a multi-class problem with more than two classes, several hyperplanes are used [23]. The optimal separating hyperplane can be expressed as
$$f(\mathbf{x}) = \mathbf{w} \cdot \mathbf{x} + b, \qquad (2)$$
where $\mathbf{w}$ is the projection vector and $b$ is a bias term. The projection vector $\mathbf{w}$ is defined as [48]
$$\mathbf{w} = \Sigma_{c}^{-1}\left(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2\right), \qquad (3)$$
where $\boldsymbol{\mu}_i$ is the estimated mean of the $i$-th class and $\Sigma_{c} = \frac{1}{2}\left(\Sigma_1 + \Sigma_2\right)$ is the estimated common covariance matrix, i.e., the average of the class-wise empirical covariance matrices [48]. The corresponding estimators of the covariance matrix and the mean are calculated as follows
$$\Sigma = \frac{1}{N-1}\sum_{i=1}^{N}\left(\mathbf{x}^{(i)} - \boldsymbol{\mu}\right)\left(\mathbf{x}^{(i)} - \boldsymbol{\mu}\right)^{T}, \qquad (4)$$
$$\boldsymbol{\mu} = \frac{1}{N}\sum_{i=1}^{N}\mathbf{x}^{(i)}. \qquad (5)$$
Once the projection vector has been calculated in the training phase, the predicted class for an unseen feature vector $\mathbf{x}$ is determined by $\operatorname{sign}(f(\mathbf{x}))$. Thus, the class assigned to $\mathbf{x}$ will be $1$ if $f(\mathbf{x}) > 0$ and $-1$ otherwise.
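A minimal training and prediction sketch following Equations (2)–(5) is given below; the choice of the bias term b, placing the boundary halfway between the projected class means (equal priors), is an assumption, since the text does not specify it:

```python
import numpy as np

def fit_lda(X_pos, X_neg):
    """Binary LDA following Eqs. (3)-(5): X_pos and X_neg hold the training
    feature vectors (one per row) of classes y = 1 and y = -1, respectively."""
    mu1, mu2 = X_pos.mean(axis=0), X_neg.mean(axis=0)
    S1 = np.cov(X_pos, rowvar=False)        # class-wise covariance, Eq. (4)
    S2 = np.cov(X_neg, rowvar=False)
    Sc = np.atleast_2d(0.5 * (S1 + S2))     # common covariance matrix
    w = np.linalg.solve(Sc, mu1 - mu2)      # projection vector, Eq. (3)
    b = -0.5 * w @ (mu1 + mu2)              # assumed bias: boundary between class means
    return w, b

def lda_predict(X, w, b):
    """Assign class 1 if f(x) = w.x + b > 0 and -1 otherwise, Eq. (2)."""
    return np.where(X @ w + b > 0, 1, -1)
```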
This classification algorithm will be applied over the features extracted with the DWT in order to estimate the ocular state of the user. Therefore, it will face a binary classification problem, i.e., open eye state (oE) or closed eye state (cE).
LDA is probably the most used classifier for BCI design [32]. It has been successfully applied in different BCI systems, such as P300 spellers [49], MI-based applications for prostheses and orthosis control [50,51], among others [23,52]. LDA has a lower computational burden and faster rates than other popular classifiers such as SVM or ANN, which makes it suitable for the development of online BCI systems [23,31].

3. Proposed System

Figure 2 and Figure 3 show the hardware components of the developed system and its procedure for eye state identification, respectively. First, the brain activity of the user is captured by the EEG device, then this activity signal is processed and decomposed by the DWT. The obtained coefficients are then employed to extract the features, which finally feed the classification algorithm that estimates the user’s ocular state. The following sections describe this procedure in detail.

3.1. EEG Device

For capturing the brain activity of the user, we have developed a low-cost EEG device that uses a total of four electrodes: two input channels, and the reference and ground electrodes. This device is an extension of the prototype presented in our previous work [16] with an additional input channel.
The signal captured from each input channel (depicted in element 1 of Figure 2) is amplified and bandpass filtered between 4.7 and 29.2 Hz. Towards this end, we use the AD8221 instrumentation amplifier followed by a 50 Hz notch filter to avoid the interference of electric devices in the vicinity of the sensor wires, a second order low-pass filter, a second order high-pass filter and a final bandpass filter with adjustable gain (see element 2 of Figure 2). Once the brain signal has been captured, amplified and filtered, the ESP32 microcontroller [53] is responsible for its sampling (shown in element 3 of Figure 2). A sampling frequency of 200 Hz is employed.

3.2. Feature Extraction and Classification

Once the brain signals of the user have been captured and digitized, they are analyzed and decomposed with the DWT for extracting the features. Thanks to the dual core nature of the ESP32, complex processing tasks such as DWT and its subsequent classification can be performed while the signal is sampled.
As previously described, the coefficients extracted by the DWT at different levels can reveal the inner characteristics of the signal. Thus, both the selection of a proper mother wavelet and the number of decomposition levels are of primary importance for the analysis of the brain signals [37]. The number of decomposition levels is based on the dominant frequency component of the signal. Therefore, the levels are chosen such that those parts of the signal that correlate well with the frequencies needed for signal classification are retained in the wavelet coefficients [37]. In our system, in order to decompose the signal according to the main EEG rhythms, the number of levels of decomposition is 4. Hence, the signal is decomposed into four detail levels, D1–D4, and one final approximation level, A4. Table 1 shows the wavelet coefficients and their EEG rhythm equivalence.
According to these decomposition levels and their equivalent EEG rhythms, those detail and approximation coefficients are studied and employed for extracting the features and estimating the ocular state of the user. To this end, we propose two schemes based on different feature sets defined from data obtained in alpha and beta rhythms. It is important to note that alpha rhythms correspond to the detail coefficients of level 4 (D4), while beta rhythms correspond to the detail coefficients in level 3 (D3).
Let $P_{D3}$ be the average power of the wavelet coefficients at D3, $P_{D4}$ the average power of the wavelet coefficients at D4, and $R = P_{D3}/P_{D4}$ the ratio between these two average powers. The first scheme, termed Scheme 1, employs the ratio $R$ as the only feature for eye state identification. Conversely, in the second scheme, termed Scheme 2, two different features are extracted from the wavelet coefficients: the standard deviation of the level D4 coefficients, $S_{D4}$, and the ratio $R$. In both cases, the LDA classification algorithm is applied for eye state identification.
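The following sketch illustrates how these features could be computed from a single EEG window with PyWavelets; taking the average power as the mean of the squared detail coefficients is an assumption about the exact definition used in the paper:

```python
import numpy as np
import pywt

def extract_features(window, wavelet="db8", scheme=2):
    """Features from one EEG window sampled at 200 Hz:
    D4 ~ alpha band (6.25-12.5 Hz), D3 ~ beta band (12.5-25 Hz), see Table 1."""
    A4, D4, D3, D2, D1 = pywt.wavedec(window, wavelet=wavelet, level=4)
    P_D3 = np.mean(D3 ** 2)        # average power of level-3 detail coefficients
    P_D4 = np.mean(D4 ** 2)        # average power of level-4 detail coefficients
    R = P_D3 / P_D4                # Scheme 1 feature
    if scheme == 1:
        return np.array([R])
    S_D4 = np.std(D4)              # standard deviation of the D4 coefficients
    return np.array([S_D4, R])     # Scheme 2 features
```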

4. Materials and Methods

To evaluate the suitability of the proposed system, we carried out a series of experiments with a group of 7 volunteers who agreed to participate in the research, with an average age of 29.67 years (range 24–56). The participants reported no hearing or visual impairments. Participation was voluntary and informed consent was obtained from each participant for the use of their EEG data in our study.
Our EEG prototype was used to capture the brain activity of the subjects. Gold cup electrodes were placed in accordance with the 10–20 international system for electrode placement [54] and attached to the subjects' scalps using a conductive paste. Electrode–skin impedances were kept below 15 kΩ at all electrodes.
Several studies have shown that the alpha rhythm predominates in the occipital area of the brain when subjects keep their eyes closed and that it is reduced when visual stimulation takes place [55,56,57]. In accordance with these works, the input channels of the EEG device were located at the O1 and O2 positions. Moreover, to optimize the setup time and EEG signal quality, the reference and ground electrodes were placed at the FP2 and A1 positions, respectively, where the absence of hair facilitates their placement [58] (see Figure 4).
All the experiments were conducted in a sound-attenuated and controlled environment. Participants were seated in a comfortable chair and asked to relax and focus on the task, trying to avoid any distraction or external stimulus. Experiments were composed of two tasks: the first one, 60 s of oE, and the second, 60 s of cE. In order to simulate a real-life situation, subjects could freely move their gaze during the eye-open tasks, without the need to keep it at a fixed point. The procedure was explained in advance, allowing the participants to feel comfortable and familiar with the test environment. Moreover, possible artifacts were minimized by asking them not to speak, move or blink (or at least as little as possible) throughout the oE task.
A total of 10 tasks (i.e., 10 min) were continuously recorded for each participant, corresponding to 5 tasks of oE and 5 tasks of cE. Tasks were separated by a sound alert, which signaled the user to change state. All the experiments started with oE as the initial state (see Figure 5). The captured signals were filtered between 4 and 40 Hz and the mean of the signal was subtracted.
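A possible implementation of this preprocessing step is sketched below; the use of a fourth-order Butterworth filter with zero-phase filtering is an assumption, since the paper only states the 4–40 Hz pass band and the mean removal:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200  # sampling frequency of the prototype (Hz)

def preprocess(x, fs=FS, band=(4.0, 40.0), order=4):
    """Band-pass filter the recording between 4 and 40 Hz and subtract its mean."""
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    y = filtfilt(b, a, x)        # zero-phase filtering
    return y - np.mean(y)
```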
Since an essential feature of our study is to provide a reliable system with high accuracy rates, several types of wavelets, already used in previous works for EEG analysis, were evaluated and compared for extracting the features. In particular, nine types of wavelets were tested: db2, db4, db8, coif1, coif4, haar, sym2, sym4, and sym10.
Moreover, overlapped windows have been used for extracting the features. We have considered time windows of $D$ seconds with an overlap of $d$ seconds. It is important to note that, using this technique, the response time of the system is directly related to $D$ and $d$, i.e., the decision delay, which is the wait time for a new classifier decision, is given by $D - d$ seconds. Hence, in order to find the shortest response time with a reliable accuracy rate, we have evaluated our system using several window sizes, ranging from 1 to 10 s. The size selected for $d$ was constant for all the experiments: 80% of the size of $D$.
To avoid classification bias, a 5-fold cross-validation technique was applied for training and evaluating the classifier; that is, 80% of the data were used for training the algorithm and the remaining 20% for testing it. In our experiments, this means that 8 of the 10 min (4 min for each eye state) were used to train the LDA classifier and the remaining 2 min (1 min for each eye state) were used to test it. This process was repeated 5 times, using each minute of each eye state once for testing the classifier. Therefore, the accuracy results reported throughout this work correspond to an average over all these executions using the different training and test sets.
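The evaluation procedure can be sketched as follows, reusing the extract_features example above; scikit-learn's standard 5-fold splitter is used here as an approximation of the per-minute partitioning described in the text:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 200  # sampling frequency (Hz)

def sliding_windows(x, D=10.0, overlap=0.8, fs=FS):
    """Split a recording into windows of D seconds with an overlap of d = 0.8 * D,
    so a new decision is available every D - d seconds."""
    win = int(D * fs)
    step = max(1, int(win * (1 - overlap)))
    return np.array([x[i:i + win] for i in range(0, len(x) - win + 1, step)])

def evaluate(x_open, x_closed, D=10.0):
    """x_open / x_closed: hypothetical preprocessed single-channel recordings
    of each eye state. Returns the mean 5-fold cross-validation accuracy."""
    W_o, W_c = sliding_windows(x_open, D), sliding_windows(x_closed, D)
    X = np.array([extract_features(w) for w in np.vstack([W_o, W_c])])
    y = np.array([1] * len(W_o) + [-1] * len(W_c))
    scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
    return scores.mean()
```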

5. Experimental Results

In this section, we present the results obtained for both feature schemes, i.e., using only one extracted feature or two features. Moreover, for each scheme, we compared different wavelet types and window sizes. The results obtained using the data from only one electrode located at O2 were compared to those obtained using both electrodes located at O1 and O2 positions. The main goal of this experiment is to determine which mother wavelet, feature scheme and number of input channels offer the best performance in terms of accuracy and response time.

5.1. Scheme 1: One Feature

The experiments for this scheme were carried out using the ratio $R$ as the only feature for eye state classification. In order to compare the different wavelet types, we employed overlapped windows of 10 s and a decision delay of $D - d = 2$ s.
Table 2 shows the mean accuracy over all the subjects obtained for each wavelet type, for both eye states and using the data from one and two channels. The results achieved for cE are significantly higher than those achieved for oE, regardless of the wavelet type and the number of channels. In the case of cE, all the accuracies are above 86%, while in the oE case some drop to 71% and none of them exceeds 86%. For most of the wavelets, using more channels does not imply an improvement in performance, since similar results are obtained using one- or two-channel data. In addition, the number of filter coefficients for each mother wavelet is also shown. The number of operations needed to apply the multi-resolution analysis of the input signal is directly related to this filter length.
From Table 2, we can see that coif4 offers the highest accuracy for oE and results above 91% for cE; thus, it could be the best choice for implementing the system. In this regard, for a robustness analysis, Table 3 shows the accuracy obtained for each subject using coif4 as the mother wavelet. The results follow the same pattern described before, where cE offers better classification accuracy than oE and similar results are achieved using one or two sensor data. All subjects except one (Subject 5) show accuracies above 80% for any condition, and some of them, such as Subjects 1, 3 and 6, even present results higher than 89%.
A second set of experiments was conducted in order to determine the performance of the system for short delay times, which is an important aspect when implementing BCIs in real-life scenarios. Figure 6 shows the accuracy obtained for each subject and ocular state as a function of the window size. In these experiments, we considered a constant overlapped time slot $d$ with a duration of 80% of the window size $D$. Therefore, the decision delay of the system will be 20% of $D$, i.e., if $D = 1$ s the delay would be $D - d = 0.2$ s. It is apparent that there exists a trade-off between the window size and the accuracy of the system, i.e., as the window size increases the obtained accuracy improves and vice versa. For short window sizes the classifier offers low accuracies, especially for the oE case, where none of the subjects exceeds 75% with the shortest window size, $D = 1$ s, and one of them, Subject 5 (Figure 6e), shows an accuracy below 50%. Moreover, as presented in the previous results in Table 2 and Table 3, similar accuracies are achieved for one- and two-channel data.

5.2. Scheme 2: Two Features

For this scheme, two extracted features were employed for the prediction of the eye state of the user: $S_{D4}$ and the ratio $R$. Windows with a duration of 10 s and a delay of $D - d = 2$ s were employed. For one realization of the cross-validation process, the extracted features of the training set are represented in Figure 7, where the decision boundary of the LDA is also marked.
The signals corresponding to three windows are shown in Figure 8, together with their detail coefficients D3 and D4. Three situations are compared: oE without artifacts (Figure 8a–c), oE with a blink artifact (Figure 8d–f) and cE (Figure 8g–i). Figure 7 marks the features corresponding to these three windows. We can see that they have been correctly classified, even the one with the blink artifact, since the window size is larger than the artifact duration.
Table 4 shows the mean accuracy over all the subjects obtained for each wavelet type and ocular state for one and two sensor data. All the wavelet types offer high performance for both eye states, with an average accuracy above 91%, regardless of the number of channels employed. Similar results are obtained with one or two sensors, although the latter are slightly better. In addition, we can see that db8 offers the highest results for 3 of the 4 conditions; thus, it could be the best choice for implementing the system.
Table 5 shows the accuracy obtained for each subject, each eye state, and for one and two sensor data using db8 as the mother wavelet, a window duration of 10 s and a delay of $D - d = 2$ s. Results for oE are higher than those achieved for cE. Results from one and two channels are similar, so the use of only one channel could be enough for a reliable performance of the system.
The second set of experiments is used to determine the performance of using one or two sensors for short delays. Figure 9 depicts the accuracy obtained for each subject and ocular state as a function of the window size. As in previous experiments, the overlapped time slot $d$ was selected to be 80% of the window size $D$; therefore, the delay in the response will be 20% of $D$. As occurred in Scheme 1, there exists a trade-off between the window size and the accuracy of the system, i.e., as the window size increases the obtained accuracy improves and vice versa. Furthermore, the results obtained with the data from one sensor or two sensors are very close for large window sizes. However, in some subjects, such as Subjects 1, 3 and 5 (Figure 9a,c,e), the accuracy obtained for short window durations with two sensors is higher than that obtained with only one sensor. This can also be seen in Figure 9h, which depicts the average accuracy for all the subjects. Here, we can clearly observe that, for short window sizes, the data from two sensors offer better accuracy rates.

6. Discussion

Several solutions have been proposed during recent decades for the detection of the eye state through EEG activity [17,27,29]. However, these solutions usually capture the brain signals using large and voluminous devices, which are cumbersome and uncomfortable for the final user. The main goal of the presented study is to develop a new system for eye state identification based on an open EEG device that gathers the brain activity using a reduced number of electrodes. For this purpose, the DWT and the LDA were applied for feature extraction and feature classification, respectively.
Furthermore, different feature schemes are compared in order to determine which of them offers the best classification accuracy and response time. From Table 2, Table 3, Table 4 and Table 5, we can see that the scheme that considers two features ($S_{D4}$ and the ratio $R$) offers higher results than the scheme composed of a single feature, for all the mother wavelets (Table 2 and Table 4) and for six of the seven subjects (Table 3 and Table 5). This difference becomes more apparent for the oE case, especially when small window sizes are employed (see Figure 6 and Figure 9). Moreover, considering that the real implementation of the system requires an average accuracy greater than 80% for both ocular states, we can see from Figure 6h and Figure 9h that Scheme 2 achieves it at 2 s, while Scheme 1 needs 6 s.
Several shapes for wavelet functions have been proposed for the analysis of EEG signals, such as Haar, Daubechies (db2, db4 and db8), Coiflets (coif1, coif4) or Symlets (sym2, sym4, sym10). However, depending on the application or analysis in which they are involved, a particular wavelet family will perform more efficiently than the others [33,34,35,59]. Therefore, the selection of an appropriate mother wavelet is crucial for the correct performance of the system. Table 2 and Table 4 show the average results obtained for each wavelet type and both ocular states. For Scheme 1, with a single feature, there are remarkable differences between the wavelets. Furthermore, it can be observed that the results for cE are significantly higher than those obtained for oE. Conversely, for Scheme 2, the results obtained by the different wavelets are very similar and there are no big differences between oE and cE. Therefore, this second approach should be selected for the implementation of the system in a real scenario since it offers more robust results.
The response time of the system is also a key aspect when developing real-time and online applications. Consequently, we tested our system for small window sizes with short response times. Figure 6 and Figure 9 show the results for each subject and eye state using coif4 with a single feature and db8 with the two features, respectively. As previously mentioned, Scheme 2 offers higher accuracy and more robust results than Scheme 1, especially for the oE case and small window sizes. Moreover, similar results are achieved for one and two-channel data in the case of Scheme 1. However, for Scheme 2, the results obtained by the two-channel data are higher for some subjects. This difference is more apparent for small size windows (see Figure 9a,c,e).
Taking into account the filter lengths shown in Table 2, the number of operations needed to compute the db8 used in Scheme 2 is considerably lower than that needed to compute the coif4 used in Scheme 1.
We can conclude that Scheme 2, composed of the two features, is the most suitable option for implementing the system since it offers the best performance in terms of accuracy and response time. There is no significant difference between the use of one or two sensors for large window sizes; however, we consider that the use of both channels could be more suitable for the system since, for some subjects, it did show an improvement for small window sizes. Therefore, considering this system configuration with two input channels and two extracted features, an average accuracy of 77.93% for cE and 90.62% for oE was obtained for the shortest window size, $D = 1$ s, with five of the seven subjects being above 70%. Using a window size of $D = 3$ s, six of the seven subjects achieve an accuracy above 81% in both ocular states and, with $D = 5$ s, those six subjects exceed 86% accuracy in both eye states. The response time of the system is 20% of $D$, and therefore it would be 0.2 s for $D = 1$ s, 0.6 s for $D = 3$ s and 1 s for $D = 5$ s. Thus, the system offers a reliable classification accuracy for short response times, suitable for the implementation of non-critical applications.

7. Conclusions

We have presented a system for EEG eye state identification based on an open EEG device that captures the brain activity from only two input channels. We apply the DWT for decomposing the gathered signals and extracting the most relevant features for their subsequent classification. The performance of two different feature sets is compared in terms of accuracy and response time. We also compare the performance achieved when using one or two input channels. The results show that, for most users, using two channels does not improve the system performance significantly. On the other hand, the feature set composed of the two features (the standard deviation and the ratio between coefficients of the alpha and beta bands) offers the best accuracy for the shortest response times, achieving an average classification accuracy with two sensors of 90.60% and 97.25% for closed and open eyes, respectively, with a response time of 1 s. Future work includes increasing the number of participants in the experiments and considering subjects with mobility disorders.

Author Contributions

F.L. and F.J.V.-A. implemented the software; F.J.V.-A. developed the hardware prototype; P.M.C. and A.D. designed the experiments; F.L. and O.F. performed the experiments and the data analysis; F.L. and A.D. wrote the paper; P.M.C. and F.J.V.-A. revised the manuscript; A.D. and P.M.C. led the research. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been funded by the Xunta de Galicia (by grant ED431C 2020/15 and grant ED431G2019/01 to support the Centro de Investigación de Galicia “CITIC”), the Agencia Estatal de Investigación of Spain (by grants RED2018-102668-T and PID2019-104958RB-C42) and ERDF funds of the EU (FEDER Galicia & AEI/FEDER, UE); and the predoctoral Grant No. ED481A-2018/156 (Francisco Laport).

Institutional Review Board Statement

Not applicable for studies involving researchers themselves.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in the study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ANN	Artificial Neural Network
BCI	Brain–Computer Interface
CAD	Computer-Aided Diagnosis
cE	closed eye state
CNN	Convolutional Neural Network
CWT	Continuous Wavelet Transform
DWT	Discrete Wavelet Transform
ECoG	Electrocorticography
EEG	Electroencephalography
EOG	Electrooculography
ERP	Event-related Potential
fMRI	functional Magnetic Resonance Imaging
FFT	Fast-Fourier Transform
HMI	Human–Machine Interface
IAL	Incremental Attribute Learning
LDA	Linear Discriminant Analysis
LR	Logistic Regression
MEG	Magnetoencephalography
MEMD	Multivariate Empirical Mode Decomposition
MI	Motor Imagery
oE	open eye state
PSD	Power Spectral Density
RNN	Recurrent Neural Network
SCP	Slow Cortical Potential
SVM	Support Vector Machine
VOG	Videooculography
WT	Wavelet Transform

References

  1. Ueno, H.; Kaneda, M.; Tsukino, M. Development of drowsiness detection system. In Proceedings of the VNIS’94-1994 Vehicle Navigation and Information Systems Conference, Yokohama, Japan, 31 August–2 September 1994; pp. 15–20. [Google Scholar]
  2. Zhang, F.; Su, J.; Geng, L.; Xiao, Z. Driver fatigue detection based on eye state recognition. In Proceedings of the 2017 International Conference on Machine Vision and Information Technology (CMVIT), Singapore, 17–19 February 2017; pp. 105–110. [Google Scholar]
  3. Mandal, B.; Li, L.; Wang, G.S.; Lin, J. Towards detection of bus driver fatigue based on robust visual analysis of eye state. IEEE Trans. Intell. Transp. Syst. 2016, 18, 545–557. [Google Scholar] [CrossRef]
  4. Ma, J.; Zhang, Y.; Cichocki, A.; Matsuno, F. A novel EOG/EEG hybrid human–machine interface adopting eye movements and ERPs: Application to robot control. IEEE Trans. Biomed. Eng. 2015, 62, 876–889. [Google Scholar] [CrossRef]
  5. Estévez, P.; Held, C.; Holzmann, C.; Perez, C.; Pérez, J.; Heiss, J.; Garrido, M.; Peirano, P. Polysomnographic pattern recognition for automated classification of sleep-waking states in infants. Med. Biol. Eng. Comput. 2002, 40, 105–113. [Google Scholar] [CrossRef]
  6. Naderi, M.A.; Mahdavi-Nasab, H. Analysis and classification of EEG signals using spectral analysis and recurrent neural networks. In Proceedings of the 2010 17th Iranian Conference of Biomedical Engineering (ICBME), Isfahan, Iran, 3–4 November 2010; pp. 1–4. [Google Scholar]
  7. Wang, J.G.; Sung, E. Study on eye gaze estimation. IEEE Trans. Syst. Man Cybern. Part B Cybernet. 2002, 32, 332–350. [Google Scholar] [CrossRef]
  8. Kar, A.; Corcoran, P. A review and analysis of eye-gaze estimation systems, algorithms and performance evaluation methods in consumer platforms. IEEE Access 2017, 5, 16495–16519. [Google Scholar] [CrossRef]
  9. Krolak, A.; Strumillo, P. Vision-based eye blink monitoring system for human-computer interfacing. In Proceedings of the 2008 Conference on Human System Interactions, Krakow, Poland, 25–27 May 2008; pp. 994–998. [Google Scholar]
  10. Noureddin, B.; Lawrence, P.D.; Man, C. A non-contact device for tracking gaze in a human computer interface. Comput. Vis. Image Underst. 2005, 98, 52–82. [Google Scholar] [CrossRef]
  11. Bulling, A.; Ward, J.A.; Gellersen, H.; Tröster, G. Eye movement analysis for activity recognition using electrooculography. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 741–753. [Google Scholar] [CrossRef] [PubMed]
  12. Deng, L.Y.; Hsu, C.L.; Lin, T.C.; Tuan, J.S.; Chang, S.M. EOG-based Human–Computer Interface system development. Expert Syst. Appl. 2010, 37, 3337–3343. [Google Scholar] [CrossRef]
  13. Lv, Z.; Wu, X.P.; Li, M.; Zhang, D. A novel eye movement detection algorithm for EOG driven human computer interface. Pattern Recognit. Lett. 2010, 31, 1041–1047. [Google Scholar] [CrossRef]
  14. Barea, R.; Boquete, L.; Ortega, S.; López, E.; Rodríguez-Ascariz, J. EOG-based eye movements codification for human computer interaction. Expert Syst. Appl. 2012, 39, 2677–2683. [Google Scholar] [CrossRef]
  15. Laport, F.; Iglesia, D.; Dapena, A.; Castro, P.M.; Vazquez-Araujo, F.J. Proposals and Comparisons from One-Sensor EEG and EOG Human–Machine Interfaces. Sensors 2021, 21, 2220. [Google Scholar] [CrossRef]
  16. Laport, F.; Dapena, A.; Castro, P.M.; Vazquez-Araujo, F.J.; Iglesia, D. A Prototype of EEG System for IoT. Int. J. Neural Syst. 2020, 2050018. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, T.; Guan, S.; Man, K.; Ting, T. Time series classification for EEG eye state identification based on incremental attribute learning. In Proceedings of the 2014 International Symposium on Computer, Consumer and Control (IS3C), Taichung, Taiwan, 10–12 June 2014; pp. 158–161. [Google Scholar] [CrossRef]
  18. Islalm, M.S.; Rahman, M.M.; Rahman, M.H.; Hoque, M.R.; Roonizi, A.K.; Aktaruzzaman, M. A deep learning-based multi-model ensemble method for eye state recognition from EEG. In Proceedings of the 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 27–30 January 2021; pp. 0819–0824. [Google Scholar]
  19. Reddy, T.; Behera, L. Online Eye state recognition from EEG data using Deep architectures. Proceedings of 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 712–717. [Google Scholar] [CrossRef]
  20. Zhao, X.; Wang, X.; Yang, T.; Ji, S.; Wang, H.; Wang, J.; Wang, Y.; Wu, Q. Classification of sleep apnea based on EEG sub-band signal characteristics. Sci. Rep. 2021, 11, 1–11. [Google Scholar]
  21. Paszkiel, S. Using the Raspberry PI2 module and the brain-computer technology for controlling a mobile vehicle. In Conference on Automation; Springer: Cham, Switzerland, 2019; pp. 356–366. [Google Scholar]
  22. Ortiz-Rosario, A.; Adeli, H. Brain-computer interface technologies: From signal to action. Rev. Neurosci. 2013, 24, 537–552. [Google Scholar] [CrossRef] [PubMed]
  23. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279. [Google Scholar] [CrossRef]
  24. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Comput. Biol. Med. 2018, 100, 270–278. [Google Scholar] [CrossRef]
  25. Yeo, M.; Li, X.; Shen, K.; Wilder-Smith, E. Can SVM be used for automatic EEG detection of drowsiness during car driving? Saf. Sci. 2009, 47, 115–124. [Google Scholar] [CrossRef]
  26. Kirkup, L.; Searle, A.; Craig, A.; McIsaac, P.; Moses, P. EEG-based system for rapid on-off switching without prior learning. Med. Biol. Eng. Comput. 1997, 35, 504–509. [Google Scholar] [CrossRef]
  27. Rösler, O.; Suendermann, D. A first step towards eye state prediction using eeg. Proc. AIHLS 2013, 1, 1–4. [Google Scholar]
  28. Saghafi, A.; Tsokos, C.P.; Goudarzi, M.; Farhidzadeh, H. Random eye state change detection in real-time using EEG signals. Expert Syst. Appl. 2017, 72, 42–48. [Google Scholar] [CrossRef]
  29. Hamilton, C.R.; Shahryari, S.; Rasheed, K.M. Eye state prediction from EEG data using boosted rotational forests. In Proceedings of the 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 9–11 December 2015; pp. 429–432. [Google Scholar]
  30. Guo, H.; Burrus, C.S. Wavelet transform based fast approximate Fourier transform. In Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, Munich, Germany, 21–24 April 1997; Volume 3, pp. 1973–1976. [Google Scholar]
  31. Bellingegni, A.D.; Gruppioni, E.; Colazzo, G.; Davalli, A.; Sacchetti, R.; Guglielmelli, E.; Zollo, L. NLR, MLP, SVM, and LDA: A comparative analysis on EMG data from people with trans-radial amputation. J. Neuroeng. Rehabil. 2017, 14, 1–16. [Google Scholar] [CrossRef] [PubMed]
  32. Lotte, F. A tutorial on EEG signal-processing techniques for mental-state recognition in brain–computer interfaces. In Guide to Brain-Computer Music Interfacing; Springer: Cham, Switzerland, 2014; pp. 133–161. [Google Scholar]
  33. Adeli, H.; Zhou, Z.; Dadmehr, N. Analysis of EEG records in an epileptic patient using wavelet transform. J. Neurosci. Methods 2003, 123, 69–87. [Google Scholar] [CrossRef]
  34. Samar, V.J.; Bopardikar, A.; Rao, R.; Swartz, K. Wavelet analysis of neuroelectric waveforms: A conceptual tutorial. Brain Lang. 1999, 66, 7–60. [Google Scholar] [CrossRef] [PubMed]
  35. Gandhi, T.; Panigrahi, B.K.; Anand, S. A comparative study of wavelet families for EEG signal classification. Neurocomputing 2011, 74, 3051–3057. [Google Scholar] [CrossRef]
  36. Mallat, S.G. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693. [Google Scholar] [CrossRef] [Green Version]
  37. Subasi, A. EEG signal classification using wavelet feature extraction and a mixture of expert model. Expert Syst. Appl. 2007, 32, 1084–1093. [Google Scholar] [CrossRef]
  38. Quiroga, R.Q.; Sakowitz, O.; Basar, E.; Schürmann, M. Wavelet transform in the analysis of the frequency composition of evoked potentials. Brain Res. Protoc. 2001, 8, 16–24. [Google Scholar] [CrossRef]
  39. Quiroga, R.Q.; Garcia, H. Single-trial event-related potentials with wavelet denoising. Clin. Neurophysiol. 2003, 114, 376–390. [Google Scholar] [CrossRef]
  40. Hinterberger, T.; Kübler, A.; Kaiser, J.; Neumann, N.; Birbaumer, N. A brain–computer interface (BCI) for the locked-in: Comparison of different EEG classifications for the thought translation device. Clin. Neurophysiol. 2003, 114, 416–425. [Google Scholar] [CrossRef]
  41. Demiralp, T.; Yordanova, J.; Kolev, V.; Ademoglu, A.; Devrim, M.; Samar, V.J. Time–frequency analysis of single-sweep event-related potentials by means of fast wavelet transform. Brain Lang. 1999, 66, 129–145. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Zhou, J.; Meng, M.; Gao, Y.; Ma, Y.; Zhang, Q. Classification of motor imagery EEG using wavelet envelope analysis and LSTM networks. In Proceedings of the 2018 Chinese Control And Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 5600–5605. [Google Scholar]
  43. Li, M.-A.; Wang, R.; Hao, D.-M.; Yang, J.-F. Feature extraction and classification of mental EEG for motor imagery. In Proceedings of the 2009 Fifth International Conference on Natural Computation, Tianjian, China, 14–16 August 2009; Volume 2, pp. 139–143. [Google Scholar]
  44. Orhan, U.; Hekim, M.; Ozer, M. EEG signals classification using the K-means clustering and a multilayer perceptron neural network model. Expert Syst. Appl. 2011, 38, 13475–13481. [Google Scholar] [CrossRef]
  45. Li, M.; Chen, W.; Zhang, T. Classification of epilepsy EEG signals using DWT-based envelope analysis and neural network ensemble. Biomed. Signal Process. Control 2017, 31, 357–365. [Google Scholar] [CrossRef]
  46. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification and Scene Analysis; Wiley: New York, NY, USA, 1973; Volume 3. [Google Scholar]
  47. Muller, K.R.; Anderson, C.W.; Birch, G.E. Linear and nonlinear methods for brain-computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2003, 11, 165–169. [Google Scholar] [CrossRef] [Green Version]
  48. Blankertz, B.; Lemm, S.; Treder, M.; Haufe, S.; Müller, K.R. Single-trial analysis and classification of ERP components—A tutorial. NeuroImage 2011, 56, 814–825. [Google Scholar] [CrossRef] [PubMed]
  49. Bostanov, V. BCI competition 2003-data sets Ib and IIb: Feature extraction from event-related brain potentials with the continuous wavelet transform and the t-value scalogram. IEEE Trans. Biomed. Eng. 2004, 51, 1057–1061. [Google Scholar] [CrossRef]
  50. Neuper, C.; Müller-Putz, G.R.; Scherer, R.; Pfurtscheller, G. Motor imagery and EEG-based control of spelling devices and neuroprostheses. Prog. Brain Res. 2006, 159, 393–409. [Google Scholar]
  51. Pfurtscheller, G.; Solis-Escalante, T.; Ortner, R.; Linortner, P.; Muller-Putz, G.R. Self-paced operation of an SSVEP-Based orthosis with and without an imagery-based “brain switch:” A feasibility study towards a hybrid BCI. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 409–414. [Google Scholar] [CrossRef] [PubMed]
  52. Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B. A review of classification algorithms for EEG-based brain–computer interfaces. J. Neural Eng. 2007, 4, R1. [Google Scholar] [CrossRef] [PubMed]
  53. Espressif Systems (Shanghai), C. ESP32-WROOM-32 Datasheet. Available online: https://www.espressif.com/sites/default/files/documentation/esp32-wroom-32_datasheet_en.pdf (accessed on 1 April 2021).
  54. Jasper, H.H. The ten-twenty electrode system of the International Federation. Electroencephalogr. Clin. Neurophysiol. 1958, 10, 370–375. [Google Scholar]
  55. Barry, R.; Clarke, A.; Johnstone, S.; Magee, C.; Rushby, J. EEG differences between eyes-closed and eyes-open resting conditions. Clin. Neurophysiol. 2007, 118, 2765–2773. [Google Scholar] [CrossRef] [PubMed]
  56. La Rocca, D.; Campisi, P.; Scarano, G. EEG biometrics for individual recognition in resting state with closed eyes. In Proceedings of the 2012 BIOSIG-Proceedings of the International Conference of Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 6–7 September 2012; pp. 1–12. [Google Scholar]
  57. Gale, A.; Dunkin, N.; Coles, M. Variation in visual input and the occipital EEG. Psychon. Sci. 1969, 14, 262–263. [Google Scholar] [CrossRef] [Green Version]
  58. Ogino, M.; Mitsukura, Y. Portable drowsiness detection through use of a prefrontal single-channel electroencephalogram. Sensors 2018, 18, 4477. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Al-Qazzaz, N.K.; Hamid Bin Mohd Ali, S.; Ahmad, S.A.; Islam, M.S.; Escudero, J. Selection of mother wavelet functions for multi-channel EEG signal analysis during a working memory task. Sensors 2015, 15, 29015–29035. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Filter scheme for a multi-resolution DWT of an EEG signal sampled at 128 Hz, where g(n) and h(n) represent the impulse responses of the high- and low-pass filters, respectively, and ↓2 represents downsampling by a factor of 2.
Figure 2. Proposed device details. (1) Sensors; (2) amplifiers; (3) ESP32 module.
Figure 3. Block diagram for experiments based on feature extraction with DWT.
Figure 4. Anatomical electrode distribution in accordance with the standard 10–20 placement system used during the electroencephalography measurements. The green circles represent the input channels, while the gray and black bordered circles represent the reference and ground, respectively.
Figure 5. User’s experiment flowchart.
Figure 6. Accuracy obtained for each subject as a function of the window size using Scheme 1. Panels (a–g) represent the accuracy for Subjects 1 to 7, respectively. Panel (h) shows the average accuracy of all the subjects.
Figure 7. Training features and LDA decision boundary for one training set of the cross-validation process. oE-Blink, oE-No blink and cE* represent the features obtained from the signals shown in Figure 8.
Figure 8. EEG signals captured from one of the participants from channel O2 and their wavelet decomposition at levels 3 and 4: (a–c) show the signal captured for oE without artifacts and its detail coefficients at D3 and D4, respectively; (d–f) show the signal captured for oE with one blink artifact and its detail coefficients at D3 and D4, respectively; (g–i) show the signal captured for cE and its detail coefficients at D3 and D4, respectively.
Figure 9. Accuracy obtained for each subject as a function of the window size using Scheme 2. Panels (a–g) represent the accuracy for Subjects 1 to 7, respectively. Panel (h) shows the average accuracy of all the subjects.
Table 1. Wavelet coefficients and EEG rhythm equivalence.

Level | Frequency Band (Hz) | EEG Rhythm | Decomposition Level
D1 | 50–100 | Noise | 1
D2 | 25–50 | Beta–Gamma | 2
D3 | 12.50–25 | Beta | 3
D4 | 6.25–12.50 | Theta–Alpha | 4
A4 | 0–6.25 | Delta–Theta | 4
Table 2. Average accuracy (in %) of all the subjects obtained for each wavelet type and ocular state for one and two sensor data using Scheme 1. Bold values indicate the highest value of each column. The filter length column represents the number of filter coefficients employed for the multi-resolution analysis.

Wavelet | Filter Length | Closed, O1 and O2 (%) | Closed, O2 (%) | Open, O1 and O2 (%) | Open, O2 (%)
db2 | 4 | 86.29 | 86.97 | 74.63 | 72.11
db4 | 8 | 88.23 | 89.83 | 81.03 | 79.43
db8 | 16 | 92.46 | 92.69 | 85.14 | 84.00
coif1 | 6 | 86.97 | 87.89 | 76.34 | 75.54
coif4 | 24 | 91.66 | 92.46 | 85.37 | 84.23
haar | 2 | 86.74 | 89.03 | 73.14 | 71.09
sym2 | 4 | 86.29 | 86.97 | 74.63 | 72.11
sym4 | 8 | 90.17 | 91.31 | 81.49 | 80.00
sym10 | 20 | 94.63 | 92.46 | 84.46 | 82.29
Table 3. Average accuracy (in %) obtained for each subject and ocular state for one and two sensor data using Scheme 1, coif4 as mother wavelet, a window duration of 10 s and a delay of D − d = 2 s.

Subject | Closed, O1 and O2 (%) | Closed, O2 (%) | Open, O1 and O2 (%) | Open, O2 (%)
1 | 100.00 | 100.00 | 94.40 | 91.20
2 | 83.20 | 84.00 | 84.80 | 83.20
3 | 100.00 | 100.00 | 96.00 | 96.00
4 | 93.60 | 96.80 | 80.80 | 80.80
5 | 77.60 | 79.20 | 68.00 | 61.60
6 | 98.40 | 98.40 | 89.60 | 90.40
7 | 88.80 | 88.80 | 84.00 | 86.40
Mean | 91.66 | 92.46 | 85.37 | 84.23
Table 4. Average accuracy (in %) of all the subjects obtained for each wavelet type and ocular state for one and two sensor data using Scheme 2. Bold values indicate the highest value of each column. The filter length column represents the number of filter coefficients employed for the multi-resolution analysis.

Wavelet | Filter Length | Closed, O1 and O2 (%) | Closed, O2 (%) | Open, O1 and O2 (%) | Open, O2 (%)
db2 | 4 | 93.49 | 91.31 | 98.29 | 96.57
db4 | 8 | 94.06 | 92.80 | 98.86 | 97.71
db8 | 16 | 94.40 | 93.83 | 99.31 | 97.60
coif1 | 6 | 93.71 | 92.91 | 98.17 | 96.80
coif4 | 24 | 94.40 | 93.71 | 99.09 | 97.49
haar | 2 | 93.37 | 91.54 | 97.94 | 96.11
sym2 | 4 | 93.49 | 91.31 | 98.29 | 96.57
sym4 | 8 | 93.94 | 93.03 | 98.06 | 97.03
sym10 | 20 | 94.40 | 93.03 | 98.63 | 97.37
Table 5. Average accuracy (in %) obtained for each subject and ocular state for one and two sensor data using Scheme 2, db8 as mother wavelet, a window duration of 10 s and a delay of D − d = 2 s.

Subject | Closed, O1 and O2 (%) | Closed, O2 (%) | Open, O1 and O2 (%) | Open, O2 (%)
1 | 100.00 | 100.00 | 100.00 | 100.00
2 | 84.00 | 80.00 | 96.00 | 89.60
3 | 100.00 | 100.00 | 100.00 | 96.00
4 | 95.20 | 95.20 | 100.00 | 100.00
5 | 93.60 | 95.20 | 100.00 | 100.00
6 | 92.00 | 91.20 | 100.00 | 98.40
7 | 96.00 | 95.20 | 99.20 | 99.20
Mean | 94.40 | 93.83 | 99.31 | 97.60
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

