Article

EEG Signal Multichannel Frequency-Domain Ratio Indices for Drowsiness Detection Based on Multicriteria Optimization

Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia
* Author to whom correspondence should be addressed.
Sensors 2021, 21(20), 6932; https://doi.org/10.3390/s21206932
Submission received: 30 August 2021 / Revised: 15 October 2021 / Accepted: 17 October 2021 / Published: 19 October 2021
(This article belongs to the Special Issue Advanced Signal Processing in Wearable Sensors for Health Monitoring)

Abstract

Drowsiness is a risk to human lives in many occupations and activities where full awareness is essential for the safe operation of systems and vehicles, such as driving a car or flying an airplane. Although it is one of the main causes of many road accidents, there is still no reliable definition of drowsiness or a system to reliably detect it. Many researchers have observed correlations between frequency-domain features of the EEG signal and drowsiness, such as an increase in the spectral power of the theta band or a decrease in the spectral power of the beta band. In addition, features calculated as ratio indices between these frequency-domain features show further improvements in detecting drowsiness compared to frequency-domain features alone. This work aims to develop novel multichannel ratio indices that take advantage of the diversity of frequency-domain features from different brain regions. In contrast to the state-of-the-art, we use an evolutionary metaheuristic algorithm to find the nearly optimal set of features and channels from which the indices are calculated. Our results show that drowsiness is best described by the powers in delta and alpha bands. Compared to seven existing single-channel ratio indices, our two novel six-channel indices show improvements in (1) statistically significant differences observed between wakefulness and drowsiness segments, (2) precision of drowsiness detection and classification accuracy of the XGBoost algorithm and (3) model performance by saving time and memory during classification. Our work suggests that a more precise definition of drowsiness is needed, and that accurate early detection of drowsiness should be based on multichannel frequency-domain features.

1. Introduction

Drowsiness is the intermediate state between wakefulness and sleep [1]. Terms such as sleepiness or tiredness are used synonymously with drowsiness in related studies [2,3,4]. Although it is intuitively clear what drowsiness is, it is not so easy to determine exactly whether a person is in a drowsy state or not. The reason for this is the unclear definition of drowsiness. Some researchers define drowsiness as stage 1 sleep (S1) [5,6,7,8,9], which is also known as non-rapid eye movement 1 (NREM 1) sleep. Da Silveira et al. [10] used S1 sleep stage data in their research of drowsiness. Johns [11] claims that the S1 sleep stage is equivalent to microsleep (episodes of psychomotor insensitivity due to sleep-related wakefulness loss [12]), while drowsiness is stated to occur before S1 sleep, but it is not stated when it begins and what characterizes it. Researchers who do not use any of the aforementioned definitions of drowsiness typically use a subjective assessment of drowsiness, e.g., the Karolinska sleepiness scale [13]. In this paper, the term drowsiness is used as a synonym for the S1 sleep stage.
In a drowsy state, people are not able to function at the level required to safely perform an activity [14], due to the progressive loss of cortical processing efficiency [15]. Drowsiness is, therefore, a significant risk factor for human lives in many occupations, e.g., for air traffic controllers, pilots and regular car drivers [16]. According to the reports from NASA [17] and the National Transportation Safety Board [18], one of the main factors in road and air accidents is drowsiness. Gonçalves et al. [19] conducted a study across 19 European countries and concluded that in the last two years, 17% of drivers fell asleep while driving, while 7% of them had an accident due to drowsiness. The high frequency and prevalence of drowsiness-related accidents speak in favor of the development of early drowsiness detection systems, which is the subject of this paper.
Many researchers are trying to solve the problem of early detection of drowsiness in drivers. Balandong et al. [20], in their recent review, divided the techniques for detecting driver drowsiness into six categories: (1) subjective measures, (2) vehicle-based systems, (3) driver’s behavior-based systems, (4) mathematical models of sleep–wake dynamics, (5) human physiological signal-based systems and (6) hybrids of one or more of these techniques. Currently, the most common techniques used in practice are vehicle-based systems [5], but these systems are mostly unreliable and depend largely on the driver’s motivation to drive as well as possible [20].
Physiological signals are a promising alternative for reliable drowsiness detection [21]. The main problem with this approach is that such systems are often not easy to use and are intrusive to drivers [20]. Nevertheless, many researchers are working on small, automated and wearable devices [21,22,23,24], or on steering wheel devices [25,26], in order to overcome these obstacles. Techniques for detecting drowsiness based on physiological signals can be further subdivided according to the type of signal used, such as the electroencephalogram (EEG) [27], electrooculogram (EOG) [28] or electrocardiogram (ECG) [29].
The most studied and applied physiological signal for detecting drowsiness is the EEG. In this paper, frequency-domain features of the EEG signal are analyzed and two novel multichannel ratio indices for the detection of drowsiness are proposed. Besides the frequency-domain features, there are also other types of features: (1) nonlinear features [30], (2) spatiotemporal (functional connectivity) features [31] and (3) entropies [32]. These three groups of features are used less frequently than frequency-domain features, so in this paper, we focus only on frequency-domain features. Based on the recent review [33] of EEG-based drowsiness detection systems, 61% of the included papers used frequency-domain features, 38% used entropies, 10% used nonlinear features and 10% used spatiotemporal features (some papers used multiple groups of features, so the sum of the percentages is greater than 100%). This shows how unevenly the feature groups are used in drowsiness detection systems, and the imbalance is even greater in the broader neurophysiological literature. Although the three feature groups mentioned above are used less frequently, a certain number of papers still include them, especially entropies.
Frequency-domain features estimate the power spectral density in a given frequency band. The bands typically used in the analysis of EEG signals are delta (δ, 0.5–4 Hz), theta (θ, 4–8 Hz), alpha (α, 8–12 Hz), beta (β, 12–30 Hz) and gamma (γ, >30 Hz). An increase in theta activity [34] and an increase in alpha activity [35] indicate drowsiness. An increase in the beta activity, however, is a sign of wakefulness and alertness [36]. There are several widely used frequency-domain ratio indices for detecting drowsiness. Eoh et al. [36] proposed the θ/α and β/α ratio indices, Jap et al. [37] proposed the (θ + α)/β, θ/β and (θ + α)/(α + β) ratio indices and da Silveira et al. [10] proposed the γ/δ and (γ + β)/(δ + α) ratio indices. These ratio indices provide improvement in the detection of drowsiness compared to the frequency-domain features alone and are shown to correlate with drowsiness.
All these frequency-domain features and ratio indices are calculated from a single EEG channel, i.e., from a single brain region. In recent research, Wang et al. [38] showed that the significance of a decrease in delta and an increase in (θ + α)/β indices depends on the brain region. This significant diversity of the correlation of features with drowsiness in different brain regions is the motivation for this research. Since all currently used frequency-domain features and ratio indices are based on a single channel (single brain region), this work aims to use the best distinguishing features of each brain region for the detection of drowsiness and to combine them into a single multichannel ratio index feature.
In our work, we use a computational method based on multicriteria optimization to extract the multichannel EEG-based frequency-domain ratio index features. This method allows us to discover new multichannel ratio indices that show improvements in the detection of drowsiness compared to single-channel ratio indices. Finally, with the use of machine learning models, we prove that multichannel indices detect drowsiness with higher accuracy, higher precision, reduced memory and faster computation compared to single-channel features.
In the Materials and Methods Section, we show the methodology of our work, including a description of the dataset, preprocessing and feature extraction methods used. Novel multichannel ratio indices and the multi-objective optimization method are also described there. In the Results Section, we present the results of our work, including statistical analysis, drowsiness prediction and computational properties of the proposed indices. In the Discussion Section, we discuss in more detail the topics covered in this paper. Finally, in the last section, we conclude the paper.

2. Materials and Methods

2.1. Dataset, Preprocessing and Feature Extraction

The data used in this paper were obtained from the PhysioNet portal [39], in particular from the 2018 PhysioNet computing in cardiology challenge [40]. The original dataset contains data records from 1985 subjects, and each recording includes a six-channel EEG, an electrooculogram, an electromyogram, a respiration signal from the abdomen and chest, airflow and oxygen saturation signals and a single-channel electrocardiogram, all recorded during all-night sleep. The records were divided into training and test sets of equal size. The sleep stages [41] of all subjects were annotated by clinical staff based on the American Academy of Sleep Medicine (AASM) manual for the scoring of sleep [42]. There are six types of annotations for different stages: wakefulness (W), stage 1 (S1), stage 2 (S2), stage 3 (S3), rapid eye movement (REM) and undefined.
In this research, we wanted to use a training set (992 subjects) to detect drowsiness. The officially provided way of acquiring the data is through torrent download, but we managed to download only 393 subjects completely, due to a lack of seeders. Of these 393 subjects, EEG signal recordings from 28 subjects were selected, based on the condition that each recording had at least 300 s of the W stage and, immediately after that, at least 300 s of the S1 stage. From each recording, a fragment of 600 s (300 s of W stage and 300 s of S1 stage) was used for analysis. In the original dataset, each EEG signal recording consists of six channels (F3, F4, C3, C4, O1 and O2, based on the International 10/20 System), with a sampling frequency of 200 Hz. Table 1 shows the identification numbers of all the selected subjects. The subjects were divided into two groups, one group used for training of the model (16 subjects) and the other one for the test of the obtained models (12 subjects). The training set was used to obtain novel ratio indices (with the method described below) and the test set was used to check these novel indices on the unseen data.
Before feature extraction, the EEG signal must be filtered. For this purpose, the DC component was removed from the signal and the signal was filtered with a Butterworth filter to remove high-frequency artifacts and low-frequency drifts. We used a sixth-order Butterworth filter with a low-cut frequency of 1 Hz and a high-cut frequency of 40 Hz. In the selected fragments of the recordings, there was an insignificant number of eye-related artifacts, so we decided not to use independent component analysis for their removal, in order to prevent potential information loss due to component removal.
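The filtering step can be sketched in Python as follows (a minimal sketch, not the authors' implementation; SciPy's zero-phase filtfilt is our choice, since the text does not specify whether a causal or zero-phase filter was used):

import numpy as np
from scipy.signal import butter, filtfilt

FS = 200.0  # sampling frequency of the dataset (Hz)

def preprocess(eeg, fs=FS, low_cut=1.0, high_cut=40.0, order=6):
    """eeg: array of shape (n_channels, n_samples)."""
    eeg = eeg - eeg.mean(axis=1, keepdims=True)          # remove the DC component
    b, a = butter(order, [low_cut, high_cut], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=1)                   # zero-phase band-pass filtering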
The signals were divided into epochs to calculate features. The epochs were five seconds long with a 50% overlap between them. Frequency-domain features are often used in EEG signal analysis. These features were extracted from the power spectral density (PSD) of the signal. To obtain the PSD of the signal, Welch's method [43] was used. Welch's method is used more often than the Fast Fourier transform in the field of EEG signal analysis since it produces a PSD estimate with lower variance. The standard frequency-domain features were calculated, i.e., the delta (δ, 0.5–4 Hz), theta (θ, 4–8 Hz), alpha (α, 8–12 Hz) and beta (β, 12–30 Hz) bands. We also calculated the less frequently used frequency-domain features, i.e., the gamma (γ, >30 Hz), sigma (σ, 12–14 Hz), low alpha (α1, 8–10 Hz) and high alpha (α2, 10–12 Hz) bands [44].
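The epoching and band-power extraction can be sketched as follows (our code, not the authors'; the Welch segment length within an epoch, the 40 Hz upper edge used for the gamma band and the dictionary key scheme are assumptions):

import numpy as np
from scipy.signal import welch
from scipy.integrate import simpson

# Band edges follow the text; 40 Hz as the upper gamma edge matches the band-pass filter.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30),
         "gamma": (30, 40), "sigma": (12, 14), "alpha1": (8, 10), "alpha2": (10, 12)}
CHANNELS = ["F3", "F4", "C3", "C4", "O1", "O2"]

def epoch_band_powers(eeg, fs=200.0, epoch_s=5.0, overlap=0.5):
    """eeg: (6 channels, n_samples) -> list of dicts {(band, channel): power}, one per epoch."""
    win = int(epoch_s * fs)
    step = int(win * (1 - overlap))
    epochs = []
    for start in range(0, eeg.shape[1] - win + 1, step):
        # Welch PSD per channel for this 5 s epoch (segment length is our choice)
        freqs, psd = welch(eeg[:, start:start + win], fs=fs, nperseg=win // 2)
        powers = {}
        for band, (lo, hi) in BANDS.items():
            sel = (freqs >= lo) & (freqs < hi)
            for c, name in enumerate(CHANNELS):
                powers[(band, name)] = simpson(psd[c, sel], x=freqs[sel])
        epochs.append(powers)
    return epochs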

2.2. Novel Multichannel Ratio Indices

Ratios between frequency-domain features have often been used as new features in different areas of EEG signal analysis [10,36]. These features have a simple mathematical formulation, yet they often improve drowsiness detection and reduce dimensionality. However, they are calculated from a single channel only. The idea behind the novel indices we present in this work is to design the feature formulation in such a way that frequency-domain features from different channels can be combined. Figure 1 illustrates the difference between these two approaches. For simplicity of visualization, only four epochs, two channels (F3 and F4) and three features per channel are shown in Figure 1.
We define a new index, I, for each epoch, e, which is calculated as a ratio of the feature values, F(e), for all six channels in the epoch, e. In both the numerator and the denominator, the feature value of each channel, j, is multiplied with a dedicated coefficient, Cij or Kij, respectively, as indicated in Equation (1):
I(e) = \frac{\sum_{i \in \mathrm{features}} \sum_{j \in \mathrm{channels}} C_{ij} F_{ij}(e)}{\sum_{i \in \mathrm{features}} \sum_{j \in \mathrm{channels}} K_{ij} F_{ij}(e)}    (1)
The purpose of the coefficients is to reduce or even eliminate the influence of certain channels or frequency-domain features, by setting the corresponding coefficient to a value in the range [0, 1), or to increase the influence of certain channels or frequency-domain features, by setting the corresponding coefficient to a value in the range [1, ∞). There are 48 (6 channels × 8 features per channel) C coefficients and 48 K coefficients.
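As an illustration, the following minimal sketch (not the authors' code; the array layout and the small epsilon guarding against division by zero are our assumptions) evaluates Equation (1) for one epoch, with the C and K coefficients stored as 8 × 6 matrices of features by channels:

import numpy as np

def multichannel_index(F, C, K, eps=1e-12):
    """F, C, K: arrays of shape (8 features, 6 channels) for a single epoch."""
    # Numerator and denominator are coefficient-weighted sums over all
    # feature-channel combinations, as in Equation (1).
    return float(np.sum(C * F) / (np.sum(K * F) + eps))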
The ideal output of I(e) should look like a step function (or an inverse step function), which would indicate a clear difference between the two stages: W and S1. Figure 2 illustrates the main features of the output. The output can be divided into two parts: the left one corresponds to stage W and the right one to S1. While the output in each part should be as smooth as possible, i.e., with minimal oscillations, it is expected that there will be a transition period between the phases, which may have significant oscillations. This transition period would ideally be the step function, but in realistic settings, it is expected that the transition between phases of brain activity will probably last several epochs and would not be considered as either stage W or S1.
In order to determine the appropriate value of the coefficients that would provide the output as close as possible to the ideal, at least two criteria must be taken into account: the absolute difference between the mean values left and right of the transition window and the quantification of the oscillations in each part. This can be defined as a multi-objective optimization problem that we want to solve using a metaheuristic multi-objective evolutionary optimization method, as described in the next section. To the best of our knowledge, this state transition problem has never been approached with evolutionary computation.

2.3. Multi-Objective Optimization

The optimization of a step-like function is representative of the problem of flat surfaces, which is generally a challenge for any optimization algorithm because such a function does not provide information about which direction is favorable, so an algorithm can get stuck on one of the flat plateaus [45]. To overcome this challenge, instead of optimizing the function according to one criterion, we define two objectives that we optimize simultaneously: (1) to maximize the absolute difference between the mean value of the I(e) output for the W and S1 stages, and (2) to minimize the oscillations of the output value around the mean value in each stage. According to Figure 2, the left part of the I(e) output, occurring before the transition phase, corresponds to the W stage, and the right part, occurring after the transition phase, corresponds to the S1 stage. Since optimization problems are usually expressed as minimization problems, the first objective function, O1, is defined as the inverse of the absolute difference between the mean value of I(e) of the left part (avgleft) and of the right part (avgright), as given in Equation (2):
O_1 = \frac{1}{\left| \mathrm{avg}_{right} - \mathrm{avg}_{left} \right|}    (2)
The second objective function, O2, expresses the oscillations in the function and is defined as the number of times the difference between the output values of I(e) for two adjacent epochs is greater than a given limit. The exact value of this limit will be discussed later in this section, as it is closely related to the specifics of the optimization method used. The main goal of the objective function O2 is to compensate for the biggest flaw in the way objective O1 is calculated, namely its reliance on averaging. For example, a possible solution could be a completely straight line, except for a large negative spike in the left part and a large positive spike in the right part; based only on the objective function O1, this would be a good solution, while the objective function O2 would penalize it.
As mentioned above, the transition between two stages will probably take several epochs and show significant oscillations of the function output values. According to the annotation made by clinical personnel, the transition phase should be approximately in the middle of the I(e) output, but it cannot be determined exactly how long it will last. In our work, which is based on expert knowledge of human behavior in the case of drowsiness, we assume that it lasts about one minute, which corresponds to about 30 epochs. Within the transition window, neither one of the two objective functions is calculated, since it is assumed to belong neither to the W nor to the S1 stages. We also allow it to move around the center, shifting left and right, due to a possible error of the human observer who marked the data.
The multi-objective optimization problem can now be expressed as min{O1,O2}, where O1 and O2 are the conflicting objective functions defined above. The evolutionary metaheuristic algorithm NSGA-II [46] was applied to solve this multi-objective optimization problem. Genetic algorithms (GAs) are normally used to solve complex optimization and search problems [47]. NSGA-II is one of the most popular evolutionary multicriteria optimization methods due to its versatility and ability to easily adapt to different types of optimization problems. The strong points of this multi-objective algorithm are: (1) the fast non-dominated sorting ranking selection method used to emphasize Pareto-optimal solutions, (2) maintaining population diversity by using the crowding distance and (3) the elitism approach, which ensures the preservation of the best candidates through generations without setting any new parameters other than the normal genetic algorithm parameters, such as population size, termination parameter, crossover and mutation probabilities. Additionally, it has often been used for the elimination of EEG channels with a purpose similar to ours, namely dimensionality reduction [48]. This paper uses the implementation of NSGA-II provided by the MOEA framework [49] and is based on the guidelines defined in [46,50].
NSGA-II was used with the following configuration. The chromosome was divided into two parts: in the first part, genes represented the numerator coefficient values (Cij), and in the second part, genes represented the denominator coefficient values (Kij). In each part, the genes were grouped by frequency-domain features and channels, as illustrated in Figure 3. The genes were encoded as real values in the range [0.0, 10.0], and standard NSGA-II crossover and mutation operators were used to support operation on real values.
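For readers who prefer Python, the following is a rough sketch of an equivalent set-up using the pymoo library (a substitution on our part; the paper itself uses the Java-based MOEA framework [49]). Each individual encodes 96 real genes in [0.0, 10.0], split into the 48 C and 48 K coefficients, and the two objectives are the averaged IAD and oscillation values computed by the evaluation procedures described below (represented here by the placeholder functions iad_calc and oscillation_calc):

import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class RatioIndexProblem(ElementwiseProblem):
    def __init__(self, fragments):
        # fragments: list of arrays of shape (n_epochs, 8 features, 6 channels)
        self.fragments = fragments
        super().__init__(n_var=96, n_obj=2, xl=0.0, xu=10.0)

    def _evaluate(self, x, out, *args, **kwargs):
        C, K = x[:48].reshape(8, 6), x[48:].reshape(8, 6)
        o1, o2 = [], []
        for frag in self.fragments:
            flat = frag.reshape(len(frag), -1)                 # (n_epochs, 48)
            idx = (flat @ C.ravel()) / (flat @ K.ravel() + 1e-12)
            o1.append(iad_calc(idx))            # Algorithm 2 (placeholder)
            o2.append(oscillation_calc(idx))    # Algorithm 3 (placeholder)
        out["F"] = [np.mean(o1), np.mean(o2)]

# NSGA2 defaults to SBX crossover and polynomial mutation for real variables.
algorithm = NSGA2(pop_size=100)
# result = minimize(RatioIndexProblem(fragments), algorithm, ("n_gen", 107), seed=1)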
Each solution is evaluated based on the values of objectives O1 and O2, as described in the pseudocode in Algorithm 1. First, the chromosome is decoded (line 1). Then, for each test fragment, two values are calculated: (1) the inverse absolute difference (IAD) between the mean index value, I(e), of the left part and of the right part, represented by the invAbsDiff variable in the pseudocode, and (2) the oscillations in the function, represented by the oscillation variable in the pseudocode (lines 3–5). Finally, the value of objective O1 for the given solution is defined as the average value of invAbsDiff over all test fragments, and the value of objective O2 is defined as the average value of oscillation over all test fragments (lines 7–8).
Algorithm 1. Evaluation.
1: decode chromosome to get coefficient values
2: for each fragment do
3:   indexVals[] = calculate index value for each epoch
4:   invAbsDiff += IADCalc(indexVals[], windowStart)
5:   oscillation += OscillationCalc(indexVals[], windowStart, winSize)
6: end for
7: objective1 = invAbsDiff/number_of_fragments
8: objective2 = oscillation/number_of_fragments
The algorithm for the IAD calculation is provided in the pseudocode in Algorithm 2. The calculation of the IAD for each fragment was slightly modified compared to Equation (2) to allow a faster convergence of the search algorithm. The transition phase was not fixed at the same position in each fragment but was allowed to move away from the center, because the annotation in the original dataset was performed manually and there was a possibility of human error in case the observer registered a transition from W to S1 a little too early or too late. The algorithm allows the transition phase to begin no earlier than 30 epochs from the fragment start, and to end no later than 60 epochs before the fragment end (line 5). The algorithm locates the transition phase by looking for a window of 30 epochs with the maximum difference of index, I(e), values between the left and the right part (lines 9–13).
The gradation of the absolute difference between the mean value of the left and the right parts is also introduced (lines 19–22) to allow easier and faster convergence of the algorithm. The optimization of the objective O1 can be considered as an optimization problem with soft constraints that are related to how much O1 deviates from the optimal value. However, it is quite difficult to determine the optimal value precisely a priori. As indicated in [51,52], constraints are often treated with penalties in optimization techniques. The basic idea is to transform a constrained optimization problem into an unconstrained one by introducing a penalty into the original objective function to penalize violations of constraints. According to a comprehensive overview in [51], the penalty should be based on the degree of constraint violation of an individual. In [53], it is also recommended that instead of having just one fixed penalty coefficient, the penalty coefficient should increase when higher levels of constraint violation are reached. The greatest challenge, however, is to determine the exact penalty values. If the penalty is too high or too low, evolutionary algorithms spend either too much or too little time exploring the infeasible region, so it is necessary to find the right trade-off between the objective function and the penalty function so that the search moves towards the optimum in the feasible space. As the authors have shown in [54], the choice of penalty boundaries is problem-dependent and difficult to generalize. Since we cannot strictly determine the optimal value of O1 in our case, we have chosen several thresholds for the absolute difference value, with the penalty increasing by a factor of 10 for each new threshold. The exact thresholds were selected based on the experience gained from the first few trial runs of the algorithm. Based on the observations from the trial runs, a third modification was also introduced: the difference is calculated with a relative, instead of absolute, value of I(e). The relative value of I(e) is calculated by using the lowest I(e) value as a reference point, instead of zero, i.e., the zero is “moved”, as shown in code lines 16–18 in Algorithm 2.
Algorithm 2. IAD Calculation.
1: function IADCalc(indexVals[], windowStart)
2:  maxAbsDiff = 0
3:  left = 0
4:  right = 0
5:  for j between 30 and (indexVals.size-60) do
6:   avgLeft = average value of all index values before j
7:   avgRight = average value of all index values after j+30
8:   diff = ABS(avgRight - avgLeft)
9:   if diff ≥ maxAbsDiff then
10:    maxAbsDiff = diff
11:    left = avgLeft
12:    right = avgRight
13:    windowStart = j
14:   end if
15:  end for
16:  lowestVal = getLowestVal(indexVals)
17:  movedZero = lowestVal - 0.01*lowestVal
18:  absDiff = ABS(right - left)/MIN(left - movedZero, right - movedZero)
19:  if absDiff ≥ 5.0 then invAbsDiff = 1/absDiff
20:  else if absDiff ≥ 1.0 then invAbsDiff = 10/absDiff
21:  else if absDiff ≥ 0.5 then invAbsDiff = 100/absDiff
22:  else invAbsDiff = 1000
23:  end if
24:  return invAbsDiff
25: end function
The pseudocode for calculating the oscillations in the function as the second objective, O2, is provided in Algorithm 3. Again, the optimization of the oscillations can be considered a constrained optimization problem, so that, in the same way as in the case of the IAD calculation discussed previously, a gradation of the difference between the output values of I(e) for two adjacent epochs is used to penalize the larger differences more severely (lines 7–10 and 15–18). The exact thresholds were chosen based on the experience gained from the first few trial runs of the algorithm. In order to make the algorithm converge more easily and quickly, the concept of “moved zero” was used again (lines 2, 3, 6 and 14).
Algorithm 3. Oscillation Calculation.
1: function OscillationCalc(indexVals[], windowStart, winSize)
2:  lowestVal = getLowestVal(indexVals)
3:  movedZero = lowestVal - 0.01*lowestVal
4:  oscillation = 0
5:  for i between 1 and windowStart-1 do
6:   absDiff = ABS((indexVals[i] - indexVals[i-1])/(indexVals[i-1] - movedZero))
7:   if absDiff ≥ 5.0 then oscillation += 1000
8:   else if absDiff ≥ 1.0 then oscillation += 100
9:   else if absDiff ≥ 0.5 then oscillation += 10
10:   else if absDiff ≥ 0.25 then oscillation += 1
11:   end if
12:  end for
13:  for i between windowStart+winSize and indexVals.size()-1 do
14:   absDiff = ABS((indexVals[i] - indexVals[i-1])/(indexVals[i-1] - movedZero))
15:   if absDiff ≥ 5.0 then oscillation += 1000
16:   else if absDiff ≥ 1.0 then oscillation += 100
17:   else if absDiff ≥ 0.5 then oscillation += 10
18:   else if absDiff ≥ 0.25 then oscillation += 1
19:   end if
20:  end for
21:  return oscillation
22: end function
Finally, to further minimize the oscillations and help the search algorithm converge more quickly, the maximum change in the I(e) value between two adjacent epochs is limited to 10% of the value in the first of the two epochs. The mathematical formulation of this limit is provided in Equation (3):
\mathrm{Index}(e) = \begin{cases} 1.1 \cdot I(e-1), & \text{if } I(e) > 1.1 \cdot I(e-1) \\ 0.9 \cdot I(e-1), & \text{if } I(e) < 0.9 \cdot I(e-1) \\ I(e), & \text{otherwise} \end{cases}    (3)
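A minimal sketch of this limit (our code, not the authors'; whether the comparison uses the raw or the already-limited previous value is our reading of Equation (3)):

def limit_step(index_vals, max_rel_change=0.10):
    """index_vals: raw I(e) per epoch; returns the limited Index(e) series."""
    out = [index_vals[0]]
    for e in range(1, len(index_vals)):
        prev = index_vals[e - 1]                     # raw I(e-1), as in Equation (3)
        lo, hi = (1 - max_rel_change) * prev, (1 + max_rel_change) * prev
        out.append(min(max(index_vals[e], lo), hi))  # clamp to within +/-10% of prev
    return out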

3. Results

The optimization algorithm was executed over 107 generations, starting from a randomly initialized population of 100 chromosomes. Ideally, the optimization algorithm would set many C and K coefficients to zero and leave only a few non-zero coefficients, in order to obtain a simple and easily understandable mathematical formulation of a novel multichannel ratio index. Unfortunately, even the best solutions of the optimization algorithm had only up to 20 C and K coefficients equal to zero. Although such a novel multichannel ratio index showed good behavior in detecting drowsiness, it is impractical to use a formula with 76 coefficients. We consider anything above 15 coefficients to be impractical.
In order to reduce the number of coefficients and to simplify the formulation of the novel multichannel ratio index, some coefficients were manually set to zero. To decide which coefficients have the least influence on the final solution, we counted how often a large coefficient value was assigned to each frequency-domain feature. By analyzing the coefficients of all solutions in the final population of the optimization algorithm, we concluded that the most frequently selected features were δ, α, α1 and α2. After manually fixing the coefficients of all other frequency-domain features to zero, the search range of the optimization algorithm was halved.
Although 48 C and K coefficients remained in the solution at that time, the algorithm provided equally good results in terms of drowsiness detection, but with a much simpler mathematical formulation. In addition to the 48 coefficients that were manually set to zero, the algorithm often set many more coefficients to zero. A decision on the best solution in the final population was made based on the O1 and O2 values of the optimization algorithm in combination with the number of coefficients set to zero after using the floor operator on the coefficients. The floor operator was used to simplify the equation by removing the decimal numbers. Preferred solutions are those with a higher number of coefficients set to zero. Our choice was the solution with 13 non-zero coefficients, as shown in Equation (4):
I_1(e) = \frac{\alpha_{F3} + 4\alpha_{O2} + 9\alpha_{1,F3} + 3\alpha_{1,C3} + 9\alpha_{1,C4} + \alpha_{1,O2} + 4\alpha_{2,O1} + 8\alpha_{2,O2}}{\delta_{F3} + 3\delta_{F4} + 3\delta_{C3} + 2\delta_{C4} + 9\delta_{O2}}    (4)
All C and K coefficients were rounded down (floor operator). Here, e represents the current epoch and all the features on the right-hand side are taken from that same epoch.
The goal of the second objective of the optimization algorithm was to minimize the oscillations of the I(e) function. The results were much better with this objective than without it, but the resulting function still oscillated strongly. In order to further reduce the oscillations, a limit was imposed: the maximum change between any two adjacent samples was set to 10% of the value of the first sample. Equation (5) shows the mathematical formulation of this limit on the maximum change:
\mathrm{Index}_1(e) = \begin{cases} 1.1 \cdot I_1(e-1), & \text{if } I_1(e) > 1.1 \cdot I_1(e-1) \\ 0.9 \cdot I_1(e-1), & \text{if } I_1(e) < 0.9 \cdot I_1(e-1) \\ I_1(e), & \text{otherwise} \end{cases}    (5)
where I1(e) is defined by Equation (4) and e is the current epoch. Limiting the maximum change of adjacent samples further improves the detection model, and therefore Equation (5) presents the first novel multichannel ratio index.
We have tried to further simplify the formulation of the multichannel ratio index. This time, a brute-force search for the best solution was applied with the following constraints: (1) all C and K coefficients were restricted to integer values of zero or one for the sake of simplicity, and (2) a maximum of five addends in the equation was allowed. With these constraints, we obtained Equation (6):
I_2(e) = \frac{\delta_{F3} + \delta_{F4} + \delta_{O2}}{\alpha_{C3} + \alpha_{2,O2}}    (6)
Again, similar to the first index, the maximum change was limited, so that the final equation for the second ratio index was obtained as:
\mathrm{Index}_2(e) = \begin{cases} 1.1 \cdot I_2(e-1), & \text{if } I_2(e) > 1.1 \cdot I_2(e-1) \\ 0.9 \cdot I_2(e-1), & \text{if } I_2(e) < 0.9 \cdot I_2(e-1) \\ I_2(e), & \text{otherwise} \end{cases}    (7)
where I2(e) is defined by Equation (6) and e is the current epoch. After obtaining the two novel indices, they were normalized to the range [0, 1] for each subject to eliminate interindividual differences between the subjects.
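To make the final feature computation concrete, the sketch below (our code; the band-power dictionary keys and helper names are assumptions, and limit_step is the clamping helper sketched after Equation (3)) computes Index2 for one subject from per-epoch band powers and then applies the per-subject min-max normalization to [0, 1]:

import numpy as np

def index2_series(epochs):
    """epochs: list of dicts mapping (band, channel) to band power for one subject."""
    raw = [(bp[("delta", "F3")] + bp[("delta", "F4")] + bp[("delta", "O2")]) /
           (bp[("alpha", "C3")] + bp[("alpha2", "O2")]) for bp in epochs]   # Equation (6)
    limited = np.asarray(limit_step(raw))                                   # Equation (7)
    lo, hi = limited.min(), limited.max()
    return (limited - lo) / (hi - lo)        # per-subject normalization to [0, 1]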
The two novel multichannel ratio indices defined by Equations (5) and (7) were compared with the seven existing indices θ/α and β/α [36], (θ + α)/β, θ/β and (θ + α)/(α + β) [37], and γ/δ and (γ + β)/(δ + α) [10]. The indices γ/δ and (γ + β)/(δ + α) were calculated based on the wavelet transform, i.e., in the same way as in the original paper. Figure 4 shows a comparison of our novel indices with the best and the worst channel for θ/α and (θ + α)/β single-channel indices for subject tr08-0111. These two single-channel indices were selected because they are the best predictors of drowsiness for a given subject among all single-channel indices.

3.1. Statistical Analysis

The Wilcoxon signed-rank test [55] was used to analyze the statistical differences between the awake state and the S1 state. This test was chosen because it does not require the data to follow a normal distribution. Table 2 shows p-values for each subject in the training set and each index. The significance level α0 = 0.01 was used together with the Bonferroni correction [55] to reduce the probability of false-positive results, as the test was repeated 144 times (16 subjects and 9 indices), giving a final αp = 6.9 × 10⁻⁵. For the existing indices, the p-value was calculated for each channel, but only the p-values of the best channel (the lowest average p-value across all subjects) are shown in Table 2.
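A sketch of this test for a single subject and index, assuming SciPy and assuming that the first half of the epochs (W) is paired element-wise with the second half (S1), which is our reading of how the signed-rank test was applied:

from scipy.stats import wilcoxon

ALPHA_0, N_TESTS = 0.01, 16 * 9        # 16 training subjects x 9 indices
ALPHA_P = ALPHA_0 / N_TESTS            # Bonferroni-corrected level, ~6.9e-5

def significant_difference(index_vals):
    """index_vals: per-epoch index values, W epochs first, S1 epochs second."""
    half = len(index_vals) // 2
    stat, p = wilcoxon(index_vals[:half], index_vals[half:])
    return p < ALPHA_P, p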
The two novel indices show p-values lower than αp for most subjects. From this, we can conclude that, for Index1, 14 of 16 subjects show two different distributions for the W stage and the S1 stage, while 13 of 16 subjects show significantly different distributions of the W stage and the S1 stage for Index2. There are only two existing indices where the p-value is lower than αp in more than ten cases. These are θ/β and (θ + α)/(α + β), both by Jap et al. [37].
Table 3 shows p-values for each subject in the test set and each index. Again, the two novel indices, together with the (γ + β)/(δ + α) [10] index, show p-values lower than αp for most subjects.

3.2. Drowsiness Prediction Analysis

An additional comparison of ratio indices was performed by analyzing the drowsiness detection accuracy and precision, as obtained with the XGBoost algorithm [56]. Default parameters were applied: learning rate eta equal to 0.3, gamma equal to 0 and a maximum depth of a tree equal to 6. For a detailed comparison of the indices, classification accuracy and precision were calculated for each subject. Namely, each subject has 238 epochs of the measured signal, with the first half representing the W state and the second half the S1 state. The algorithm classified the subject’s state for each epoch (238 classifications per subject), and the accuracy for each subject was calculated based on these classifications. The leave-one-subject-out cross-validation method was applied on the training set, i.e., the algorithm was trained on the data of 15 subjects and tested on the subject excluded from the training set, and this was repeated 16 times to evaluate drowsiness detection on each subject from the training set. Table 4 shows the classification accuracy achieved on the training set.
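The evaluation loop can be sketched as follows (our code; the feature matrix layout, the label convention with S1 = 1 and the helper name are assumptions). X contains one column for a multichannel index such as Index1 or Index2, or six columns for a single-channel index computed on every channel:

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score, precision_score
from xgboost import XGBClassifier

def loso_scores(X, y, subject_ids):
    """Leave-one-subject-out evaluation; y is 0 for W epochs and 1 for S1 epochs."""
    accs, precs = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subject_ids):
        clf = XGBClassifier(learning_rate=0.3, gamma=0, max_depth=6)   # defaults stated in the text
        clf.fit(X[train_idx], y[train_idx])
        y_pred = clf.predict(X[test_idx])
        accs.append(accuracy_score(y[test_idx], y_pred))
        precs.append(precision_score(y[test_idx], y_pred))   # precision of drowsiness (S1) detection
    return np.mean(accs), np.mean(precs)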
Index1 has the highest average accuracy and the highest classification accuracy for 3 of 16 subjects. Index2 has the second-highest average accuracy and the highest classification accuracy for 4 of 16 subjects, which is the most of all indices. θ/α [36] and (θ + α)/(α + β) [37] are the only other indices with an average classification accuracy above 0.58, while θ/α [36] and θ/β [37] are the only other indices with the highest accuracy for 3 of 16 subjects. The β/α [36] index has the lowest average classification accuracy on the training set (0.5515).
Table 5 shows the classification accuracy on the test set. Index1 has the highest average accuracy and the highest classification accuracy for 3 of 12 subjects. Index2 has the third-highest average accuracy and the highest classification accuracy for 4 of 12 subjects, which is the most of all indices. The only other index with comparable accuracy is θ/α [36], with the second-highest average accuracy. All other indices have at least 2.5% lower accuracy than the two novel indices.
Table 6 shows the degree of precision of drowsiness detection on the training set. Index2 has the highest average precision of drowsiness detection and the highest precision of drowsiness detection for five subjects, which is the highest of all indices. Index1 has the second-best average precision of drowsiness detection. (θ + α)/(α + β) [37] and γ/δ [10] have a precision of drowsiness detection comparable to Index1 and Index2, while all other ratio indices have lower precision.
Table 7 shows the degree of precision of drowsiness detection achieved on the test set. Index1 has the highest average precision. θ/α [36] and γ/δ [10] have 1% lower precision than Index1, while all other indices have at least 4% lower precision. Index2 has the second-highest average precision.

3.3. Computational Analysis

With regard to classification with machine learning algorithms, an additional advantage of the novel multichannel indices over the existing single-channel indices is the saving of memory and time due to the reduced dimensionality. The accuracies of Index1 and Index2 in Table 4 were achieved with a model constructed from a single feature only, while all other indices had six features, since the dataset contains six EEG channels. For this reason, storing the novel indices consumes six times less memory. The time consumption was measured as an average of 100 executions. The measured time included classifier initialization, classifier training, classification on the test subject and calculation of classification accuracy. Table 8 shows the results of the time consumption measurements. The use of the novel multichannel indices saves about 30% of the time compared to all other traditionally used single-channel ratio indices.

4. Discussion

The main idea of our research was to combine frequency-domain features from different brain regions into a multichannel ratio index to improve frequency-domain features for the detection of drowsiness and to gain new insights into drowsiness. The results in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 suggest that two novel multichannel ratio indices improve the detection of drowsiness based on the frequency-domain features and reduce the time required for detection.
We must note that the main idea of this research was not to create the best possible model for drowsiness detection but only to bring improvement into frequency-domain features that are often used for drowsiness detection. Our focus was on developing the method for obtaining these novel indices, which is explained in Section 2.3 “Multi-Objective Optimization”. In order to confirm that our conclusions also hold for other classifiers besides XGBoost, Table 9 shows the average accuracy on the test set obtained with Naïve Bayes, k nearest neighbors, logistic regression, decision tree, random forest and support vector machine classifiers (using the scikit-learn library at default settings). The average accuracies of two novel indices vary from 56% to 65% among the algorithms. All the algorithms show that our novel multichannel indices are better than existing single-channel indices.
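This additional comparison can be reproduced along the following lines (our sketch; the choice of GaussianNB as the Naïve Bayes variant and the helper names are assumptions), reusing the same leave-one-subject-out split with each scikit-learn classifier at its default settings:

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

CLASSIFIERS = {"Naive Bayes": GaussianNB(), "kNN": KNeighborsClassifier(),
               "Logistic regression": LogisticRegression(),
               "Decision tree": DecisionTreeClassifier(),
               "Random forest": RandomForestClassifier(), "SVM": SVC()}

def compare_classifiers(X, y, subject_ids):
    results = {}
    for name, clf in CLASSIFIERS.items():
        accs = [accuracy_score(y[te], clone(clf).fit(X[tr], y[tr]).predict(X[te]))
                for tr, te in LeaveOneGroupOut().split(X, y, groups=subject_ids)]
        results[name] = np.mean(accs)   # average subject-level accuracy per classifier
    return results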
Our results were compared with the seven existing single-channel ratio indices that are currently state-of-the-art frequency-domain features. The newest one was introduced in 2016 [10], but all of these single-channel ratio indices are often used in the more recent drowsiness detection papers [57,58,59].
The authors in the aforementioned research report 92% accuracy as the best-obtained accuracy [57]. This accuracy was obtained based on the epoch-level validation. Epoch-level validation is a cross-validation procedure on the epoch level, which means that there is a very high probability that all subjects will have epochs in the training set and in the test set at the same time. On the other hand, subject-level validation is validation where it is ensured that subjects in the test set are not contained in the training set. An example of a subject-level validation is the leave-one-subject-out cross-validation that we used in this research. The only proper way for model validation is subject-level validation, as it represents the real-life setting in which the data from a new subject are used only for testing the model. Empirical tests conducted in related research showed a large difference in the accuracies between epoch-level validation and subject-level validation [60].
In a study from Mehreen et al. [57], the authors also provide subject-level validation, and the accuracy achieved was 71.5% based on 15 frequency-domain features. The highest accuracy achieved in our research is shown in Table 9, and it was 65.45%, achieved by logistic regression. This 65.45% accuracy is relatively close to 71.5%, and it must be noted that it was obtained based only on the Index2 feature, with a simple algorithm and without any parameter optimization. Due to this, we are confident that the addition of our two multichannel ratio indices would lead to an improvement in all state-of-the-art drowsiness detection systems that use frequency-domain features. Again, our aim was not to create the best possible drowsiness detection model but to prove that the novel multichannel indices are better than the existing single-channel frequency-domain features.
The Equations (4) and (6) for these multichannel ratio indices, obtained after optimizing the parameters with the optimization algorithm, suggest that alpha and delta are two of the most important frequency power bands for drowsiness detection. Equation (6) suggests that delta power in the frontal region describes drowsiness better than in the central region, while alpha power in the occipital and central regions describes drowsiness better than in the frontal region.
These results are consistent with several previous research papers on drowsiness detection that reported the importance of increasing alpha power [22,35,61,62]. Delta power is usually only present in deep sleep stages [36], so some researchers studying drowsiness do not include delta in their research [63]. However, there is still much research that includes delta power. The increase in delta power is considered to be an indicator of drowsiness [4]. Our research found that theta and beta powers are not as good drowsiness indicators as alpha and delta powers, while many other research studies disagree. A decrease in beta power was found to be an indicator of drowsiness in [4,36,64,65] and an increase in theta power was found to be an indicator of drowsiness in [27,34,61,62,65]. Wang et al. [38], in their study of microsleep events, found that alpha and delta rhythms characterize microsleep events. As mentioned earlier, there is an inconsistency in terminology, and some researchers consider sleep stage S1 as drowsiness [5,6,7,8,9], while Johns [11] considers it equivalent to microsleep events in the driving scenario. We used the data from sleep stage S1 and referred to it in this research as drowsiness. Since our results suggest that delta and alpha are the most significant for the detection of drowsiness, as in the work of Wang et al. [38] on microsleep events, our work suggests that sleep stage S1 may be more similar to microsleep events than to drowsiness, but further research is needed to support this as a fact.
Apart from the indication that drowsiness is closely related to microsleep events, it may also be closely linked to driver fatigue. Some researchers even use the term fatigue as a synonym for drowsiness [66]. Fatigue is a consequence of prolonged physical or mental activity [67] and can lead to drowsiness [68]. Normally, rest and inactivity relieve fatigue, however, they exacerbate drowsiness [69]. Lal and Craig [70] found that delta and theta band activities increase significantly during fatigue. Craig et al. [71] reported significant changes in the alpha 1, alpha 2, theta and beta bands, while they did not find any significant changes in the delta band when observing driver fatigue. Simon et al. [68] report that alpha band power and alpha spindles correlate with fatigue.
These three research papers [68,70,71] all use visual inspection to define the ground truth of fatigue. This approach to defining the ground truth is prone to subjectivity. A similar problem occurs when drowsiness is defined by using subjective drowsiness ratings, such as the Karolinska sleepiness scale [72].
Driver drowsiness, driver fatigue and microsleep events are defined as different internal states of the brain, but show similar behavior when observing the features obtained from the EEG. Possible explanations could be that fatigue, drowsiness and microsleep have a similar effect on brain functions and cause the driver’s inability to function at the desired level. Most researchers of these three driver states only use frequency-domain features, while there are a number of other features (nonlinear features [30], spatiotemporal features [31] and entropies [32]) that could be used. Further studies with these features could find some features of the EEG signal that distinguish drowsiness, fatigue and microsleep. Distinguishing features of these three brain states could lead to the exact definitions of these terms. Precise and standardized definitions of fatigue, drowsiness and microsleep would help researchers to compare their work more easily.
Figure 4 shows that the proposed procedure for creating the novel ratio indices has succeeded in creating step-like indices for a given subject. In addition to Index1 and Index2, which show desirable behavior, the indices (θ + α)/β and θ/β show similar, favorable behavior for a few channels. Figure 5 shows a comparison of the novel ratio indices with the best and the worst channel for the γ/δ and (γ + β)/(δ + α) single-channel indices for subject tr04-0726. Index1, Index2, θ/α [36], (θ + α)/β, (θ + α)/(α + β) [37] and γ/δ [10] show similar behavior. These indices seem to detect drowsiness well, but with a delay of about 50 epochs. Since several different single-channel indices that were previously shown to correlate with drowsiness, together with the two novel multichannel indices, show the same delay in detecting drowsiness, this suggests that there may be shortcomings in the labeling of the initial signals. The manual for scoring sleep [42] provides guidelines for labeling, and it may be possible that the professionals who labeled the sleep signals labeled an approximate time of transition from the W state to the S1 state, as it is known that labeling any kind of several-hour-long EEG signal is a very tedious, hard and time-consuming job [73]. For this reason, the loose transition window was applied in the optimization algorithm, as described in Section 2.3.
The main shortcoming in applying our approach is the need to place six EEG electrodes on the driver’s scalp while driving. Apart from being intrusive, there is also a problem with noise in real-world applications that cannot be neutralized with the current state-of-the-art filter technology. All electrophysiological signals measured with wearable devices have a similar problem with intrusiveness and noise. ECG measurements, for example, are somewhat less susceptible to noise than EEG. Several recent works have shown that ECG can be used as a good predictor of sleep stages based on deep learning classifiers. Sun et al. [74] combined ECG with abdominal respiration and obtained a kappa value of 0.585, while Sridhar et al. [75] obtained a kappa value of 0.66. Combining EEG and ECG measurements has also been proposed in the context of driver drowsiness detection under simulator-based laboratory conditions [76]. Despite the problems of intrusiveness and noise susceptibility, research based on the electrophysiological signals brings a shift towards a precise definition of drowsiness. Once there is an exact definition of drowsiness or at least guidelines and manuals that accurately describe drowsiness (similar to the manuals for evaluating sleep stages), a big step will be taken to solve the problem of early detection of drowsiness [77]. It is doubtful that a wearable system based on electrophysiological signals will ever be widely used in real-world driving, but they still need to be developed. In our opinion, such wearable electrophysiological devices are more likely to be used for calibration/validation of non-intrusive systems (such as the driving performance-based or video-based systems) in controlled/simulated driving scenarios. In such scenarios, it is possible to control ambient noise, leading to a reduction in the effects of noise sensitivity.
An additional limitation of this work is that we were able to download data from 393 of 992 subjects completely, and only 28 of these 393 subjects were included in our study due to the inclusion condition that we described in Section 2.1 ”Dataset, Preprocessing and Feature Extraction”. Although it is a small subset of data, with the use of 12 subjects as a test set, we showed that the dataset is large enough to provide a good generalization (as seen in Table 3, Table 5 and Table 7). In a recent review paper about state-of-the-art drowsiness detection [33], the authors reviewed 39 papers, and the average number of subjects in the included works is 23.5, which also indicates that our number of subjects included in the current study (28) is acceptable.

5. Conclusions

This paper presented two novel multichannel ratio indices for the detection of drowsiness obtained by multi-objective optimization based on evolutionary computation. The results suggested that alpha and delta powers are good drowsiness indicators. The novel multichannel ratio indices were compared with seven existing single-channel ratio indices and showed better results in detecting drowsiness measured with precision and in the overall classification accuracy of both states using several machine learning algorithms. Our work suggests that a more precise definition of drowsiness is needed, and that accurate early detection of drowsiness should be based on multichannel frequency-domain ratio indices. The multichannel features also reduced the time needed for classification. The process of obtaining these indices by using a multi-objective optimization algorithm can also be applied to other areas of EEG signal analysis.
Research such as this, together with research on small hardware for physiology-based drowsiness detection, can eventually lead to an easy-to-use, non-intrusive device that reliably detects drowsiness. In addition, research on a reliable and standardized definition of drowsiness is needed and it would lead to improvements in the field of drowsiness detection.

Author Contributions

Conceptualization, I.S., N.F. and A.J.; methodology, I.S., N.F. and A.J.; software, I.S. and N.F.; validation, I.S., N.F., M.C. and A.J.; formal analysis, I.S., N.F. and A.J.; investigation, I.S.; resources, I.S. and N.F.; data curation, I.S.; writing—original draft preparation, I.S. and N.F.; writing—review and editing, I.S., N.F., M.C. and A.J.; visualization, I.S.; supervision, A.J.; project administration, M.C.; funding acquisition, M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been carried out within the project “Research and development of the system for driver drowsiness and distraction identification—DFDM” (KK.01.2.1.01.0136), funded by the European Regional Development Fund in the Republic of Croatia under the Operational Programme Competitiveness and Cohesion 2014–2020.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee of the University of Zagreb, Faculty of Electrical Engineering and Computing (on 26 September 2020).

Informed Consent Statement

All the available information regarding patients is available from the PhysioNet portal [39], from the 2018 PhysioNet computing in cardiology challenge [40], at: https://physionet.org/content/challenge-2018/1.0.0/ (accessed on 24 September 2021).

Data Availability Statement

The data used in this paper were obtained from the PhysioNet portal [39], from the 2018 PhysioNet computing in cardiology challenge [40], at: https://physionet.org/content/challenge-2018/1.0.0/ (accessed on 24 September 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jackson, M.L.; Kennedy, G.A.; Clarke, C.; Gullo, M.; Swann, P.; Downey, L.A.; Hayley, A.C.; Pierce, R.J.; Howard, M.E. The utility of automated measures of ocular metrics for detecting driver drowsiness during extended wakefulness. Accid. Anal. Prev. 2016, 127, 127–133. [Google Scholar] [CrossRef]
  2. Kamran, M.A.; Mannan, M.M.N.; Jeong, M.Y. Drowsiness, Fatigue and Poor Sleep’s Causes and Detection: A Comprehensive Study. IEEE Access 2019, 7, 167172–167186. [Google Scholar] [CrossRef]
  3. Lal, S.K.L.; Craig, A. A critical review of the psychophysiology of driver fatigue. Biol. Psychol. 2001, 173, 173–194. [Google Scholar] [CrossRef]
  4. Papadelis, C.; Chen, Z.; Kourtidou-Papadeli, C.; Bamidis, P.; Chouvarda, I.; Bekiaris, E.; Maglaveras, N. Monitoring sleepiness with on-board electrophysiological recordings for preventing sleep-deprived traffic accidents. Clin. Neurophysiol. 2007, 1906, 1906–1922. [Google Scholar] [CrossRef]
  5. Chowdhury, A.; Shankaran, R.; Kavakli, M.; Haque, M.M. Sensor Applications and Physiological Features in Drivers’ Drowsiness Detection: A Review. IEEE Sens. J. 2018, 3055, 3055–3067. [Google Scholar] [CrossRef]
  6. Oken, B.S.; Salinsky, M.C.; Elsas, S.M. Vigilance, alertness, or sustained attention: Physiological basis and measurement. Clin. Neurophysiol. 2006, 1885, 1885–1901. [Google Scholar] [CrossRef] [Green Version]
  7. Majumder, S.; Guragain, B.; Wang, C.; Wilson, N. On-board Drowsiness Detection using EEG: Current Status and Future Prospects. In Proceedings of the 2019 IEEE International Conference on Electro Information Technology (EIT), Brookings, SD, USA, 16–18 May 2019; pp. 483–490. [Google Scholar]
  8. Sriraam, N.; Shri, T.P.; Maheshwari, U. Recognition of wake-sleep stage 1 multichannel eeg patterns using spectral entropy features for drowsiness detection. Australas. Phys. Eng. Sci. Med. 2016, 797, 797–806. [Google Scholar] [CrossRef]
  9. Budak, U.; Bajaj, V.; Akbulut, Y.; Atila, O.; Sengur, A. An Effective Hybrid Model for EEG-Based Drowsiness Detection. IEEE Sens. J. 2019, 7624, 7624–7631. [Google Scholar] [CrossRef]
  10. da Silveira, T.L.; Kozakevicius, A.J.; Rodrigues, C.R. Automated drowsiness detection through wavelet packet analysis of a single EEG channel. Expert Syst. Appl. 2016, 559, 559–565. [Google Scholar] [CrossRef]
  11. Johns, M.W. A new perspective on sleepiness. Sleep Biol. Rhythm. 2010, 170, 170–179. [Google Scholar] [CrossRef]
  12. Moller, H.J.; Kayumov, L.; Bulmash, E.L.; Nhan, J.; Shapiro, C.M. Simulator performance, microsleep episodes, and subjective sleepiness: Normative data using convergent methodologies to assess driver drowsiness. Psychosom J. Res. 2006, 335, 335–342. [Google Scholar] [CrossRef] [PubMed]
  13. Martensson, H.; Keelan, O.; Ahlstrom, C. Driver Sleepiness Classification Based on Physiological Data and Driving Performance From Real Road Driving. IEEE Trans. Intell. Transp. Syst. 2019, 421, 421–430. [Google Scholar] [CrossRef]
  14. Phillips, R.O. A review of definitions of fatigue—And a step towards a whole definition. Transp. Res. Part F Traffic Psychol. Behav. 2015, 48, 48–56. [Google Scholar] [CrossRef]
  15. Slater, J.D. A definition of drowsiness: One purpose for sleep? Med. Hypotheses 2008, 641, 641–644. [Google Scholar] [CrossRef]
  16. Subasi, A. Automatic recognition of alertness level from EEG by using neural network and wavelet coefficients. Expert Syst. Appl. 2005, 701, 701–711. [Google Scholar] [CrossRef]
  17. Orasanu, J.; Parke, B.; Kraft, N.; Tada, Y.; Hobbs, A.; Anderson, B.; Dulchinos, V. Evaluating the Effectiveness of Schedule Changes for Air Traffic Service (ATS) Providers: Controller Alertness and Fatigue Monitoring Study; Technical Report; Federal Aviation Administration, Human Factors Division: Washington, DC, USA, 2012. [Google Scholar]
  18. Hart, C.A.; Dinh-Zarr, T.B.; Sumwalt, R.; Weener, E. Most Wanted List of Transportation Safety Improvements: Reduce Fatigue-Related Accidents; National Transportation Safety Board: Washington, DC, USA, 2018.
  19. Gonçalves, M.; Amici, R.; Lucas, R.; Åkerstedt, T.; Cirignotta, F.; Horne, J.; Léger, D.; McNicholas, W.T.; Partinen, M.; Téran-Santos, J.; et al. Sleepiness at the wheel across Europe: A survey of 19 countries. J. Sleep Res. 2015, 24, 242–253. [Google Scholar] [CrossRef]
  20. Balandong, R.P.; Ahmad, R.F.; Saad, M.N.M.; Malik, A.S. A Review on EEG-Based Automatic Sleepiness Detection Systems for Driver. IEEE Access 2018, 6, 22908–22919. [Google Scholar] [CrossRef]
  21. Kundinger, T.; Sofra, N.; Riener, A. Assessment of the Potential of Wrist-Worn Wearable Sensors for Driver Drowsiness Detection. Sensors 2020, 20, 1029. [Google Scholar] [CrossRef] [Green Version]
  22. Zheng, W.-L.; Gao, K.; Li, G.; Liu, W.; Liu, C.; Liu, J.-Q.; Wang, G.; Lu, B.-L. Vigilance Estimation Using a Wearable EOG Device in Real Driving Environment. IEEE Trans. Intell. Transp. Syst. 2020, 170, 170–184. [Google Scholar] [CrossRef]
  23. Fu, R.; Wang, H.; Zhao, W. Dynamic driver fatigue detection using hidden Markov model in real driving condition. Expert Syst. Appl. 2016, 397, 397–411. [Google Scholar] [CrossRef]
  24. Lin, C.-T.; Chuang, C.-H.; Tsai, S.-F.; Lu, S.-W.; Chen, Y.-H.; Ko, L.-W. Wireless and Wearable EEG System for Evaluating Driver Vigilance. IEEE Trans. Biomed. Circuits Syst. 2014, 165, 165–176. [Google Scholar]
  25. Cassani, R.; Falk, T.H.; Horai, A.; Gheorghe, L.A. Evaluating the Measurement of Driver Heart and Breathing Rates from a Sensor-Equipped Steering Wheel using Spectrotemporal Signal Processing. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 2843–2847. [Google Scholar]
  26. Li, Z.; Li, S.; Li, R.; Cheng, B.; Shi, J. Online Detection of Driver Fatigue Using Steering Wheel Angles for Real Driving Conditions. Sensors 2017, 17, 495. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Aeschbach, D.; Matthews, J.R.; Postolache, T.T.; Jackson, M.A.; Giesen, H.A.; Wehr, T.A. Dynamics of the human EEG during prolonged wakefulness: Evidence for frequency-specific circadian and homeostatic influences. Neurosci. Lett. 1997, 239, 121–124. [Google Scholar] [CrossRef]
  28. Barua, S.; Ahmed, M.U.; Ahlström, C.; Begum, S. Automatic driver sleepiness detection using EEG, EOG and contextual information. Expert Syst. Appl. 2019, 121, 121–135. [Google Scholar] [CrossRef]
  29. Wang, L.; Li, J.; Wang, Y. Modeling and Recognition of Driving Fatigue State Based on R-R Intervals of ECG Data. IEEE Access 2019, 7, 175584–175593. [Google Scholar] [CrossRef]
  30. Stam, C.J. Nonlinear dynamical analysis of EEG and MEG: Review of an emerging field. Clin. Neurophysiol. 2005, 2266, 2266–2301. [Google Scholar] [CrossRef]
  31. Bastos, A.M.; Schoffelen, M.J. A Tutorial Review of Functional Connectivity Analysis Methods and Their Interpretational Pitfalls. Front. Syst. Neurosci. 2016, 9, 175. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Acharya, U.R.; Hagiwara, Y.; Deshpande, S.N.; Suren, S.; Koh, J.E.W.; Oh, S.L.; Arunkumar, N.; Ciaccio, E.J.; Lim, C.M. Characterization of focal EEG signals: A review. Future Gener. Comput. Syst. 2019, 290, 290–299. [Google Scholar] [CrossRef]
  33. Stancin, I.; Cifrek, M.; Jovic, A. A Review of EEG Signal Features and Their Application in Driver Drowsiness Detection Systems. Sensors 2021, 21, 3786. [Google Scholar] [CrossRef]
  34. Cajochen, C.; Brunner, D.P.; Krauchi, K.; Graw, P.; Wirz-Justice, A. Power Density in Theta/Alpha Frequencies of the Waking EEG Progressively Increases During Sustained Wakefulness. Sleep 1995, 890, 890–894. [Google Scholar] [CrossRef] [Green Version]
  35. Astolfi, L.; Mattia, D.; Vecchiato, G.; Babiloni, F.; Borghini, G. Measuring neurophysiological signals in aircraft pilots and car drivers for the assessment of mental workload, fatigue and drowsiness. Neurosci. Biobehav. Rev. 2012, 58, 58–75. [Google Scholar]
  36. Eoh, H.J.; Chung, M.K.; Kim, S.H. Electroencephalographic study of drowsiness in simulated driving with sleep deprivation. Int. J. Ind. Ergon. 2005, 307, 307–320. [Google Scholar] [CrossRef]
  37. Jap, B.T.; Lal, S.; Fischer, P.; Bekiaris, E. Using EEG spectral components to assess algorithms for detecting fatigue. Expert Syst. Appl. 2009, 36, 2352–2359. [Google Scholar] [CrossRef]
  38. Wang, C.; Guragain, B.; Verma, A.K.; Archer, L.; Majumder, S.; Mohamud, A.; Flaherty-Woods, E.; Shapiro, G.; Almashor, M.; Lenné, M.; et al. Spectral Analysis of EEG During Microsleep Events Annotated via Driver Monitoring System to Characterize Drowsiness. IEEE Trans. Aerosp. Electron. Syst. 2020, 1346, 1346–1356. [Google Scholar] [CrossRef]
  39. Goldberger, A.L.; Amaral, L.A.N.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.-K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 2000, 101, e215–e220. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Ghassemi, M.; Moody, B.; Lehman, L.-W.; Song, C.; Li, Q.; Sun, H.; Westover, B.; Clifford, G. You Snooze, You Win: The PhysioNet/Computing in Cardiology Challenge 2018. In Proceedings of the 2018 Computing in Cardiology Conference (CinC), Maastricht, The Netherlands, 23–26 September 2018. [Google Scholar]
  41. Institute of Medicine (US) Committee on Sleep Medicine and Research; Colten, H.; Altevogt, B. Sleep Physiology. In Sleep Disorders and Sleep Deprivation: An Unmet Public Health Problem; National Academies Press (US): Washington, DC, USA, 2006. [Google Scholar]
  42. Berry, R.B.; Quan, S.F.; Abreu, A.R.; Bibbs, M.L.; Del Rosso, L.; Harding, S.M.; Mao, M.; Plante, D.T.; Pressman, M.R.; Troester, M.M.; et al. The AASM Manual for the Scoring of Sleep and Associated Events: Rules, Terminology and Technical Specifications; Version 2.6; American Academy of Sleep Medicine: Darien, IL, USA, 2020. [Google Scholar]
  43. Welch, P. The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms. IEEE Trans. Audio Electroacoust. 1967, 15, 70–73. [Google Scholar] [CrossRef] [Green Version]
  44. Basha, A.J.; Balaji, B.S.; Poornima, S.; Prathilothamai, M.; Venkatachalam, K. Support vector machine and simple recurrent network based automatic sleep stage classification of fuzzy kernel. J. Ambient Intell. Humaniz. Comput. 2020, 12, 6189–6197. [Google Scholar] [CrossRef]
  45. Feoktistov, V. Differential Evolution: In Search of Solutions; Springer US: Boston, MA, USA, 2006. [Google Scholar]
  46. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 182, 182–197. [Google Scholar] [CrossRef] [Green Version]
  47. Chugh, T.; Sindhya, K.; Hakanen, J.; Miettinen, K. A survey on handling computationally expensive multiobjective optimization problems with evolutionary algorithms. Soft Comput. 2019, 23, 3137–3166. [Google Scholar] [CrossRef] [Green Version]
  48. Moctezuma, L.A.; Molinas, M. Towards a minimal EEG channel array for a biometric system using resting-state and a genetic algorithm for channel selection. Sci. Rep. 2020, 10, 14917. [Google Scholar] [CrossRef]
  49. Hadka, D. MOEA Framework. Available online: http://moeaframework.org/ (accessed on 20 September 2021).
  50. Coello Coello, C.; Lamont, G.B.; van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective Problems; Springer: Boston, MA, USA, 2007. [Google Scholar]
  51. Coello Coello, C.A. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287. [Google Scholar] [CrossRef]
  52. Wang, Y.; Cai, Z.; Zhou, Y.; Zeng, W. An Adaptive Tradeoff Model for Constrained Evolutionary Optimization. IEEE Trans. Evol. Comput. 2008, 80, 80–92. [Google Scholar] [CrossRef]
  53. Homaifar, A.; Qi, C.X.; Lai, S.H. Constrained Optimization Via Genetic Algorithms. Simulation 1994, 242, 242–253. [Google Scholar] [CrossRef]
  54. Runarsson, T.P.; Yao, X. Stochastic ranking for constrained evolutionary optimization. IEEE Trans. Evol. Comput. 2000, 284, 284–294. [Google Scholar] [CrossRef] [Green Version]
  55. McDonald, J.H. Handbook of Biological Statistics, 3rd ed.; Sparky House Publishing: Baltimore, MD, USA, 2014. [Google Scholar]
  56. Chen, T.; Guestrin, C. XGBoost. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  57. Mehreen, A.; Anwar, S.M.; Haseeb, M.; Majid, M.; Ullah, M.O. A Hybrid Scheme for Drowsiness Detection Using Wearable Sensors. IEEE Sens. J. 2019, 5119, 5119–5126. [Google Scholar] [CrossRef]
  58. Seok, W.; Yeo, M.; You, J.; Lee, H.; Cho, T.; Hwang, B.; Park, C. Optimal Feature Search for Vigilance Estimation Using Deep Reinforcement Learning. Electronics 2020, 9, 142. [Google Scholar] [CrossRef] [Green Version]
  59. Wu, E.Q.; Peng, X.Y.; Zhang, C.Z.; Lin, J.X.; Sheng, R.S.F. Pilots’ Fatigue Status Recognition Using Deep Contractive Autoencoder Network. IEEE Trans. Instrum. Meas. 2019, 3907, 3907–3919. [Google Scholar] [CrossRef]
  60. Kamrud, A.; Borghetti, B.; Schubert Kabban, C. The Effects of Individual Differences, Non-Stationarity, and the Importance of Data Partitioning Decisions for Training and Testing of EEG Cross-Participant Models. Sensors 2021, 21, 3225. [Google Scholar] [CrossRef]
  61. Lin, F.-C.; Ko, L.-W.; Chuang, C.-H.; Su, T.-P.; Lin, T.-C. Generalized EEG-Based Drowsiness Prediction System by Using a Self-Organizing Neural Fuzzy System. IEEE Trans. Circuits Syst. I Regul. Pap. 2012, 2044, 2044–2055. [Google Scholar] [CrossRef]
  62. Jap, B.T.; Lal, S.; Fischer, P. Comparing combinations of EEG activity in train drivers during monotonous driving. Expert Syst. Appl. 2011, 996, 996–1003. [Google Scholar] [CrossRef]
  63. Akbar, I.A.; Rumagit, A.M.; Utsunomiya, M.; Morie, T.; Igasaki, T. Three drowsiness categories assessment by electroencephalogram in driving simulator environment. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, Korea, 11–15 July 2017; pp. 2904–2907. [Google Scholar]
  64. Kecklund, G.; Åkerstedt, T. Sleepiness in long distance truck driving: An ambulatory EEG study of night driving. Ergonomics 1993, 1007, 1007–1017. [Google Scholar] [CrossRef]
  65. Dussault, C.; Jouanin, J.-C.; Philippe, M.; Guezennec, Y.C. EEG and ECG changes during simulator operation reflect mental workload and vigilance. Aviat. Space Environ. Med. 2005, 344, 344–351. [Google Scholar]
  66. Khushaba, R.N.; Kodagoda, S.; Lal, S.; Dissanayake, G. Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm. IEEE Trans. Biomed. Eng. 2011, 121, 121–131. [Google Scholar] [CrossRef] [Green Version]
  67. Brown, I.D. Driver Fatigue. Hum. Factors 1994, 36, 298–314. [Google Scholar] [CrossRef] [PubMed]
  68. Simon, M.; Schmidt, E.A.; Kincses, W.E.; Fritzsche, M.; Bruns, A.; Aufmuth, C.; Bogdan, M.; Rosenstiel, W.; Schrauf, M. EEG alpha spindle measures as indicators of driver fatigue under real traffic conditions. Clin. Neurophysiol. 2011, 122, 1168–1178. [Google Scholar] [CrossRef]
  69. Johns, M.W.; Chapman, R.; Crowley, K.; Tucker, A. A new method for assessing the risks of drowsiness while driving. Somnologie—Schlafforsch. Schlafmed. 2008, 66, 66–74. [Google Scholar] [CrossRef]
  70. Lal, S.K.L.; Craig, A. Driver fatigue: Electroencephalography and psychological assessment. Psychophysiology 2002, 313, 313–321. [Google Scholar] [CrossRef]
  71. Craig, A.; Tran, Y.; Wijesuriya, N.; Nguyen, H. Regional brain wave activity changes associated with fatigue. Psychophysiology 2012, 574, 574–582. [Google Scholar] [CrossRef]
  72. Kaida, K.; Takahashi, M.; Åkerstedt, T.; Nakata, A.; Otsuka, Y.; Haratani, T.; Fukasawa, K. Validation of the Karolinska sleepiness scale against performance and EEG variables. Clin. Neurophysiol. 2006, 1574, 1574–1581. [Google Scholar] [CrossRef]
  73. Gajic, D.; Djurovic, Z.; Di Gennaro, S.; Gustafsson, F. Classification of EEG signals for detection of epileptic seizures based on wavelets and statistical pattern recognition. Biomed. Eng. Appl. Basis Commun. 2014, 26, 1450021. [Google Scholar] [CrossRef] [Green Version]
  74. Sun, H.; Ganglberger, W.; Panneerselvam, E.; Leone, M.J.; Quadri, S.A.; Goparaju, B.; Tesh, R.A.; Akeju, O.; Thomas, R.J.; Westover, M.B. Sleep staging from electrocardiography and respiration with deep learning. Sleep 2020, 43, zsz306. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  75. Sridhar, N.; Shoeb, A.; Stephens, P.; Kharbouch, A.; Shimol, D.B.; Burkart, J.; Ghoreyshi, A.; Myers, L. Deep learning for automated sleep staging using instantaneous heart rate. NPJ Digit. Med. 2020, 3, 106. [Google Scholar] [CrossRef]
  76. Awais, M.; Badruddin, N.; Drieberg, M. A Hybrid Approach to Detect Driver Drowsiness Utilizing Physiological Signals to Improve System Performance and Wearability. Sensors 2017, 17, 1991. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  77. Dong, Y.; Hu, Z.; Uchimura, K.; Murayama, N. Driver Inattention Monitoring System for Intelligent Vehicles: A Review. IEEE Trans. Intell. Transp. Syst. 2011, 12, 596–614. [Google Scholar] [CrossRef]
Figure 1. A visualization of the feature tables. The green cells mark the possibilities for creating a ratio index: the first table (top) shows the single-channel possibilities reported in the related work, while the second table (bottom) shows the multichannel possibilities explored in our novel approach.
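To make the construction of such ratio indices concrete, the sketch below (not the authors' implementation) estimates band powers with Welch's method [43] on one EEG epoch and then forms a single-channel index and a multichannel index; the band limits, the 30 s epoch length and the channel choices are illustrative assumptions.

import numpy as np
from scipy.signal import welch

# Assumed band limits in Hz; the paper's exact boundaries may differ.
BANDS = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13),
         'beta': (13, 30), 'gamma': (30, 45)}

def band_powers(epoch, fs):
    # Band power per channel via Welch's method [43]; epoch has shape (channels, samples).
    freqs, psd = welch(epoch, fs=fs, nperseg=int(4 * fs), axis=-1)
    df = freqs[1] - freqs[0]
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[:, mask].sum(axis=-1) * df   # one value per channel
    return powers

fs = 200.0
epoch = np.random.randn(6, int(30 * fs))        # placeholder 30 s epoch, 6 EEG channels
p = band_powers(epoch, fs)
single_channel = p['theta'][0] / p['alpha'][0]  # ratio from one channel (top table in Figure 1)
multichannel = p['theta'][2] / p['alpha'][5]    # bands taken from different channels (bottom table in Figure 1)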
Figure 2. An illustration of all the elements needed to evaluate solutions of the multi-objective optimization for drowsiness detection.
Figure 3. An illustration of a chromosome structure in the proposed optimization problem solution.
Figure 4. The comparison of the two novel multichannel indices with the best and the worst channel for the θ/α and (θ + α)/β single-channel indices for subject tr08-0111. The white part of the diagram represents the awake state, while the yellow part represents stage 1 of sleep, i.e., the drowsiness state.
Figure 5. The comparison of the two novel multichannel indices with the best and the worst channel for the γ/δ and (γ + β)/(δ + α) single-channel indices for subject tr04-0726. The white part of the diagram represents the awake state, while the yellow part represents stage 1 of sleep, i.e., the drowsiness state.
Table 1. The identification numbers of all the selected subjects. The training set is in the upper part and the test set is in the lower part of the table.
Training set:
tr03-0092 | tr03-0256 | tr03-0876 | tr03-1389
tr04-0649 | tr04-0726 | tr05-1434 | tr05-1675
tr07-0168 | tr07-0458 | tr07-0861 | tr08-0021
tr08-0111 | tr09-0175 | tr10-0872 | tr13-0204
Test set:
tr04-0653 | tr07-0127 | tr09-0453 | tr13-0170
tr05-0028 | tr08-0157 | tr12-0255 | tr13-0508
tr05-0332 | tr09-0328 | tr12-0441 | tr13-0653
Table 2. Statistical significance (p-values) obtained with the Wilcoxon signed-rank test for distinguishing the awake state from the S1 state. The shaded green cells with bold text represent the lowest p-value for each subject in the training set. At the bottom, the index with p-values lower than αp for the largest number of subjects is marked in the same way.
Subject | Index1 | Index2 | θ/α | β/α | (θ + α)/β | θ/β | (θ + α)/(α + β) | γ/δ | (γ + β)/(δ + α)
(Index1 and Index2: This Work; existing indices: Eoh et al. [36], Jap et al. [37], da Silveira et al. [10])
tr03-0092 | 1.12 × 10−6 | 4.82 × 10−3 | 1.23 × 10−4 | 7.22 × 10−1 | 4.97 × 10−2 | 1.62 × 10−08 | 1.41 × 10−9 | 1.06 × 10−6 | 3.09 × 10−4
tr03-0256 | 3.92 × 10−8 | 1.43 × 10−13 | 2.42 × 10−6 | 6.88 × 10−4 | 1.11 × 10−3 | 4.04 × 10−2 | 3.60 × 10−1 | 9.08 × 10−3 | 1.51 × 10−9
tr03-0876 | 1.37 × 10−20 | 3.87 × 10−17 | 5.06 × 10−3 | 1.63 × 10−7 | 9.80 × 10−14 | 2.94 × 10−6 | 1.58 × 10−5 | 1.09 × 10−1 | 2.09 × 10−1
tr03-1389 | 9.77 × 10−11 | 1.35 × 10−8 | 9.64 × 10−2 | 2.06 × 10−1 | 2.86 × 10−4 | 1.27 × 10−1 | 1.91 × 10−1 | 1.82 × 10−1 | 3.01 × 10−1
tr04-0649 | 5.85 × 10−21 | 4.11 × 10−21 | 5.71 × 10−12 | 2.38 × 10−9 | 8.12 × 10−7 | 1.89 × 10−1 | 2.18 × 10−5 | 3.33 × 10−7 | 6.13 × 10−3
tr04-0726 | 2.96 × 10−20 | 3.19 × 10−20 | 5.90 × 10−20 | 2.31 × 10−16 | 9.40 × 10−9 | 2.78 × 10−14 | 2.62 × 10−15 | 2.36 × 10−19 | 2.19 × 10−20
tr05-1434 | 7.90 × 10−10 | 9.79 × 10−13 | 6.76 × 10−9 | 3.70 × 10−10 | 1.62 × 10−1 | 3.96 × 10−17 | 2.00 × 10−19 | 5.42 × 10−17 | 1.08 × 10−11
tr05-1675 | 1.71 × 10−13 | 1.85 × 10−11 | 1.24 × 10−9 | 7.82 × 10−3 | 1.48 × 10−1 | 1.10 × 10−14 | 4.47 × 10−16 | 1.58 × 10−2 | 4.62 × 10−10
tr07-0168 | 2.88 × 10−21 | 5.15 × 10−21 | 1.75 × 10−13 | 4.73 × 10−11 | 8.08 × 10−6 | 1.05 × 10−8 | 4.49 × 10−11 | 8.87 × 10−16 | 1.01 × 10−1
tr07-0458 | 8.34 × 10−11 | 1.77 × 10−16 | 1.66 × 10−4 | 1.77 × 10−4 | 5.51 × 10−1 | 1.32 × 10−2 | 6.62 × 10−3 | 3.68 × 10−4 | 4.17 × 10−1
tr07-0861 | 2.88 × 10−21 | 3.11 × 10−21 | 3.14 × 10−3 | 3.55 × 10−7 | 3.64 × 10−2 | 9.96 × 10−8 | 1.11 × 10−6 | 1.49 × 10−17 | 1.50 × 10−12
tr08-0021 | 2.88 × 10−21 | 2.88 × 10−21 | 4.09 × 10−2 | 2.54 × 10−9 | 4.55 × 10−8 | 3.34 × 10−10 | 4.91 × 10−5 | 4.19 × 10−13 | 2.10 × 10−6
tr08-0111 | 2.88 × 10−21 | 2.88 × 10−21 | 4.41 × 10−2 | 7.54 × 10−5 | 1.94 × 10−20 | 2.04 × 10−20 | 3.92 × 10−4 | 4.50 × 10−15 | 3.41 × 10−3
tr09-0175 | 7.78 × 10−5 | 7.92 × 10−2 | 4.64 × 10−2 | 3.10 × 10−4 | 5.35 × 10−14 | 2.23 × 10−5 | 2.68 × 10−6 | 7.18 × 10−2 | 1.30 × 10−5
tr10-0872 | 2.62 × 10−15 | 1.96 × 10−14 | 1.76 × 10−2 | 3.89 × 10−2 | 5.91 × 10−3 | 5.09 × 10−6 | 7.52 × 10−5 | 2.14 × 10−5 | 2.33 × 10−5
tr13-0204 | 1.71 × 10−3 | 6.62 × 10−1 | 6.30 × 10−4 | 4.91 × 10−5 | 6.36 × 10−5 | 2.91 × 10−10 | 2.63 × 10−10 | 5.59 × 10−1 | 2.16 × 10−2
No. subjects with p < 6.9 × 10−5 | 14 | 13 | 6 | 9 | 8 | 12 | 11 | 9 | 8
Table 3. Statistical significance (p-values) obtained with the Wilcoxon signed-rank test for distinguishing the awake state from the S1 state. The shaded green cells with bold text represent the lowest p-value for each subject in the test set. At the bottom, the index with p-values lower than αp for the largest number of subjects is marked in the same way.
Subject | Index1 | Index2 | θ/α | β/α | (θ + α)/β | θ/β | (θ + α)/(α + β) | γ/δ | (γ + β)/(δ + α)
(Index1 and Index2: This Work; existing indices: Eoh et al. [36], Jap et al. [37], da Silveira et al. [10])
tr04-0653 | 3.19 × 10−9 | 2.71 × 10−9 | 2.61 × 10−7 | 1.80 × 10−5 | 6.83 × 10−1 | 3.63 × 10−9 | 2.17 × 10−8 | 2.69 × 10−6 | 3.33 × 10−10
tr05-0028 | 1.26 × 10−9 | 2.36 × 10−7 | 3.78 × 10−3 | 2.99 × 10−1 | 3.45 × 10−1 | 3.56 × 10−1 | 5.16 × 10−2 | 1.70 × 10−1 | 4.36 × 10−2
tr05-0332 | 2.88 × 10−21 | 2.88 × 10−21 | 8.04 × 10−16 | 2.10 × 10−5 | 2.66 × 10−3 | 7.39 × 10−14 | 8.17 × 10−18 | 1.84 × 10−16 | 3.00 × 10−16
tr07-0127 | 3.71 × 10−18 | 1.27 × 10−15 | 8.92 × 10−2 | 1.44 × 10−2 | 3.44 × 10−5 | 1.38 × 10−2 | 2.51 × 10−5 | 9.07 × 10−16 | 3.36 × 10−16
tr08-0157 | 2.88 × 10−21 | 2.88 × 10−21 | 6.51 × 10−5 | 1.26 × 10−7 | 9.66 × 10−1 | 6.67 × 10−3 | 1.85 × 10−3 | 2.82 × 10−11 | 5.92 × 10−11
tr09-0328 | 1.80 × 10−10 | 1.14 × 10−2 | 7.90 × 10−10 | 9.56 × 10−7 | 1.70 × 10−7 | 2.01 × 10−1 | 1.92 × 10−3 | 3.54 × 10−2 | 7.01 × 10−5
tr09-0453 | 2.73 × 10−1 | 7.80 × 10−4 | 1.63 × 10−1 | 2.96 × 10−1 | 1.03 × 10−7 | 3.89 × 10−2 | 3.17 × 10−2 | 8.30 × 10−16 | 6.45 × 10−9
tr12-0255 | 1.37 × 10−2 | 5.55 × 10−10 | 4.31 × 10−19 | 2.17 × 10−18 | 1.89 × 10−16 | 2.20 × 10−19 | 8.22 × 10−19 | 1.11 × 10−7 | 2.33 × 10−7
tr12-0441 | 2.76 × 10−11 | 9.26 × 10−9 | 6.95 × 10−4 | 6.89 × 10−1 | 2.46 × 10−3 | 3.63 × 10−13 | 1.53 × 10−4 | 2.13 × 10−3 | 5.68 × 10−8
tr13-0170 | 7.59 × 10−7 | 3.60 × 10−5 | 3.74 × 10−2 | 4.73 × 10−2 | 1.91 × 10−4 | 9.71 × 10−4 | 2.16 × 10−2 | 6.07 × 10−17 | 5.23 × 10−16
tr13-0508 | 2.69 × 10−1 | 7.61 × 10−2 | 1.87 × 10−5 | 1.99 × 10−2 | 1.10 × 10−9 | 9.17 × 10−2 | 6.22 × 10−5 | 7.26 × 10−5 | 7.26 × 10−2
tr13-0653 | 1.09 × 10−16 | 7.36 × 10−20 | 4.90 × 10−8 | 4.16 × 10−4 | 8.05 × 10−2 | 1.66 × 10−2 | 1.03 × 10−9 | 3.47 × 10−2 | 1.40 × 10−5
No. subjects with p < 6.9 × 10−5 | 9 | 9 | 7 | 5 | 5 | 4 | 6 | 7 | 9
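The statistical comparison reported in Tables 2 and 3 can be reproduced in outline with SciPy; the sketch below is not the authors' exact procedure, and the pairing of an equal number of awake and S1 epochs per subject is an assumption required by the signed-rank test. The corrected significance level αp = 6.9 × 10−5 is taken from the table captions.

import numpy as np
from scipy.stats import wilcoxon

ALPHA_P = 6.9e-5   # corrected significance level used in Tables 2 and 3

def index_separates_states(index_awake, index_s1):
    # Wilcoxon signed-rank test on paired awake/S1 values of one ratio index for one subject.
    stat, p_value = wilcoxon(index_awake, index_s1)
    return p_value, p_value < ALPHA_P

rng = np.random.default_rng(0)            # placeholder per-epoch index values
awake = rng.normal(1.0, 0.2, size=120)
s1 = rng.normal(1.3, 0.2, size=120)
print(index_separates_states(awake, s1))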
Table 4. The classification accuracy obtained with the XGBoost algorithm for each subject in the training set. The shaded green cells with bold text show the highest accuracy obtained for each subject. At the bottom, the highest mean accuracy among the ratio indices is marked in the same way.
Subject | Index1 | Index2 | θ/α | β/α | (θ + α)/β | θ/β | (θ + α)/(α + β) | γ/δ | (γ + β)/(δ + α)
(Index1 and Index2: This Work; existing indices: Eoh et al. [36], Jap et al. [37], da Silveira et al. [10])
tr03-0092 | 0.5420 | 0.5252 | 0.6387 | 0.5168 | 0.5504 | 0.6092 | 0.6050 | 0.4684 | 0.5527
tr03-0256 | 0.5924 | 0.6303 | 0.5840 | 0.5504 | 0.5672 | 0.6345 | 0.6345 | 0.4473 | 0.4346
tr03-0876 | 0.6387 | 0.5672 | 0.6008 | 0.6218 | 0.4748 | 0.6471 | 0.6303 | 0.5781 | 0.5992
tr03-1389 | 0.3487 | 0.3908 | 0.4874 | 0.5588 | 0.5042 | 0.5378 | 0.4664 | 0.5696 | 0.5148
tr04-0649 | 0.6975 | 0.8025 | 0.5462 | 0.5462 | 0.5630 | 0.4832 | 0.5210 | 0.6118 | 0.5527
tr04-0726 | 0.7983 | 0.7605 | 0.6681 | 0.6176 | 0.5966 | 0.6134 | 0.6765 | 0.7637 | 0.7511
tr05-1434 | 0.3739 | 0.3697 | 0.4160 | 0.6218 | 0.7059 | 0.5882 | 0.6933 | 0.5485 | 0.5063
tr05-1675 | 0.6849 | 0.6513 | 0.6933 | 0.5546 | 0.6218 | 0.5630 | 0.7227 | 0.5781 | 0.5781
tr07-0168 | 0.7773 | 0.8193 | 0.6008 | 0.5504 | 0.5630 | 0.6092 | 0.6303 | 0.4473 | 0.5105
tr07-0458 | 0.3109 | 0.2689 | 0.5378 | 0.4748 | 0.5084 | 0.5378 | 0.5504 | 0.4684 | 0.4979
tr07-0861 | 0.6933 | 0.7269 | 0.5378 | 0.5420 | 0.5504 | 0.5168 | 0.5714 | 0.6540 | 0.6160
tr08-0021 | 0.7857 | 0.6387 | 0.5630 | 0.4874 | 0.4664 | 0.3866 | 0.4748 | 0.6329 | 0.6245
tr08-0111 | 0.6891 | 0.8025 | 0.7143 | 0.7101 | 0.7227 | 0.5462 | 0.4748 | 0.6287 | 0.6118
tr09-0175 | 0.6050 | 0.5252 | 0.5966 | 0.4748 | 0.5420 | 0.6218 | 0.6008 | 0.5401 | 0.6498
tr10-0872 | 0.6134 | 0.6008 | 0.5084 | 0.5168 | 0.4874 | 0.5252 | 0.4832 | 0.5274 | 0.4810
tr13-0204 | 0.5042 | 0.4538 | 0.6597 | 0.4790 | 0.6387 | 0.6471 | 0.6303 | 0.5570 | 0.5570
Average | 0.6035 | 0.5959 | 0.5846 | 0.5515 | 0.5664 | 0.5667 | 0.5853 | 0.5638 | 0.5649
Table 5. The classification accuracy obtained with the XGBoost algorithm for each subject in the test set. The shaded green cells with bold text show the highest accuracy obtained for each subject. At the bottom, the highest mean accuracy among the ratio indices is marked in the same way.
Subject | Index1 | Index2 | θ/α | β/α | (θ + α)/β | θ/β | (θ + α)/(α + β) | γ/δ | (γ + β)/(δ + α)
(Index1 and Index2: This Work; existing indices: Eoh et al. [36], Jap et al. [37], da Silveira et al. [10])
tr04-0653 | 0.5672 | 0.6345 | 0.6303 | 0.5042 | 0.5168 | 0.5840 | 0.5630 | 0.5612 | 0.5738
tr05-0028 | 0.4454 | 0.4202 | 0.5294 | 0.5588 | 0.4664 | 0.4076 | 0.5462 | 0.5021 | 0.5443
tr05-0332 | 0.8067 | 0.8277 | 0.7563 | 0.5630 | 0.5294 | 0.6555 | 0.6134 | 0.5654 | 0.6118
tr07-0127 | 0.5462 | 0.6092 | 0.4916 | 0.5294 | 0.5252 | 0.5000 | 0.4118 | 0.4304 | 0.4051
tr08-0157 | 0.6303 | 0.6681 | 0.5294 | 0.5294 | 0.5000 | 0.5084 | 0.5294 | 0.5232 | 0.4810
tr09-0328 | 0.6050 | 0.5084 | 0.5966 | 0.5168 | 0.5000 | 0.5042 | 0.5672 | 0.5738 | 0.5401
tr09-0453 | 0.5588 | 0.5252 | 0.5714 | 0.5420 | 0.5294 | 0.5504 | 0.5840 | 0.5105 | 0.4557
tr12-0255 | 0.5420 | 0.5546 | 0.6639 | 0.5462 | 0.4748 | 0.5630 | 0.5924 | 0.6329 | 0.5654
tr12-0441 | 0.6891 | 0.5756 | 0.5840 | 0.5168 | 0.5378 | 0.5630 | 0.5798 | 0.6498 | 0.5274
tr13-0170 | 0.6008 | 0.5630 | 0.5924 | 0.5546 | 0.6261 | 0.6471 | 0.5336 | 0.5612 | 0.4430
tr13-0508 | 0.4538 | 0.5084 | 0.6555 | 0.6008 | 0.5588 | 0.6261 | 0.6092 | 0.5654 | 0.5443
tr13-0653 | 0.6807 | 0.6303 | 0.5210 | 0.5462 | 0.5630 | 0.5336 | 0.6008 | 0.5359 | 0.5063
Average | 0.5938 | 0.5854 | 0.5935 | 0.5424 | 0.5273 | 0.5536 | 0.5609 | 0.5510 | 0.5165
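The per-subject accuracies in Tables 4 and 5, and the precisions in Tables 6 and 7, come from the XGBoost algorithm [56]. The sketch below shows the general form of such an evaluation, not the authors' exact setup: the train/test split, the default hyperparameters and the placeholder feature matrix are assumptions.

import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 2))                    # placeholder: epochs x ratio-index features
y = rng.integers(0, 2, size=240)                 # 0 = awake, 1 = S1 (drowsiness)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)
clf = XGBClassifier()                            # default hyperparameters (assumption)
clf.fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
print('accuracy:', accuracy_score(y_te, y_pred))
print('precision (drowsiness):', precision_score(y_te, y_pred))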
Table 6. The precision of drowsiness detection obtained with the XGBoost algorithm for each subject in the training set. The shaded green cells with bold text show the highest precision obtained for each subject. At the bottom, the highest mean precision among the ratio indices is marked in the same way.
Subject | Index1 | Index2 | θ/α | β/α | (θ + α)/β | θ/β | (θ + α)/(α + β) | γ/δ | (γ + β)/(δ + α)
(Index1 and Index2: This Work; existing indices: Eoh et al. [36], Jap et al. [37], da Silveira et al. [10])
tr03-0092 | 0.5439 | 0.5439 | 0.5439 | 0.5439 | 0.5439 | 0.5439 | 0.5439 | 0.5439 | 0.5439
tr03-0256 | 0.6222 | 0.6348 | 0.5676 | 0.5349 | 0.5571 | 0.6127 | 0.6096 | 0.4270 | 0.3974
tr03-0876 | 0.6514 | 0.5800 | 0.6765 | 0.5973 | 0.4808 | 0.7397 | 0.7123 | 0.5804 | 0.6264
tr03-1389 | 0.2273 | 0.3774 | 0.4906 | 0.5455 | 0.5034 | 0.5391 | 0.4556 | 0.5519 | 0.5120
tr04-0649 | 0.7582 | 0.8273 | 0.5733 | 0.7895 | 0.7778 | 0.4878 | 0.5294 | 0.9063 | 0.6000
tr04-0726 | 0.8318 | 1.0000 | 0.7128 | 0.6373 | 0.6055 | 0.6627 | 0.7333 | 0.8370 | 0.7706
tr05-1434 | 0.4051 | 0.4083 | 0.1429 | 0.6028 | 0.7168 | 0.7692 | 0.7805 | 0.6571 | 0.5027
tr05-1675 | 0.6642 | 0.6011 | 0.6885 | 0.5607 | 0.6559 | 0.5862 | 0.7265 | 0.5425 | 0.5433
tr07-0168 | 0.7500 | 0.7923 | 0.5759 | 0.5375 | 0.5397 | 0.5730 | 0.5963 | 0.4488 | 0.5156
tr07-0458 | 0.2816 | 0.0492 | 0.5437 | 0.4789 | 0.5088 | 0.5446 | 0.5732 | 0.4500 | 0.4930
tr07-0861 | 0.6264 | 0.6688 | 0.5338 | 0.5342 | 0.5349 | 0.5105 | 0.5521 | 0.6216 | 0.5780
tr08-0021 | 0.7464 | 0.6854 | 0.6119 | 0.4717 | 0.4535 | 0.4000 | 0.4688 | 0.6325 | 0.6355
tr08-0111 | 0.6692 | 0.8214 | 0.7297 | 0.8205 | 0.7912 | 0.5314 | 0.4840 | 0.6500 | 0.5985
tr09-0175 | 0.6404 | 0.5263 | 0.5742 | 0.4762 | 0.5379 | 0.6142 | 0.6053 | 0.5273 | 0.6207
tr10-0872 | 0.6174 | 0.5909 | 0.5045 | 0.5130 | 0.4892 | 0.5254 | 0.4882 | 0.5349 | 0.4627
tr13-0204 | 0.5054 | 0.4545 | 0.6357 | 0.4820 | 0.6170 | 0.6636 | 0.6348 | 0.6032 | 0.5970
Average | 0.5963 | 0.5976 | 0.5691 | 0.5704 | 0.5821 | 0.5815 | 0.5934 | 0.5946 | 0.5623
Table 7. The precision of drowsiness detection obtained with the XGBoost algorithm for each subject in the test set. The shaded green cells with bold text show the highest precision obtained for each subject. At the bottom, the highest mean precision among the ratio indices is marked in the same way.
Subject | Index1 | Index2 | θ/α | β/α | (θ + α)/β | θ/β | (θ + α)/(α + β) | γ/δ | (γ + β)/(δ + α)
(Index1 and Index2: This Work; existing indices: Eoh et al. [36], Jap et al. [37], da Silveira et al. [10])
tr04-0653 | 0.5769 | 0.6569 | 0.6742 | 0.5030 | 0.5133 | 0.6000 | 0.5862 | 0.5795 | 0.5914
tr05-0028 | 0.3818 | 0.4296 | 0.5178 | 0.5385 | 0.4762 | 0.4214 | 0.5364 | 0.5000 | 0.5862
tr05-0332 | 0.7483 | 0.9333 | 0.7905 | 0.5547 | 0.5321 | 0.7846 | 0.6709 | 0.8571 | 0.8824
tr07-0127 | 0.5401 | 0.6512 | 0.4722 | 0.5294 | 0.5231 | 0.5000 | 0.3600 | 0.4309 | 0.4207
tr08-0157 | 0.5812 | 0.6087 | 0.5158 | 0.5574 | 0.5000 | 0.5048 | 0.5153 | 0.5122 | 0.4886
tr09-0328 | 0.6147 | 0.5078 | 0.5650 | 0.5137 | 0.5000 | 0.5041 | 0.5678 | 0.5620 | 0.5372
tr09-0453 | 0.5398 | 0.5176 | 0.5436 | 0.5301 | 0.5248 | 0.5306 | 0.5538 | 0.5049 | 0.4721
tr12-0255 | 0.5316 | 0.5351 | 0.6054 | 0.5342 | 0.4826 | 0.5393 | 0.5545 | 0.5886 | 0.5389
tr12-0441 | 0.8000 | 0.5789 | 0.5633 | 0.5093 | 0.5249 | 0.5478 | 0.5785 | 0.6496 | 0.5231
tr13-0170 | 0.6333 | 0.5466 | 0.5724 | 0.5355 | 0.6056 | 0.6296 | 0.5313 | 0.6944 | 0.3478
tr13-0508 | 0.4337 | 0.5091 | 0.6331 | 0.5674 | 0.5443 | 0.6154 | 0.6140 | 0.5478 | 0.5352
tr13-0653 | 0.7048 | 0.6000 | 0.5177 | 0.5314 | 0.5419 | 0.5213 | 0.5845 | 0.5392 | 0.5037
Average | 0.5905 | 0.5896 | 0.5809 | 0.5337 | 0.5224 | 0.5582 | 0.5544 | 0.5805 | 0.5356
Table 8. The average time, over 100 executions, of the XGBoost classifier's initialization, training, classification of the test subject's data and calculation of classification accuracy, expressed in milliseconds. The shaded green cells with bold text represent the best value for each subject and the best average value.
Subject | Index1 | Index2 | θ/α | β/α | (θ + α)/β | θ/β | (θ + α)/(α + β) | γ/δ | (γ + β)/(δ + α)
(Index1 and Index2: This Work; existing indices: Eoh et al. [36], Jap et al. [37], da Silveira et al. [10])
tr03-0092 | 86.3772 | 86.9689 | 122.7764 | 124.0313 | 129.9221 | 129.5543 | 130.6956 | 128.6490 | 128.9000
tr03-0256 | 86.7446 | 87.1034 | 123.3161 | 123.2911 | 130.3844 | 130.2867 | 130.6147 | 128.6415 | 128.7518
tr03-0876 | 86.3508 | 87.0188 | 122.6970 | 123.5249 | 130.2485 | 130.7789 | 130.6456 | 128.7160 | 128.3419
tr03-1389 | 85.9344 | 86.8811 | 122.1586 | 123.8243 | 130.4414 | 130.1170 | 131.3382 | 129.2281 | 128.8390
tr04-0649 | 86.9833 | 87.5527 | 123.6650 | 124.0832 | 130.2565 | 130.0574 | 130.2316 | 129.2234 | 128.8357
tr04-0726 | 86.5498 | 87.5921 | 123.7690 | 123.6945 | 129.6285 | 129.6750 | 130.6256 | 128.9002 | 128.6363
tr05-1434 | 86.5450 | 86.6549 | 123.0505 | 130.6853 | 131.9267 | 130.6205 | 131.3138 | 130.6215 | 131.3438
tr05-1675 | 87.0534 | 87.4627 | 123.3135 | 130.0399 | 130.4660 | 129.1552 | 129.5943 | 130.5365 | 128.9256
tr07-0168 | 86.9143 | 87.2251 | 122.9559 | 129.6185 | 130.5158 | 130.1070 | 129.7780 | 129.6381 | 128.7795
tr07-0458 | 86.5651 | 86.8690 | 122.6074 | 129.9533 | 130.1468 | 130.2667 | 130.4915 | 128.8906 | 129.5788
tr07-0861 | 86.8634 | 87.2801 | 122.7545 | 130.2319 | 130.0104 | 128.1135 | 129.9124 | 129.3949 | 128.7760
tr08-0021 | 86.9239 | 88.9566 | 123.0910 | 130.0868 | 129.1948 | 130.0221 | 130.1670 | 129.3241 | 129.1652
tr08-0111 | 86.6697 | 87.4626 | 122.7216 | 130.3879 | 130.5019 | 130.4011 | 129.8419 | 128.8413 | 129.1940
tr09-0175 | 87.3240 | 87.3827 | 123.4803 | 129.6729 | 130.9953 | 130.1271 | 131.1785 | 128.7062 | 128.9786
tr10-0872 | 86.9690 | 87.5381 | 124.0918 | 130.5091 | 129.4638 | 130.6490 | 130.2964 | 129.4843 | 129.3599
tr13-0204 | 87.2509 | 87.1135 | 123.2010 | 131.7928 | 130.4062 | 131.4087 | 130.3568 | 128.9199 | 128.1530
Average | 86.7512 | 87.3164 | 123.1031 | 127.8392 | 130.2818 | 130.0838 | 130.4426 | 129.2322 | 129.0350
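The timings in Table 8 cover initialization, training, classification and accuracy computation. A simple way to obtain such averages over 100 runs is sketched below; this is not the authors' benchmark code, and the absolute values depend on the hardware used.

import time
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

def mean_runtime_ms(X_train, y_train, X_test, y_test, runs=100):
    # Average wall-clock time (ms) of classifier initialization, training,
    # prediction on the test subject and accuracy calculation, as reported in Table 8.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        clf = XGBClassifier()
        clf.fit(X_train, y_train)
        accuracy_score(y_test, clf.predict(X_test))
        times.append((time.perf_counter() - start) * 1000.0)
    return float(np.mean(times))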
Table 9. The average accuracy obtained on the test set with different classification algorithms. Each row is colored with a palette ranging from dark green for the highest value in the row to dark red for the lowest value in the row. The algorithms are: NB (Naïve Bayes), KNN (k-nearest neighbors), Logistic (logistic regression), DT (decision tree), RF (random forest) and SVM (support vector machine).
Algorithm | Index1 | Index2 | θ/α | β/α | (θ + α)/β | θ/β | (θ + α)/(α + β) | γ/δ | (γ + β)/(δ + α)
(Index1 and Index2: This Work; existing indices: Eoh et al. [36], Jap et al. [37], da Silveira et al. [10])
NB | 0.6399 | 0.6535 | 0.5947 | 0.5462 | 0.5432 | 0.5308 | 0.5663 | 0.5316 | 0.5277
KNN | 0.5785 | 0.5840 | 0.5588 | 0.5387 | 0.5399 | 0.5452 | 0.5525 | 0.5378 | 0.5277
Logistic | 0.6396 | 0.6543 | 0.6029 | 0.5131 | 0.5383 | 0.5626 | 0.5735 | 0.5793 | 0.5613
DT | 0.5717 | 0.5629 | 0.5456 | 0.5050 | 0.5074 | 0.5420 | 0.5267 | 0.5356 | 0.5223
RF | 0.5719 | 0.5659 | 0.5762 | 0.5380 | 0.5360 | 0.5549 | 0.5501 | 0.5321 | 0.5222
SVM | 0.6325 | 0.6526 | 0.6200 | 0.5695 | 0.5714 | 0.5731 | 0.5801 | 0.5541 | 0.5478
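Table 9 compares the ratio indices across six standard classifiers. A hedged sketch of such a comparison with scikit-learn is given below; the paper does not list the hyperparameters, so the defaults used here are an assumption.

from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

CLASSIFIERS = {'NB': GaussianNB(), 'KNN': KNeighborsClassifier(),
               'Logistic': LogisticRegression(max_iter=1000),
               'DT': DecisionTreeClassifier(), 'RF': RandomForestClassifier(),
               'SVM': SVC()}

def compare_classifiers(X_train, y_train, X_test, y_test):
    # Accuracy of the six classifiers listed in Table 9, with default hyperparameters (assumption).
    return {name: accuracy_score(y_test, model.fit(X_train, y_train).predict(X_test))
            for name, model in CLASSIFIERS.items()}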