Article

A Novel Optimized Hybrid Deep Learning Framework for Mental Stress Detection Using Electroencephalography

1 Department of Electronics Communication Engineering, Bharath Institute of Higher Education and Research, Chennai 600073, India
2 Department of Electronics and Telecommunication Engineering, Pimpri Chinchwad College of Engineering and Research Ravet, Pune 412101, India
3 Department of Electrical Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Authors to whom correspondence should be addressed.
Brain Sci. 2025, 15(8), 835; https://doi.org/10.3390/brainsci15080835
Submission received: 4 June 2025 / Revised: 24 July 2025 / Accepted: 28 July 2025 / Published: 4 August 2025

Abstract

Mental stress is a psychological or emotional strain that typically occurs because of threatening, challenging, or overwhelming conditions and affects human behavior. It is often triggered by professional, environmental, and personal pressures. In recent years, various deep learning (DL)-based schemes using electroencephalograms (EEGs) have been proposed. However, the effectiveness of DL-based schemes is limited by intricate DL structures, class imbalance problems, poor feature representation, low frequency resolution, and the complexity of multi-channel signal processing. This paper presents a novel hybrid DL framework, BDDNet, which combines a deep convolutional neural network (DCNN), bidirectional long short-term memory (BiLSTM), and a deep belief network (DBN). BDDNet provides superior spectral–temporal feature depiction and better modeling of long-term dependencies in the local and global features of EEGs. BDDNet accepts multiple EEG features (MEFs) that provide the spectral and time-domain characteristics of EEGs. A novel improved crow search algorithm (ICSA) is presented for channel selection to minimize the computational complexity of multichannel stress detection. Further, a novel employee optimization algorithm (EOA) is utilized for the hyper-parameter optimization of the hybrid BDDNet to enhance the training performance. The outcomes of the novel BDDNet were assessed using the public DEAP dataset. The BDDNet-ICSA offers improved recall of 97.6%, precision of 97.6%, F1-score of 97.6%, selectivity of 96.9%, negative predictive value (NPV) of 96.9%, and accuracy of 97.3% compared with traditional techniques.

1. Introduction

Mental stress is an inexorable issue faced by human beings irrespective of age, religion, ethnicity, region, and gender. Mental stress limits an individual's abilities and disrupts daily routines [1]. In psychology, stress combines the perception of a stressor or situation with the body's response to it. Stress is generally triggered when an individual encounters adverse conditions, such as mental, physical, or emotional stressors. Stressors are grouped into internal and external categories. Internal stressors depend on individual perceptions, thoughts, and personalities. External stressors include relationship problems; financial difficulties; work pressure; and professional, political, and religious pressures [2]. Mental stressors include mental arithmetic tests, picture perception tests, and rapidly changing tasks. Physical stressors include exercise, physical activity, painful stimuli, and sleep deprivation. Emotional stressors include videos or songs [3].
Mental stress is classified as either chronic or acute. Acute stress occurs when an individual is exposed to short-duration stressors such as public speaking or job interviews. Long-term and frequent exposure to stressors, such as poor sleep habits, stressful jobs, and poor relationships, leads to chronic stress. Various physiological changes occur in an individual's body to deal with stress [4]. Stress may cause the release of cortisol, noradrenaline, and adrenaline, thereby providing instant energy to the body. Afterwards, the parasympathetic nervous system returns the body to its normal (homeostatic) condition without any significant harm. Continuous or long-term exposure to stress affects an individual's mental and physical health. Stress leads to distinct health issues such as stroke, hypertension, cardiac arrest, coronary artery disease, persistent pain, anxiety, muscle exhaustion, and depression [5].
Psychiatrists and clinicians analyze stress using self-report questionnaires, such as the daily stress inventory, perceived stress scale, and relative stress scale. However, the trustworthiness and efficiency of questionnaires are highly subjective, and responses are prone to being incorrect or invalid. Questionnaire-based stress analysis has a high error rate owing to social response and desirability biases. Additionally, behavioral analysis based on vocal and non-verbal cues (rapid eye movements and body gestures) and visual responses has been utilized for stress analysis [6]. However, behavioral indicators can vary with a person's conscious state. Questionnaire reports and behavioral analyses are also subject to expert error caused by fatigue, inadequate expertise, and bias. Stress additionally produces physiological changes mediated by the autonomic nervous system. These changes appear in physiological modalities such as eye gaze, skin temperature, pupil diameter, voice, blood volume pressure, heart rate variability (HRV), and electrodermal conductance. However, physiological signals are significantly affected by environmental conditions and health; skin diseases and environmental parameters, such as temperature and humidity, strongly influence electrodermal conductance [7].
Researchers have recently focused on various neuro-signals and neuroimaging techniques for stress analysis. These modalities include EEG, near-infrared spectroscopy, positron emission tomography, and functional magnetic resonance imaging [8]. EEG has shown greater reliability, robustness, and accuracy than neuroimaging techniques. EEG-based stress analysis is inexpensive and offers a high temporal resolution [9]. EEG is a noninvasive technique that captures oscillations produced by electrical brain activity using electrodes mounted on the scalp. EEG signals have amplitudes of up to approximately 200 µV. EEG comprises different frequency bands that reflect distinct mental states: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), and gamma (>30 Hz). Details of the EEG signals are provided in Table 1.
AI-based stress detection systems are categorized into machine learning (ML) and deep learning-based systems. Traditional ML-based systems involve preprocessing, feature extraction, and classification. Preprocessing consists of signal standardization, normalization, noise removal, artifact removal, data augmentation, signal cropping, and appending, and is essential for enhancing the quality of EEG signals. EEG signals often suffer from artifacts generated by body muscles, ocular activity, and body movements. These artifacts degrade EEG quality and lead to poor feature representation.
The next phase is feature extraction, which acquires the unique characteristics of the EEG using computational algorithms. These features are important for segregating normal EEGs from stressed EEGs. ML algorithms are highly reliant on the data used for training: quantity, diversity, and quality all affect their performance, and biased or insufficient data can lead to inaccurate predictions. ML algorithms are less suitable for larger datasets and require more learning time. The performance of an ML classifier depends largely on its features; redundant and non-distinct features reduce the model's performance. ML algorithms are also expected to deliver reliable results from limited data, which is difficult in practice. ML models lack contextual understanding and may provide lower accuracy. Selecting a feature extraction algorithm is challenging owing to the unavailability of standard benchmarks. ML models show inferior feature representation capabilities, are prone to overfitting, and exhibit lower generalization capability for new data. Their outcomes are easily affected by noise and signal artifacts. Moreover, uneven training samples lead to a class imbalance problem: generating a dataset for the stress class is challenging because of the subtle nature of stress and the reliability issues of stressors [10].
This study presents a novel hybrid BDDNet for stress detection using EEG signals. The main contributions of this study are summarized as follows.
  • An efficient channel selection scheme using a novel improved crow search algorithm (ICSA) to select distinctive channels and reduce the computational complexity of the system.
  • Stress representation using multiple EEG features (MEFs) that provide time- and frequency-domain features.
  • Implementation of a novel BDDNet for stress detection, where the DCNN provides the spatial and spectral domain features of the EEG, BiLSTM captures the temporal and long-term dependencies of the EEG, and the DBN provides multilevel hierarchical features of the EEG.
  • Hyper-parameter optimization of the BDDNet using a novel EOA to boost training performance.
The remainder of this paper is structured as follows. Section 2 provides a literature survey of recent stress detection schemes. Section 3 describes the overview of the proposed methodology in detail. Section 4 offers the details about the proposed BDDNet. Section 5 describes the implementation details of EOA-based hyper-parameter optimization. Further, Section 6 presents the experimental results and discusses the analytical findings. Section 7 presents the conclusions, imperative findings, and directions for future work.

2. Related Work

Various deep learning-based schemes have been proposed in recent years to enhance the performance of stress-detection schemes. Roy et al. [11] presented CBGG, a hybrid combination of a CNN, BiLSTM, and two gated recurrent unit (GRU) layers for EEG-based stress detection. It uses a discrete wavelet transform (DWT) representation to describe the spectral and temporal characteristics of EEGs; the DWT minimizes the nonlinearity and non-stationarity of EEGs. It offers an overall accuracy of 98.1% for the simultaneous task EEG workload (STEW) dataset, which includes 14-channel EEG signals. However, the effectiveness of stress detection is limited because of its high network complexity, higher recognition time, and extensive hyper-parameter tuning. In addition, selecting DL algorithms to construct a hybrid classifier is a challenging task. Mane et al. [12] explored the amalgamation of a 2D-CNN and LSTM for stress detection, which considered azimuthally projected images of alpha, theta, and beta signals as the input. The combination of CNN and LSTM helps boost the spectral and temporal depictions of the EEGs. This resulted in an overall stress detection rate of 97.8% for DEAP, 94.5% for SEED, and 97.8% for DEAP+SEED. The system required a higher training time of 4.2 h and a recognition time of 12.5 s. Patel et al. [13] investigated a 1-D CNN and BiLSTM to enrich the spectral–temporal characteristics of EEGs. The stress detection model accepts time–frequency features to learn the local and global representations of the EEGs. It provides 88.03% accuracy for the DEAP dataset but suffers from poor feature depiction and class imbalance problems.
Furthermore, Bhatnagar et al. [14] provided a CNN-based EEGNet, which accepts the mother wavelet decomposition of the EEG into five spectral bands for stress detection. It offered 99.45% accuracy for an in-house dataset created by capturing EEGs while playing low- to high-pitched music. However, the dataset variability was limited owing to the limited sample size (45 subjects aged 13–21). According to Hafeez et al. [15], timing has a significant influence on stress; based on real-time experimental data, they observed that stress levels are greater for untimed tests than for timed tests. The overall accuracy for EEG signals in picture format was 70.67% for the LSTM and 90.46% for the DCNN. The DCNN offers improved spatial correlation and connectivity among the various EEG bands. However, the temporal description of the signal and its long-term dependencies were absent from the 2D picture representation of the EEG. Geetha et al. [16] investigated an enhanced multilayer perceptron (EMLP) to identify stress by utilizing sleep patterns in EEG data. Owing to the extreme complexity of sleep patterns, the ability of the EMLP to extract complicated sleep pattern information from EEG signals is restricted for real-time analysis.
To identify epileptic seizures caused by stress and worry, Palanisamy et al. [17] used fuzzy c-means (FCM) features with an LSTM tuned using a particle swarm optimization algorithm (PSO-LSTM). The FCM features include Hjorth activity, variance, skewness, kurtosis, standard deviation, Shannon entropy, and mean. Position-based and random data augmentation create synthetic EEG samples to resolve the class imbalance issue. PSO-LSTM achieved 97% stress identification accuracy on the BONN EEG dataset, and FCM-PSO-LSTM achieved an overall accuracy of 98.5%. According to Bakare et al. [18], valence and arousal can provide stress information from different EEGs; KNN obtained better results for a smaller dataset, whereas a larger dataset did not provide encouraging results. Khan et al. [19] proposed recurrent neural networks (RNNs) and random forests (RFs) for cross-dataset mental stress detection to improve the generalization capacity of the stress detection scheme, training on the SJTU Emotion EEG Dataset (SEED) and testing on the Game Emotion (GAMEEMO) dataset. The RNN (87% for arousal and 83% for valence) performed better than the RF (83% for arousal and 75% for valence). Gonzalez-Vazquez et al. [20] proposed gated recurrent units (GRUs) with an 8-channel EEG for multilevel stress detection in serious gaming tasks. It performed well in stress detection with 94% accuracy, but its weak generalization limits its usefulness. Naren et al. [21] investigated a 1D CNN and Doppler characteristics for stress detection. The initial component of the 1D CNN was trained using stress induced via mirror tasks, the Stroop test, and arithmetic tests, and features of low, medium, and high stress levels were used to train the second portion of the 1D CNN. The SAM-40 dataset yielded an overall accuracy of 95.25%.
From an extensive survey of various stress-detection techniques, the following gaps were identified:
  • Lower feature depiction of single-channel EEGs, low-frequency resolution issues, limited spectral–temporal representation, and inferior modeling of long-term dependencies in EEG signals [22].
  • The class imbalance problem, which creates a disparity between the qualitative and quantitative stress attributes of EEGs [23].
  • Low accuracy for low arousal and valence EEG signals.
  • Stress detection systems suffer from a low generalization capability, which limits their effectiveness in real-time implementation. DL-based systems have provided better results than ML-based stress-detection techniques [24].
  • DL algorithms work as black boxes and have higher abstraction levels, failing to justify different features adequately. Thus, their explainability and interpretability are inferior.

3. Methodology

Figure 1 shows a flow diagram of the proposed stress-detection framework, which encompasses EEG preprocessing, channel selection, feature extraction, and stress detection using a novel DL framework.

3.1. EEG Preprocessing

EEGs are often affected by noise and artifacts from electrooculogram (EOG), electromyogram (EMG), and electrocardiogram (ECG) signals, which degrade stress detection. Minimizing noise and artifacts is essential for enhancing EEG quality and improving mental stress detection performance. The EEG was passed through a finite impulse response (FIR) filter with a passband of 0.75–45 Hz to minimize noise. Furthermore, a wavelet packet transform (WPT)-based soft thresholding scheme was used for EEG denoising, which minimizes the noise and artifacts in the EEG signal without degrading its actual information [25]. The EEG signals were decomposed into three levels using a Daubechies filter (db3). The decomposed packets are compared with Donoho's soft threshold value and reconstructed to attain an enhanced signal.
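A minimal sketch of this preprocessing stage is given below, assuming Python with SciPy and PyWavelets, a 128 Hz sampling rate (as used for DEAP), a 101-tap FIR filter, and Donoho's universal threshold estimated from the detail packets; the exact filter order and threshold estimator are not specified in the text, so these should be read as illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import pywt
from scipy.signal import firwin, filtfilt

def preprocess_eeg(eeg, fs=128):
    """Band-pass FIR filtering (0.75-45 Hz) followed by db3 WPT soft-threshold denoising."""
    # FIR band-pass filter; the 101-tap order is an assumption.
    taps = firwin(101, [0.75, 45.0], pass_zero=False, fs=fs)
    filtered = filtfilt(taps, [1.0], eeg)

    # Three-level wavelet packet decomposition with a Daubechies (db3) filter.
    wp = pywt.WaveletPacket(filtered, wavelet='db3', mode='symmetric', maxlevel=3)
    nodes = wp.get_level(3, order='natural')

    # Donoho's universal threshold estimated from the detail packets (an assumption).
    detail = np.concatenate([n.data for n in nodes[1:]])
    sigma = np.median(np.abs(detail)) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(filtered)))

    # Soft-threshold every packet except the coarsest approximation, then reconstruct.
    for node in nodes[1:]:
        node.data = pywt.threshold(node.data, thr, mode='soft')
    return wp.reconstruct(update=False)
```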

3.2. Channel Selection Using Improved Crow Search Algorithm

Crows are regarded as the most intelligent bird species. They stow away food and retrieve it when needed. They stick close to one another, watch and investigate where other crows keep their food, and then take it after the owner has gone [26]. If crows believe they are being followed, they change their hiding spot to protect their food from being taken. The algorithm comprises four fundamental principles, all of which are derived from the behavioral patterns of crows.
  • Crows tend to congregate in large numbers.
  • Crows have excellent memories and can recall exactly where food was hidden.
  • Crows are known to stick together in order to steal food.
  • Crows have the capacity to perceive their environment. When they become aware that they are being followed, they move the food they have hidden to protect it from being taken.
The crow search algorithm (CSA) is a swarm intelligence optimization algorithm developed by modeling the intelligent actions that crows perform while searching for and locating food. The method is characterized by its simple structure, limited number of control parameters, and straightforward application. The fact that only two parameters need to be adjusted makes it extremely appealing for use in a variety of technical domains [27]. However, the traditional CSA provides a poor optimization solution because of its low solution diversity, poor exploration and exploitation, and inferior optimization results [28]. The proposed improved CSA uses Levy-flight-based elite learning and a weak-member replacement scheme to enhance solution diversity, convergence, and the balance between exploration and exploitation. The flow of the proposed ICSA-based channel selection process is illustrated in Figure 2.
  • Process of ICSA
The crow search algorithm imitates the behavior of crows, which store excess food and recover it when required. In optimization terms, each crow acts as a searcher, the environment serves as the search space, and each randomly stored food location represents a feasible solution. The CSA adheres to the following principles derived from the lifestyle of crows: (1) crows are gregarious creatures; (2) crows are able to recall the position of concealed food; (3) crows follow each other and steal food from each other; and (4) crows try their utmost to prevent other crows from stealing their food. The CSA proceeds as follows [26,27,28]:
Step 1: Initialize the problem statement and algorithm parameters.
  • N: Flock size
  • Ft: Flight length
  • Iter_max: Maximum iterations
  • AP: Awareness probability
Step 2: Initialize the crow position and memory.
The flock is composed of N crows that are distributed randomly over a d-dimensional search space, where d denotes the total number of possible channels. The initial crow positions are represented by Equation (1):
crows = \begin{bmatrix} x_1^1 & x_2^1 & \cdots & x_d^1 \\ x_1^2 & x_2^2 & \cdots & x_d^2 \\ \vdots & \vdots & \ddots & \vdots \\ x_1^N & x_2^N & \cdots & x_d^N \end{bmatrix}
Each crow's memory is initialized. Because the crows are assumed to have little experience at this point, it is supposed that they have hidden their food at their initial positions. The memory of the crows is described by Equation (2):
memory = \begin{bmatrix} m_1^1 & m_2^1 & \cdots & m_d^1 \\ m_1^2 & m_2^2 & \cdots & m_d^2 \\ \vdots & \vdots & \ddots & \vdots \\ m_1^N & m_2^N & \cdots & m_d^N \end{bmatrix}
Step 3: Evaluate the fitness of each crow.
By entering the values of the decision variables into the objective function for each crow, the quality of its location is calculated. The objective function for channel selection considers the entropy (EN) and covariance (CV) of the channels, which helps to select salient channels with higher information. Channel selection assists in minimizing the computational effort of the stress detection system. The objective function utilized for computing the fitness is provided in Equation (3). Here, w_1 and w_2 are selected such that w_1 + w_2 = 1.
fitness = w_1 \cdot EN + w_2 \cdot CV
Step 4: Generate new crow position.
To update its position, crow i selects a flock member at random, such as crow j, and follows it to find the location of the concealed food. The new position of the crow is given by Equation (4).
x^{i,iter+1} = \begin{cases} x^{i,iter} + r_i \times fl^{iter} \times (m^{j,iter} - x^{i,iter}), & r_j \geq AP^{j,iter} \\ \text{a random position}, & \text{otherwise} \end{cases}
The traditional CSA updates the population randomly, leading to poor solution diversity and convergence. Moreover, updating the best and worst solutions is neglected, which creates a poor balance between exploration and exploitation. Thus, the improved CSA introduces two competitive learning schemes to enhance solution diversity, convergence, and the exploration–exploitation balance over the search space. The LFEL strategy updates the best solution using a Levy step function to improve the exploration space of the algorithm. It considers the two solutions (x_{best1} and x_{best2}) with the highest fitness values, as given in Equation (5).
x_i^{LFEL} = x_{best1} + (2r_1 - 1)\, levy(\beta)\, (x_{best1} - x_{best2})
Here, x_i^{LFEL} indicates the updated solution obtained using the LFEL scheme, β denotes the distribution index, and r_1 is a random number between 0 and 1.
Furthermore, it uses the RWM strategy to boost exploitation of the algorithm. Every weak solution is updated towards the best solution to enhance the exploitation search space of the CSA. The crow position is updated using the RWM strategy, as shown in Equation (6).
x_i^{RWM} = x_{worst} + r_2 (x_{best} - x_{worst})
Here, x_i^{RWM} signifies the crow updated using the RWM scheme, r_2 is a random number between 0 and 1, and x_{worst} stands for the solution with the worst fitness.
Step 5: Feasibility checking of new crow positions.
The new position in each crow was examined for viability. A crow changes position if its new location is viable. Otherwise, the crow does not go to the new spot and remains in its existing location.
Step 6: Compute the fitness value for newer position.
Step 7: Update the crow memory using Equation (7).
m^{i,iter+1} = \begin{cases} x^{i,iter+1}, & \text{if } fitness(x^{i,iter+1}) > fitness(m^{i,iter}) \\ m^{i,iter}, & \text{otherwise} \end{cases}
Step 8: Check the termination criteria. Steps 4–7 are repeated until Iter_max is reached. After the termination requirement is satisfied, the best memory position with respect to the objective function value is returned as the solution to the optimization problem. The algorithm for the ICSA is given as follows (Algorithm 1):
Algorithm 1: ICSA for EEG Channel Selection
Input: Random channel population
Output: Optimized Channels
Step 1: Initialize the problem and parameters.
   Set flock size N, flight length Ft, maximum iterations Iter_max, and awareness probability AP.
Step 2: Initialize crow positions and memory.
   i. Randomly generate initial positions of N crows in a d-dimensional search space, as given in Equation (1).
   ii. Initialize memory assuming each crow hides food at its initial location, as given in Equation (2).
Step 3: Evaluate initial fitness.
   Compute the fitness of each crow using the objective function considering entropy (EN) and covariance (CV), as given in Equation (3).
Step 4: Generate new crow positions. For each crow i:
   i. Randomly choose crow j.
   ii. Update the position based on Equation (4).
   iii. Apply the LFEL strategy to update the best solutions using Equation (5).
   iv. Apply the RWM strategy to guide weaker solutions using Equation (6).
Step 5: Check the feasibility of the new positions.
   If the new position is feasible, update the crow's position; otherwise, retain the old position.
Step 6: Recalculate the fitness for the updated positions.
Step 7: Update crow memory.
   i. Compare the current and previous fitness values.
   ii. Update the memory as per Equation (7).
Step 8: Termination check.
   i. Repeat Steps 4–7 until Iter_max is reached.
   ii. Return the best memory position as the optimal solution.
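As a rough illustration of Algorithm 1, the Python sketch below runs the ICSA loop over continuous channel scores and maps each crow to a channel subset by taking the top-scoring channels. The flock size, awareness probability, flight length, histogram-based entropy estimate, Cauchy-distributed stand-in for the Levy step, and the top-k decoding are assumptions for illustration, not the paper's exact settings. Here eeg is assumed to be a (channels × samples) array for one recording, and the returned indices are the retained channels.

```python
import numpy as np

def channel_fitness(idx, eeg, w1=0.5, w2=0.5):
    """Eq. (3): weighted sum of mean channel entropy and mean inter-channel covariance."""
    sel = eeg[idx]                                   # selected channels: (k, n_samples)
    ent = 0.0
    for ch in sel:
        p, _ = np.histogram(ch, bins=32)             # histogram-based probability estimate
        p = p[p > 0] / p.sum()
        ent -= (p * np.log(p)).sum()
    en = ent / len(sel)
    cv = np.abs(np.cov(sel)).mean()
    return w1 * en + w2 * cv

def icsa_select(eeg, n_select=15, n_crows=20, iters=100, ap=0.1, fl=2.0, seed=0):
    """ICSA over continuous positions in [0, 1]^d; the top n_select scores define the subset."""
    rng = np.random.default_rng(seed)
    d = eeg.shape[0]
    pos = rng.random((n_crows, d))
    mem = pos.copy()
    mask = lambda x: np.argsort(x)[-n_select:]       # channels with the largest scores
    fit = np.array([channel_fitness(mask(x), eeg) for x in mem])

    for _ in range(iters):
        order = np.argsort(fit)
        best, second, worst = order[-1], order[-2], order[0]
        for i in range(n_crows):
            j = rng.integers(n_crows)
            if rng.random() >= ap:                   # follow crow j's memory (Eq. 4)
                pos[i] = pos[i] + rng.random() * fl * (mem[j] - pos[i])
            else:                                    # random relocation
                pos[i] = rng.random(d)
        # LFEL: perturb the best solution with a heavy-tailed step (Eq. 5, Levy stand-in).
        levy = 0.01 * rng.standard_cauchy(d)
        pos[best] = mem[best] + (2 * rng.random() - 1) * levy * (mem[best] - mem[second])
        # RWM: pull the worst solution toward the best one (Eq. 6).
        pos[worst] = mem[worst] + rng.random() * (mem[best] - mem[worst])
        pos = np.clip(pos, 0.0, 1.0)                 # feasibility check (Step 5)
        for i in range(n_crows):                     # memory update (Eq. 7)
            f_new = channel_fitness(mask(pos[i]), eeg)
            if f_new > fit[i]:
                mem[i], fit[i] = pos[i].copy(), f_new
    return mask(mem[np.argmax(fit)])
```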

3.3. Multiple EEG Features

The features were classified into time-domain, spectral, and textural features of the EEG.
A. 
Time-Domain EEG Features
  • Mean, Standard Deviation, and Variance
The mean and SD offer time-domain variations in the EEG owing to stress. The variance provides consistency in the EEG patterns. The mean ( μ ), standard deviation ( σ ), and variance (vr) for EEG signal E having N samples are depicted in Equations (8)–(10), respectively.
\mu = \frac{1}{N}\sum_{i=1}^{N} E_i
\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(E_i - \mu)^2}
vr = \frac{1}{N}\sum_{i=1}^{N}(E_i - \mu)^2
  • Hjorth’s Parameters
Hjorth’s parameters provide the activity, mobility, and complexity of the EEG signal. The activity depicts the signal’s variance over time [8]; stress increases brain activity, yielding a higher activity value than a normal mental state. Mobility is the square root of the ratio of the variance of the first-order derivative of the EEG to the variance of the EEG over time. The activity (act or σ²) is given by Equations (11) and (12), where eeg_i denotes the individual samples of the EEG signal, \overline{eeg} describes the mean of the EEG, and N signifies the total number of EEG samples. Mobility (mob) describes the frequency variations in the EEG, as given in Equation (13); higher mobility indicates rapid variations in the EEG, representing higher brain activity or stress. A higher complexity (cmp) value represents more complex variations in brain activity that depict higher stress. The complexity is defined as the ratio of the mobility of the first derivative of the EEG to the mobility of the EEG, as given in Equation (14).
act = \sigma^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(eeg_i - \overline{eeg}\right)^2
\overline{eeg} = \frac{1}{N}\sum_{i=1}^{N} eeg_i
mob = \sqrt{\frac{act\left(\frac{d\,eeg}{dt}\right)}{act(eeg)}}
cmp = \frac{mob\left(\frac{d\,eeg}{dt}\right)}{mob(eeg)}
  • Median
The median value offers the central tendency of the EEG, which depicts the independence of the outliers.
  • ZCR
The ZCR captures the randomness and noise in the EEG, which take higher values under stress; stress causes instability in EEG patterns and produces more frequent transitions in the EEG. The ZCR is computed using Equation (15), where 1{·} returns one when the sign of the current sample differs from that of the previous sample, indicating a zero crossing. The signs of the EEG samples are obtained using the sign function.
ZCR = \frac{1}{N-1}\sum_{n=1}^{N-1} 1\{\operatorname{sign}(x[n]) \neq \operatorname{sign}(x[n-1])\}
  • RMS
The RMS describes the overall signal power, and the entropy depicts random or irregular EEG patterns. The RMS value was computed using Equation (16).
RMS = \sqrt{\frac{1}{N}\sum_{i=1}^{N} E(i)^2}
  • Line length (LL)
The line length provides the overall vertical or curve length of the EEG signal, which shows the stress pattern in the signal. Equation (17) is used to calculate LL.
LL = \sum_{i=2}^{N} \left| E(i) - E(i-1) \right|
  • Shannon Entropy (SnE)
Equation (18) is used to determine the uncertainty value in the EEG signal provided by SnE, where p_i is the relative frequency (probability) of each sample value in the EEG.
SnE = -\sum_{i} p_i \log p_i
  • Nonlinear Energy (NE)
NE offers information regarding the irregular and non-linear patterns of the EEG. The NE value increases with the signal amplitude and with shifts in the oscillation frequency. Equation (19) is used to calculate the NE.
NE = \sum_{i=2}^{N-1} \left( E_i^2 - E_{i+1} E_{i-1} \right)
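To make the time-domain feature set concrete, the following sketch computes the quantities defined in Equations (8)–(19) for a single EEG channel with NumPy; the 64-bin histogram used to estimate the sample probabilities for the Shannon entropy is an assumption.

```python
import numpy as np

def time_domain_features(e):
    """Time-domain features of one EEG channel (Eqs. 8-19)."""
    e = np.asarray(e, dtype=float)
    d1 = np.diff(e)                                    # first-order derivative
    act = np.var(e)                                    # Hjorth activity (Eqs. 11-12)
    mob = np.sqrt(np.var(d1) / act)                    # Hjorth mobility (Eq. 13)
    cmp_ = np.sqrt(np.var(np.diff(d1)) / np.var(d1)) / mob   # Hjorth complexity (Eq. 14)
    p, _ = np.histogram(e, bins=64)                    # 64-bin estimate is an assumption
    p = p[p > 0] / p.sum()
    return {
        'mean': e.mean(), 'std': e.std(), 'variance': act, 'median': np.median(e),
        'activity': act, 'mobility': mob, 'complexity': cmp_,
        'zcr': np.mean(np.sign(e[1:]) != np.sign(e[:-1])),         # Eq. (15)
        'rms': np.sqrt(np.mean(e ** 2)),                           # Eq. (16)
        'line_length': np.sum(np.abs(d1)),                         # Eq. (17)
        'shannon_entropy': -np.sum(p * np.log(p)),                 # Eq. (18)
        'nonlinear_energy': np.sum(e[1:-1] ** 2 - e[2:] * e[:-2])  # Eq. (19)
    }
```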
B. 
Frequency Domain Feature
  • WPT
The WPT provides stationary and transient EEG patterns in the time–frequency domain. The EEG signal is decomposed into five levels using a ‘db2’ filter, and the fifth-level decomposition provides 32 subbands. Seven statistical features (mean, median, energy, skewness, kurtosis, variance, and entropy) are computed for each sub-band, so the five-level decomposition provides 224 WPT features (a computational sketch of these spectral features is given after this feature list).
  • Energy (EN)
Energy provides the EEG strength in distinct frequency bands. It depicts the transition from the normal state to stress and is computed using Equation (20).
EN = \sum_{i=1}^{N} [E_i]^2
  • Instantaneous Wavelet Moment of Frequency (IWMF)
The IWMF offers dynamic disparities in EEG that describe the microarousal due to stress. The IWMF was estimated using Equation (21), where E[k] is the normalized PSD at frequency f[k].
IWMF = \sum_{k} E[k] \cdot f[k]
  • Instantaneous Wavelet Bandwidth of Frequency (IWBF)
The IWBF provides the bandwidth of the stress levels, as given in Equation (22). It is vital for discriminating between brain activities caused by stress; the PSD offers a high value for normal activity.
IWBF = \sum_{k} E[k] \cdot \left( f[k] - IWMF \right)^2
  • Spectral Kurtosis
SK describes the non-Gaussian nature of the EEG pattern, which depicts complex EEG patterns.
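A corresponding sketch for the frequency-domain features is shown below; it uses PyWavelets for the five-level db2 packet decomposition (32 sub-bands × 7 statistics = 224 values) and an FFT-based normalized PSD for the energy, IWMF, IWBF, and spectral kurtosis. The FFT-based PSD estimate and the small numerical constants are assumptions, since the text does not state how the PSD is computed.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

def wpt_features(e, wavelet='db2', level=5):
    """Seven statistics for each of the 2**level WPT sub-bands (224 values for level 5)."""
    wp = pywt.WaveletPacket(e, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order='freq'):
        c = node.data
        p = np.abs(c) / (np.abs(c).sum() + 1e-12)
        feats.extend([c.mean(), np.median(c), np.sum(c ** 2),      # mean, median, energy
                      skew(c), kurtosis(c), np.var(c),             # skewness, kurtosis, variance
                      -np.sum(p * np.log(p + 1e-12))])             # entropy
    return np.array(feats)

def spectral_features(e, fs=128):
    """Energy (Eq. 20), IWMF (Eq. 21), IWBF (Eq. 22), and spectral kurtosis of one channel."""
    e = np.asarray(e, dtype=float)
    spec = np.abs(np.fft.rfft(e)) ** 2
    f = np.fft.rfftfreq(len(e), d=1.0 / fs)
    psd = spec / (spec.sum() + 1e-12)                 # normalized PSD E[k]
    iwmf = np.sum(psd * f)                            # Eq. (21)
    return {'energy': np.sum(e ** 2),                 # Eq. (20)
            'iwmf': iwmf,
            'iwbf': np.sum(psd * (f - iwmf) ** 2),    # Eq. (22)
            'spectral_kurtosis': kurtosis(spec)}
```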
C. 
Texture Feature
Local temporal changes in the EEG signal pattern are provided by the local binary pattern (LBP) characteristics. Smaller amplitude fluctuations, micro-arousals, and transitory changes across the EEG due to stress may be captured using the local neighborhood difference pattern (LNDP). The local gradient pattern (LGP) captures directional changes in the EEG to show prominent and subtle differences across complicated EEG patterns; fluctuations in the EEG gradients (G) are provided by the LGP, as shown in Equation (24), where c signifies the center value in the window and x_i represents the neighboring samples. The LBP, LGP, and thresholding function f are given by Equations (23), (24), and (25), respectively. The details of all 527 EEG features for each channel are listed in Table 2.
LBP_c = \sum_{i=1}^{N} f(E_{x_i} - E_c)\, 2^{i-1}
LGP_c = \sum_{i=1}^{N} f(G_{x_i} - G_c)
f(x) = \begin{cases} 1, & x < 0 \\ 0, & x \geq 0 \end{cases}
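The texture features can be sketched as one-dimensional code patterns over a sliding window; the snippet below follows the thresholding convention of Equation (25), while the eight-sample neighbourhood and the histogram summarization are assumptions, and the LNDP variant is omitted for brevity.

```python
import numpy as np

def lbp_1d(e, radius=4):
    """1-D local binary pattern (Eq. 23) with the thresholding of Eq. (25)."""
    e = np.asarray(e, dtype=float)
    codes = []
    weights = 2 ** np.arange(2 * radius)                # 2^(i-1) weights for 2*radius neighbours
    for c in range(radius, len(e) - radius):
        neigh = np.concatenate([e[c - radius:c], e[c + 1:c + radius + 1]])
        bits = (neigh - e[c] < 0).astype(int)           # f(x) = 1 for x < 0 (Eq. 25)
        codes.append(int(np.dot(bits, weights)))
    hist, _ = np.histogram(codes, bins=2 ** (2 * radius), range=(0, 2 ** (2 * radius)))
    return hist / max(len(codes), 1)                    # normalized code histogram

def lgp_1d(e, radius=4):
    """1-D local gradient pattern (Eq. 24), computed on the signal gradient."""
    return lbp_1d(np.gradient(np.asarray(e, dtype=float)), radius=radius)
```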

4. BDDNet for Stress Detection

The proposed BDD network combines Bi-LSTM, DCNN, and DBN to enhance feature representation. Bi-LSTM provides bidirectional long-term dependencies and a superior temporal representation of EEG features. The DCNN provides spatial detection and hierarchical abstract-level features that offer correlation and connectivity between the distinct local and global features of the EEG for stress detection. The DBN offers multilevel hierarchical features and representation of the complex patterns of the EEG signal for stress detection.
A. 
Deep Convolution Neural Network
CNNs have shown robustness in various signal processing applications. They are capable of learning EEG features independently. CNNs adaptively and automatically learn the hierarchical and spatial features of EEG signals using back-propagation learning. CNNs have the potential for various pattern recognition applications for biomedical image signals, audio, and time-series data [29,30,31,32]. The convolution layer is the chief building block of CNNs. This provides a hierarchical correlation between the different local and global characteristics of the EEG signal. The convolution layer provides local correlations and connectivity in EEG features. It offers hierarchical abstract-level EEG features that describe distinctive features depicting variations in the EEG signal. In this layer, the input signal is convolved with multiple convolution kernels to provide deep features, as expressed in Equations (26) and (27).
E_{conv}(x, y) = EEG * K
E_{conv}(x, y) = \sum_{i=1}^{R}\sum_{j=1}^{C} EEG(i, j) \cdot K(x - i,\, y - j)
Here, EEG denotes the original EEG signal, K denotes the convolution filter, and E c o n v denotes the convolution output.
Batch normalization (BN) converts the deep features into a normalized format to minimize outliers, which helps accelerate the training of the DCNN. The BN operation for a batch size b is given by Equation (28). Here, μ_b and σ_b denote the mean and variance over batch b, respectively, and γ and β indicate the scale and offset, respectively.
BN(x) = \gamma \cdot \frac{E_{conv}(x) - \mu_b}{\sigma_b} + \beta
The ReLU layer enhances the non-linear characteristics of the features by replacing negative values with zero. This helps improve the classification accuracy and lessen the vanishing gradient problem. The output RL of the ReLU layer is described by Equation (29).
RL = \max(0, BN(x))
The maximum pooling layer selects the maximum value from a local window. It chooses salient features and neglects non-salient or redundant features. Maximum pooling minimizes the feature dimensions and thus helps reduce the network’s trainable parameters.
The output of the last max pool layer was flattened and converted into a 1-D vector. The flattened vector is provided to the FCL, which links every neuron of one layer with all other neurons of the other layers to enhance connectivity. The FCL learns the dependencies and relationships in EEG data. In the FCL, a linear transformation is applied to the input vector via FCL weights. Later, the non-linear activation function is applied to the product, as given in Equation (30).
y_j^k(x) = f\left( \sum_{i=1}^{n_H} W_{jk}\, x_i + W_{jo} \right)
where x represents the flattened input vector to the FCL, W_{jo} is the bias, and f is the non-linear activation function. The softmax classifier computes the probability of the output class using Equation (31), where z_i indicates the output-layer value given by Equation (33) and P_i is the probability of the output class. The class label with the highest probability is chosen as the output class, as given in Equation (32), where Ŷ indicates the predicted class label.
P_i = softmax(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{n} e^{z_j}}
\hat{Y} = \arg\max_i (P_i)
z_i = \sum_{j} h_j \cdot W_{ji}
The mini-batch gradient descent algorithm (MBGDM), which combines batch gradient descent (BGD) and stochastic gradient descent (SGD) to achieve lower computational effort and robustness, is utilized for training the DL framework. MBGDM splits the training data into smaller batches (b) to reduce the training time. The weights of the DL framework are modified using the error function described in Equation (34).
E_t f(w) = \frac{1}{b} \sum_{i=(t-1)b+1}^{t b} f(w, x_i)
where x_i is the i-th feature set of the training data. MBGDM considers an initial learning rate μ = 0.001 to modify the weights, as described in Equation (35).
W_{t+1} = W_t - \mu \nabla_w E f(w_t)
Here, W_{t+1} denotes the modified weights, W_t signifies the older weights, E f(w_t) describes the error function, and \nabla_w symbolizes the gradient.
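A compact PyTorch sketch of the DCNN branch described above is given below; the number of filters, kernel sizes, and embedding size are not reported in this section, so they are illustrative assumptions, and the mini-batch gradient descent of Equations (34) and (35) is handled by the optimizer indicated in the usage comment.

```python
import torch
import torch.nn as nn

class DCNNBranch(nn.Module):
    """Convolution -> BatchNorm -> ReLU -> MaxPool stack with a fully connected head (cf. Eqs. 26-30)."""
    def __init__(self, in_channels=15, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),   # convolution (cf. Eqs. 26-27)
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),         # BN (Eq. 28), ReLU (Eq. 29), pooling
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.fc = nn.LazyLinear(out_dim)                            # fully connected layer (Eq. 30)

    def forward(self, x):          # x: (batch, EEG channels, features per channel)
        z = self.features(x)
        return torch.relu(self.fc(z.flatten(1)))

# Usage sketch with mini-batch SGD (Eqs. 34-35), learning rate 0.001:
# opt = torch.optim.SGD(model.parameters(), lr=0.001)
```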
B. 
Bi-LSTM
Bi-LSTM provides a temporal representation of EEG features and long-range connectivity in the local and global features of the EEG. The proposed scheme uses two Bi-LSTM layers with 50 hidden units to represent the features [33,34].
BiLSTM is an extension of the LSTM that provides forward and backward long-term dependencies in the EEG; understanding the context of the sequence in both directions is essential for time-series stress analysis. For the input sequence from t = 1 to t = T, the BiLSTM incorporates a forward LSTM model to learn the forward dependence, and for the sequence from t = T to t = 1, it incorporates a backward LSTM model to learn the backward dependence. After flattening the DCNN's output, the BiLSTM determines the forward and backward states for the input sequence {x_1, x_2, …, x_T} using Equations (36) and (37), respectively.
\overrightarrow{h}_t = LSTM_{forward}(x_t, \overrightarrow{h}_{t-1})
\overleftarrow{h}_t = LSTM_{backward}(x_t, \overleftarrow{h}_{t+1})
The output of the BiLSTM combines the backward and forward hidden states at time t, as given in Equation (38), where h_t represents the final hidden state of the BiLSTM at time t and [· ; ·] represents the concatenation of the forward and backward states.
h_t = \left[ \overrightarrow{h}_t ;\, \overleftarrow{h}_t \right]
The LSTM provides temporal portrayal and long-term connection in complex stress aspects. It is responsible for regulating the flow of information inside the model and comprises the forget gate, the input gate, and the output gate. The information stored in the cell state is discarded via the forget gate. A representation of the forgetting gate may be seen in Equation (39). Several symbols are used in this context: x t represents the input state at time step t , h t 1 represents the hidden state from the previous step, W f represents the weight matrix of the forget gate, b f represents the bias value, and σ represents the sigmoid activation function.
f_t = \sigma\left( W_f [h_{t-1}, x_t] + b_f \right)
The input gate (i_t) and candidate values (\tilde{C}_t) add new information to the cell state, as given in Equations (40) and (41), respectively. The cell state update (C_t) combines the new information and the forget gate's output, as given in Equation (42). The output gate (o_t) produces the final output considering the hidden state (h_t), as given in Equations (43) and (44), respectively.
i_t = \sigma\left( W_i [h_{t-1}, x_t] + b_i \right)
\tilde{C}_t = \tanh\left( W_C [h_{t-1}, x_t] + b_C \right)
C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t
o_t = \sigma\left( W_o [h_{t-1}, x_t] + b_o \right)
h_t = o_t \odot \tanh(C_t)
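A matching PyTorch sketch of the BiLSTM branch (Equations (36)–(44)) is given below; the choice of two stacked layers with 50 hidden units follows the text, while treating the per-channel feature vectors as the time steps of the sequence is an assumption about how the inputs are arranged.

```python
import torch
import torch.nn as nn

class BiLSTMBranch(nn.Module):
    """Two stacked bidirectional LSTM layers with 50 hidden units per direction."""
    def __init__(self, in_features=527, hidden=50):
        super().__init__()
        self.bilstm = nn.LSTM(input_size=in_features, hidden_size=hidden,
                              num_layers=2, batch_first=True, bidirectional=True)

    def forward(self, x):            # x: (batch, sequence steps, features)
        out, _ = self.bilstm(x)      # forward/backward states concatenated (Eq. 38)
        return out[:, -1, :]         # final hidden representation: (batch, 2 * hidden)
```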
C. 
Deep Belief Network
The DBN provides multilevel hierarchical features using multiple restricted Boltzmann Machines (RBMs). RBMs act as hidden layers and learn the connectivity and correlation between the input and hidden layers. The RBM attempts to minimize the energy required to learn the fundamental probability distribution of the EEG features as given in Equation (45) [35,36].
E(v, h) = -\sum_{i=1}^{V} v_i b_i - \sum_{j=1}^{H} h_j c_j - \sum_{i=1}^{V}\sum_{j=1}^{H} v_i h_j W_{ij}
The RBM computes the energy for every configuration of the input (visible) and hidden layers using Equation (45). Here, v_i denotes a binary state of the input layer, h_j signifies a binary state of the hidden layer, b_i stands for the bias values of the input layer, c_j symbolizes the bias values of the hidden layer, V denotes the number of input (visible) units, H stands for the number of hidden units, and W_{ij} denotes the weights linking the input and hidden layers. The RBM assigns probability p(v) to the visible vector v using Equation (46).
p(v) = \frac{\sum_h e^{-E(v, h)}}{\sum_u \sum_h e^{-E(u, h)}}
As there are no connections between hidden units, the conditional distribution p(h|v) is factorial and is given by Equation (47).
p(h_j = 1 \mid v) = \sigma\left( a_j + \sum_{i=1}^{V} w_{ij} v_i \right)
Similarly, there are no connections between visible units, so the conditional distribution p(v|h) is factorial and is given by Equation (48).
p(v_i = 1 \mid h) = \sigma\left( b_i + \sum_{j=1}^{H} w_{ij} h_j \right)
Here, σ(x) indicates the sigmoid function, as given by Equation (49).
\sigma(x) = \frac{1}{1 + e^{-x}}
The features of the last layers of the DCNN, BiLSTM, and DBN are concatenated and provided to the FC layer to improve connectivity. Finally, a softmax classifier is utilized for stress classification. The BDDNet is optimized by tuning its hyper-parameters using the novel EOA.
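The fusion described here can be sketched end-to-end as follows; the layer widths are assumptions, the DBN branch is approximated by a stack of sigmoid layers without greedy RBM pre-training, and the softmax of Equation (31) is applied implicitly through the cross-entropy loss, so this should be read as an architectural outline rather than the authors' exact network.

```python
import torch
import torch.nn as nn

class BDDNet(nn.Module):
    """Hybrid DCNN + BiLSTM + DBN-style branches fused before the classifier."""
    def __init__(self, n_channels=15, n_features=527, n_classes=2):
        super().__init__()
        self.dcnn = nn.Sequential(                        # spatial/spectral branch
            nn.Conv1d(n_channels, 32, 5, padding=2), nn.BatchNorm1d(32),
            nn.ReLU(), nn.MaxPool1d(2), nn.Flatten(), nn.LazyLinear(128), nn.ReLU())
        self.bilstm = nn.LSTM(n_features, 50, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.dbn = nn.Sequential(                         # hierarchical branch (RBM stand-in)
            nn.Linear(n_channels * n_features, 256), nn.Sigmoid(),
            nn.Linear(256, 128), nn.Sigmoid())
        self.classifier = nn.Sequential(nn.Linear(128 + 100 + 128, 64), nn.ReLU(),
                                        nn.Linear(64, n_classes))

    def forward(self, x):                                 # x: (batch, n_channels, n_features)
        f_cnn = self.dcnn(x)
        f_lstm, _ = self.bilstm(x)
        f_dbn = self.dbn(x.flatten(1))
        fused = torch.cat([f_cnn, f_lstm[:, -1, :], f_dbn], dim=1)
        return self.classifier(fused)                     # logits; softmax via CrossEntropyLoss
```

Training this sketch with a cross-entropy loss and mini-batch gradient descent at a learning rate of 0.001, as described in Section 6.1, would complete the pipeline.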

5. EOA for Hyper-Parameter Optimization

Employee satisfaction is crucial in the private and government sectors for achieving the highest employee throughput, generating maximum profit, and fulfilling employees' personal needs. The novel EOA is motivated by the employee appraisal process in organizations, where employees are rewarded for good work, penalized or warned for mistakes, and trained or mentored to enhance their professional skills. Here, the EOA is utilized for the hyper-parameter tuning of the BDDNet (learning rate, decay rate, and momentum), parameters that are usually optimized manually and often lead to poor performance in hybrid DL frameworks.
In the EOA, an initial population of N employees is created over the n problem variables representing the performance variables. Here, N is analogous to the total number of candidate solutions, and n denotes the number of problem variables. The initial population of employees is set randomly in the range 0 to 1 using Equation (50), where EM_1, EM_2, …, EM_N represent the employees of the organization.
EM = \begin{bmatrix} EM_1 \\ EM_2 \\ \vdots \\ EM_N \end{bmatrix} = \begin{bmatrix} em_{11} & em_{12} & \cdots & em_{1n} \\ em_{21} & em_{22} & \cdots & em_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ em_{N1} & em_{N2} & \cdots & em_{Nn} \end{bmatrix}
The fitness of each employee is computed based on the training error of the BDDNet. The employees providing better fitness in an iteration are passed directly to the next iteration as the reward policy, while the other employees are trained or mentored based on exploration and exploitation strategies. During exploration, the employees are trained externally using Equation (51), and during exploitation, the employees are optimized based on mentoring from the best employee (EM_{Best}) of the organization using Equation (52).
EM_i^{new} = EM_i + r_1 (EM_i - EM_{rand})
EM_i^{new} = EM_i + r_2 (EM_i - EM_{Best})
Here, EM_i^{new} is the updated population member, and r_1 and r_2 are random numbers in the range 0 to 1. The population is updated for 100 iterations, and the final optimized solution with the lowest error rate is used for training the BDDNet.
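An illustrative Python sketch of this optimization loop is shown below; the 50/50 split between exploration and exploitation, the greedy acceptance of improved employees, and the mapping of the normalized variables to concrete learning-rate, decay, and momentum ranges are assumptions introduced only for the example, and train_bddnet is a hypothetical callback that trains the network and returns its training error.

```python
import numpy as np

def eoa_optimize(train_error_fn, n_vars=3, n_employees=10, iters=100, seed=0):
    """EOA sketch (Eqs. 50-52): minimize the training error over normalized hyper-parameters."""
    rng = np.random.default_rng(seed)
    pop = rng.random((n_employees, n_vars))              # Eq. (50), values in [0, 1]
    err = np.array([train_error_fn(p) for p in pop])
    for _ in range(iters):
        best = pop[np.argmin(err)].copy()
        for i in range(n_employees):
            if i == np.argmin(err):                      # reward: the best employee passes unchanged
                continue
            if rng.random() < 0.5:                       # exploration: external training (Eq. 51)
                rand_mate = pop[rng.integers(n_employees)]
                cand = pop[i] + rng.random() * (pop[i] - rand_mate)
            else:                                        # exploitation: mentoring by the best (Eq. 52)
                cand = pop[i] + rng.random() * (pop[i] - best)
            cand = np.clip(cand, 0.0, 1.0)
            cand_err = train_error_fn(cand)
            if cand_err < err[i]:                        # keep the improved employee
                pop[i], err[i] = cand, cand_err
    return pop[np.argmin(err)]

# Usage sketch (ranges are assumptions): map p to lr, decay, and momentum and train the BDDNet.
# best = eoa_optimize(lambda p: train_bddnet(lr=10 ** (-4 + 3 * p[0]),
#                                            decay=10 ** (-6 + 4 * p[1]),
#                                            momentum=0.5 + 0.5 * p[2]))
```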

6. Experimental Results and Discussions

This section discusses the experimental results of stress detection carried out on the DEAP and SEED datasets.

6.1. Simulation System and Parameter Configurations

The proposed BDDNet was trained using the MBGDM algorithm for 200 epochs with an initial learning rate of 0.001 and a cross-entropy loss function. The dataset was split at a ratio of 70:30 for training and testing. The BDDNet achieved a training accuracy of 100% for the 200 epochs. The system parameter configurations are presented in Table 3.

6.2. Dataset: DEAP

The DEAP dataset consists of EEG samples from 32 subjects recorded while watching music videos [37]. The participants rated the videos based on valence, arousal, dislike, like, familiarity, and dominance. The EEGs were down-sampled to 128 Hz and segmented into 60 s segments. The dataset consists of 40 trials with 40 channels, and each EEG channel consists of 8064 samples. The recorded channels include 32 EEG signals, two EOGs (horizontal and vertical), zygomaticus EMG, trapezius EMG, GSR, temperature, a respiration belt, and a plethysmograph. A self-assessment manikin (SAM) scale based on Russell's paradigm for emotion analysis was used to quantify valence (val) and arousal (arl). While low arousal and high valence are regarded as calm, high arousal and low valence are considered tension. Arousal relates to variations in the amplitudes of the EEG, and valence relates to variations in its temporal properties. The arousal and valence values obtained using the SAM method range from 0 to 9, where lower arousal (arl < 4) and higher valence (4 < val < 6) depict a positive and relaxed state, whereas a higher arousal level (arl > 5) and lower valence (val < 3) describe negative emotion and high tension [37,38,39]. Equations (53) and (54) define whether an EEG signal is labeled calm or stressed, respectively. This analysis yielded 140 stress signals and 104 calm signals.
Calm = (arl < 4) \wedge (4 < val < 6)
Stress = (arl > 5) \wedge (val < 3)
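The labeling rule of Equations (53) and (54) can be written directly as a small helper; trials matching neither condition are marked with -1 and discarded, which is an assumption about how the remaining trials are handled.

```python
import numpy as np

def label_deap_trials(arousal, valence):
    """Label each trial: 0 = calm (Eq. 53), 1 = stress (Eq. 54), -1 = discarded."""
    arousal, valence = np.asarray(arousal, float), np.asarray(valence, float)
    labels = np.full(arousal.shape, -1, dtype=int)
    labels[(arousal < 4) & (valence > 4) & (valence < 6)] = 0   # calm
    labels[(arousal > 5) & (valence < 3)] = 1                   # stress
    return labels
```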

6.3. Performance Metrics

The outcomes of the proposed stress detection scheme were analyzed using different evaluation metrics. Recall and precision offer quantitative and qualitative measures of stress detection. The F1-score captures the balance between recall and precision, which reflects the balance in accuracy across the two classes. Selectivity and the negative predictive value (NPV) were used to measure the detection of the absence of stress: selectivity measures the model's ability to identify the calm state correctly, and NPV evaluates the reliability of calm-state predictions. Equations (55)–(60) provide the different evaluation metrics, where TPn represents the true-positive value, TNn denotes the true-negative value, FPn signifies the false-positive value, and FNn denotes the false-negative value for stress detection.
recall = \frac{TP_n}{TP_n + FN_n}
precision = \frac{TP_n}{TP_n + FP_n}
F1\text{-}score = \frac{2 \times recall \times precision}{recall + precision}
Selectivity = \frac{TN_n}{TN_n + FP_n}
NPV = \frac{TN_n}{TN_n + FN_n}
Accuracy = \frac{TP_n + TN_n}{TP_n + TN_n + FN_n + FP_n}
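For completeness, the metrics of Equations (55)–(60) can be computed directly from the confusion-matrix counts, as in the short sketch below.

```python
def stress_metrics(tp, tn, fp, fn):
    """Evaluation metrics from binary confusion-matrix counts (Eqs. 55-60)."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return {
        'recall': recall,
        'precision': precision,
        'f1_score': 2 * recall * precision / (recall + precision),
        'selectivity': tn / (tn + fp),
        'npv': tn / (tn + fn),
        'accuracy': (tp + tn) / (tp + tn + fp + fn),
    }
```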

6.4. Discussions on Results for DEAP Dataset

Figure 3 and Figure 4 show the confusion matrices of the different classifiers for two-class stress detection using the BDDNet with all 40 channels and the BDDNet-ICSA with five channels, respectively.
The overall results of the proposed stress-detection scheme for different DL frameworks are presented in Table 4. The proposed BDDNet provides superior results compared to traditional classifiers. The BDDNet-ICSA provides a superior accuracy of 97.3% compared to Bi-LSTM (90.5%), DBN (87.8%), and DCNN (85.1%) for five channels selected using ICSA. The ICSA chooses prominent channels with higher information, lower intra-class variability, and higher inter-class variability, and thus offers improved results compared with the original full-channel dataset. The hybrid BDDNet-ICSA provides better spectral–temporal representation, long-term dependency, and multilevel hierarchical features, and assists in achieving improvements of 14.33%, 10.82%, and 7.51% in accuracy over the DCNN, DBN, and Bi-LSTM, respectively. The BDDNet shows a disparity between the qualitative measure (precision of 86.7%) and the quantitative measure (recall of 92.9%) for the 40 channels because of redundant and non-salient information in the EEG. Without channel selection, the BDDNet also results in poor selectivity (81.2%) and lower accuracy (87.8%).
However, channel selection using ICSA helps stabilize the training process and improves the training accuracy, as shown in Figure 5. The BDDNet-ICSA provides a good balance between recall (97.6%) and precision (97.6%), and an improved F1-score of 97.6% compared with the F1-score achieved by the BDDNet alone (89.69%). The BDDNet provides a recall of 92.9%, a precision of 86.7%, an F1-score of 89.69%, a selectivity of 81.2%, an NPV of 89.7%, and an accuracy of 87.8% for the 40-channel EEG data, whereas the BDDNet-ICSA provides an enhanced recall of 97.6%, precision of 97.6%, F1-score of 97.6%, selectivity of 96.9%, NPV of 96.9%, and accuracy of 97.3% for the 15-channel EEG.
The salient channels selected using ICSA are visualized in Figure 6. The ICSA provides the F4, F3, FP1, FP2, and T7 channels when 5 channels are selected; F4, F3, FP2, FP1, T7, F7, T8, F8, P7, and O1 when 10 channels are selected; and F4, F3, FP2, FP1, T7, F7, T8, F8, P7, O1, C4, P8, O2, FC5, FC6, FC2, C3, AF4, AF3, and P3 when 20 channels are chosen. The selected EEG channels are crucial for stress detection, covering the brain regions involved in emotional processing, cognitive workload, and autonomic responses. Frontal channels (F3, F4, FP1, FP2, AF3, AF4, F7, and F8) capture stress-related asymmetry in the prefrontal cortex, where the right side is more active during stress. Temporal channels (T7, T8, F7, and F8) are linked to the amygdala and regulate fear and emotional reactions. The central (C3, C4, FC2, FC5, and FC6) and parietal (P3, P7, and P8) channels reflect cognitive stress, attention modulation, and sensorimotor responses. Occipital channels (O1 and O2) help monitor stress-induced changes in visual perception, particularly through alpha wave suppression.
Table 5 and Figure 7 offer a comparative analysis of BDDNet-based stress detection for different numbers of channels selected using ICSA. With 5 channels, BDDNet achieves 80.4%, improving to 84.2% with 10 channels and reaching a maximum of 97.7% with 15 channels. Similar trends are seen in the other models. Beyond 15 channels, performance declines slightly; BDDNet drops to 96.8% (20 channels) and 87.8% (40 channels) due to redundancy in the features. We chose 15 channels for the final implementation, which leads to better accuracy and lower computational complexity.

6.5. Discussions on Results for SEED Dataset

We also evaluated the effectiveness of the proposed scheme on the SEED dataset, which consists of a total of 62 channels [40], to analyze the generalization capability of the proposed system. The effectiveness of the system was evaluated on the SEED dataset with ICSA-based channel selection and without channel selection, as given in Table 6. The BDDNet provides better results with channel selection using ICSA for 20 channels. The BDDNet without channel selection resulted in an overall accuracy of 82.51%, a recall of 88.32%, a precision of 83.09%, an F1-score of 85.62%, an NPV of 86.43%, and a selectivity of 76.21% for 62 channels. The BDDNet with ICSA achieves an overall accuracy of 92.63%, a recall of 92.79%, a precision of 92.52%, an F1-score of 92.66%, a specificity of 91.48%, and an NPV of 91.68%, demonstrating a notable improvement over the traditional technique. With channel selection, the system achieves overall accuracies of 82.06% for the DCNN, 82.59% for the DBN, 86.97% for the BiLSTM, and 92.62% for the BDDNet optimized using EOA for 20 channels. The proposed algorithm yields superior results for the SEED dataset as well, demonstrating the generalization capability of the stress detection system.
On the SEED dataset, BDDNet outperforms DCNN, DBN, and Bi-LSTM across almost all EEG channel configurations as given in Table 7. Its accuracy improves steadily with an increase in channels, reaching a peak of 92.62% with 20 channels, significantly higher than Bi-LSTM (86.97%), DBN (82.59%), and DCNN (82.06%). Even with 25 and 30 channels, BDDNet maintains strong performance (91.80% and 91.50%), demonstrating its ability to leverage richer feature sets effectively. Although accuracy slightly drops beyond 35 channels due to redundant information, BDDNet continues to lead, achieving 77.50% with 62 channels, demonstrating its superior feature learning and robustness for stress detection.

6.6. Discussions on Results for Different Channel Selection Techniques

Table 8 highlights the role of channel selection techniques in improving stress detection performance using the BDDNet-EOA model on the DEAP and SEED datasets. The optimal selection of EEG channels reduces redundant information, focuses on the most discriminative features, and thereby improves classification accuracy. Among the techniques, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) show significant gains, with PSO achieving 95.25% on DEAP and 88.2% on SEED. However, the CSA outperforms GA and PSO, achieving 96.8% (DEAP) and 90.25% (SEED) due to its balanced exploration and exploitation strategy, which prevents premature convergence. CSA is inspired by the intelligent behavior of crows in hiding and retrieving food, enabling it to dynamically switch between global and local searches. Unlike GA and PSO, which can become trapped in local optima, CSA effectively maintains diversity in the search space, resulting in better channel subset selection and higher accuracy. When further enhanced with Improved CSA (ICSA), the performance peaks at 97.3% (DEAP) and 92.62% (SEED), making it the most effective approach for stress detection among all compared techniques.

6.7. Discussions on Comparative Analysis of Results with Traditional Techniques

A comparison of the proposed scheme with traditional state-of-the-art techniques is provided in Table 9. Li et al. [23] presented inter-frequency band mapping (IFBM) features for depicting the distinctiveness of EEGs for stress analysis; this provides 95.15% accuracy for spatial–frequency convolutional self-attention networks (SFCSAN). Kim et al. [24] suggested a 3-D convolutional gated self-attention DNN (3DCGSA) along with IFBM for stress detection, which showed 96.68% accuracy. Saranya and Jayanthy [22] explored affinity propagation with an artificial neural network, which achieved an overall accuracy of 86.80% for two-class stress detection utilizing an experimental channel selection strategy. The feature representation using a time-domain and wavelet-based time–frequency domain depiction of the EEG presented by Hasan and Kim [39] offers an overall accuracy of 73.8% for the KNN classifier. The CBGG presented by Roy et al. [11] provides an overall accuracy of 96.35% for the DEAP dataset. The 1-D DCNN and LSTM achieve an overall accuracy of 88.03% for the DEAP dataset; this approach effectively captures temporal depiction using the LSTM but lacks generalization capability (Table 9).
The proposed EOA-optimized BDDNet-ICSA offers an improved accuracy of 97.3% for 5-channel EEGs. Preprocessing the EEG using WPT-based soft thresholding helps minimize the noise and artifacts in the EEG signal while retaining its structural content, and the scheme demonstrates accuracies of 87.8% for the 40 EEG channels and 97.3% for the five channels selected using ICSA. However, without EEG filtering, the BDDNet provides 82.25% and 85.45% accuracy for 40 channels and 15 channels, respectively.

7. Conclusions

This article presented stress detection using a novel hybrid BDDNet that combines a DCNN, BiLSTM, and DBN. The hybrid framework improves feature distinctiveness, spectral–temporal depiction, long-term dependency modeling, and multilevel abstracted hierarchical features. The competitive improved CSA provides efficient channel selection, offering several benefits, including the capacity to self-organize, simplicity, flexibility, robustness, and scalability. The novel EOA is used to optimize the hyper-parameters of the BDDNet, such as the learning rate, decay rate, and momentum. The performance of the EOA-optimized BDDNet-ICSA was evaluated on the DEAP dataset, yielding enhanced recall, precision, F1-score, selectivity, NPV, and accuracy of 97.6%, 97.6%, 97.6%, 96.9%, 96.9%, and 97.3%, respectively, for the 15-channel EEG. The proposed BDDNet offers an overall accuracy of 92.62% for the SEED dataset, which helps validate the generalization capability of the system. The complexity of the DL framework may limit the deployment flexibility of the suggested system on resource-constrained standalone devices. Because DL architectures are highly abstracted, the interpretability and explainability of the stress detection system are inferior, which limits trust and reliability in real-time critical applications. The disparity between calm and stress samples leads to a class imbalance problem. In the future, the focus should be on improving the interpretability and explainability of the system, and the effectiveness of the stress detection scheme can be improved by generating synthetic samples using data augmentation to lessen the class imbalance issue. Additionally, the effectiveness of the system can be enhanced by implementing an efficient feature selection scheme to minimize the computational complexity of the stress detection framework.

Author Contributions

Conceptualization, M.S.A.; Methodology, M.S.A.; Software, M.S.A.; Validation, M.S.A., B.K., T.V. and S.U.; Formal analysis, M.S.A., B.K. and T.V.; Investigation, M.S.A. and S.U.; Resources, B.K. and T.V.; Data curation, B.K.; Writing—original draft, M.S.A.; Writing—review & editing, M.S.A.; Visualization, M.S.A.; Supervision, B.K., T.V. and S.U.; Project administration, T.V.; Funding acquisition, S.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2025R79), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The DEAP dataset is publicly available at https://www.eecs.qmul.ac.uk/mmv/datasets/deap/ (accessed on 27 July 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Masri, G.; Al-Shargie, F.; Tariq, U.; Almughairbi, F.; Babiloni, F.; Al-Nashash, H. Mental stress assessment in the workplace: A review. IEEE Trans. Affect. Comput. 2023, 15, 958–976. [Google Scholar] [CrossRef]
  2. Mentis, A.-F.A.; Lee, D.; Roussos, P. Applications of artificial intelligence− machine learning for detection of stress: A critical overview. Mol. Psychiatry 2023, 29, 1882–1894. [Google Scholar] [CrossRef]
  3. Hemakom, A.; Atiwiwat, D.; Israsena, P.; Gadekallu, T.R. ECG and EEG based detection and multilevel classification of stress using machine learning for specified genders: A preliminary study. PLoS ONE 2023, 18, e0291070. [Google Scholar] [CrossRef] [PubMed]
  4. Jyothirmy, S.; Geethika, G.; Sai, S.B.; Saiteja, B.; Reddy, V.H.P. Machine Learning Algorithms based Detection and Analysis of Stress-A Review. In Proceedings of the 2023 Second International Conference on Electronics and Renewable Systems (ICEARS), Tuticorin, India, 2–4 March 2023; pp. 1456–1463. [Google Scholar]
  5. Jafari, M.; Shoeibi, A.; Khodatars, M.; Bagherzadeh, S.; Shalbaf, A.; García, D.L.; Gorriz, J.M.; Acharya, U.R. Emotion recognition in EEG signals using deep learning methods: A review. Comput. Biol. Med. 2023, 165, 107450. [Google Scholar] [CrossRef]
  6. Katmah, R.; Al-Shargie, F.; Tariq, U.; Babiloni, F.; Al-Mughairbi, F.; Al-Nashash, H. A review on mental stress assessment methods using EEG signals. Sensors 2021, 21, 5043. [Google Scholar] [CrossRef]
  7. Agrawal, J.; Gupta, M.; Garg, H. Early stress detection and analysis using EEG signals in machine learning framework. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1116, 012134. [Google Scholar]
  8. Castro-García, J.A.; Molina-Cantero, A.J.; Gómez-González, I.M.; Lafuente-Arroyo, S.; Merino-Monge, M. Towards human stress and activity recognition: A review and a first approach based on low-cost wearables. Electronics 2022, 11, 155. [Google Scholar] [CrossRef]
  9. Vanhollebeke, G.; De Smet, S.; De Raedt, R.; Baeken, C.; van Mierlo, P.; Vanderhasselt, M.-A. The neural correlates of psychosocial stress: A systematic review and meta-analysis of spectral analysis EEG studies. Neurobiol. Stress 2022, 18, 100452. [Google Scholar] [CrossRef]
  10. Albertetti, F.; Simalastar, A.; Rizzotti-Kaddouri, A. Stress detection with deep learning approaches using physiological signals. In International Conference on IoT Technologies for HealthCare; Springer International Publishing: Cham, Switzerland, 2020; pp. 95–111. [Google Scholar]
  11. Roy, B.; Malviya, L.; Kumar, R.; Mal, S.; Kumar, A.; Bhowmik, T.; Hu, J.W. Hybrid deep learning approach for stress detection using decomposed EEG signals. Diagnostics 2023, 13, 1936. [Google Scholar] [CrossRef]
  12. Mane, S.A.M.; Shinde, A. StressNet: Hybrid model of LSTM and CNN for stress detection from electroencephalogram signal (EEG). Results Control Optim. 2023, 11, 100231. [Google Scholar] [CrossRef]
  13. Patel, A.; Nariani, D.; Rai, A. Mental stress detection using EEG and recurrent deep learning. In Proceedings of the 2023 IEEE Applied Sensing Conference (APSCON), Bengaluru, India, 23–25 January 2023; pp. 1–3. [Google Scholar]
  14. Bhatnagar, S.; Khandelwal, S.; Jain, S.; Vyawahare, H. A deep learning approach for assessing stress levels in patients using electroencephalogram signals. Decis. Anal. J. 2023, 7, 100211. [Google Scholar] [CrossRef]
  15. Hafeez, M.A.; Shakil, S. EEG-based stress identification and classification using deep learning. Multimed. Tools Appl. 2024, 83, 42703–42719. [Google Scholar] [CrossRef]
  16. Geetha, R.; Gunanandhini, S.; Srikanth, G.U.; Sujatha, V. Human Stress Detection in and Through Sleep Patterns Using Machine Learning Algorithms. J. Inst. Eng. (India) Ser. B 2024, 105, 1691–1713. [Google Scholar] [CrossRef]
  17. Palanisamy, K.K.; Rengaraj, A. Early Detection of Stress and Anxiety Based Seizures in Position Data Augmented EEG Signal Using Hybrid Deep Learning Algorithms. IEEE Access 2024, 12, 35351–35365. [Google Scholar] [CrossRef]
  18. Bakare, S.; Kuge, S.; Sugandhi, S.; Warad, S.; Panguddi, V. Detection of Mental Stress using EEG signals-Alpha, Beta, Theta, and Gamma Bands. In Proceedings of the 2024 5th International Conference for Emerging Technology (INCET), Belgaum, India, 24–26 May 2024; pp. 1–9. [Google Scholar]
  19. Khan, M.R.; Ahmad, M. Mental Stress Detection from EEG Signals Using Comparative Analysis of Random Forest and Recurrent Neural Network. In Proceedings of the 2024 International Conference on Advances in Computing, Communication, Electrical, and Smart Systems (iCACCESS), Dhaka, Bangladesh, 8–9 March 2024; pp. 1–6. [Google Scholar]
  20. Gonzalez-Vazquez, J.J.; Bernat, L.; Ramon, J.L.; Morell, V.; Ubeda, A. A Deep Learning Approach to Estimate MultiLevel Mental Stress from EEG using Serious Games. IEEE J. Biomed. Health Inform. 2024, 28, 3965–3972. [Google Scholar] [CrossRef]
  21. Naren, J.; Babu, A.R. EEG stress classification based on Doppler spectral features for ensemble 1D-CNN with LCL activation function. J. King Saud Univ.-Comput. Inf. Sci. 2024, 36, 102013. [Google Scholar] [CrossRef]
  22. Saranya, K.; Jayanthy, S. An Efficient AP-ANN-Based Multimethod Fusion Model to Detect Stress through EEG Signal Analysis. Comput. Intell. Neurosci. 2022, 2022, 7672297. [Google Scholar]
  23. Li, D.; Xie, L.; Chai, B.; Wang, Z.; Yang, H. Spatial-frequency convolutional self-attention network for EEG emotion recognition. Appl. Soft Comput. 2022, 122, 108740. [Google Scholar] [CrossRef]
  24. Kim, H.-G.; Jeong, D.-K.; Kim, J.-Y. Emotional Stress Recognition Using Electroencephalogram Signals Based on a Three-Dimensional Convolutional Gated Self-Attention Deep Neural Network. Appl. Sci. 2022, 12, 11162. [Google Scholar] [CrossRef]
  25. Dhake, D.; Angal, Y. EEG Signal Enhancement using Wavelet based Soft-thresholding Approach. In Proceedings of the 2022 3rd International Conference for Emerging Technology (INCET), Belgaum, India, 27–29 May 2022; pp. 1–5. [Google Scholar] [CrossRef]
  26. Hussien, A.G.; Amin, M.; Wang, M.; Liang, G.; Alsanad, A.; Gumaei, A.; Chen, H. Crow search algorithm: Theory, recent advances, and applications. IEEE Access 2020, 8, 173548–173565. [Google Scholar] [CrossRef]
  27. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  28. Sayed, G.I.; Hassanien, A.E.; Azar, A.T. Feature selection via a novel chaotic crow search algorithm. Neural Comput. Appl. 2019, 31, 171–188. [Google Scholar] [CrossRef]
  29. Anita, M.; Kowshalya, A.M. Automatic epileptic seizure detection using MSA-DCNN and LSTM techniques with EEG signals. Expert Syst. Appl. 2024, 238, 121727. [Google Scholar] [CrossRef]
  30. Bhangale, K.B.; Kothandaraman, M. Speech emotion recognition using the novel PEmoNet (Parallel Emotion Network). Appl. Acoust. 2023, 212, 109613. [Google Scholar] [CrossRef]
  31. Bhangale, K.; Kothandaraman, M. Speech emotion recognition based on multiple acoustic features and deep convolutional neural network. Electronics 2023, 12, 839. [Google Scholar] [CrossRef]
  32. Bhangale, K.; Kothandaraman, M. Speech Emotion Recognition Using Generative Adversarial Network and Deep Convolutional Neural Network. Circuits Syst. Signal Process. 2024, 43, 2341–2384. [Google Scholar] [CrossRef]
  33. Alahmadi, T.J.; Rahman, A.U.; Alhababi, Z.A.; Ali, S.; Alkahtani, H.K. Prediction of mild cognitive impairment using EEG signal and BiLSTM network. Mach. Learn. Sci. Technol. 2024, 5, 025028. [Google Scholar] [CrossRef]
  34. Thiripurasundari, D.; Bhangale, K.; Aashritha, V.; Mondreti, S.; Kothandaraman, M. Speech emotion recognition for human–computer interaction. Int. J. Speech Technol. 2024, 27, 817–830. [Google Scholar] [CrossRef]
  35. Hassan, M.M.; Alam, M.G.R.; Uddin, M.Z.; Huda, S.; Almogren, A.; Fortino, G. Human emotion recognition using deep belief network architecture. Inf. Fusion 2019, 51, 10–18. [Google Scholar] [CrossRef]
  36. Song, R.; Wang, Z.; Guo, L.; Zhao, F.; Xu, Z. Deep belief networks (DBN) for financial time series analysis and market trends prediction. World J. Innov. Mod. Technol. 2024, 7, 1–10. [Google Scholar] [CrossRef]
  37. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.-S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A database for emotion analysis using physiological signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31. [Google Scholar] [CrossRef]
  38. Hag, A.; Handayani, D.; Altalhi, M.; Pillai, T.; Mantoro, T.; Kit, M.H.; Al-Shargie, F. Enhancing EEG-based mental stress state recognition using an improved hybrid feature selection algorithm. Sensors 2021, 21, 8370. [Google Scholar] [CrossRef] [PubMed]
  39. Hasan, M.J.; Kim, J.-M. A hybrid feature pool-based emotional stress state detection algorithm using EEG signals. Brain Sci. 2019, 9, 376. [Google Scholar] [CrossRef] [PubMed]
  40. Zheng, W.-L.; Lu, B.-L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
Figure 1. Flow diagram of the proposed stress detection system.
Figure 2. Flowchart of novel ICSA for EEG channel selection.
Figure 3. Confusion matrix for 2-class stress detection for BDDNet (40 channels).
Figure 4. Confusion matrix for 2-class stress detection for BDDNet (15 channels).
Figure 5. Visualizations of the results of stress detection scheme for DEAP.
Figure 6. Channels selected using ICSA for DEAP.
Figure 7. Accuracy for different channel selections using ICSA for BDDNet.
Table 1. EEG Signal details.

EEG Band | Amplitude | Frequency | Mental State
Delta (δ) | 100–200 µV | 0.5–4 Hz | Brain injury, deep sleep, unconsciousness
Theta (θ) | 20–100 µV | 4–8 Hz | Meditation, drowsiness, creativity
Alpha (α) | 20–60 µV | 8–13 Hz | Calm, relaxed, awake (not alert)
Beta (β) | 5–20 µV | 13–30 Hz | Problem-solving, alert, active thinking
Gamma (γ) | 3–10 µV | 30–100 Hz | Perception, attention, high-level cognition
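The band limits in Table 1 can be used directly to compute per-band powers from an EEG channel. The sketch below uses Welch's PSD from SciPy; the 128 Hz sampling rate matches the preprocessed DEAP recordings, and the gamma band is capped at 45 Hz because the preprocessed DEAP signals are band-limited well below the nominal 100 Hz upper edge. These choices are assumptions for illustration only.

```python
# Illustrative band-power computation for the bands in Table 1 (assumes SciPy).
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}   # gamma capped below Nyquist margin

def band_powers(x, fs=128):
    """Absolute power in each EEG band via Welch's PSD estimate."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(np.trapz(psd[mask], freqs[mask]))
    return powers

# Example: a 60 s synthetic signal dominated by 10 Hz (alpha-band) activity.
fs = 128
t = np.arange(0, 60, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(t.size)
print(band_powers(x, fs))
```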
Table 2. Details of EEG Features.

Time-domain features: Mean (1), Standard Deviation (1), Variance (1), Median (1), Skewness (1), ZCR (1), Activity (1), Mobility (1), Complexity (1), RMS (1), Shannon Entropy (1), Line Length (1), Non-linear Energy (1)
Frequency-domain features: WPT (224), Energy (1), IWMF (1), IWBF (1), Spectral Kurtosis (257)
Textural features: LBP (10), LNDP (10), LGP (10)
Total features: 527
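As a concrete illustration of some of the time-domain descriptors listed above, the sketch below computes the Hjorth parameters (activity, mobility, complexity), ZCR, RMS, Shannon entropy, line length, and a Teager-like non-linear energy for one channel. The definitions follow common formulations and may differ in detail from the paper's implementation; the frequency-domain and textural features are not shown.

```python
# Sketch of a subset of the Table 2 time-domain descriptors for one EEG channel.
import numpy as np

def time_domain_features(x):
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)                                    # Hjorth activity
    mobility = np.sqrt(np.var(dx) / np.var(x))              # Hjorth mobility
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)          # zero-crossing rate
    rms = np.sqrt(np.mean(x ** 2))
    line_length = np.sum(np.abs(dx))
    nonlinear_energy = np.mean(x[1:-1] ** 2 - x[:-2] * x[2:])  # Teager-like operator
    hist, _ = np.histogram(x, bins=32, density=True)        # entropy of amplitude histogram
    p = hist[hist > 0]
    p = p / p.sum()
    shannon_entropy = -np.sum(p * np.log2(p))
    return {"mean": np.mean(x), "std": np.std(x), "variance": activity,
            "median": np.median(x), "zcr": zcr, "activity": activity,
            "mobility": mobility, "complexity": complexity, "rms": rms,
            "shannon_entropy": shannon_entropy, "line_length": line_length,
            "nonlinear_energy": nonlinear_energy}

print(time_domain_features(np.random.randn(1280)))          # 10 s at 128 Hz
```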
Table 3. Initial parameter configurations of the BDDNet.

Parameter | Specification
Learning algorithm | MBGDM
Initial learning rate | 0.001
Loss function | Cross-entropy
Epochs | 200
Dropout | 0.5
Training:testing ratio | 70:30
DCNN filters | First layer: 64; second layer: 128; third layer: 256
DCNN filter size | 3 × 1
Hidden units in DBN layers | RBM 1: 200; RBM 2: 150; RBM 3: 100
BiLSTM layers | 2 layers (50 gates)
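The configuration in Table 3 can be translated into a rough PyTorch sketch of the DCNN → BiLSTM → dense stack. The DBN stage is approximated here by plain fully connected layers with the listed RBM hidden sizes (no generative pre-training), and the momentum value is illustrative, so this should be read as an approximation of the layer sizes rather than the authors' exact BDDNet.

```python
# Simplified PyTorch sketch using the sizes from Table 3. The DBN is
# approximated by plain Linear layers (no RBM pre-training).
import torch
import torch.nn as nn

class BDDNetSketch(nn.Module):
    def __init__(self, n_features=527, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                 # 64 / 128 / 256 filters, kernel 3x1
            nn.Conv1d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(input_size=256, hidden_size=50, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.dbn_like = nn.Sequential(            # stand-in for RBMs of 200/150/100 units
            nn.Linear(100, 200), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(200, 150), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(150, 100), nn.ReLU(),
        )
        self.head = nn.Linear(100, n_classes)

    def forward(self, x):                         # x: (batch, n_features)
        h = self.cnn(x.unsqueeze(1))              # (batch, 256, n_features // 8)
        h, _ = self.bilstm(h.transpose(1, 2))     # (batch, steps, 2 * 50)
        return self.head(self.dbn_like(h[:, -1, :]))

model = BDDNetSketch()
# MBGDM = mini-batch gradient descent with momentum; momentum value is illustrative.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()
print(model(torch.randn(8, 527)).shape)           # sanity check: (8, 2)
```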
Table 4. Comparative Results for the Stress Detection for Different DL Frameworks for DEAP. Values are percentages; "40 ch" denotes no channel selection and "15 ch" denotes ICSA channel selection.

Metric | DCNN (40 ch) | DBN (40 ch) | Bi-LSTM (40 ch) | BDDNet (40 ch) | DCNN (15 ch) | DBN (15 ch) | Bi-LSTM (15 ch) | BDDNet (15 ch)
Recall | 83.3 | 83.3 | 88.1 | 92.9 | 88.1 | 90.5 | 92.9 | 97.6
Precision | 77.8 | 79.5 | 84.1 | 86.7 | 86.0 | 88.4 | 90.7 | 97.6
F1-Score | 80.46 | 81.36 | 86.05 | 89.69 | 87.04 | 89.44 | 91.79 | 97.6
Selectivity | 68.8 | 71.9 | 78.1 | 81.2 | 81.2 | 84.4 | 87.5 | 96.9
NPV | 75.9 | 76.7 | 83.3 | 89.7 | 83.9 | 87.1 | 90.3 | 96.9
Accuracy | 77.0 | 78.4 | 83.8 | 87.8 | 85.1 | 87.8 | 90.5 | 97.7
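The metrics reported in Tables 4 and 6 follow directly from the binary (calm vs. stress) confusion matrix. The helper below makes the definitions explicit; the example counts are hypothetical and are not taken from the paper's confusion matrices.

```python
# How recall, precision, F1-score, selectivity, NPV, and accuracy follow from
# a binary confusion matrix (TP, FP, TN, FN for the "stress" class).
def binary_metrics(tp, fp, tn, fn):
    recall = tp / (tp + fn)                     # sensitivity
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    selectivity = tn / (tn + fp)                # specificity
    npv = tn / (tn + fn)                        # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return dict(recall=recall, precision=precision, f1=f1,
                selectivity=selectivity, npv=npv, accuracy=accuracy)

# Example with hypothetical counts (not from the paper):
print(binary_metrics(tp=41, fp=1, tn=31, fn=1))
```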
Table 5. Comparative Analysis of BDDNet-Based Stress Detection for Different Channels for DEAP. Values are accuracy (%).

Number of Channels Selected | DCNN | DBN | Bi-LSTM | BDDNet
5 | 70.5 | 72.0 | 75.8 | 80.4
10 | 74.0 | 75.5 | 79.5 | 84.2
15 | 85.1 | 87.8 | 90.5 | 97.7
20 | 84.2 | 86.9 | 89.8 | 96.8
25 | 83.3 | 85.7 | 88.7 | 95.9
30 | 81.9 | 84.4 | 87.5 | 94.5
35 | 79.8 | 82.2 | 85.6 | 92.6
40 | 77.0 | 78.4 | 83.8 | 87.8
Table 6. Comparative Results for the Stress Detection for Different DL Frameworks for SEED. Values are percentages; "62 ch" denotes no channel selection and "20 ch" denotes ICSA channel selection.

Metric | DCNN (62 ch) | DBN (62 ch) | Bi-LSTM (62 ch) | BDDNet (62 ch) | DCNN (20 ch) | DBN (20 ch) | Bi-LSTM (20 ch) | BDDNet (20 ch)
Recall | 79.40 | 77.61 | 83.57 | 88.32 | 84.97 | 86.18 | 87.34 | 92.79
Precision | 73.14 | 76.41 | 79.59 | 83.09 | 80.69 | 83.64 | 84.98 | 92.53
F1-Score | 76.14 | 77.01 | 81.53 | 85.62 | 82.78 | 84.89 | 86.14 | 92.66
Selectivity | 63.13 | 66.27 | 73.01 | 76.21 | 78.16 | 81.13 | 82.15 | 91.48
NPV | 70.96 | 70.78 | 78.86 | 86.43 | 80.35 | 81.58 | 86.84 | 91.68
Accuracy | 73.65 | 74.93 | 79.87 | 82.51 | 82.06 | 82.59 | 86.97 | 92.62
Table 7. Comparative Analysis of BDDNet-Based Stress Detection for Different Channels for SEED. Values are accuracy (%).

Number of Channels Selected | DCNN | DBN | Bi-LSTM | BDDNet
5 | 70.50 | 72.00 | 75.80 | 80.40
10 | 72.25 | 73.75 | 77.65 | 82.30
15 | 79.55 | 81.65 | 85.00 | 90.95
20 | 82.06 | 82.59 | 86.97 | 92.62
25 | 81.90 | 84.40 | 86.50 | 91.80
30 | 80.85 | 83.30 | 86.05 | 91.50
35 | 79.80 | 82.20 | 85.60 | 88.80
40 | 78.40 | 80.30 | 84.70 | 85.45
45 | 77.00 | 78.40 | 83.80 | 84.20
50 | 76.45 | 78.05 | 82.56 | 82.65
55 | 76.18 | 77.88 | 81.94 | 79.85
62 | 75.90 | 77.70 | 81.33 | 77.50
Table 8. The role of channel selection techniques in improving stress detection performance using the BDDNet-EOA model on the DEAP and SEED datasets.

Stress Detection Method | Channel Selection | DEAP Accuracy (%, 15 Channels) | SEED Accuracy (%, 20 Channels)
BDDNet-EOA | GA | 93.25 | 87.25
BDDNet-EOA | PSO | 95.25 | 88.20
BDDNet-EOA | CSA | 96.80 | 90.25
BDDNet-EOA | ICSA | 97.30 | 92.62
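Table 8 contrasts GA, PSO, CSA, and the proposed ICSA as channel selectors for BDDNet-EOA. The sketch below illustrates a generic binary crow-search-style selector; the fitness function is a placeholder (in practice, the validation accuracy of a lightweight classifier on the selected channels), and the specific modifications that distinguish ICSA from the standard CSA are not reproduced here.

```python
# Minimal sketch of binary crow-search-style channel selection. The fitness
# callable is a placeholder; the ICSA-specific improvements are not included.
import numpy as np

def crow_search_channels(fitness, n_channels=40, n_crows=20, n_iter=50,
                         flight_length=2.0, awareness_prob=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.random((n_crows, n_channels)) < 0.5          # binary channel masks
    memory = pos.copy()
    mem_fit = np.array([fitness(m) for m in memory])

    for _ in range(n_iter):
        for i in range(n_crows):
            j = rng.integers(n_crows)                      # crow i follows crow j
            if rng.random() > awareness_prob:
                # Move toward crow j's memorized position, then binarize via sigmoid.
                step = flight_length * rng.random(n_channels) * (
                    memory[j].astype(float) - pos[i].astype(float))
                prob = 1.0 / (1.0 + np.exp(-(pos[i].astype(float) + step)))
                new = rng.random(n_channels) < prob
            else:
                new = rng.random(n_channels) < 0.5         # random relocation
            if not new.any():
                new[rng.integers(n_channels)] = True       # keep at least one channel
            f = fitness(new)
            pos[i] = new
            if f > mem_fit[i]:
                memory[i], mem_fit[i] = new, f
    best = int(np.argmax(mem_fit))
    return memory[best], mem_fit[best]

# Placeholder fitness: prefer fewer channels (stand-in for validation accuracy).
mask, score = crow_search_channels(lambda m: 1.0 - m.mean(), n_channels=40)
print(int(mask.sum()), score)
```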
Table 9. Comparative Analysis with Traditional State of the Art for DEAP Dataset.

Author | Pre-Processing | Channel Selection | Feature Representation | Method | Accuracy
Li et al. (2022) [23] | - | - | IFBM | SFCSAN | 95.15%
Kim et al. (2022) [24] | - | - | IFBM | 3DCGSA | 96.68%
Saranya and Jayanthy (2022) [22] | - | Experimental selection | Multiple spectral and statistical features; Pearson correlation coefficient (PCC) for feature selection | AP-ANN | 86.80%
Hasan and Kim (2019) [39] | Band-pass filter | - | Time-domain and wavelet time–frequency features | KNN | 73.38%
Roy et al. [11] | Band-pass filter | - | DWT | CBGG | 96.35%
Patel et al. [13] | Band-pass filter | - | - | 1-D CNN and BiLSTM | 88.03%
Proposed method | - | - | MEF | BDDNet | 82.25%
Proposed method | - | ICSA | MEF | BDDNet | 85.45%
Proposed method | WPT | - | MEF | BDDNet | 87.8%
Proposed method | WPT | ICSA | MEF | BDDNet | 97.3%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
