# Estimating the Depth of Anesthesia from EEG Signals Based on a Deep Residual Shrinkage Network


## Abstract


## 1. Introduction

The SedLine^{®} Brain Function Monitoring (Masimo, Irvine, CA, USA) device has recently been introduced, and its crucial parameter is the patient state index (PSI) [12]. Previous work shows that the agreement between the PSI and BIS is relatively good, and the SedLine monitor is advantageous because it has more channels than the BIS monitor [13].

## 2. Materials and Methods

#### 2.1. Dataset

The EEG signals were recorded with the SedLine^{®} Brain Function Monitoring (Masimo, Irvine, CA, USA) device, which was recently introduced into clinical practice and displays the PSI as the index of sedation depth. The SedLine EEG sensor consists of 6 electrodes: 1 reference channel (CT), 1 ground channel (CB), and 4 active EEG channels (L1, L2, R1, and R2) placed on the frontal pole. During midazolam anesthesia, the raw EEG signals are sampled at 178.2 Hz. The dataset we used records the 4 channels’ raw EEG signals, PSI values, spectral edge frequency (SEF), burst suppression ratio, electromyographic (EMG) activity, and artifact percentage.
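As a rough illustration of how such a 4-channel recording sampled at 178.2 Hz might be segmented into model inputs, consider the sketch below; the window length is a hypothetical choice for illustration, not a parameter taken from this paper.

```python
import numpy as np

FS = 178.2          # SedLine raw EEG sampling rate (Hz), from the dataset description
N_CHANNELS = 4      # active channels L1, L2, R1, R2
WINDOW_SEC = 5.0    # hypothetical window length (not specified here)

def segment_eeg(eeg, fs=FS, window_sec=WINDOW_SEC):
    """Split a (channels, samples) recording into non-overlapping windows,
    returning an array of shape (n_windows, channels, samples_per_window)."""
    win = int(round(fs * window_sec))
    n_windows = eeg.shape[1] // win
    trimmed = eeg[:, :n_windows * win]
    return trimmed.reshape(eeg.shape[0], n_windows, win).transpose(1, 0, 2)

eeg = np.random.randn(N_CHANNELS, 10_000)   # stand-in for a raw 4-channel recording
windows = segment_eeg(eeg)
print(windows.shape)
```

Each window would then be paired with the PSI value recorded for that time span to form one regression sample.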

#### 2.2. EEG Signals Preprocessing

#### 2.3. Evaluation Metrics

#### 2.4. Deep Learning Model

#### 2.4.1. Deep Residual Shrinkage Network

#### 2.4.2. 1 × 1 Convolution

#### 2.4.3. Proposed Regression Model

#### 2.5. Conventional Models

#### 2.5.1. Features Extraction

- Band Power

- Spectral Edge Frequency

- Sample Entropy

#### 2.5.2. Conventional Regression Models

- Support Vector Machine

- Random Forest

- Artificial Neural Network

## 3. Results

#### 3.1. Experimental Settings

#### 3.2. Experimental Results

## 4. Discussion

- The recorded raw EEG signals are usually contaminated by electrical noise and other physiological signals. We used bandpass finite impulse response (FIR) filters to remove electrical noise and the WT-CEEMDAN-ICA algorithm to extract clean EEG signals.
- We adopted deep learning models to extract discriminative features from EEG signals automatically instead of extracting features manually.
- To improve our proposed model’s generalization ability and convergence speed, we standardized the EEG signals.
- DRSN-CW can deal with signals disturbed by noise, which makes it suitable for EEG-signal processing.
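The bandpass filtering and standardization steps above can be sketched as follows; the band edges, filter order, and use of zero-phase filtering are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 178.2  # Hz, from the dataset description

def preprocess(eeg, fs=FS, band=(0.5, 47.0), numtaps=401):
    """Bandpass-FIR filter each channel, then z-score standardize it.
    Band edges and filter order here are assumptions for illustration."""
    taps = firwin(numtaps, band, pass_zero=False, fs=fs)
    # Zero-phase filtering avoids shifting the EEG waveform in time.
    filtered = filtfilt(taps, [1.0], eeg, axis=-1)
    mean = filtered.mean(axis=-1, keepdims=True)
    std = filtered.std(axis=-1, keepdims=True)
    return (filtered - mean) / std

eeg = np.random.randn(4, 2000)   # stand-in for a raw 4-channel segment
clean = preprocess(eeg)
```

After this step each channel has zero mean and unit variance, which is what helps the convergence speed mentioned above.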

## 5. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Hajat, Z.; Ahmad, N.; Andrzejowski, J. The role and limitations of EEG-based depth of anaesthesia monitoring in theatres and intensive care. Anaesthesia **2017**, 72, 38–47.
- Kent, C.; Domino, K.B. Depth of anesthesia. Curr. Opin. Anaesthesiol. **2009**, 22, 782–787.
- Fahy, B.G.; Chau, D.F. The technology of processed electroencephalogram monitoring devices for assessment of depth of anesthesia. Anesth. Analg. **2018**, 126, 111–117.
- Aydemir, E.; Tuncer, T.; Dogan, S.; Gururajan, R.; Acharya, U.R. Automated major depressive disorder detection using melamine pattern with EEG signals. Appl. Intell. **2021**, 51, 6449–6466.
- Loh, H.W.; Ooi, C.P.; Aydemir, E.; Tuncer, T.; Dogan, S.; Acharya, U.R. Decision support system for major depression detection using spectrogram and convolution neural network with EEG signals. Expert Syst. **2022**, 39, e12773.
- Tasci, G.; Loh, H.W.; Barua, P.D.; Baygin, M.; Tasci, B.; Dogan, S.; Acharya, U.R. Automated accurate detection of depression using twin Pascal’s triangles lattice pattern with EEG Signals. Knowl.-Based Syst. **2022**, 260, 110190.
- Xiao, G.; Shi, M.; Ye, M.; Xu, B.; Chen, Z.; Ren, Q. 4D attention-based neural network for EEG emotion recognition. Cogn. Neurodyn. **2022**, 16, 805–818.
- Liang, Z.; Wang, Y.; Sun, X.; Li, D.; Voss, L.J.; Sleigh, J.W.; Li, X. EEG entropy measures in anesthesia. Front. Comput. Neurosci. **2015**, 9, 16.
- Saadeh, W.; Khan, F.H.; Altaf, M.A.B. Design and implementation of a machine learning based EEG processor for accurate estimation of depth of anesthesia. IEEE Trans. Biomed. Circuits Syst. **2019**, 13, 658–669.
- Khan, F.H.; Ashraf, U.; Altaf, M.A.B.; Saadeh, W. A patient-specific machine learning based EEG processor for accurate estimation of depth of anesthesia. In Proceedings of the 2018 IEEE Biomedical Circuits and Systems Conference (BioCAS), Cleveland, OH, USA, 17–19 October 2018; pp. 1–4.
- Gonsowski, C.T. Anesthesia Awareness and the Bispectral Index. N. Engl. J. Med. **2008**, 359, 427–431.
- Drover, D.; Ortega, H.R. Patient state index. Best Pract. Res. Clin. Anaesthesiol. **2006**, 20, 121–128.
- Ji, S.H.; Jang, Y.E.; Kim, E.H.; Lee, J.H.; Kim, J.T.; Kim, H.S. Comparison of Bispectral Index and Patient State Index during Sevoflurane Anesthesia in Children: A Prospective Observational Study. Available online: https://www.researchgate.net/publication/343754479_Comparison_of_bispectral_index_and_patient_state_index_during_sevoflurane_anesthesia_in_children_a_prospective_observational_study (accessed on 3 November 2020).
- Li, P.; Karmakar, C.; Yearwood, J.; Venkatesh, S.; Palaniswami, M.; Liu, C. Detection of epileptic seizure based on entropy analysis of short-term EEG. PLoS ONE **2018**, 13, e0193691.
- Olofsen, E.; Sleigh, J.W.; Dahan, A. Permutation entropy of the electroencephalogram: A measure of anaesthetic drug effect. Br. J. Anaesth. **2008**, 101, 810–821.
- Liu, Q.; Ma, L.; Fan, S.Z.; Abbod, M.F.; Shieh, J.S. Sample entropy analysis for the estimating depth of anaesthesia through human EEG signal at different levels of unconsciousness during surgeries. PeerJ **2018**, 6, e4817.
- Esmaeilpour, M.; Mohammadi, A. Analyzing the EEG signals in order to estimate the depth of anesthesia using wavelet and fuzzy neural networks. Int. J. Interact. Multimed. Artif. Intell. **2016**, 4, 12.
- Ortolani, O.; Conti, A.; Di Filippo, A.; Adembri, C.; Moraldi, E.; Evangelisti, A.; Roberts, S.J. EEG signal processing in anaesthesia. Use of a neural network technique for monitoring depth of anaesthesia. Br. J. Anaesth. **2002**, 88, 644–648.
- Shalbaf, A.; Saffar, M.; Sleigh, J.W.; Shalbaf, R. Monitoring the depth of anesthesia using a new adaptive neurofuzzy system. IEEE J. Biomed. Health Inform. **2017**, 22, 671–677.
- Gu, Y.; Liang, Z.; Hagihira, S. Use of Multiple EEG Features and Artificial Neural Network to Monitor the Depth of Anesthesia. Sensors **2019**, 19, 2499.
- Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Dean, J. A guide to deep learning in healthcare. Nat. Med. **2019**, 25, 24–29.
- Lee, H.C.; Ryu, H.G.; Chung, E.J.; Jung, C.W. Prediction of bispectral index during target-controlled infusion of propofol and remifentanil: A deep learning approach. Anesthesiology **2018**, 128, 492–501.
- Afshar, S.; Boostani, R. A two-stage deep learning scheme to estimate depth of anesthesia from EEG signals. In Proceedings of the 2020 27th National and 5th International Iranian Conference on Biomedical Engineering (ICBME), Tehran, Iran, 26–27 November 2020.
- Castellanos, N.P.; Makarov, V.A. Recovering EEG brain signals: Artifact suppression with wavelet enhanced independent component analysis. J. Neurosci. Methods **2006**, 158, 300.
- Mammone, N.; La Foresta, F.; Morabito, F.C. Automatic artifact rejection from multichannel scalp EEG by wavelet ICA. IEEE Sens. J. **2012**, 12, 533–542.
- Torres, M.E.; Colominas, M.A.; Schlotthauer, G.; Flandrin, P. A complete ensemble empirical mode decomposition with adaptive noise. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
- Zhao, M.; Zhong, S.; Fu, X.; Tang, B.; Pecht, M. Deep residual shrinkage networks for fault diagnosis. IEEE Trans. Ind. Inform. **2019**, 16, 4681–4690.
- Lin, M.; Chen, Q.; Yan, S. Network in network. arXiv **2013**, arXiv:1312.4400.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Seeck, M.; Koessler, L.; Bast, T.; Leijten, F.; Michel, C.; Baumgartner, C.; Beniczky, S. The standardized EEG electrode array of the IFCN. Clin. Neurophysiol. **2017**, 128, 2070–2077.
- Alexandre, G. MEG and EEG data analysis with MNE-Python. Front. Neurosci. **2013**, 7, 267.
- Prerau, M.J.; Brown, R.E.; Bianchi, M.T.; Ellenbogen, J.M.; Purdon, P.L. Sleep neurophysiological dynamics through the lens of multitaper spectral analysis. Physiology **2017**, 32, 60–92.
- Obert, D.P.; Schweizer, C.; Zinn, S.; Kratzer, S.; Hight, D.; Sleigh, J.; Kreuzer, M. The influence of age on EEG-based anaesthesia indices. J. Clin. Anesth. **2021**, 73, 110325.
- Pincus, S.M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA **1991**, 88, 2297–2301.
- Richman, J.S.; Lake, D.E.; Moorman, J.R. Sample Entropy. In Methods in Enzymology; Elsevier: Amsterdam, The Netherlands, 2004; pp. 172–184.
- Vapnik, V. The Nature of Statistical Learning Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013.
- Rodriguez-Perez, R.; Vogt, M.; Bajorath, J. Support vector machine classification and regression prioritize different structural features for binary compound activity and potency value prediction. ACS Omega **2017**, 2, 6371–6379.
- Shahid, N.; Rappon, T.; Berta, W. Applications of artificial neural networks in health care organizational decision-making: A scoping review. PLoS ONE **2019**, 14, e0212356.
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Chintala, S. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. **2019**, 32, 8026–8037.
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Duchesnay, É. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. **2011**, 12, 2825–2830.
- Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H.; Subha, D.P. Automated EEG-based screening of depression using deep convolutional neural network. Comput. Methods Programs Biomed. **2018**, 161, 103–113.

**Figure 2.** The algorithm flowchart of the WT-CEEMDAN-ICA method used to remove EOAs from EEG signals.

**Figure 3.** The structure of the residual building block (RBB): (**a**) the identity block, where the input feature map is the same size as the output feature map; H, W, and C represent the height, width, and channels of the input and output feature maps, respectively. (**b**) the convolutional block, where the size of the input feature map differs from that of the output feature map; a convolution operation and a batch normalization operation in the convolutional shortcut change the shape of the input. ${\mathrm{H}}_{1}$, ${\mathrm{W}}_{1}$, and ${\mathrm{C}}_{1}$ represent the height, width, and channels of the input feature map, respectively; ${\mathrm{H}}_{2}$, ${\mathrm{W}}_{2}$, and ${\mathrm{C}}_{2}$ those of the output feature map. An RBB consists of two convolutional layers, two batch normalization (BN) layers, two rectified linear unit (ReLU) layers, and one shortcut connection.

**Figure 4.** The structure of the residual shrinkage building unit with channel-wise thresholds (RSBU-CW). ${\mathrm{H}}_{1}$, ${\mathrm{W}}_{1}$, and ${\mathrm{C}}_{1}$ represent the height, width, and channels of the input feature map, respectively; ${\mathrm{H}}_{2}$, ${\mathrm{W}}_{2}$, and ${\mathrm{C}}_{2}$ those of the output feature map. There is a soft thresholding module in the RSBU-CW. ${x}_{avg}$, z, and α are the indicators of the feature maps used to determine the threshold τ. x and y are the input and output feature maps of the soft thresholding module, respectively.
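The soft thresholding at the heart of the RSBU-CW is the elementwise shrinkage $y = \mathrm{sign}(x)\cdot\max(|x| - \tau, 0)$, applied with a separate threshold per channel. A minimal NumPy sketch is shown below; in the real network the thresholds are learned from ${x}_{avg}$, z, and α, whereas here they are supplied by hand for illustration.

```python
import numpy as np

def soft_threshold_cw(x, tau):
    """Channel-wise soft thresholding: y = sign(x) * max(|x| - tau, 0).
    x: (channels, length) feature map; tau: (channels, 1) thresholds,
    learned in the actual RSBU-CW but given directly here."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([[0.2, -1.5, 3.0],
              [-0.1, 0.6, -2.0]])
tau = np.array([[0.5],   # threshold for channel 0
                [0.4]])  # threshold for channel 1
print(soft_threshold_cw(x, tau))
```

Values whose magnitude falls below the channel's threshold are zeroed out, which is how the block suppresses noise-dominated features.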

**Figure 5.** An illustration of 1 × 1 convolution. H, W, and C represent the height, width, and channels of the input feature map, respectively. A 1 × 1 convolution changes the number of channels of its input but not the height or width.
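A 1 × 1 convolution is equivalent to multiplying each spatial position's channel vector by the same weight matrix; a minimal NumPy sketch (the shapes are chosen for illustration):

```python
import numpy as np

def conv1x1(x, w):
    """1 x 1 convolution as a per-pixel linear map over channels.
    x: (C_in, H, W) feature map; w: (C_out, C_in) weights.
    H and W are unchanged; only the channel count changes."""
    c_in, h, wdt = x.shape
    return (w @ x.reshape(c_in, -1)).reshape(w.shape[0], h, wdt)

x = np.random.randn(4, 8, 8)   # 4 input channels
w = np.random.randn(16, 4)     # map 4 channels -> 16
y = conv1x1(x, w)
print(y.shape)                 # (16, 8, 8)
```

This is why 1 × 1 convolutions are a cheap way to expand or compress the channel dimension between blocks.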

**Figure 6.** The structure of our proposed model, consisting of the DRSN-CW block and the 1 × 1 convolution block. The inputs of the model are the 4-channel EEG signals, and the outputs are the corresponding predicted PSI values.

**Figure 8.** The classification performances (ACC, SE, and F1) of all the models on the different anesthetized states (AW, LA, NA, and DA) and the regression performance (MSE) of all the models.

**Figure 9.** Part of the predicted PSI values of our proposed model. The red line represents the ideal prediction, where the predicted PSI values exactly equal the ground-truth PSI values.

**Figure 10.** The regression and classification performances of the two models in the ablation experiment on the soft thresholding module in the RSBU-CW.

**Figure 11.** The classification performances (ACC, SE, and F1) of all the models on the different anesthetized states (AW, LA, NA, and DA) and the regression performance (MSE) of all the models in cross-subject validation.

Metric | Formula | Description
---|---|---
MSE (Regression) | $\frac{1}{N}{\displaystyle \sum _{i=1}^{N}}{\left({\widehat{PSI}}_{i}-{PSI}_{i}\right)}^{2}$ | Mean Squared Error
ACC (Classification) | $\frac{TP+TN}{TP+FP+FN+TN}$ | Accuracy
SE (Classification) | $\frac{TP}{TP+FN}$ | Sensitivity
PR (Not used directly in this paper) | $\frac{TP}{TP+FP}$ | Precision
F1 (Classification) | $2\times \frac{SE\times PR}{SE+PR}$ | F1-score
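The metrics above can be computed directly from the predictions and a confusion matrix; a minimal sketch in plain Python/NumPy follows (the example numbers are arbitrary, not results from the paper):

```python
import numpy as np

def mse(psi_pred, psi_true):
    """Mean squared error between predicted and ground-truth PSI values."""
    psi_pred, psi_true = np.asarray(psi_pred), np.asarray(psi_true)
    return float(np.mean((psi_pred - psi_true) ** 2))

def classification_metrics(tp, tn, fp, fn):
    """ACC, SE, PR, and F1 from confusion-matrix counts, as in the table above."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn)          # sensitivity (recall)
    pr = tp / (tp + fp)          # precision, used only to compute F1
    f1 = 2 * se * pr / (se + pr)
    return acc, se, pr, f1

print(mse([80, 60], [82, 58]))            # -> 4.0
print(classification_metrics(8, 85, 2, 5))
```

For the multi-class case in this paper, these metrics would be computed once per anesthetized state and then macro-averaged.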

**Table 2.** The regression and classification results (mean ± STD) of our proposed model and three conventional models. The mean squared error (MSE) result is the average over five-fold cross-validation: all the samples are split into five groups, with four groups used as the training set and one group as the test set in each fold. The accuracy (ACC), sensitivity (SE), and F1-score (F1) results are macro-averaged (the metrics are computed independently for each anesthetized state and then averaged) over the 4 different anesthetized states.

Metrics | SVR | RF | ANN | Our Proposed Model
---|---|---|---|---
MSE | 166.02 ± 7.77 | 90.95 ± 4.88 | 109.20 ± 5.80 | 40.35 ± 3.22
ACC | 0.8596 ± 0.0574 | 0.8640 ± 0.0720 | 0.8606 ± 0.0380 | 0.9503 ± 0.0224
SE | 0.4825 ± 0.3391 | 0.6685 ± 0.1266 | 0.5650 ± 0.2801 | 0.8411 ± 0.0790
F1 | 0.4750 ± 0.2941 | 0.6770 ± 0.0840 | 0.5901 ± 0.2337 | 0.8395 ± 0.0812

**Table 3.** The regression and classification results (mean ± STD) of our proposed model and three conventional models in cross-subject validation.

Metrics | SVR | RF | ANN | Our Proposed Model
---|---|---|---|---
MSE | 173.22 ± 8.56 | 97.56 ± 6.88 | 133.49 ± 5.40 | 49.22 ± 4.62
ACC | 0.7908 ± 0.1187 | 0.8420 ± 0.0765 | 0.8216 ± 0.0947 | 0.9203 ± 0.0470
SE | 0.4675 ± 0.3391 | 0.6575 ± 0.1266 | 0.5700 ± 0.1414 | 0.8054 ± 0.0243
F1 | 0.4599 ± 0.2132 | 0.6670 ± 0.0821 | 0.5852 ± 0.1274 | 0.8070 ± 0.0306

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Shi, M.; Huang, Z.; Xiao, G.; Xu, B.; Ren, Q.; Zhao, H.
Estimating the Depth of Anesthesia from EEG Signals Based on a Deep Residual Shrinkage Network. *Sensors* **2023**, *23*, 1008.
https://doi.org/10.3390/s23021008
