Article

A Comparative Study of Convolutional Neural Network and Recurrent Neural Network Models for the Analysis of Cardiac Arrest Rhythms During Cardiopulmonary Resuscitation

1 Department of Emergency Medicine, Korea University Anam Hospital, Seoul 02841, Republic of Korea
2 AI Center, Korea University College of Medicine, Seoul 02841, Republic of Korea
3 Institute for Healthcare Service Innovation, Korea University College of Medicine, Seoul 08308, Republic of Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2025, 15(8), 4148; https://doi.org/10.3390/app15084148
Submission received: 14 March 2025 / Revised: 29 March 2025 / Accepted: 7 April 2025 / Published: 9 April 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

To develop and evaluate deep learning models for cardiac arrest rhythm classification during cardiopulmonary resuscitation (CPR), we analyzed 508 electrocardiogram (ECG) segments (each 4 s in duration, recorded at 250 Hz) from 131 cardiac arrest patients. Compression-affected segments were recorded during chest compressions, while non-compression segments were extracted during compression pauses or immediately after return of spontaneous circulation (ROSC) declaration. One-dimensional convolutional neural network (1D-CNN) and recurrent neural network (RNN) models were employed for four binary classification tasks: (1) shockable rhythms (VF and pVT) versus non-shockable rhythms (asystole and PEA) in all ECG segments; (2) the same classification in compression-affected ECG segments; (3) pulse-generating rhythms (ROSC rhythm) versus non-pulse-generating rhythms (asystole, PEA, VF and pVT) in all ECG segments; and (4) the same classification in compression-affected ECG segments. The 1D-CNN model consistently outperformed the RNN model across all classification tasks. For shockable versus non-shockable rhythm classification, the 1D-CNN achieved accuracies of 91.3% and 89.8% for all ECG segments and compression-affected ECG segments, respectively, compared to 50.6% and 54.5% for the RNN. In detecting pulse-generating rhythms, the 1D-CNN demonstrated accuracies of 90.9% and 85.7% for all ECG segments and compression-affected ECG segments, respectively, while the RNN achieved 92.2% and 84.4%. The 1D-CNN model demonstrated superior performance in cardiac arrest rhythm classification, maintaining high accuracy even with compression-affected ECG data.

1. Introduction

Out-of-hospital cardiac arrest (OHCA) is a leading cause of mortality worldwide, with an estimated global incidence of 55 cases per 100,000 person-years [1]. The early and accurate diagnosis of the underlying cardiac rhythm is crucial for improving survival rates and neurological outcomes in cardiac arrest patients [2,3]. The initial heart rhythm determines the appropriate treatment strategy, such as defibrillation for shockable rhythms (ventricular fibrillation and pulseless ventricular tachycardia) or continued cardiopulmonary resuscitation (CPR) for non-shockable rhythms (asystole and pulseless electrical activity). Current methods for heart rhythm classification in cardiac arrest rely on the manual interpretation of electrocardiogram (ECG) signals by emergency medical services personnel or automated external defibrillators (AEDs). However, these approaches have significant limitations. Manual interpretation requires skilled personnel and is prone to human error [4], while existing automated algorithms often have suboptimal accuracy, especially in the presence of artifacts [5,6]. Moreover, the need to pause CPR for rhythm analysis can negatively impact patient outcomes, as chest compression interruptions lead to decreased cerebral and myocardial perfusion [7,8].
Recent advances in deep learning have shown promising results in ECG analysis and arrhythmia detection [9,10,11,12,13,14,15]. One-dimensional convolutional neural networks (1D CNNs) and recurrent neural networks (RNNs) have demonstrated superior performance in noise detection and ECG classification compared to traditional machine learning methods [9,11,12,13]. These algorithms can automatically learn hierarchical features from raw ECG signals and capture complex temporal patterns, enabling more accurate and efficient rhythm classification. In this context, some studies have explored the use of artificial intelligence for analyzing ECG rhythms during chest compressions to determine the need for defibrillation [16,17,18,19,20], whereas few studies have focused on pulse-generating rhythms during ongoing CPR [21]. However, the existing literature has paid little attention to two possibilities: (1) that pulse-generating rhythms (indicating the return of spontaneous circulation, ROSC) can be detected early, even during chest compressions; and (2) that shockable rhythms (among non-pulse-generating rhythms) can be detected early, regardless of chest compressions.
This study aims to address these gaps by developing and evaluating deep learning models for the real-time classification of heart rhythms in cardiac arrest using ECG data. By comparing the performance of 1D-CNN and RNN models on both compression-affected and non-compression ECG data, we aim to assess the feasibility of developing robust algorithms capable of accurate rhythm classification during ongoing resuscitation efforts in terms of the following: (1) pulse-generating rhythms (indicating ROSC) vs. non-pulse-generating rhythms early, even during chest compressions; and (2) shockable rhythms (among non-pulse-generating rhythms) vs. non-shockable rhythms early regardless of chest compressions. The successful implementation of such models could potentially reduce unnecessary pauses in CPR, facilitate timely defibrillation when appropriate and ultimately improve outcomes for cardiac arrest patients.

2. Materials and Methods

2.1. Data Collection and Processing

We conducted a single-center retrospective observational study using 508 four-second ECG segments collected from 131 patients in the emergency room of Anam Hospital from January 2021 to December 2022. ECG rhythms were acquired from patient monitors using Vital Recorder 1.15.2, a free research tool designed for the automatic recording of high-resolution, time-synchronized physiological data [22] (accessed 1 January 2021 at https://vitaldb.net/vital-recorder/). For each cardiac arrest patient, ECG rhythms were recorded continuously from the moment the patient arrived and was connected to the ECG monitor until either cardiopulmonary resuscitation was stopped because of death or the patient was transferred to the intensive care unit after successful resuscitation. This retrospective study was approved by the Institutional Review Board of Korea University Anam Hospital (IRB No. 2019AN0006). Informed consent was waived because the data were anonymized before analysis. All methods were conducted in accordance with relevant guidelines and regulations.
The ECG segments contained complex noise induced by patient movements, muscle activity and data transmission quality. Therefore, a preprocessing step was performed to enhance the quality of the ECG segments. In this study, we applied Butterworth filters, which are commonly used in the field of biomedical engineering, to remove noise while preserving important features of the ST segment and QRS complex. The cutoff frequencies were set from 0.05 Hz to 150 Hz [23,24]. Body and electrode movements, as well as breathing, can induce low-frequency noise components. To remove such baseline wandering, we applied the asymmetrically reweighted penalized least squares (arPLS) method to the ECG segments. This technique improves accuracy by automatically adjusting weights iteratively and can be applied to data with various spectral characteristics [25]. Missing values in the ECG segments were filled using the backward–forward interpolation technique.
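For illustration, a minimal Python sketch of this preprocessing pipeline is given below, using SciPy for the Butterworth filter, a direct implementation of the arPLS algorithm of Baek et al. [25] and pandas for backward–forward filling of missing values. The filter order, the arPLS smoothing parameter lam and the function names are illustrative assumptions rather than the study's actual settings; in addition, because the stated 150 Hz upper cutoff exceeds the 125 Hz Nyquist limit of 250 Hz data, the sketch caps the upper cutoff just below the Nyquist frequency.

```python
import numpy as np
import pandas as pd
from scipy import sparse, signal
from scipy.sparse.linalg import spsolve

FS = 250  # sampling rate of the ECG segments (Hz)

def bandpass_filter(ecg, low_cut=0.05, high_cut=150.0, order=4, fs=FS):
    # Butterworth band-pass filter; 150 Hz exceeds the 125 Hz Nyquist limit
    # at 250 Hz sampling, so the upper cutoff is capped below Nyquist here.
    high_cut = min(high_cut, 0.45 * fs)
    b, a = signal.butter(order, [low_cut, high_cut], btype="band", fs=fs)
    return signal.filtfilt(b, a, ecg)

def arpls_baseline(y, lam=1e5, ratio=1e-6, max_iter=50):
    # Asymmetrically reweighted penalized least squares (Baek et al. [25]):
    # the weights are adjusted iteratively so the fit follows the baseline drift.
    n = len(y)
    d = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    h = lam * (d.T @ d)
    w = np.ones(n)
    for _ in range(max_iter):
        z = spsolve((sparse.diags(w) + h).tocsc(), w * y)
        r = y - z
        r_neg = r[r < 0]
        if r_neg.size == 0:
            break
        m, s = r_neg.mean(), r_neg.std() + 1e-12
        w_new = 1.0 / (1.0 + np.exp(2.0 * (r - (2.0 * s - m)) / s))
        if np.linalg.norm(w - w_new) / np.linalg.norm(w) < ratio:
            break
        w = w_new
    return z  # estimated baseline wander

def preprocess_segment(raw):
    # Fill missing samples backward then forward, band-pass filter,
    # then subtract the estimated baseline.
    x = pd.Series(raw, dtype=float).bfill().ffill().to_numpy()
    x = bandpass_filter(x)
    return x - arpls_baseline(x)
```

Applying preprocess_segment to each 1000-sample segment would then yield filtered, baseline-corrected inputs of the kind described above.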
Each ECG segment comprised 1000 values recorded at 250 Hz over 4 s. This 4 s interval was chosen based on clinical and practical considerations. International resuscitation guidelines recommend reassessing the cardiac arrest rhythm every 2 min, with rhythm checks to be completed within 10 s [26]. In Anam Hospital’s emergency room, emergency medicine specialists typically took 6–8 s to confirm the initial rhythm or to reassess the rhythm during resuscitation. However, it was observed that noise from chest compressions could persist for 1–2 s after their cessation. Therefore, extracting 4 s ECG segments was determined to be the optimal choice. This duration allowed for the acquisition of both compression-affected and non-compression ECG segments, ensuring a uniform data length for all patients while accounting for the brief period of residual noise after compression cessation.
A skilled clinician interpreted and labeled the ECG segments recorded through Vital Recorder. The ECG segments were initially categorized into five primary rhythm types across two phases, resulting in ten categories. The five primary rhythm types included asystole, pulseless electrical activity (PEA), ventricular fibrillation (VF), pulseless ventricular tachycardia (pVT) and pulse-generating rhythm (indicating ROSC). Each rhythm type was extracted in a 1:1 ratio between compression-affected and non-compression ECG segments. The clinician selected a representative 4 s segment from the same CPR cycle for each rhythm. Compression-affected ECG segments were recorded during chest compressions, while non-compression ECG segments were extracted during compression pauses or immediately after ROSC declaration. Examples of asystole, PEA, VF, pVT and pulse-generating rhythm in both compression-affected and non-compression ECG segments are presented in Figure 1.
These labels were cross-verified with electronic medical records (EMRs). Pulse-generating rhythms in compression-affected ECG segments were identified based on EMRs, and we specifically selected the rhythm immediately after ROSC declaration. For pulse-generating rhythms in non-compression ECG segments, we chose segments showing organized electrical activity interspersed between compression artifacts, just prior to ROSC declaration. Organized electrical activity refers to regular, discernible QRS complexes, indicative of coordinated ventricular depolarization. Shockable rhythms, which require defibrillation, included VF and pVT. For these shockable rhythms, additional ECG segments were extracted when different patterns were observed within the same patient, acknowledging the variability in these rhythms.

2.2. Classification Tasks

Four binary classifications were considered:
  • Shockable rhythms (VF and pVT) versus non-shockable rhythms (asystole and PEA) in all ECG segments;
  • Shockable rhythms (VF and pVT) versus non-shockable rhythms (asystole and PEA) in compression-affected ECG segments;
  • Pulse-generating rhythms (ROSC rhythm) versus non-pulse-generating rhythms (asystole, PEA, VF and pVT) in all ECG segments;
  • Pulse-generating rhythms (ROSC rhythm) versus non-pulse-generating rhythms (asystole, PEA, VF and pVT) in compression-affected ECG segments.
The independent variables consisted of 1000 ECG lead II values over the 4 s duration for each segment.
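As a minimal illustration of how these four tasks can be derived from the labeled segments, the Python sketch below builds binary labels from a hypothetical metadata table; the column names (rhythm, compression_affected) and the placeholder rows are assumptions for illustration, not the study's actual data structure.

```python
import pandas as pd

# Hypothetical per-segment metadata: the clinician's rhythm label and whether
# the 4 s segment was recorded during chest compressions.
segments = pd.DataFrame({
    "rhythm": ["asystole", "PEA", "VF", "pVT", "ROSC"],        # placeholder rows
    "compression_affected": [True, False, True, True, False],
})

SHOCKABLE = {"VF", "pVT"}  # asystole and PEA are the non-shockable classes

def task_data(df, task):
    # Returns the relevant subset of segments and 0/1 labels for tasks 1-4.
    if task in (1, 2):
        # Shockable vs. non-shockable: pulse-generating (ROSC) segments excluded.
        sub = df[df["rhythm"] != "ROSC"]
        if task == 2:
            sub = sub[sub["compression_affected"]]
        y = sub["rhythm"].isin(SHOCKABLE).astype(int)
    else:
        # Pulse-generating (ROSC) vs. non-pulse-generating rhythms.
        sub = df if task == 3 else df[df["compression_affected"]]
        y = (sub["rhythm"] == "ROSC").astype(int)
    return sub, y.to_numpy()

x_task1, y_task1 = task_data(segments, task=1)
```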

2.3. Analysis

The one-dimensional convolutional neural network (1D-CNN) [27,28] and the recurrent neural network (RNN) [29,30] were trained for 100 epochs. The 508 ECG segments were divided into training and validation sets at a 75:25 ratio (381 vs. 127 ECG segments). Sensitivity, specificity, F1-score and accuracy were used as criteria for validating the trained models. Sensitivity is the proportion of true positives among real positives, whereas specificity is the proportion of true negatives among real negatives. The F1-score is the harmonic mean of sensitivity and specificity. Accuracy is the proportion of correct predictions (both true positives and true negatives) among all cases in the validation set. Here, a neural network is a network of “neurons”, i.e., information units combined through weights. Typically, a neural network has one input layer, one to three intermediate layers and one output layer. Neurons in one layer connect to neurons in the next layer through “weights”, which represent the strengths of those connections. This process starts from the input layer, continues through the intermediate layers and ends at the output layer (the feedforward operation). Learning then takes place: the weights are adjusted according to how much each contributed to the loss, i.e., the difference between the actual and predicted final outputs. This process starts from the output layer, continues through the intermediate layers and ends at the input layer (the backpropagation operation). The two operations are repeated until a predefined criterion for accurately predicting the dependent variable is met. In other words, the performance of the neural network improves as long as its learning continues [31].
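The validation criteria above can be computed directly from confusion-matrix counts, as in the following sketch, which uses the F1 definition adopted in this study (the harmonic mean of sensitivity and specificity); the function name and inputs are illustrative.

```python
import numpy as np

def validation_metrics(y_true, y_pred):
    # y_true, y_pred: arrays of 0/1 labels for the validation set.
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)        # true positives among real positives
    specificity = tn / (tn + fp)        # true negatives among real negatives
    # F1 as defined in this study: harmonic mean of sensitivity and specificity.
    f1 = 2 * sensitivity * specificity / (sensitivity + specificity)
    accuracy = (tp + tn) / len(y_true)  # correct predictions among all cases
    return sensitivity, specificity, f1, accuracy
```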
A deep neural network is a neural network with a large number of intermediate layers, e.g., 5, 10 or even 1000. Deep neural networks are referred to as “deep learning” because learning “deepens” toward more complex representations through various elements of the architecture design [31]. In particular, two types of deep learning models, convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have emerged as dominant architectures in the past decade. The CNN has convolutional layers, in which a kernel passes across the input data and performs “convolution”, that is, computes the dot product of its own elements and the corresponding elements of the input data. When the input data have both width and height, the kernel moves across both dimensions; this “2D convolution” detects shapes in the input data (e.g., a circle) that agree with the shape encoded by the kernel (e.g., a circle). When the input data have only one dimension, the kernel moves along that single dimension; this “1D convolution” responds to stretches of the input data that agree with the pattern encoded by the kernel. The operation of convolution helps the convolutional neural network detect specific characteristics of the input data, e.g., the form of a normal rhythm versus that of an arrhythmia. However, the CNN is subject to the vanishing-gradient problem: as it becomes deeper (its number of layers increases), the gradients of the loss with respect to the weights can quickly approach 0 [27,28,31].
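The following toy NumPy example illustrates 1D convolution as a sliding dot product between a kernel and successive windows of a one-dimensional input; the signal and kernel values are arbitrary and purely illustrative.

```python
import numpy as np

trace = np.array([0.0, 0.2, 1.0, 0.3, 0.0, -0.1, 0.0])  # toy one-dimensional input
kernel = np.array([-1.0, 2.0, -1.0])                     # toy peak-shaped pattern

# 1D convolution as used in CNN layers: slide the kernel along the input and
# take the dot product with each local window.
feature_map = np.array([
    np.dot(kernel, trace[i:i + len(kernel)])
    for i in range(len(trace) - len(kernel) + 1)
])
print(feature_map)  # largest where the local input shape matches the kernel
```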
The 1D CNN in this study consisted of the following layers (Figure 2): one 1D convolutional layer with 64 kernels of size 3 × 1 and a rectified linear unit (ReLU) activation function; another 1D convolutional layer with 64 kernels of size 3 × 1 and a ReLU activation function; one dropout layer with a rate of 50%; one max pooling layer of size 2 × 1; one dense layer with 100 output units and a ReLU activation function; and one dense layer with 2 output units and a softmax activation function. Categorical cross-entropy was used as the loss function. In the RNN, on the other hand, the current output depends, in a repetitive (or “recurrent”) pattern, on the current input and the previous hidden state (the network’s memory of what happened in all previous time steps) [29,30,31]. The RNN in this study, a long short-term memory (LSTM) network, comprised the following layers: one LSTM layer with a hyperbolic tangent activation function, a sigmoid recurrent activation function, a dropout rate of 0% and a recurrent dropout rate of 0%; and one dense layer with 2 output units and a softmax activation function. Categorical cross-entropy was again used as the loss function. TensorFlow 1.6.0 and Keras 2.2.0 were used for the analysis, which was conducted from 1 January 2024 to 31 December 2024 (accessed 1 January 2024 at https://github.com/fchollet/keras).
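A minimal Keras sketch of the two architectures described above follows. Several details are assumptions rather than reported settings: the input shape is taken as (1000, 1) following the 1000 lead II samples per segment (the note to Figure 2 mentions a shape of (1000, 3)), a Flatten layer is inserted between the pooling and dense layers, the LSTM layer width of 64 units is not stated in the text and the Adam optimizer is a placeholder. The sketch is written against tf.keras, whereas the study used Keras 2.2.0 with TensorFlow 1.6.0.

```python
from tensorflow.keras import layers, models

INPUT_SHAPE = (1000, 1)  # 4 s at 250 Hz; single-channel input assumed

def build_1d_cnn():
    # Layer stack as described in the text; the Flatten layer is an assumption
    # needed to connect the pooled feature maps to the dense layers.
    return models.Sequential([
        layers.Conv1D(64, 3, activation="relu", input_shape=INPUT_SHAPE),
        layers.Conv1D(64, 3, activation="relu"),
        layers.Dropout(0.5),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])

def build_lstm():
    # One LSTM layer (64 units assumed; not stated in the text) with tanh
    # activation, sigmoid recurrent activation and no dropout, then softmax.
    return models.Sequential([
        layers.LSTM(64, activation="tanh", recurrent_activation="sigmoid",
                    dropout=0.0, recurrent_dropout=0.0, input_shape=INPUT_SHAPE),
        layers.Dense(2, activation="softmax"),
    ])

for build in (build_1d_cnn, build_lstm):
    model = build()
    # Optimizer not reported in the text; Adam is a placeholder here.
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=100, validation_data=(x_val, y_val))
```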

3. Results

The descriptive statistics of age among the 131 participants were 28 (min), 61 (Q1), 71 (median), 79 (Q3) and 99 (max). The proportions of men and women were 85 (64.9%) and 46 (35.1%), respectively. Among the total 508 ECG recordings, including both compression and non-compression phases, the distribution was as follows: asystole, 104 (20.5%); PEA, 164 (32.3%); VF, 104 (20.5%); pVT, 20 (3.9%); and ROSC rhythm, 116 (22.8%). Table 1, Figure 3 and Figure 4 present the performance of the 1D-CNN and the RNN on oversampled data over 100 epochs. Here, oversampling denotes that the observations of shockable (or pulse-generating) rhythms were doubled for the corresponding detection task, mitigating class imbalance. The 1D-CNN registered comparable or better test accuracy than the RNN across the board (Table 1): 91.3 versus 50.6 for the detection of shockable rhythms in all ECG segments; 89.8 versus 54.4 for the detection of shockable rhythms in compression-affected ECG segments; 90.9 versus 92.2 for the detection of pulse-generating rhythms in all ECG segments; and 85.7 versus 84.4 for the detection of pulse-generating rhythms in compression-affected ECG segments. The results were similar in terms of test F1-scores, the harmonic means of test sensitivity and test specificity (Table 1): 90.8 versus 34.3 for the detection of shockable rhythms in all ECG segments; 89.6 versus 50.7 for the detection of shockable rhythms in compression-affected ECG segments; 89.1 versus 91.8 for the detection of pulse-generating rhythms in all ECG segments; and 85.1 versus 82.9 for the detection of pulse-generating rhythms in compression-affected ECG segments. In Figure 3 (shockable vs. non-shockable rhythm) and Figure 4 (pulse-generating vs. non-pulse-generating rhythm), the validation-set accuracy of the 1D CNN rises during the initial 25 epochs and then stabilizes over the remaining 75 epochs. A similar pattern can be observed for the RNN in Figure 4.
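A minimal sketch of the class-balancing step described above, i.e., duplicating the minority-class segments once before training, is given below; the function name and the random shuffling are illustrative assumptions.

```python
import numpy as np

def oversample_minority(x, y, seed=0):
    # Duplicate the minority-class segments once ("doubled", as described above)
    # and shuffle, to reduce class imbalance in the training data.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    idx = np.where(y == minority)[0]
    x_bal = np.concatenate([x, x[idx]], axis=0)
    y_bal = np.concatenate([y, y[idx]], axis=0)
    order = rng.permutation(len(y_bal))
    return x_bal[order], y_bal[order]
```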

4. Discussion

This study employed deep learning techniques, specifically 1D-CNN and RNN models, to classify cardiac arrest rhythms using 508 ECG segments from 131 cardiac arrest patients, including both in-hospital and out-of-hospital cases. The 1D-CNN model consistently outperformed the RNN, demonstrating higher test-set accuracies across various binary classifications. Notably, the 1D-CNN achieved test-set accuracies of 91.3% for shockable versus non-shockable rhythms, 89.8% for shockable versus non-shockable rhythms in compression-affected ECG segments, 90.9% for pulse-generating versus non-pulse-generating rhythms and 85.7% for pulse-generating rhythms versus non-pulse-generating rhythms in compression-affected ECG segments. The superior performance of the 1D-CNN, maintained in compression-affected ECG segments, highlights its potential for cardiac arrest rhythm classification during ongoing CPR. This capability could address a significant challenge in current resuscitation practices by potentially reducing interruptions in chest compressions for rhythm assessment.
The superior performance of the 1D-CNN over the RNN in analyzing ECG segments from cardiac arrest patients can be attributed to several factors. The architectural characteristics of the 1D-CNN are particularly well suited for ECG signal analysis [19,32,33,34,35,36]. Convolutional layers excel at capturing local patterns, which are crucial for rhythm classification [19,32]. Pooling layers contribute to noise reduction and feature abstraction, which is advantageous when dealing with noisy ECG signals during resuscitation [33,34]. Furthermore, the 1D-CNN typically requires fewer parameters due to its weight-sharing mechanism, leading to better generalization and reduced overfitting, especially when dealing with limited datasets [32,35]. This parameter efficiency allows the 1D-CNN to effectively capture crucial local patterns in ECG signals. The ability of the 1D-CNN to maintain high performance even with compression-affected ECG data underscores its robustness and suitability for this critical medical application, potentially enabling more continuous and accurate rhythm analysis during resuscitation efforts. This advantage of the 1D-CNN over various artificial intelligence models was confirmed with a mini-review of 18 artificial intelligence studies on CPR-ECG [36].
These insights add to the growing body of research on deep learning applications for cardiac rhythm interpretation [9,10,11,12,13,14,15]. While our results demonstrate the potential of 1D-CNN models for classifying cardiac rhythms in CPR-affected ECG segments under controlled research conditions, significant development would be required before clinical implementation. Current performance metrics do not yet meet established AHA standards for rhythm detection, particularly for shockable rhythms. Further multi-center validation with larger, more diverse datasets would be essential to establish generalizability and clinical utility. Nevertheless, our study addresses the challenging scenario of cardiac arrest rhythm classification during ongoing CPR, including both manual and mechanical compressions, and goes a step further by tackling the additional challenge of pulse-generating rhythm detection during CPR, which is crucial for guiding resuscitation efforts.
In many non-medical settings where CPR experts are not available, non-expert responders face significant challenges in determining whether defibrillation is required or in recognizing ROSC during cardiac arrest; in this context, the development of AI-based rhythm classification is critically important. Our study’s 1D-CNN model demonstrates the potential to provide context-aware guidance to non-experts by automatically classifying cardiac arrest rhythms. Such an approach could be integrated into emergency response tools, including automated external defibrillators, to support rapid and accurate decision-making in critical situations where expert interpretation is unavailable.
Despite the promising results, our study has several limitations that should be addressed in future research. Firstly, the data were collected from a single hospital, which may limit the model’s generalizability to different populations or healthcare settings. It should be noted that a previous study used 1889 cases from 172 participants in another hospital, applied a 1D CNN and achieved an F1-score of 73.2% for shockable vs. non-shockable rhythm classification [37]. The F1 performance in the current study, 90.8%, is higher than that of this previous study, 73.2%. Further examination is needed to establish generalizability in this direction. Secondly, the sample size of 508 ECG segments is modest in the context of deep learning applications, potentially affecting the robustness of our model when applied to more diverse datasets such as independent test sets. Thirdly, hyper-parameter tuning and regularization were beyond the scope of this study. Fourthly, our study did not account for the potential impact of different CPR quality levels on rhythm classification, which could be an important factor in real-world applications. To address these limitations, future research should focus on multi-center studies involving diverse patient populations. This approach would not only validate the robustness of these models across different clinical environments but also provide larger and more varied datasets. Fifthly, incorporating CPR quality metrics (e.g., compression depth, rate and fraction) into the model could yield significant benefits. This integration would not only improve classification accuracy but also provide valuable insights into how CPR quality affects ECG signal characteristics and pulse-generating rhythm prediction. Sixthly, explainable artificial intelligence, such as a gradient-weighted class activation map (Grad-CAM), is expected to further improve the performance of deep learning and expand the boundaries of knowledge on this topic [38]. By considering these factors, future models could offer more comprehensive and accurate assessments during resuscitation efforts, potentially leading to improved patient outcomes.

5. Conclusions

Our study demonstrates the potential of deep learning, particularly CNN models, for cardiac arrest rhythm analysis during CPR. By comparing the performance of 1D-CNN and RNN models on both compression-affected and non-compression ECG data, our study assessed the feasibility of developing robust algorithms capable of accurate rhythm classification during ongoing resuscitation efforts, i.e., detecting pulse-generating rhythms (indicating ROSC) versus non-pulse-generating rhythms early, even during chest compressions, and detecting shockable rhythms (among non-pulse-generating rhythms) versus non-shockable rhythms early, regardless of chest compressions. While further validation is needed, these findings lay the groundwork for developing decision support systems that can aid responders in time-sensitive situations, ultimately enhancing the chain of survival for cardiac arrest patients.

Author Contributions

Conceptualization, S.J.K.; methodology, S.L., K.-S.L. and S.J.K.; software, K.-S.L.; validation, S.L. and S.J.K.; formal analysis, K.-S.L. and S.J.K.; investigation, S.L. and K.-S.L.; resources, S.L. and S.J.K.; data curation, S.L., H.-J.P., K.S.H., J.S. and S.W.L.; writing—original draft preparation, S.L. and K.-S.L.; writing—review and editing, S.L., K.-S.L., H.-J.P. and S.J.K.; visualization, S.L. and K.-S.L.; supervision, K.-S.L. and S.J.K.; project administration, S.W.L. and S.J.K.; funding acquisition, S.J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Research Foundation of Korea (RS-2024-00344842). Su Jin Kim received this funding. The funder had no role in the design of the study, the collection, analysis and interpretation of the data and the writing of the manuscript.

Institutional Review Board Statement

This retrospective study was approved by the Institutional Review Board of Korea University Anam Hospital (IRB No. 2019AN0006). All methods were conducted in accordance with relevant guidelines and regulations.

Informed Consent Statement

The requirement for obtaining informed consent was waived owing to the retrospective nature of the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CPR: Cardiopulmonary resuscitation
ECG: Electrocardiogram
ROSC: Return of spontaneous circulation
PEA: Pulseless electrical activity
VF: Ventricular fibrillation
pVT: Pulseless ventricular tachycardia
1D-CNN: 1-dimensional convolutional neural network
RNN: Recurrent neural network
OHCA: Out-of-hospital cardiac arrest
AED: Automated external defibrillator
EMR: Electronic medical record
Grad-CAM: Gradient-weighted class activation map

References

  1. Berdowski, J.; Berg, R.A.; Tijssen, J.G.; Koster, R.W. Global incidences of out-of-hospital cardiac arrest and survival rates: Systematic review of 67 prospective studies. Resuscitation 2010, 81, 1479–1487. [Google Scholar] [CrossRef] [PubMed]
  2. Yan, S.; Gan, Y.; Jiang, N.; Wang, R.; Chen, Y.; Luo, Z.; Zong, Q.; Chen, S.; Lv, C. The global survival rate among adult out-of-hospital cardiac arrest patients who received cardiopulmonary resuscitation: A systematic review and meta-analysis. Critical Care 2020, 24, 1–13. [Google Scholar] [CrossRef]
  3. Ho, A.F.W.; Lee, K.Y.; Nur, S.; Fook, S.C.; Pek, P.P.; Tanaka, H.; Sang, D.S.; Chow, P.I.-K.; Tan, B.Y.-Q.; Lim, S.L. Association between conversion to shockable rhythms and survival with favorable neurological outcomes for out-of-hospital cardiac arrests. Prehospital Emerg. Care 2024, 28, 126–134. [Google Scholar] [CrossRef]
  4. Hammad, M.; Kandala, R.N.; Abdelatey, A.; Abdar, M.; Zomorodi-Moghadam, M.; San Tan, R.; Acharya, U.R.; Pławiak, J.; Tadeusiewicz, R.; Makarenkov, V. Automated detection of shockable ECG signals: A review. Inf. Sci. 2021, 571, 580–604. [Google Scholar] [CrossRef]
  5. Herlitz, J.; Bång, A.; Axelsson, Å.; Graves, J.R.; Lindqvist, J. Experience with the use of automated external defibrillators in out of hospital cardiac arrest. Resuscitation 1998, 37, 3–7. [Google Scholar] [CrossRef]
  6. Van Alem, A.P.; Sanou, B.T.; Koster, R.W. Interruption of cardiopulmonary resuscitation with the use of the automated external defibrillator in out-of-hospital cardiac arrest. Ann. Emerg. Med. 2003, 42, 449–457. [Google Scholar] [CrossRef]
  7. Cheskes, S.; Schmicker, R.H.; Christenson, J.; Salcido, D.D.; Rea, T.; Powell, J.; Edelson, D.P.; Sell, R.; May, S.; Menegazzi, J.J. Perishock pause: An independent predictor of survival from out-of-hospital shockable cardiac arrest. Circulation 2011, 124, 58–66. [Google Scholar] [CrossRef]
  8. Cheskes, S.; Schmicker, R.H.; Verbeek, P.R.; Salcido, D.D.; Brown, S.P.; Brooks, S.; Menegazzi, J.J.; Vaillancourt, C.; Powell, J.; May, S. The impact of peri-shock pause on survival from out-of-hospital shockable cardiac arrest during the Resuscitation Outcomes Consortium PRIMED trial. Resuscitation 2014, 85, 336–342. [Google Scholar] [CrossRef]
  9. Isin, A.; Ozdalili, S. Cardiac arrhythmia detection using deep learning. Procedia Comput. Sci. 2017, 120, 268–275. [Google Scholar] [CrossRef]
  10. Rajpurkar, P.; Hannun, A.Y.; Haghpanahi, M.; Bourn, C.; Ng, A.Y. Cardiologist-level arrhythmia detection with convolutional neural networks. arXiv 2017, arXiv:1707.01836. [Google Scholar]
  11. Rahman, S.; Pal, S.; Yearwood, J.; Karmakar, C. Robustness of Deep Learning models in electrocardiogram noise detection and classification. Comput. Methods Prog. Biomed. 2024, 253, 108249. [Google Scholar] [CrossRef] [PubMed]
  12. Ansari, Y.; Mourad, O.; Qaraqe, K.; Serpedin, E. Deep learning for ECG Arrhythmia detection and classification: An overview of progress for period 2017–2023. Front. Physiol. 2023, 14, 1246746. [Google Scholar] [CrossRef]
  13. Li, D.; Zhang, J.; Zhang, Q.; Wei, X. Classification of ECG signals based on 1D convolution neural network. In Proceedings of the 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom), Dalian, China, 12–15 October 2017; pp. 1–6. [Google Scholar]
  14. Sannino, G.; De Pietro, G. A deep learning approach for ECG-based heartbeat classification for arrhythmia detection. Future Gener. Comput. Syst. 2018, 86, 446–455. [Google Scholar] [CrossRef]
  15. Zhang, C.; Wang, G.; Zhao, J.; Gao, P.; Lin, J.; Yang, H. Patient-specific ECG classification based on recurrent neural networks and clustering technique. In Proceedings of the 2017 13th IASTED International Conference on Biomedical Engineering (BioMed), Innsbruck, Austria, 20–21 February 2017; pp. 63–67. [Google Scholar]
  16. Eftestøl, T.; Hognestad, M.A.; Søndeland, S.A.; Rad, A.B.; Aramendi, E.; Wik, L.; Kramer-Johansen, J. A Convolutional Neural Network Approach for Interpreting Cardiac Rhythms from Resuscitation of Cardiac Arrest Patients. In Proceedings of the 2023 Computing in Cardiology (CinC), Atlanta, GA, USA, 1–4 October 2023; pp. 1–4. [Google Scholar]
  17. Isasi, I.; Irusta, U.; Aramendi, E.; Olsen, J.-Å.; Wik, L. Detection of shockable rhythms using convolutional neural networks during chest compressions provided by a load distributing band. In Proceedings of the 2020 Computing in Cardiology, Rimini, Italy, 13–16 September 2020; pp. 1–4. [Google Scholar]
  18. Hajeb-M, S.; Cascella, A.; Valentine, M.; Chon, K. Deep neural network approach for continuous ECG-based automated external defibrillator shock advisory system during cardiopulmonary resuscitation. J. Am. Heart Assoc. 2021, 10, e019065. [Google Scholar] [CrossRef]
  19. Jekova, I.; Krasteva, V. Optimization of end-to-end convolutional neural networks for analysis of out-of-hospital cardiac arrest rhythms during cardiopulmonary resuscitation. Sensors 2021, 21, 4105. [Google Scholar] [CrossRef]
  20. Isasi, I.; Irusta, U.; Aramendi, E.; Eftestøl, T.; Kramer-Johansen, J.; Wik, L. Rhythm analysis during cardiopulmonary resuscitation using convolutional neural networks. Entropy 2020, 22, 595. [Google Scholar] [CrossRef]
  21. Sashidhar, D.; Kwok, H.; Coult, J.; Blackwood, J.; Kudenchuk, P.J.; Bhandari, S.; Rea, T.D.; Kutz, J.N. Machine learning and feature engineering for predicting pulse presence during chest compressions. R. Soc. Open Sci. 2021, 8, 210566. [Google Scholar] [CrossRef]
  22. Lee, H.-C.; Jung, C.-W. Vital Recorder—A free research tool for automatic recording of high-resolution time-synchronised physiological data from multiple anaesthesia devices. Sci. Rep. 2018, 8, 1527. [Google Scholar] [CrossRef]
  23. Altay, Y.A.; Kremlev, A.S.; Zimenko, K.A.; Margun, A.A. The effect of filter parameters on the accuracy of ECG signal measurement. Biomed. Eng. 2019, 53, 176–180. [Google Scholar] [CrossRef]
  24. Sohn, J.; Yang, S.; Lee, J.; Ku, Y.; Kim, H.C. Reconstruction of 12-lead electrocardiogram from a three-lead patch-type device using a LSTM network. Sensors 2020, 20, 3278. [Google Scholar] [CrossRef]
  25. Baek, S.J.; Park, A.; Ahn, Y.J.; Choo, J. Baseline correction using asymmetrically reweighted penalized least squares smoothing. Analyst 2015, 140, 250–257. [Google Scholar] [CrossRef] [PubMed]
  26. Panchal, A.R.; Bartos, J.A.; Cabañas, J.G.; Donnino, M.W.; Drennan, I.R.; Hirsch, K.G.; Kudenchuk, P.J.; Kurz, M.C.; Lavonas, E.J.; Morley, P.T. Part 3: Adult basic and advanced life support: 2020 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation 2020, 142, S366–S468. [Google Scholar] [CrossRef]
  27. Lee, K.-S.; Jung, S.; Gil, Y.; Son, H.S. Atrial fibrillation classification based on convolutional neural networks. BMC Med. Inform. Decis. Mak. 2019, 19, 1–6. [Google Scholar] [CrossRef]
  28. Lee, K.-S.; Park, H.-J.; Kim, J.E.; Kim, H.J.; Chon, S.; Kim, S.; Jang, J.; Kim, J.-K.; Jang, S.; Gil, Y. Compressed deep learning to classify arrhythmia in an embedded wearable device. Sensors 2022, 22, 1776. [Google Scholar] [CrossRef]
  29. Lee, K.S.; Park, K.W. Social determinants of the association among cerebrovascular disease, hearing loss and cognitive impairment in a middle-aged or older population: Recurrent neural network analysis of the Korean Longitudinal Study of Aging (2014–2016). Geriatr. Gerontol. Int. 2019, 19, 711–716. [Google Scholar] [CrossRef]
  30. Kim, R.; Kim, C.-W.; Park, H.; Lee, K.-S. Explainable artificial intelligence on life satisfaction, diabetes mellitus and its comorbid condition. Sci. Rep. 2023, 13, 11651. [Google Scholar] [CrossRef]
  31. Lee, K.-S.; Ahn, K.H. Application of artificial intelligence in early diagnosis of spontaneous preterm labor and birth. Diagnostics 2020, 10, 733. [Google Scholar] [CrossRef]
  32. Kiranyaz, S.; Ince, T.; Gabbouj, M. Real-time patient-specific ECG classification by 1-D convolutional neural networks. IEEE Trans. Biomed. Eng. 2015, 63, 664–675. [Google Scholar] [CrossRef]
  33. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M.; Gertych, A.; San Tan, R. A deep convolutional neural network model to classify heartbeats. Comput. Biol. Med. 2017, 89, 389–396. [Google Scholar] [CrossRef]
  34. Zahid, M.U.; Kiranyaz, S.; Ince, T.; Devecioglu, O.C.; Chowdhury, M.E.; Khandakar, A.; Tahir, A.; Gabbouj, M. Robust R-peak detection in low-quality holter ECGs using 1D convolutional neural network. IEEE Trans. Biomed. Eng. 2021, 69, 119–128. [Google Scholar] [CrossRef]
  35. Thanapol, P.; Lavangnananda, K.; Bouvry, P.; Pinel, F.; Leprévost, F. Reducing overfitting and improving generalization in training convolutional neural network (CNN) under limited sample sizes in image recognition. In Proceedings of the 2020-5th International Conference on Information Technology (InCIT), Online, 6–7 December 2020; pp. 300–305. [Google Scholar]
  36. Krasteva, V.; Didon, J.-P.; Ménétré, S.; Jekova, I. Deep learning strategy for sliding ECG analysis during cardiopulmonary resuscitation: Influence of the hands-off time on accuracy. Sensors 2023, 23, 4500. [Google Scholar] [CrossRef] [PubMed]
  37. Lee, S.; Jung, S.; Ahn, S.; Cho, H.; Moon, S.; Park, J.-H. Comparison of Neural Network Structures for Identifying Shockable Rhythm During Cardiopulmonary Resuscitation. J. Clin. Med. 2025, 14, 738. [Google Scholar] [CrossRef] [PubMed]
  38. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
Figure 1. Representative ECG examples of asystole, pulseless electrical activity, ventricular fibrillation, pulseless ventricular tachycardia and pulse-generating rhythm in compression-affected and non-compression electrocardiogram (ECG) segments.
Figure 2. One-dimensional CNN design. Note: 1 the input of the shape (1000, 3); 2 the convolutional layer with 64 filters of size 3 and the activation function of the rectified linear unit; 3 the dropout layer of 50%; 4 the max pooling layer of size 2; 5 the dense layer with 100 output neurons and the activation function of the rectified linear unit; 6 the dense layer with 2 output neurons and the activation function of softmax.
Figure 3. RNN vs. 1D-CNN accuracy for shockable vs. non-shockable rhythms. (A) Performance in all ECG segments. (B) Performance in compression-affected ECG segments. ECG—electrocardiogram; RNN—recurrent neural network; CNN—convolutional neural network.
Figure 4. RNN vs. 1D-CNN accuracy for pulse-generating rhythm vs. non-pulse-generating rhythm. (A) Performance in all ECG segments. (B) Performance in compression-affected ECG segments. ECG—electrocardiogram; RNN—recurrent neural network; CNN—convolutional neural network.
Table 1. Model performance: 1D-CNN vs. RNN after 100 epochs.
Shockable Rhythms vs. Non-Shockable Rhythms
        Sensitivity   Specificity   F1-Score   Accuracy
CNN     98.0          84.5          90.8       91.3
RNN     22.2          75.8          34.3       50.6
Performance in compression-affected ECG segments
CNN     88.6          90.7          89.6       89.8
RNN     40.9          66.7          50.7       54.4
Pulse-Generating Rhythms vs. Non-Pulse-Generating Rhythms
        Specificity   Sensitivity   F1-Score   Accuracy
CNN     100           80.3          89.1       90.9
RNN     97.4          86.8          91.8       92.2
Performance in compression-affected ECG segments
CNN     90.2          80.6          85.1       85.7
RNN     92.7          75.0          82.9       84.4
Note: all values are presented as percentages. CNN—convolutional neural network; RNN—recurrent neural network; ECG—electrocardiogram. Sensitivity is the proportion of true positives among real positives, whereas specificity is the proportion of true negatives among real negatives. F1-score is the harmonic mean of sensitivity and specificity. Accuracy is the proportion of correct predictions (both true positives and true negatives) among all cases. Performance in compression-affected ECG segments refers to the model’s ability to classify cardiac arrest rhythms in the chest compression phase.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
