Potential of Rule-Based Methods and Deep Learning Architectures for ECG Diagnostics

The main objective of this study is to propose relatively simple techniques for the automatic diagnosis of electrocardiogram (ECG) signals based on a classical rule-based method and a convolutional deep learning architecture. The validation task was performed in the framework of the PhysioNet/Computing in Cardiology Challenge 2020, where seven databases consisting of 66,361 recordings of 12-lead ECGs were considered for the training, validation and test sets. A total of 24 different diagnostic classes are considered in the entire training set. The rule-based method uses morphological and time-frequency ECG descriptors, defined for each diagnostic label. These rules were extracted from the knowledge base of a cardiologist or from a textbook, with no direct learning procedure in the first phase, whereas a refinement was tested in the second phase. The deep learning method considers both raw ECG and median beat signals. These data are processed via continuous wavelet transform analysis, obtaining a time-frequency domain representation and generating specific images (ECG scalograms). These images are then used to train a convolutional neural network based on the GoogLeNet topology for ECG diagnostic classification. Cross-validation was performed for testing purposes. A total of 217 teams submitted 1395 algorithms during the Challenge. Our algorithm produced a challenge validation score of 0.325 (CPU time = 35 min) for the rule-based method and of 0.426 (CPU time = 1664 min) for the deep learning method, which resulted in our team attaining 12th place in the competition.


Introduction
The automatic detection and classification of cardiac abnormalities from 12-lead ECG signals has long been an area of research interest [1]. Methods have ranged from medical decision-support systems to statistical approaches, and from simple neural network architectures to more sophisticated methods based on deep neural networks [1][2][3]. There has been much focus on research employing deep learning for medical images [4], time series classification [5] and object detection [6]. In [7], a deep recurrent neural network approach was developed and tested for the classification of four severity levels of atrial fibrillation (AF) based on 21 features. The use of continuous wavelet transforms (CWTs) for ECG signal processing is present in several studies; for example, in [8] the CWT was considered for multiscale parameter estimation for the delineation of the fiducial points of the P-QRS-T waves.
Recent examples of diagnostic 12-lead ECG classification have been reported: a deep neural network for the classification of six diagnostic classes [3]; the analysis of 12-lead ECG signals based on deep learning for the classification of four types of arrhythmia [9]; and a deep learning model tested on a database of 6788 12-lead ECG records for the identification of nine diagnostic classes [10].
Consequently, many algorithms may be used to identify cardiac abnormalities. However, most of these methods are trained, tested or developed in relatively small or homogeneous databases, and most of them focus on identifying a small number of cardiac arrhythmias that do not represent the full complexity of ECG classifications [11]. After a long series of interesting annual challenges, the PhysioNet/Computing in Cardiology Challenge 2020 provided the opportunity to address these problems, considering an extended set of diagnostic classes and a set of learning/testing ECG records belonging to different databases [11][12][13].
The main objective of this study was to test two different techniques for the automatic classification of ECG signals with active participation in the PhysioNet/Computing in Cardiology Challenge 2020. In particular, the classical rule-based system method, as well as a more sophisticated technique based on direct learning from ECG raw data through deep learning architectures, are explored and compared in the same framework.

ECG Database
The PhysioNet/Computing in Cardiology Challenge 2020 provided a training set of six databases (Table 1) with 43,101 annotated recordings of 12-lead ECGs, lasting from 6 to 60 s [11][14][15][16][17]. In addition, the Challenge involved a seventh, undisclosed dataset of 10,000 records from an American institution geographically distinct from the other datasets, which was used as a test set. A total of 23,260 ECG records, kept hidden, were used for the validation (6630) and test (16,630) procedures. The heterogeneity of the databases, covering different sets of diagnostic classes, is evident in Table 1. The initial 111 diagnoses or classes were further reduced to the 27 diagnostic classes considered in the Challenge scoring system (see Table 2 for a full list of the diagnoses and codes); these reduce to 24 when three pairs of equivalent classes are merged.
The composition and the number of annotated diagnostic classes of the six considered datasets for the learning phase are reported in Table 1. This large training dataset consists of a total of 43,101 ECG recordings and 60,373 diagnostic instances, corresponding to a mean of 1.4 diagnostic classes per record. Table 2 shows the distribution of the 24 different diagnostic classes considered in the entire learning set. As can be seen in this table, the number of training instances of the various diagnostic classes is not uniform, with an evident class imbalance. For example, the NSR class is present in 20,846 records, whereas the Bradycardia class consists of only 288 instances. In addition, there are seven diagnostic classes (29.2%: AFL, Brady, PR, LPR, RAD, LQRSV, PVC) with fewer than 600 records and 16 classes (66.7%) with more than 1000 records. For this reason, a strategy of under-sampling toward a more uniform distribution of the classes was adopted. For the selection of representative learning subsets, ECG records were selected at random with at most N_max instances for each considered class. Three values of N_max were considered and tested: 600, 1000 and 1500, obtaining the learning subsets LS_N600, LS_N1000 and LS_N1500, as described in Table 3. The weighted distribution of the learning subset LS_N1000, consisting of 16,002 ECG records, is reported in the rightmost column of Table 2, which shows a somewhat more balanced distribution for the learning phase. All ECG data were resampled at 500 Hz (if necessary) for compatibility purposes.
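The per-class cap used to build the learning subsets can be sketched as follows; a minimal Python sketch, in which the function name, the record representation and the skip rule for multi-label records are illustrative assumptions, not the original implementation:

```python
import random

def build_learning_subset(records, n_max, seed=0):
    """Randomly select ECG records so that each diagnostic class
    contributes at most n_max instances to the learning subset.

    records: list of (record_id, set_of_class_labels) pairs.
    A record is kept only if none of its classes has already
    reached n_max (an illustrative choice for multi-label records).
    """
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)          # random selection order
    counts = {}                    # instances accepted per class
    subset = []
    for rec_id, labels in shuffled:
        if all(counts.get(lbl, 0) < n_max for lbl in labels):
            subset.append(rec_id)
            for lbl in labels:
                counts[lbl] = counts.get(lbl, 0) + 1
    return subset
```

With N_max = 600, 1000 and 1500, such a selection yields subsets analogous to LS_N600, LS_N1000 and LS_N1500.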

Preprocessing
The ECG recordings were filtered to suppress the power-line interference, the drift of the isoelectric line and the electromyographic noise. QRS detection was performed via the identification of significant peaks of the spatial velocity (the absolute value of the first derivative of one or more leads), using combined thresholds, with the subsequent identification of the R waves and the computation of the heart rate [18]. Then, a robust average beat is calculated, with reference to the positive (R) or negative (S) peak with the highest amplitude, through signal-averaging of the sustained beats in the record. The rejected outliers are suspected to be artefacts or abnormal beats with non-sustained amplitudes.
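The spatial-velocity idea can be illustrated with a minimal numpy sketch; the fixed fraction-of-maximum threshold and the simple refractory period are illustrative simplifications, not the combined thresholds of [18]:

```python
import numpy as np

def detect_qrs(ecg_leads, fs=500, thr_frac=0.6, refractory_s=0.2):
    """Detect QRS complexes from the spatial velocity signal.

    ecg_leads: (n_leads, n_samples) array.
    The spatial velocity is the sum over leads of the absolute
    first derivative; local maxima above a threshold (here a fixed
    fraction of the global maximum, for simplicity), separated by
    a refractory period, are taken as QRS locations.
    """
    sv = np.sum(np.abs(np.diff(ecg_leads, axis=1)), axis=0)
    thr = thr_frac * sv.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(sv) - 1):
        if sv[i] >= thr and sv[i] >= sv[i - 1] and sv[i] >= sv[i + 1] \
                and i - last >= refractory:
            peaks.append(i)
            last = i
    return np.array(peaks)

def heart_rate_bpm(peaks, fs=500):
    """Mean heart rate from the detected QRS positions."""
    rr = np.diff(peaks) / fs            # RR intervals in seconds
    return 60.0 / rr.mean() if len(rr) else float("nan")
```

The detected peak positions then give the RR intervals and hence the heart rate.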

Rule-Based Classifier
Manual interpretation of the electrocardiogram is time-consuming and requires skilled personnel with a high degree of training [11]. Although the knowledge of an expert is complex to formalize, we tried to develop a rule-based method that mimics some simple rules. This method uses morphological and time-frequency ECG descriptors characterizing each diagnostic label. These rules were extracted from the knowledge base of a physician or from a textbook [19], with no direct learning procedure in the first phase, although a refinement was tested in the second phase.
After QRS detection and computation of the median of all beats, the next step considered the delineation of QRS-onsets and QRS-offsets, and the identification of T_end and heartbeat classification [20,21]. In addition, some parameters were computed in the derived vectorcardiographic (VCG) signal [22,23].
The main parameters, computed using classical algorithms, are reported in Table 4. Figure 1 reports some components of the signal processing for the detection of AF/AFL, with the identification of the zero-line crossings of the first derivative. Figure 2 reports two examples of QRS segmentation that resemble an 'M'-shaped QRS, which is a particular step for the detection of RBBB.

positive/negative_P_wave: a positive/negative P wave is detected in the presence of a positive/negative wave in the interval [QRS_onset − 400 ms, QRS_onset − 40 ms] with a significant peak of 20 µV within an interval of ±40 ms.

opp_ST_T: displacement of the ST-T wave in opposition to the major deflection of the QRS complex in V5.

From these parameters, the considered diagnostic rules are represented and described in the following "if-then" format:
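As an illustration of this "if-then" format, a minimal sketch with hypothetical descriptor names and thresholds (not the actual rules of the method):

```python
def classify_rule_based(d):
    """Toy "if-then" diagnostic rules over a dict of ECG descriptors.

    Descriptor names and thresholds are illustrative only.
    """
    labels = set()
    # rhythm rules
    if d["p_wave_present"] and 60 <= d["heart_rate"] <= 100 and d["rr_regular"]:
        labels.add("NSR")
    if not d["p_wave_present"] and not d["rr_regular"]:
        labels.add("AF")
    if d["heart_rate"] < 60 and d["p_wave_present"] and d["rr_regular"]:
        labels.add("Brady")
    # conduction rule: wide QRS with an 'M'-shaped complex suggests RBBB
    if d["qrs_duration_ms"] >= 120 and d.get("m_shape_qrs", False):
        labels.add("RBBB")
    return labels or {"Other"}
```

Each rule maps a conjunction of descriptor conditions directly to one of the diagnostic labels, so a record may receive several labels, matching the multi-label nature of the task.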

Deep Learning Network Classifier
The deep learning method considers both raw ECG signals and the previously computed median beat signals. It is composed of a continuous wavelet transform (CWT) stage followed by a convolutional neural network (CNN). The input data of the CWT stage are the concatenation of two components:

•	Concatenation of 10 s of the ECG signals of the eight independent leads;
•	Concatenation of the average beats computed previously by the rule-based method.
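The composition of this input can be sketched as follows (a numpy sketch; taking I, II and V1-V6 as the eight independent leads is the standard choice, but the exact lead ordering and window placement are assumptions):

```python
import numpy as np

def build_cwt_input(ecg_12lead, median_beats, fs=500, window_s=10):
    """Concatenate 10 s of the eight independent leads with the
    corresponding median (average) beats into one 1D signal.

    ecg_12lead: (12, n_samples); median_beats: (12, beat_len).
    Leads I, II, V1-V6 (indices 0, 1, 6..11) carry independent
    information; III, aVR, aVL and aVF are linear combinations.
    """
    idx = [0, 1, 6, 7, 8, 9, 10, 11]
    n = fs * window_s
    raw = np.concatenate([ecg_12lead[i, :n] for i in idx])
    beats = np.concatenate([median_beats[i] for i in idx])
    return np.concatenate([raw, beats])
```

The resulting 1D signal is what the CWT stage transforms into a 2D time-frequency image.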
These data are processed by the CWTs, obtaining a time-frequency domain representation, with the generation of specific 2D images. These images are then used for the training of a CNN network for ECG diagnostic classification. A pretrained image CNN classification network that has already learned to extract powerful and informative features from natural images has been used as a starting point to train the specific classifier for 24 classes [24].
The CWT transforms the selected window of ECG signals into a time-frequency representation, which composes a 2D image. In particular, the absolute values of the CWT coefficients of the considered ECG signal are taken, obtaining the so-called scalogram [24]. The Matlab function cwtfilterbank was used to create the continuous wavelet transform filter bank, using a family of exactly analytic wavelets (Morse wavelets), with symmetry = 3 and time-bandwidth product = 60. In addition, the CWT scales are discretized using 12 voices per octave. Two examples of scalograms are shown in Figure 3a (atrial fibrillation, AF) and Figure 3b (normal sinus rhythm, NSR), where the x-axis represents time, the y-axis frequency and the color map the magnitude.
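A minimal Python sketch of the scalogram computation, using a numpy-only complex Morlet wavelet as a stand-in for the analytic Morse wavelet of the Matlab cwtfilterbank pipeline (the scale range, number of scales and normalization are illustrative):

```python
import numpy as np

def morlet(t, w0=6.0):
    """Complex Morlet mother wavelet (admissibility correction omitted)."""
    return np.pi ** -0.25 * np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2)

def ecg_scalogram(signal, fs=500, n_scales=48, w0=6.0):
    """Scalogram (|CWT| image) of an ECG segment by direct convolution.

    Rows are scales (frequency), columns are time samples. The
    returned frequencies use the standard Morlet scale-to-frequency
    relation f = w0 * fs / (2 * pi * scale).
    """
    scales = np.geomspace(2, 64, n_scales)        # log-spaced scales
    out = np.empty((n_scales, len(signal)))
    for k, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1) / s      # wavelet support
        psi = morlet(t, w0) / np.sqrt(s)
        out[k] = np.abs(np.convolve(signal, np.conj(psi)[::-1], mode="same"))
    freqs = w0 * fs / (2 * np.pi * scales)
    return out, freqs
```

The magnitude image can then be rescaled to 224 × 224 and mapped through a color map to produce the RGB input expected by GoogLeNet.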
Two pretrained networks, GoogLeNet and SqueezeNet, were considered. These models were pretrained on a subset of the ImageNet database (www.ImageNet.org, accessed on 1 August 2021), which was used in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [24,25]. Both networks, trained on ImageNet, can classify images into thousands of object categories, having learned rich feature representations for a wide range of images. Thanks to the generalization property common to neural networks, an appropriate learning procedure can force the networks to classify images of a different domain, produced by the CWT block, into 24 diagnostic classes. Both networks were tested in the first phase of the Challenge: SqueezeNet showed a faster training procedure, whereas GoogLeNet presented better performance, and thus the latter was used in the official phase. GoogLeNet is a 22-layer CNN pretrained to classify images into 1000 object categories. Each layer can be considered a filter; the first layers capture common features, whereas the deeper ones capture the more specific features needed to differentiate between the considered diagnostic classes.
The learning procedure is characterized by an initial learning rate of 0.0001, a mini-batch size in the interval [30, 50] selected as a factor of the number of elements in the learning set, a variable number of iterations in the various experiments, and the use of the stochastic gradient descent optimization algorithm with momentum (0.9).
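The update rule of stochastic gradient descent with momentum can be sketched as follows (numpy sketch; the quadratic toy objective and its larger learning rate are illustrative only, the paper's initial learning rate being 0.0001):

```python
import numpy as np

def sgdm_step(w, v, grad, lr=1e-4, momentum=0.9):
    """One SGDM update: v <- momentum*v - lr*grad ; w <- w + v."""
    v = momentum * v - lr * grad
    return w + v, v

# toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is w
# (lr=0.01 here just to make the toy converge quickly)
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(2000):
    w, v = sgdm_step(w, v, grad=w, lr=0.01, momentum=0.9)
```

The momentum term accumulates past gradients, damping oscillations and speeding convergence compared with plain gradient descent.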
Specific techniques were implemented during the training phase of the GoogLeNet CNN in order to cope with the multi-label, multi-class nature of the task. Among the different techniques for handling class imbalance [26], two data-level methods were used in the present study: random under-sampling, to reduce the size of the learning sets, and random over-sampling, to duplicate random samples from the minority classes.
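Random over-sampling of the minority classes can be sketched as follows (a minimal Python sketch; the dict-based data layout and the target size are illustrative):

```python
import random

def oversample_minority(samples_by_class, target, seed=0):
    """Random over-sampling: duplicate random samples of each
    minority class until it reaches `target` instances.

    samples_by_class: dict mapping class label -> list of samples.
    Classes already at or above `target` are left unchanged.
    """
    rng = random.Random(seed)
    out = {}
    for label, samples in samples_by_class.items():
        extra = [rng.choice(samples)
                 for _ in range(max(0, target - len(samples)))]
        out[label] = samples + extra
    return out
```

Combined with the per-class cap of the under-sampling step, this flattens the class distribution seen during training, at the cost of presenting duplicated minority samples.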
A schematic representation of the architecture of the GoogLeNet network is shown in Figure 4, where the input is a 2D jpeg image (224 × 224 × 3), and the output is represented by the scores or probabilities of the 24 considered ECG classes. An example of a trained CNN GoogLeNet is available at https://github.com/giovanni-ivaylo/cinc20.git (accessed on 1 August 2021) for a complete analysis of its structure.

Results and Discussion
The score indices of the first and second phases of the Challenge (validation scores) are defined and reported in [11]. In particular, based on the counts of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN), and on precision (TP/(TP + FP)) and recall (TP/(TP + FN)), the following indices were considered: F_1, the F-measure given by the harmonic mean of precision and recall, F_1 = 2·TP/(2·TP + FP + FN); and F_2, a more general F-measure that weighs recall more highly than precision, F_2 = 5·TP/(5·TP + FP + 4·FN).
In a preliminary phase, the learning process was based only on the CPSC database, consisting of 6877 ECG records with only nine possible diagnostic classes, with a consequent simplification of both the rule-based method and the architecture of the CNN. Table 5 displays the cross-validation indices of the tested algorithms on this preliminary dataset.
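The F-measures defined above are instances of the general F_β score; a minimal sketch (β = 1 recovers F_1, β = 2 the recall-weighted F_2):

```python
def f_beta(tp, fp, fn, beta=1.0):
    """General F-measure:
    F_beta = (1 + beta^2)*TP / ((1 + beta^2)*TP + FP + beta^2*FN).
    """
    b2 = beta ** 2
    denom = (1 + b2) * tp + fp + b2 * fn
    return (1 + b2) * tp / denom if denom else 0.0
```

For β = 2 the false negatives are weighted four times as heavily as the false positives, rewarding classifiers that miss few true diagnoses.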
In the official Challenge phase, the entire learning set of 43,101 ECG records was considered, and the number of diagnostic classes increased to 111. The Challenge scoring system essentially concentrated on a subset of 27 classes, considering the relevant diagnostic classes of clinical interest. A particular scoring system was defined by the Challenge to cope with the fact that not all misdiagnosed results are equally bad. In addition, a subset of 24 classes was activated in the identification process, considering three pairs of equivalent classes (CRBBB and RBBB, PAC and SVPB, PVC and VEB). During this official phase, the submissions were tested on the validation set of 6630 (1463 + 5167) records. To increase the efficiency of the learning process, the learning subsets LS_N1000 (16,002 records), LS_N600 (11,210 records) and LS_N1500 (20,044 records) were used in the testing procedures (Table 3). Table 2 shows the weighted distribution of the learning set LS_N1000 over the 24 diagnostic classes considered. The deep learning process was performed and tested using three-fold cross-validation; this choice was mainly dictated by the CPU time required for training. For example, a one-fold training iteration took from 15 to 24 h of CPU time. However, in the submitted algorithms, several platform-related problems slowed the training process; consequently, the learning was performed on a single fold to ensure an acceptable duration of the learning process and a more convenient feedback phase. Table 6 reports the official Challenge validation score of the submitted algorithms tested on the validation set of 6630 records. The rule-based method RB1 essentially did not use any learning process from the database LS_N1000, and its score was in agreement with the behavior of the first phase, whereas the second version (RB2) tried to extract some information from LS_N1000.
For example, RB2 tried to differentiate AF from AFL on the basis of the frequency and amplitude of the AF waves, but the consequent improvement was not significant. Different deep learning algorithms were submitted, with different learning subsets (LS_N1000, LS_N600, LS_N1500) and different numbers of iterations, but the scores (Table 6) were all in the range [0.400, 0.426], indicating that all these algorithms showed similar behavior. In particular, GoogLeNet_6 resumed the training from a previously saved pretrained network, obtained with three-fold cross-validation on LS_N1000 and 10 iterations. Table 7 displays the cross-validation indices trained and tested on the learning databases LS_N1000 and LS_N1500. It is interesting to note that the reported indices F_2, G_2 and the normalized score are in agreement with the official results, with somewhat more optimistic values, probably depending on the composition of the unknown test set. The final official results were announced considering the test set of 16,630 ECG records. Our team, named 'Gio_Ivo', submitted the deep learning method GoogLeNet_6 and achieved a challenge validation score of 0.426 and a full test score of 0.298, thus placing 12th out of 41 in the official ranking. In particular, Table 8 reports the various official validation score performance indices in the different hidden test/validation sets. The presence of a hidden, undisclosed set (10,000 ECG records) from an American institution geographically distinct from the other datasets caused a significant decrease in the Challenge score. This critical point shows the importance of the composition of the learning/testing sets. Table 9 shows the AUROC, AUPRC and F_1 scores for the considered diagnostic classes. In this table, we can observe the weak points of the classifier.
Three diagnostic classes had very low F_1 scores: Bradycardia (0.0), PR (0.05) and RAD (0.053), corresponding to the three classes with the lowest numbers of examples (288, 340 and 427, respectively) and to correspondingly low AUPRC values (0.001, 0.019 and 0.025, respectively). These results confirm the critical problem of class imbalance and show the limits of the random over-sampling technique.
The results clearly show that the deep learning architecture that directly examines raw ECG data and time-frequency images is able to produce satisfactory results.
Various teams that participated in the Physionet/Challenge considered the deep learning approach [27][28][29][30], showing a particular interest in this methodology. For example, the team with the highest score [27] considered both raw ECG data and ECG features extracted from ECG signals, including age and gender. A deep neural network with a modified residual neural network architecture was considered in [28], in which the scatter blocks processed the 12 leads separately. In [29], wavelet analysis and a convolutional network were used for each single lead, and a single output label was obtained, reducing the diagnostic categories to the individual and the most frequent combinations. In [30], the authors combined a rule-based model and a squeeze-and-excitation network.
Over recent years, there has been a rapid development of machine learning techniques, with a growing number of ECG classifiers [3,31]. These algorithms consider different sets of cardiac arrhythmias and small or relatively homogeneous datasets, reducing the possibility of a real comparison [11]. For example, the authors in [31] consider 12 classes and those in [3] consider six cardiac abnormalities, whereas the present work considers a set of 24 relevant diagnostic classes of clinical interest, making a direct comparison complex. Some characteristics of the proposed methods can be outlined. The rule-based method mimics the classification process of an expert physician and obtains the classification in a very short time. However, its accuracy and mimicking capability could be improved with significant effort, considering, for example, some active tuning from the learning database, with more modular rules and fuzzy thresholds. The deep learning method is characterized by a linear architecture fed only with raw ECG data, in which all the leads are examined simultaneously, considering a multi-label classifier with a large number of diagnostic classes and a positive behavior in the presence of a significant class imbalance. This method has the drawbacks of complexity and a long training time. The use of pretrained CNNs has simplified the training process; however, more specific deep learning architectures could improve the classification accuracy.

Conclusions
In the present study, we have explored the potential of a classical rule-based method and a deep learning architecture for the automatic classification of ECG signals. The two methods were tested and validated in the framework of the PhysioNet/Computing in Cardiology Challenge 2020, in which six annotated databases of 43,101 ECG records were considered for the training set. The training and validation databases contained a set of 27 relevant diagnostic classes of clinical interest, which represents the complexity and difficulty of ECG interpretation. A particular scoring system was defined by the Challenge judges because not all misdiagnosed classifications are equally bad.
The results of the two different techniques showed that deep learning methods which directly examine raw ECG data and images are able to produce very satisfactory results. In addition, this technique, although methodologically quite simple, has a high computational cost and performs better than the classical rule-based system.
The reported results showed that our team was able to complete the challenge steps with two different methods. The final official results of our team, performed using the deep learning GoogLeNet_6 approach, achieved a challenge validation score of 0.426 and a full test score of 0.298, resulting in our team placing 12th out of 41 in the official rankings. The PhysioNet/Computing in Cardiology Challenge 2020 has provided the opportunity for unbiased and comparable research for testing the complexity of 12-lead ECG classifiers with a large public training set, as well as undisclosed validation and test sets.
Topics open for future investigation include the analysis of class imbalance, multi-label datasets and unequal sample sizes, in addition to the combination of the two proposed methods.