Article

ECG Classification Using Orthogonal Matching Pursuit and Machine Learning

Faculty of Mechanical Engineering, Bydgoszcz University of Science and Technology, 85-796 Bydgoszcz, Poland
Sensors 2022, 22(13), 4960; https://doi.org/10.3390/s22134960
Submission received: 9 June 2022 / Revised: 26 June 2022 / Accepted: 28 June 2022 / Published: 30 June 2022

Abstract

Health monitoring and related technologies are a rapidly growing area of research. To date, the electrocardiogram (ECG) remains a popular measurement tool in the evaluation and diagnosis of heart disease. The number of solutions involving ECG signal monitoring systems is growing exponentially in the literature. In this article, the underestimated Orthogonal Matching Pursuit (OMP) algorithm is applied, demonstrating the significant effect of sparse representation parameters on the performance of the classification process. Cardiovascular disease classification models based on classical Machine Learning classifiers were defined and investigated. The study was undertaken on the recently published PTB-XL database, whose ECG signals were previously subjected to detailed analysis. Classification was performed for 2, 5, and 15 classes of cardiac diseases. A new method of detecting R-waves and, based on them, determining the location of QRS complexes is presented. Novel methods of aggregating ECG signal fragments containing QRS complexes, required as input for the classical classifiers, were developed. As a result, it was shown that an ECG signal subjected to R-wave detection, QRS complex extraction, and resampling performs very well in classification using Decision Trees. The reason lies in the structuring of the signal introduced by these operations. The classification achieved the highest Accuracy of 90.4% for 2 classes, compared with less than 78% for 5 classes and 71% for 15 classes.

1. Introduction

Cardiovascular disease (CVD) is an umbrella term for disorders of the heart and blood vessels. According to statistics released by the American Heart Association in 2019, CVDs have become the dominant global cause of death. In 2016, they accounted for over 17.6 million deaths (31% of global deaths), a number estimated to reach 23.6 million by 2030.
In clinical diagnostics, the electrocardiogram (ECG) is the most commonly used tool to assess cardiovascular function. The choice of ECG is based on its widespread availability, as well as its non-invasive nature, repeatability, and low cost of the exam. The idea of ECG measurement is to analyze the electrocardiographic signal, which reflects the change in electrical potential generated by the heart during each work cycle. During the ECG test, the frequency of contractions is determined, thus showing abnormal heartbeat rhythm and activity. This test helps diagnose many heart diseases that damage the function of the heart muscle, including arrhythmia, myocardial infarction, and coronary artery disease. Early detection helps prevent complications, such as an increased risk of stroke or sudden death.
Over the past decade, numerous attempts have been made to identify the ECG signal. This has been possible mainly due to the availability of large, public open-source ECG datasets. The literature indicates the application of various approaches to ECG signal classification. Existing ECG signal classification models can be divided into two main categories: classical methods and deep learning methods. Many proposed approaches explore the Accuracy of classification algorithms, such as Support Vector Machines (SVM), Naive Bayes classifier, k-nearest neighbors algorithm (kNN), Decision Trees (DT), and group classifiers. A common aspect of these algorithms is the need to extract features from the input ECG signal. These features are multimodal, e.g., temporal, frequency, and statistical. The magnitude of these features is of variable importance in recognizing different classes of arrhythmias and is used to train Machine Learning (ML) algorithms.
The success of ECG classification using classical ML methods depends largely on feature selection, which must be carefully designed for different algorithms. In addition, the input dataset on which the classification process is performed has a great influence. The second approach is based on deep learning techniques, which are increasingly used in computer-aided diagnosis of almost all diseases. Common deep learning networks used in ECG signal analysis are Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM) networks, as well as their combinations.
Most classification studies are performed using the MIT-BIH Arrhythmia database and the PTB Diagnostic ECG database. Classification is usually performed for two or five classes of arrhythmias. Among the classical methods, most studies used SVM classifiers [1,2,3] combined with genetic algorithms [4], the Wavelet Transform (WT) [5], or the Discrete Wavelet Transform [6,7]. The evaluation metric was most often Accuracy (ACC), with reported values of 91–93% for up to five classes of arrhythmias and between 95.92% and 99.66% for more than five classes. The authors also used models based on k-NN algorithms [7,8], taking into account prior extraction of morphological features of QRS complexes, and Decision Tree (DT) algorithms [9]. In these studies, five to seven different arrhythmias were classified, yielding an ACC of 99%. Convergent procedural scenarios are noticeable for deep learning networks. The authors undertook classification of a similar number of arrhythmia classes, mostly using CNN models [10,11]. The authors of [12] implemented a 1-D CNN to combine the feature extraction and classification process. A similar approach was used in [13], limited to classifying two arrhythmia classes and focused on myocardial infarction detection. In [14], the authors used two different CNNs trained with 2-s and 5-s segments of ECG data to classify atrial fibrillation, atrial flutter, ventricular fibrillation, and normal rhythms. Improving classification Accuracy was undertaken by the authors of [15] using the Short-Time Fourier Transform (STFT) and the Stationary Wavelet Transform (SWT) to feed a 2D CNN. A combination of CNN and LSTM was presented in [16] for detecting five types of heartbeats, relying on variable-length ECG segments for feature generation. Further work using LSTM models was proposed in [17,18], undertaking the classification of two and eight classes of arrhythmias, respectively; an Accuracy of 99% was achieved when the research focused on atrial fibrillation. Novel RNN architectures have also been successfully used to classify five types of ECG beats [19,20].
From the application perspective, ECG signal classification is important in remote patient monitoring devices, whose development and diffusion promote the prevention and treatment of cardiovascular diseases. Mobile solutions, i.e., small and discreet devices for long-term ECG monitoring, come with limitations, especially when their purpose is to measure, analyze, archive, and transmit real-time data containing clinical information. The importance of this area became particularly apparent during the SARS-CoV-2 (COVID-19) pandemic.
The correct interpretation of ECG signals is complex and clinically challenging, and misinterpretation can result in inappropriate treatment. Recommendations for standardization and interpretation of ECG are well known. However, the ubiquity of this test and the transition from analog to digital recordings have affected its detailed interpretation. Traditional approaches have increasingly focused on memorizing the morphological patterns of individual components of the ECG signal and associating them with a disease symptom. The idea seems to shift to automatic analysis of ECG signal fragments with the simultaneous classification of disease entities.
The process of diagnosing heart disease uses the information contained in electrocardiographic signals. The starting point in the evaluation of the ECG is the heart rate and the type of rhythm. The former is related to the rate at which one heartbeat follows another. The heart rhythm is the pattern in which the heart beats; it can be described as regular or irregular, fast or slow. A normal heart rhythm is called sinus rhythm. Its rate corresponds to the pulse, and accurate interpretation requires the evaluation of electrocardiographic signals. Various cardiovascular diseases can be detected with the help of the widely used electrocardiogram. Several parameters have clinical significance in the ECG, including the PR interval, QRS complex, ST segment, and QT interval.
The authors of previous studies often emphasized that data from single, small, or relatively homogeneous datasets, further limited by the small number of patients and rhythms, prevented the creation of reliable Machine Learning models. To some extent, the PTB-XL database [21,22], for which multi-class classification work is already known [23,24,25], has become a solution to the problem of data inaccessibility.
This study aimed to find the best possible classical Machine Learning classifiers for disease entities belonging to 2, 5, and 15 classes of heart disease. In addition, a new method for R-wave determination and QRS complex extraction was used. This method operates on the 12-lead signal, for which an estimate of the R-peak positions is generated using R-wave detection [23,24]. In this article, the Feature Selection Method approach [26] was used to perform the study in steps, based on finding the optimal parameters from a predefined set. Each stage was carried out for different classifier models, input data, dictionaries, and parameters, and four aggregation methods were developed. For this purpose, nine classical Machine Learning classifiers were studied in combination with the Orthogonal Matching Pursuit algorithm.
This article is organized as follows. After the introduction, Section 2 presents the research methodology. The characteristics of the databases and the methods used in the article are discussed. Then, the feature detection from ECG signal and the application of classical Machine Learning models are specified. Implementation details and experimental results are described in Section 3. The conclusion and discussion are given after that.

2. Materials and Methods

Based on Feature Selection Methods [26], the different classification steps were planned. The study aimed to find the optimal classification model for 2, 5, and 15 classes related to heart disease. The number of classes should be interpreted as follows: 2 classes—NORM class and others from PTB-XL database, 5 classes—disease classes from PTB-XL database, and 15 classes—subclasses of diseases from PTB-XL database.
The methodology used in this article was as follows (Figure 1): the PTB-XL dataset containing labeled 10-s ECG signal records was used for the study. First, the records in the database were filtered. Then, R-peaks were labeled in the raw signal, and the signal was segmented so that there was precisely one QRS complex in each segment. Next, the data were divided into training, validation, and test data (using cross-validation) and data for Dictionary Learning, and the dictionaries were created. Then, an Orthogonal Matching Pursuit operation was performed for the training data, resulting in the coefficients. The extracted QRS segments and coefficients were the input for classifiers of 2, 5, and 15 heart disease classes. In the last step, an evaluation was conducted and the effectiveness of the proposed methods was assessed.

2.1. PTB-XL Dataset

In this study, all ECG data used are from the PTB-XL dataset [21,22]. The PTB-XL database is a clinical ECG dataset adapted for evaluating Machine Learning algorithms. Initially, PTB-XL consists of 21,837 records, corresponding to 12-lead ECG recordings. Each ECG signal is 10 s long and annotated by cardiologists. PTB-XL data are balanced by gender. The database contains 71 heart disease types grouped into 5 relevant classes: normal ECG (NORM), myocardial infarction (MI), ST/T change (STTC), conduction disturbance (CD), and hypertrophy (HYP).
Figure 2 and Figure 3 show the detailed distribution of classes and subclasses used in the study. For Figure 2, the data include the number and the percentage of records. In contrast, for Figure 3, we are limited to the percentage of subclasses only.

2.2. Data Filtering

PTB-XL contained 21,837 ECG records. However, not all records have labels (assigned classes), and not all assigned classes have 100% confidence (in terms of the assigned medical diagnosis). For this reason, both cases were filtered out of the original dataset. Each remaining record has a specific class and subclass defining cardiovascular disease. Subclasses with fewer than 100 records were also filtered out. This yielded 17,011 records, each belonging to one of 5 classes and one of 15 subclasses. For this study, it was decided to use ECG records with a sampling rate of 500 Hz.
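The sketch below illustrates this filtering step. It is only a minimal example: the file and column names (ptbxl_database.csv with an scp_codes column, scp_statements.csv with a diagnostic_subclass column) follow the publicly documented PTB-XL metadata layout and are assumptions of this example, not a reproduction of the author's code.

```python
import ast
import pandas as pd

meta = pd.read_csv("ptbxl_database.csv", index_col="ecg_id")
scp = pd.read_csv("scp_statements.csv", index_col=0)

def first_confident_subclass(scp_codes_str):
    """Return the diagnostic subclass of the first SCP code with 100% likelihood."""
    codes = ast.literal_eval(scp_codes_str)
    for code, likelihood in codes.items():
        if likelihood == 100.0 and code in scp.index:
            subclass = scp.loc[code, "diagnostic_subclass"]
            if pd.notna(subclass):
                return subclass
    return None

meta["subclass"] = meta["scp_codes"].apply(first_confident_subclass)
meta = meta.dropna(subset=["subclass"])              # no confident diagnostic label

# Discard subclasses represented by fewer than 100 records.
counts = meta["subclass"].value_counts()
meta = meta[meta["subclass"].isin(counts[counts >= 100].index)]
```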

2.3. R-Peak Detection

The classification studies were preceded by detecting features from the ECG signal. P wave, QRS complex, and T wave are its main components. The QRS complex was considered the leading one, for which the R-peak detection algorithm was developed. For R-peak detection, it was decided to investigate well-known detectors, such as: Hamilton [27], Two Average [28], Stationary Wavelet Transform [29], Christov [30], Pan-Tompkins [31], and Engzee [32] with modification [33]. For a better illustration of the developed algorithm, Listing A1 shows its implementation written in Python.
The proposed algorithm was based on determining the positions and number of R-waves using all detectors for each ECG lead (Figure 4). The result was a list of detected R-peaks for each detector and lead. From this, a flat list of R-peak indices was created, covering all leads and detectors. Next, the algorithm determined the number of R-peaks in the examined ECG signal; for this purpose, the median of the per-detector R-peak counts was used. The last step was to determine the position of each R-peak. The k-means algorithm was used for this purpose: the flat list of R-peak indices served as training data, the determined number of R-peaks was used as the k value, and the cluster centers of the k-means algorithm were taken as the R-peak positions.
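A minimal sketch of this procedure is given below; it is a simplified illustration, not the author's exact implementation from Listing A1. The individual detector functions (Hamilton, Two Average, Christov, etc.) are assumed to be supplied externally, e.g., from an ECG detector library, as callables that map a single lead to a list of R-peak sample indices.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_r_peaks(ecg_12lead, detectors):
    """Estimate R-peak positions of a 12-lead ECG record.

    ecg_12lead : array of shape (12, n_samples)
    detectors  : list of callables, each mapping a 1-D lead signal
                 to an array of R-peak sample indices
    """
    all_peaks, counts = [], []
    for lead in ecg_12lead:
        for detect in detectors:
            peaks = np.asarray(detect(lead))
            counts.append(len(peaks))
            all_peaks.extend(peaks)

    # The number of R-peaks in the record is taken as the median detector count.
    n_peaks = int(np.median(counts))
    if n_peaks == 0 or len(all_peaks) == 0:
        return np.array([], dtype=int)

    # Cluster all detected indices; cluster centers become the R-peak positions.
    km = KMeans(n_clusters=n_peaks, n_init=10, random_state=0)
    km.fit(np.asarray(all_peaks, dtype=float).reshape(-1, 1))
    return np.sort(km.cluster_centers_.ravel()).astype(int)
```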
A similar approach, applying several different detectors (including Wavelet Transform based ones) to a variable number of leads, has been proposed in [34]. That approach combines three well-known algorithms operating on a 1-lead ECG: the Christov detector, Pan-Tompkins, and the Discrete Wavelet Transform. The aim of this procedure, as in the present work, was to obtain the combination of detectors that gives the highest Accuracy of R-wave detection.
The tests evaluating the Accuracy of R-wave detection in this article are presented in Appendix B. The tests covered both the 1-lead and the 12-lead ECG signal. Based on these results, the combination of the Two Average, Christov, and Engzee detectors, referred to as the Original Algorithm, was used in the rest of the research methodology.

2.4. QRS Extraction

The determined positions of the R-peaks were used to extract the segments containing the QRS complexes. For a 10-s signal fragment, this operation consisted of determining the midpoints between consecutive R-peaks and using them as segment boundaries. The first and last segments obtained in this way were discarded. With this procedure, the R-peak, and thus the QRS complex, always lies in the middle of the segment.
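A minimal sketch of this segmentation step is shown below. Each segment is additionally resampled to a fixed length, as the classifiers require fixed-size inputs; the target length of 128 samples is only an illustrative assumption of this example.

```python
import numpy as np
from scipy.signal import resample

def extract_qrs_segments(lead_signal, r_peaks, target_len=128):
    """Split one ECG lead into segments containing exactly one QRS complex each.

    Segment boundaries are the midpoints between consecutive R-peaks; the
    incomplete first and last segments are discarded, so the R-peak sits near
    the center of every returned segment.
    """
    r_peaks = np.sort(np.asarray(r_peaks))
    midpoints = (r_peaks[:-1] + r_peaks[1:]) // 2
    segments = [lead_signal[start:end]
                for start, end in zip(midpoints[:-1], midpoints[1:])]
    return np.array([resample(seg, target_len) for seg in segments])
```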

2.5. Description of the Implemented Method

Orthogonal Matching Pursuit was assumed to be the primary technique [35,36]. The Orthogonal Matching Pursuit (OMP) algorithm is an extension of the Matching Pursuit (MP) algorithm. The OMP algorithm, like MP, is based on an iterative search for and matching of the dictionary elements (atoms) that best reflect the desired features of the studied (original) signal. This process maximizes the correlation between an element from the dictionary and the remaining part (residual) of the processed signal. The result of OMP is a vector of coefficients. To give an idea of the process discussed above, Figure 5 shows an example of the original (input) signal, the coefficients obtained from its decomposition, the signal after reconstruction, and the residual (the difference between the original and the reconstructed signal). The blue color indicates the signal after reconstruction, and the orange color indicates the residual. The decomposition was performed with 6 non-zero coefficients. Figure 6 shows the selected atoms with their coefficients. The corresponding expansion with 30 non-zero coefficients is shown in Figure 7 and Figure 8.
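As a minimal illustration, the sparse decomposition of fixed-length QRS segments over a given dictionary can be computed with scikit-learn's OMP-based SparseCoder; this is only a sketch of the idea, and the reconstruction and residual lines mirror what Figures 5-8 visualize.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

def omp_coefficients(segments, dictionary, n_nonzero=20):
    """Decompose QRS segments over a dictionary using OMP.

    segments   : array of shape (n_segments, segment_length)
    dictionary : array of shape (n_atoms, segment_length), rows are atoms
    Returns the coefficient matrix (n_segments, n_atoms) with at most
    n_nonzero non-zero entries per row, plus reconstruction and residual.
    """
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    coefficients = coder.transform(np.asarray(segments))
    reconstruction = coefficients @ dictionary
    residual = np.asarray(segments) - reconstruction
    return coefficients, reconstruction, residual
```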

2.6. Dictionary Created Using Dictionary Learning Technique

In Dictionary Learning, the dictionary is learned directly from data [37]. The task of the algorithm is to find a dictionary of atoms that best represents a given type of signal. Figure 9 shows example Dictionary Learning atoms.
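A brief sketch of learning such a dictionary from QRS segments with scikit-learn is shown below; the number of atoms (125) is just one of the sizes considered later, and the mini-batch variant is chosen only for speed, so these details are assumptions of the example.

```python
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_dictionary(training_segments, n_atoms=125, seed=0):
    """Learn a data-driven dictionary from QRS segments.

    training_segments : array of shape (n_segments, segment_length)
    Returns the dictionary as an array of shape (n_atoms, segment_length).
    """
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=seed)
    learner.fit(training_segments)
    return learner.components_
```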

2.7. Dictionary Created Using KSVD Technique

KSVD [38] is an algorithm from the Dictionary Learning group that performs Singular Value Decomposition (SVD) to update the dictionary atoms, one by one, and is a kind of generalization of the k-means algorithm. An example of KSVD dictionary atoms is shown in Figure 10.
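The core of K-SVD, the one-by-one SVD update of atoms alternated with OMP sparse coding, can be sketched as follows; this is a compact educational version under simplifying assumptions (random initialization, fixed number of iterations), not the implementation used in the experiments.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

def ksvd(Y, n_atoms, n_nonzero, n_iter=10, seed=0):
    """Toy K-SVD. Y holds one training segment per row; returns (dictionary, codes)."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((n_atoms, Y.shape[1]))
    D /= np.linalg.norm(D, axis=1, keepdims=True)            # unit-norm atoms
    X = np.zeros((Y.shape[0], n_atoms))
    for _ in range(n_iter):
        # Sparse coding step (OMP), then atom-by-atom update via SVD.
        X = SparseCoder(dictionary=D, transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero).transform(Y)
        for k in range(n_atoms):
            users = np.nonzero(X[:, k])[0]                    # segments using atom k
            if users.size == 0:
                continue
            X[users, k] = 0.0
            E = Y[users] - X[users] @ D                       # error without atom k
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[k] = Vt[0]                                      # updated atom
            X[users, k] = s[0] * U[:, 0]                      # updated coefficients
    return D, X
```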

2.8. Designed Machine Learning Algorithms

The following classifiers were examined: KNeighbors—k-Nearest Neighbors [39], DecisionTree—Decision Tree [40], RandomForest—Random Forest [41], SVC—Support Vector Machine [42], XGBoost, LGBM—LightGBM, MLP—Multi-Layer Perceptron [43], AdaBoost [44], and GaussianNB—Naive Bayesian Classifier.
The classifiers take a fixed-size data vector as input. Unfortunately, the number of QRS episodes varies between records (as the BPM varies), ranging from 3 to 27. As a result, a flat data vector built from a varying number of QRS segments would have a variable size. Accordingly, QRS episode aggregation methods were proposed. A histogram of the number of segments containing the QRS complex is shown in Figure 11.
Four original aggregation methods, i.e., methods of grouping episodes containing QRS complexes, were developed. In the rest of the article, the proposed aggregation methods are called Single, Mean, Max, and Voting.
The inputs of aggregation methods were ECG signal records, episodes containing QRS complexes extracted on their basis, and the result of the OMP algorithm, i.e., coefficients obtained from them. In each aggregation method, the model’s output is a prediction corresponding to the disease entities.
The Single method is the most straightforward approach to QRS segment aggregation. The principle is to take a vector of coefficients obtained from the OMP algorithm for the first QRS segment from each lead. As a result, a 2-dimensional matrix was obtained. Then, such a matrix of coefficients was transformed into a 1-dimensional vector. The resulting vector was fed to the model input. The schematic for the Single method is shown in Figure 12.
The Mean method involves determining the arithmetic mean of each coefficient over all QRS segments. The operation was performed separately for each lead. The resulting 2-dimensional matrix was then transformed into a 1-dimensional vector, which was fed to the model input. The schematic for the Mean method is shown in Figure 13.
The Max method determines the maximum of the absolute value of each coefficient over all QRS episodes. The operation was performed separately for each lead. The resulting 2-dimensional matrix was then transformed into a 1-dimensional vector, which was fed to the model input. The schematic for the Max method is shown in Figure 14.
The Voting method involves training the model on all QRS episodes. For this purpose, for each QRS segment, the 2-dimensional coefficient matrix is transformed into a 1-dimensional vector and given to the model input, and prediction is performed for each QRS segment separately. In the next step, the arithmetic mean of the prediction probabilities derived from each QRS segment is determined, and the final prediction is made on this basis. The schematic for the Voting method is shown in Figure 15. A compact sketch of all four aggregation methods follows below.
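The sketch below condenses the four aggregation methods for a single record whose OMP coefficients are stored as an array of shape (n_leads, n_segments, n_atoms); the exact ordering of the flattened features is an assumption of this example.

```python
import numpy as np

def aggregate_single(coeffs):
    """Single: coefficients of the first QRS segment of every lead, flattened."""
    return coeffs[:, 0, :].ravel()

def aggregate_mean(coeffs):
    """Mean: per-lead arithmetic mean of each coefficient over all segments."""
    return coeffs.mean(axis=1).ravel()

def aggregate_max(coeffs):
    """Max: per-lead maximum of the absolute value of each coefficient."""
    return np.abs(coeffs).max(axis=1).ravel()

def predict_voting(model, coeffs):
    """Voting: average the class probabilities predicted for each QRS segment."""
    n_leads, n_segments, n_atoms = coeffs.shape
    per_segment = coeffs.transpose(1, 0, 2).reshape(n_segments, n_leads * n_atoms)
    probabilities = model.predict_proba(per_segment)    # (n_segments, n_classes)
    return int(np.argmax(probabilities.mean(axis=0)))
```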

2.9. Data Splitting

The following data were used for each record:
  • Metadata (sex, age, BPM, resampling ratio);
  • Segments containing QRS complexes;
  • Coefficients from the OMP algorithm;
  • Dictionary and its parameters, i.e., dictionary type (Dictionary Learning (DL), KSVD, Gabor), dictionary size (62, 125, 250, 500, 1000 elements), number of non-zero coefficients (5, 10, 20, 40);
  • Aggregation methods: Single, Mean, Max, Voting.
Records were divided into training, validation, and test data in proportions of 70%, 15%, and 15%. To improve the quality of testing, non-exhaustive cross-validation was used: the split function was called with 3 or 5 different seeds, which means that all tests were repeated three or five times for different data splits. A sketch of this splitting scheme is given below.
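The following sketch shows one way to realize the 70/15/15 split repeated over several seeds; the use of stratification is an assumption of the example rather than something stated in the text.

```python
from sklearn.model_selection import train_test_split

def split_records(record_ids, labels, seed):
    """Split record identifiers into 70% train, 15% validation, 15% test."""
    train_ids, rest_ids, y_train, y_rest = train_test_split(
        record_ids, labels, test_size=0.30, random_state=seed, stratify=labels)
    val_ids, test_ids, _, _ = train_test_split(
        rest_ids, y_rest, test_size=0.50, random_state=seed, stratify=y_rest)
    return train_ids, val_ids, test_ids

# Non-exhaustive cross-validation: repeat the split for several seeds.
# splits = [split_records(ids, y, seed) for seed in range(5)]
```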

2.10. Metrics

Models were evaluated using the metrics described below [45]. For simplicity of the equations, the following acronyms are used: TP—True Positive, TN—True Negative, FP—False Positive, and FN—False Negative.
The metrics used for evaluation are:
  • Accuracy: Acc = (TP + TN)/(TP + FP + TN + FN);
  • Precision = TP/(TP + FP);
  • Recall = TP/(TP + FN);
  • F1 = 2 · Precision · Recall/(Precision + Recall);
  • Balanced Accuracy: BAcc = 1/2 · (TP/(TP + FN) + TN/(TN + FP)).
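These metrics map directly onto scikit-learn helpers, as in the short sketch below; macro averaging over classes in the multi-class case is an assumption of the example.

```python
from sklearn.metrics import (accuracy_score, balanced_accuracy_score, f1_score,
                             precision_score, recall_score)

def evaluate(y_true, y_pred):
    """Compute ACC, Precision, Recall, F1, and Balanced Accuracy for predictions."""
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
        "Recall": recall_score(y_true, y_pred, average="macro", zero_division=0),
        "F1": f1_score(y_true, y_pred, average="macro", zero_division=0),
        "BACC": balanced_accuracy_score(y_true, y_pred),
    }
```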

2.11. Used Tools

The computations were performed on a server equipped with 2 Intel Xeon Silver 4210R processors (192 GB of RAM), Nvidia Tesla A100 (40 GB RAM), and Nvidia Tesla A40 (48 GB RAM) GPUs. They were also performed on 5 servers, each of which was equipped with 2 Intel Xeon Gold 6132 processors (512 GB of RAM). In this research, Sklearn, Numpy, Pandas, and Jupyter Lab programming solutions were used.

3. Results

The classifier evaluation was divided into three steps, as shown in Figure 16.

3.1. Step 1

Step 1 started with the initial selection of dictionaries and models. The study was conducted for classification into five classes. The model input vector included the coefficients obtained from the OMP algorithm. Results were obtained for the following parameter combinations:
  • Dictionary: Dictionary Learning (DL), Gabor, KSVD;
  • Dictionary size: 62, 125, 250, 500, and 1000 elements;
  • Number of non-zero coefficients: 5, 10, 20, 40;
  • Aggregation methods: Single, Mean, Max;
  • Classifier model: KNeighbors, DecisionTree, RandomForest, SVC, XGBoost, LGBM, MLP, AdaBoost, GaussianNB.
The created combinations were tested for three different seeds, and a total of 1620 parameter combinations were analyzed (see the sketch below). Table 1 summarizes the results of the classification calculations for all dictionary types and sizes, arranged according to decreasing values of the Accuracy metric (ACC). Correspondingly, Table 2 summarizes the results for each aggregation method and model. The tables show the averaged Accuracy (ACC), Precision, Recall, and F1 values.
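For illustration, the Step 1 parameter sweep can be written as a plain grid iteration; the seed values and the train_and_evaluate placeholder are assumptions of this sketch.

```python
from itertools import product

dictionaries = ["DL", "Gabor", "KSVD"]
sizes = [62, 125, 250, 500, 1000]
nonzero_coefs = [5, 10, 20, 40]
aggregations = ["Single", "Mean", "Max"]
models = ["KNeighbors", "DecisionTree", "RandomForest", "SVC", "XGBoost",
          "LGBM", "MLP", "AdaBoost", "GaussianNB"]

# 3 * 5 * 4 * 3 * 9 = 1620 parameter combinations, each evaluated for 3 seeds.
grid = list(product(dictionaries, sizes, nonzero_coefs, aggregations, models))
assert len(grid) == 1620

for dict_type, size, k, aggregation, model_name in grid:
    for seed in (0, 1, 2):
        # train_and_evaluate(dict_type, size, k, aggregation, model_name, seed)
        pass
```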

3.2. Step 2

In Step 2, tests were conducted for classifications into 2, 5, and 15 classes. The input vector included the coefficients obtained from the OMP algorithm. Results were obtained for the following parameter combinations:
  • Dictionary: Gabor, KSVD;
  • Dictionary size: 125, 250, 500 elements;
  • Number of non-zero coefficients: 5, 10, 20, 40;
  • Aggregation methods: Mean, Voting;
  • Classifier model: XGBoost, LGBM.
The created combinations were tested for five different seeds, and a total of 96 parameter combinations were analyzed. Table 3, Table 4 and Table 5 summarize the results for each aggregation method and model, arranged according to decreasing values of the Accuracy metric (ACC). The tables show the averaged Accuracy (ACC), Precision, Recall, and F1 values.

3.3. Step 3

In Step 3, testing was performed for classifications into 2, 5, and 15 classes. The input vector included the coefficients obtained from the OMP algorithm, the segments containing QRS complexes in the form of the raw signal, and the metadata. Results were obtained for the following parameter combinations:
  • Input data type: signal, coef, meta, signal + coef, signal + meta, coef + meta, signal + coef + meta;
  • Dictionary: Gabor, KSVD;
  • Dictionary size: 125, 250 elements;
  • Number of non-zero coefficients: 20, 40;
  • Aggregation methods: Voting;
  • Classifier model: XGBoost, LGBM.
The created combinations were tested for five different seeds, and a total of 70 parameter combinations were analyzed. Table 6, Table 7 and Table 8 summarize the results for each aggregation method and model, arranged according to decreasing values of the Accuracy metric (ACC). The tables show the averaged Accuracy (ACC), Precision, Recall, and F1 values. The designation N/A means not applicable.
Figure 17, Figure 18 and Figure 19 present the confusion matrices from the evaluation on the test dataset. To limit their volume, the confusion matrices in this article are generated for the first seed only; for the other seeds they look similar. The confusion matrices include classification results for 2, 5, and 15 classes.

4. Discussion

The classification of an ECG signal is a complex issue, and many obstacles limit the Accuracy that studies can achieve. Such studies and analyses need to integrate the available classification methods with techniques for extracting features from electrocardiographic signals. Only with this approach is it possible to achieve results that are meaningful from the clinical perspective and not merely from the perspective of non-medical diagnostics.
Although various results are available for ECG classification experiments, they are difficult to compare directly due to different classification schemes and evaluation metrics. Furthermore, there are differences in the adopted classification objectives, which are not always aimed at obtaining the highest possible scores and often differ when different classification models are used. Nevertheless, the methodology proposed in this work achieved relatively good results for the PTB-XL database compared to other works. To give an idea of the current state of the art, related studies using different classifiers on other databases are presented below.
ECG signal classification is known primarily from articles involving the diagnosis of myocardial infarction, atrial fibrillation, or ventricular fibrillation, and an equally wide range of articles relates to arrhythmias in general. For example, myocardial infarction classification based on classical SVM-type classifiers has been reported for the PTB Diagnostic ECG Database [46,47] and the MIT-BIH Arrhythmia database [48]. The test set results were ACC = 0.9958, ACC = 0.9874, and ACC = 0.976, respectively. Although these studies obtained high scores, the dataset dependencies remain uncertain: due to the small number of waveforms, ECG signal segments from the same patient may have been used during both model validation and testing. In addition, these results should not be interpreted as equivalent to the two-class classification used in this study. The authors also chose to use models based on kNN algorithms [6,8], considering the prior extraction of QRS complexes, and Decision Tree algorithms [9]. The evaluation metric most commonly used was Accuracy, with values of 0.910–0.930 for up to five classes of arrhythmias and 0.9592–0.9966 for more than five classes.
This work implemented classification for 2, 5, and 15 classes. Each step was carried out for different classifier models, input data, and aggregation methods. The input data were enriched with features derived from the Orthogonal Matching Pursuit algorithm, including different dictionaries and their parameters. The author’s algorithm for R-wave determination and QRS complex extraction was also evaluated.
For the purpose of this article, tests were performed to assess the computational complexity of the study. The results are summarized in Table A3 in Appendix C. The measured times correspond to the model training and prediction steps on the validation and test sets. The experiment was performed for the Single aggregation method, using only the OMP coefficient vectors as inputs. For most of the classifiers considered, the computation time depended more on the size of the dictionary than on the number of classes. The increase in these times varies with the model. For example, for the Decision Tree model, the computation time increased only slightly as the size of the dictionary and the number of classes grew; the situation is different for the XGBoost model, where the computation times increase significantly. The longest computation times were observed for the SVC classifier.
In evaluating cardiac classification algorithms, it is important to evaluate the R-peak detection algorithm. Based on the results obtained, the Two Average detector showed the highest Accuracy of R-peak detection in the 1-lead approach, whereas the Engzee detector showed the lowest. Different combinations for the 12-lead signal significantly influenced the results obtained. An improvement in the Accuracy of R-peak detection could be observed with the 12-lead approach to the signal under study. For example, the Two Average detector in the classical approach achieved MAE = 0.389, and in the approach proposed for this study MAE = 0.270. This was due to the simultaneous consideration of all leads of the examined 10-s ECG signal. The results were quite different when all detectors were selected simultaneously, for which the MAE evaluation metric was 0.367; nevertheless, compared with the best classical approach (MAE = 0.389), this still gave a better score. The solution proposed in this article indicates that there is some optimal combination of detectors that provides the best results. In the case of the analyzed ECG signals, the highest Accuracy of R-peak detection was obtained by combining the Two Average, Christov, and Engzee detectors.
The experiments conducted showed that the results on the test data differ little from the results on the validation data. The values of the ACC classification metric for 2 and 5 classes remain higher than those reported in [23,24], regardless of the approach used; in the case of [25], they remain lower. The obtained Accuracy results for the classification of two and five classes were 0.9023 and 0.7766, respectively. The classification for 15 classes, which has not been attempted so far by the authors of other works, cannot be compared; it achieved an Accuracy of 0.7079.
The problem of heart disease classification using classical Machine Learning models was supported by the underestimated Orthogonal Matching Pursuit (OMP) algorithm, showing the significant effect of sparse representation parameters on the Accuracy of the classification process. Different combinations of dictionaries created for the operation of the OMP algorithm were investigated, and their optimal parameters were determined. The study shows that not only the type of dictionary is important but also its size and the number of non-zero coefficients. The realized studies indicate that the hybrid system provides the highest ACC scores; for this system, the input data vector includes the coefficients obtained from the OMP algorithm, the segments containing QRS complexes in raw signal form, and the metadata.
The RandomForest, XGBoost, and LightGBM classifiers used in this article are Decision Tree-based models designed to work with structured data. Such models cannot cope with unstructured data, i.e., they are unable to recognize and detect shapes and their displacements. It can be speculated that the approach proposed in this article, i.e., the extraction of QRS complexes and resampling of the raw signal, places the R-peak always in the same position and structures the data well enough for tree-based models to cope.
The realized experiments highlight the proposed Voting aggregation method, which achieved the highest Accuracy results regardless of the model or class size. A similar observation was made in the comparison of dictionaries, where dictionaries created using Gabor functions were found to be the best. The experiments also emphasized the importance of using the coefficients obtained from the OMP algorithm, with which the tested models obtained the highest Accuracy.
The confusion matrix analysis offers further possibilities to evaluate the obtained results. The classification Accuracy for two classes appears reliable regardless of the classifier type. Even in this case, however, subclasses with a small number of records tend to be skipped, which to some extent skews the model. This also explains the worse classification performance as the number of classes increases, i.e., for 5 and 15 classes. What is particularly clear in Figure 19 is that a large part of the misclassification is caused by the imbalanced dataset. Classes with a low record count (such as IVCD or ISCI) are selected less frequently by the model, whereas classes with many records (such as NORM or STTC) are selected more often. The NORM class is the most numerous, which makes it the best learned by the model; it has the highest Precision and Recall values (green percentages on the bottom and right matrix bars). This is confirmed by the Balanced Accuracy presented in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8.
To evaluate the obtained results in terms of statistical significance, it is necessary to carry out Levene's test, the ANOVA test, and the Tukey HSD test. In this article, such an analysis was carried out for Step 3, treating this step as final. In the case of classification for 2, 5, and 15 classes, the results of Levene's test reached p > 0.05 and the results of the ANOVA test reached p < 0.05. Comparing different combinations of models with the Tukey HSD test shows that statistically significant differences in the ACC metric are obtained above 1% for 2 classes, and above 2.5% and 3.8% for 5 and 15 classes, respectively.

5. Conclusions

The issues corresponding to ECG signal classification were addressed using increasingly dynamic Machine Learning methods. The classification achieved the highest Accuracy of 90% in recognizing 2 classes, compared with less than 78% for 5 classes and 71% for 15 classes. The research was undertaken on the recently published PTB-XL database, whose ECG signals were previously subjected to detailed analysis. Orthogonal Matching Pursuit algorithms were used, demonstrating the effect of sparse representation parameters on the Accuracy of the classification process. Heart disease classification models based on classical classifiers were defined and investigated. Original methods of aggregating ECG segments containing QRS complexes were proposed. As a result, it was shown that an ECG signal subjected to R-peak detection, QRS complex extraction, and resampling performs very well in classification using Decision Trees. This is due to the structuring of the signal introduced by these operations.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Listing A1 shows the function determining the R-wave indices for a 12-lead ECG signal.
Listing A1. R-wave index function for 12-lead ECG signal.

Appendix B

Table A1 summarizes the Accuracy of R-wave count detection achieved by the detectors operating independently on each lead. The Accuracy of this performance was evaluated using two metrics: MAE (mean absolute error) and Std (standard deviation). The abbreviations used for the studied detectors are: H—Hamilton, TA—Two Average, SWT—Stationary Wavelet Transform, Ch—Christov, PT—Pan-Tompkins, and E—Engzee. The Two Average detector showed the highest Accuracy in R-wave detection, whereas the Engzee detector showed the lowest.
Table A1. R-peak determination Accuracy results using a single detector on a 1-lead signal.
Detector | MAE | Std
TA | 0.389 | 0.887
SWT | 0.614 | 1.125
PT | 0.664 | 1.315
H | 1.220 | 1.939
Ch | 2.895 | 5.009
E | 5.571 | 5.249
A novel approach using the 12-lead ECG signal was proposed to enhance R-peak detection. Different combinations of detectors were investigated, and each combination was evaluated using the MAE and Std metrics. The results are summarized in Table A2. The Original Algorithm adopted for this study was the combination of the Two Average, Christov, and Engzee detectors.
Table A2. R-peak determination Accuracy results using detector combinations for the 12-lead signal.
Detectors | MAE | Std
TA, Ch, E | 0.243 | 0.528
TA | 0.270 | 0.664
H, TA, E | 0.273 | 0.571
TA, SWT, Ch | 0.297 | 0.570
H, TA, SWT, Ch, E | 0.310 | 0.578
TA, Ch, PT, E | 0.330 | 0.608
TA, PT, E | 0.343 | 0.597
H, TA, Ch, E | 0.343 | 0.633
H, TA, Ch, PT, E | 0.350 | 0.637
TA, Ch, PT | 0.357 | 0.642
TA, SWT, Ch, PT | 0.360 | 0.633
TA, SWT, Ch, PT, E | 0.367 | 0.622
H, TA, SWT, Ch, PT, E | 0.367 | 0.640
H, TA, SWT | 0.367 | 0.642
H, TA, SWT, Ch | 0.377 | 0.644
TA, SWT, Ch, E | 0.390 | 0.630
TA, PT | 0.390 | 0.740
H, TA, SWT, Ch, PT | 0.393 | 0.656
H, TA, PT, E | 0.393 | 0.676
H, TA, SWT, PT, E | 0.400 | 0.652
TA, SWT, PT | 0.403 | 0.670
TA, SWT | 0.410 | 0.668
Ch, PT, E | 0.410 | 0.675
H, TA, PT | 0.413 | 0.677
H, TA, SWT, E | 0.417 | 0.655
H, TA, SWT, PT | 0.420 | 0.682
TA, SWT, E | 0.433 | 0.587
SWT, Ch, E | 0.437 | 0.622
H, SWT, Ch, E | 0.437 | 0.685
H, TA, Ch, PT | 0.440 | 0.686
H, Ch, PT, E | 0.447 | 0.701
TA, SWT, PT, E | 0.450 | 0.665
H, SWT, Ch, PT, E | 0.450 | 0.704
SWT, Ch, PT | 0.453 | 0.715
SWT, PT, E | 0.463 | 0.653
SWT, Ch, PT, E | 0.463 | 0.694
H, Ch, E | 0.463 | 0.701
H, TA | 0.463 | 0.735
H, SWT, Ch, PT | 0.483 | 0.726
H, PT, E | 0.483 | 0.736
PT | 0.483 | 0.747
H, TA, Ch | 0.500 | 0.694
H, SWT, PT, E | 0.510 | 0.738
SWT | 0.513 | 0.714
SWT, PT | 0.513 | 0.741
H, SWT, PT | 0.513 | 0.765
H, SWT, Ch | 0.517 | 0.729
H, SWT, E | 0.527 | 0.765
H, Ch, PT | 0.547 | 0.738
H, PT | 0.557 | 0.784
H, SWT | 0.563 | 0.839
TA, E | 0.723 | 0.700
H, E | 0.770 | 0.981
H | 0.780 | 0.898
PT, E | 0.810 | 0.827
SWT, E | 0.847 | 0.941
TA, Ch | 0.887 | 1.897
SWT, Ch | 0.890 | 1.875
Ch, PT | 0.990 | 1.939
Ch, E | 1.040 | 1.708
H, Ch | 1.323 | 2.351
Ch | 2.483 | 4.474
E | 4.083 | 3.459

Appendix C

Table A3 shows measured computation times for classifier models.
Table A3. Measured computation times for the classifier models for 2, 5, and 15 classes and dictionary size (ds) = 125 and 500 elements.
Model | 2 cl / ds = 125 | 5 cl / ds = 125 | 15 cl / ds = 125 | 2 cl / ds = 500 | 5 cl / ds = 500 | 15 cl / ds = 500
KNeighbors | 3 s | 3 s | 3 s | 7 s | 7 s | 8 s
DecisionTree | 25 s | 27 s | 33 s | 55 s | 59 s | 1 min 22 s
RandomForest | 27 s | 32 s | 37 s | 34 s | 37 s | 43 s
SVC | 8 min 15 s | 12 min 43 s | 12 min 29 s | 33 min 43 s | 55 min 22 s | 1 h 1 min 9 s
XGBoost | 10 s | 38 s | 1 min 27 s | 23 s | 1 min 26 s | 3 min 33 s
LGBM | 5 s | 20 s | 47 s | 21 s | 1 min 26 s | 3 min 49 s
MLP | 1 min 6 s | 1 min 11 s | 1 min 37 s | 2 min 48 s | 2 min 35 s | 3 min 54 s
AdaBoost | 59 s | 1 min 0 s | 1 min 4 s | 2 min 22 s | 2 min 24 s | 2 min 27 s
GaussianNB | 1 s | 2 s | 3 s | 6 s | 8 s | 12 s

References

  1. Karnan, H.; Natarajan, S.; Manivel, R. Human machine interfacing technique for diagnosis of ventricular arrhythmia using supervisory machine learning algorithms. Concurr. Comput. Pract. Exp. 2021, 33, e5001. [Google Scholar] [CrossRef]
  2. Khalaf, A.F.; Owis, M.I.; Yassine, I.A. A novel technique for cardiac arrhythmia classification using spectral correlation and support vector machines. Expert Syst. Appl. 2015, 42, 8361–8368. [Google Scholar] [CrossRef]
  3. Subramanian, K.; Prakash, N.K. Machine learning based cardiac arrhythmia detection from ecg signal. In Proceedings of the 2020 3rd International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020; pp. 1137–1141. [Google Scholar]
  4. Nasiri, J.A.; Naghibzadeh, M.; Yazdi, H.S.; Naghibzadeh, B. ECG arrhythmia classification with support vector machines and genetic algorithm. In Proceedings of the 2009 3rd UKSim European Symposium on Computer Modeling and Simulation, Athens, Greece, 25–27 November 2009; pp. 187–192. [Google Scholar]
  5. Ye, C.; Coimbra, M.T.; Kumar, B.V. Arrhythmia detection and classification using morphological and dynamic features of ECG signals. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 1918–1921. [Google Scholar]
  6. Golrizkhatami, Z.; Acan, A. ECG classification using three-level fusion of different feature descriptors. Expert Syst. Appl. 2018, 114, 54–64. [Google Scholar] [CrossRef]
  7. Rangappa, V.G.; Prasad, S.; Agarwal, A. Classification of cardiac arrhythmia stages using hybrid features extraction with k-nearest neighbour classifier of ecg signals. Learning 2018, 11, 21–32. [Google Scholar] [CrossRef]
  8. Karimifard, S.; Ahmadian, A.; Khoshnevisan, M.; Nambakhsh, M.S. Morphological heart arrhythmia detection using hermitian basis functions and kNN classifier. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 1367–1370. [Google Scholar]
  9. Mondal, P.; Mali, K. Cardiac arrhythmias classification using decision tree. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2015, 5, 540–542. [Google Scholar]
  10. Yıldırım, Ö.; Pławiak, P.; Tan, R.S.; Acharya, U.R. Arrhythmia detection using deep convolutional neural network with long duration ECG signals. Comput. Biol. Med. 2018, 102, 411–420. [Google Scholar] [CrossRef]
  11. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M.; Gertych, A.; San Tan, R. A deep convolutional neural network model to classify heartbeats. Comput. Biol. Med. 2017, 89, 389–396. [Google Scholar] [CrossRef]
  12. Kiranyaz, S.; Ince, T.; Gabbouj, M. Real-time patient-specific ECG classification by 1-D convolutional neural networks. IEEE Trans. Biomed. Eng. 2015, 63, 664–675. [Google Scholar] [CrossRef]
  13. Acharya, U.R.; Fujita, H.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M. Application of deep convolutional neural network for automated detection of myocardial infarction using ECG signals. Inf. Sci. 2017, 415, 190–198. [Google Scholar] [CrossRef]
  14. Acharya, U.R.; Fujita, H.; Lih, O.S.; Hagiwara, Y.; Tan, J.H.; Adam, M. Automated detection of arrhythmias using different intervals of tachycardia ECG segments with convolutional neural network. Inf. Sci. 2017, 405, 81–90. [Google Scholar] [CrossRef]
  15. Xia, Y.; Wulan, N.; Wang, K.; Zhang, H. Detecting atrial fibrillation by deep convolutional neural networks. Comput. Biol. Med. 2018, 93, 84–92. [Google Scholar] [CrossRef]
  16. Oh, S.L.; Ng, E.Y.; San Tan, R.; Acharya, U.R. Automated diagnosis of arrhythmia using combination of CNN and LSTM techniques with variable length heart beats. Comput. Biol. Med. 2018, 102, 278–287. [Google Scholar] [CrossRef] [PubMed]
  17. Faust, O.; Shenfield, A.; Kareem, M.; San, T.R.; Fujita, H.; Acharya, U.R. Automated detection of atrial fibrillation using long short-term memory network with RR interval signals. Comput. Biol. Med. 2018, 102, 327–335. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Gao, J.; Zhang, H.; Lu, P.; Wang, Z. An effective LSTM recurrent network to detect arrhythmia on imbalanced ECG dataset. J. Healthc. Eng. 2019, 2019. [Google Scholar] [CrossRef] [Green Version]
  19. Liu, F.; Zhou, X.; Cao, J.; Wang, Z.; Wang, H.; Zhang, Y. Arrhythmias classification by integrating stacked bidirectional LSTM and two-dimensional CNN. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Macau, China, 14–17 April 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 136–149. [Google Scholar]
  20. Yildirim, Ö. A novel wavelet sequence based on deep bidirectional LSTM network model for ECG signal classification. Comput. Biol. Med. 2018, 96, 189–202. [Google Scholar] [CrossRef] [PubMed]
  21. Wagner, P.; Strodthoff, N.; Bousseljot, R.D.; Kreiseler, D.; Lunze, F.I.; Samek, W.; Schaeffter, T. PTB-XL, a large publicly available electrocardiography dataset. Sci. Data 2020, 7, 154. [Google Scholar] [CrossRef]
  22. Goldberger, A.L.; Amaral, L.A.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 2000, 101, e215–e220. [Google Scholar] [CrossRef] [Green Version]
  23. Śmigiel, S.; Pałczyński, K.; Ledziński, D. Deep Learning Techniques in the Classification of ECG Signals Using R-Peak Detection Based on the PTB-XL Dataset. Sensors 2021, 21, 8174. [Google Scholar] [CrossRef]
  24. Śmigiel, S.; Pałczyński, K.; Ledziński, D. ECG Signal Classification Using Deep Learning Techniques Based on the PTB-XL Dataset. Entropy 2021, 23, 1121. [Google Scholar] [CrossRef]
  25. Pałczyński, K.; Śmigiel, S.; Ledziński, D.; Bujnowski, S. Study of the Few-Shot Learning for ECG Classification Based on the PTB-XL Dataset. Sensors 2022, 22, 904. [Google Scholar] [CrossRef]
  26. Sammut, C.; Webb, G.I. Encyclopedia of Machine Learning; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  27. Hamilton, P. Open source ECG analysis. In Computers in Cardiology; IEEE: Piscataway, NJ, USA, 2002; pp. 101–104. [Google Scholar]
  28. Elgendi, M.; Jonkman, M.; De Boer, F. Frequency Bands Effects on QRS Detection. Biosignals 2010, 2003, 2002. [Google Scholar]
  29. Kalidas, V.; Tamil, L. Real-time QRS detector using stationary wavelet transform for automated ECG analysis. In Proceedings of the 2017 IEEE 17th International Conference on Bioinformatics and Bioengineering (BIBE), Washington, DC, USA, 23–25 October 2017; pp. 457–461. [Google Scholar]
  30. Christov, I.I. Real time electrocardiogram QRS detection using combined adaptive threshold. Biomed. Eng. Online 2004, 3, 28. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Pan, J.; Tompkins, W.J. A real-time QRS detection algorithm. IEEE Trans. Biomed. Eng. 1985, 3, 230–236. [Google Scholar] [CrossRef] [PubMed]
  32. Engelse, W.A.; Zeelenberg, C. A single scan algorithm for QRS-detection and feature extraction. Comput. Cardiol. 1979, 6, 37–42. [Google Scholar]
  33. Lourenço, A.; Silva, H.; Leite, P.; Lourenço, R.; Fred, A.L. Real Time Electrocardiogram Segmentation for Finger based ECG Biometrics. Biosignals 2012, 49–54. [Google Scholar]
  34. Thurner, T.; Hintermueller, C.; Blessberger, H.; Steinwender, C. Complex-Pan-Tompkins-Wavelets: Cross-channel ECG beat detection and delineation. Biomed. Signal Process. Control 2021, 66, 102450. [Google Scholar] [CrossRef]
  35. Mallat, S.G.; Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41, 3397–3415. [Google Scholar] [CrossRef] [Green Version]
  36. Pati, Y.C.; Rezaiifar, R.; Krishnaprasad, P.S. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; pp. 40–44. [Google Scholar]
  37. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G. Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; pp. 689–696. [Google Scholar]
  38. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  39. Goldberger, J.; Hinton, G.E.; Roweis, S.; Salakhutdinov, R.R. Neighbourhood components analysis. Adv. Neural Inf. Process. Syst. 2004, 17. [Google Scholar]
  40. Dumont, M.; Marée, R.; Wehenkel, L.; Geurts, P. Fast multi-class image annotation with random subwindows and multiple output randomized trees. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), Lisboa, Portugal, 5–8 February 2009; pp. 196–203. [Google Scholar]
  41. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  42. Platt, J. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Adv. Large Margin Classif. 1999, 10, 61–74. [Google Scholar]
  43. Hastie, T.; Rosset, S.; Zhu, J.; Zou, H. Multi-class adaboost. Stat. Its Interface 2009, 2, 349–360. [Google Scholar] [CrossRef] [Green Version]
  44. Hinton, G.E. Connectionist learning procedures. In Machine Learning; Elsevier: Amsterdam, The Netherlands, 1990; pp. 555–610. [Google Scholar]
  45. Pałczyński, K.; Śmigiel, S.; Gackowska, M.; Ledziński, D.; Bujnowski, S.; Lutowski, Z. IoT Application of Transfer Learning in Hybrid Artificial Intelligence Systems for Acute Lymphoblastic Leukemia Classification. Sensors 2021, 21, 8025. [Google Scholar] [CrossRef] [PubMed]
  46. Acharya, U.R.; Fujita, H.; Sudarshan, V.K.; Oh, S.L.; Adam, M.; Koh, J.E.; Tan, J.H.; Ghista, D.N.; Martis, R.J.; Chua, C.K.; et al. Automated detection and localization of myocardial infarction using electrocardiogram: A comparative study of different leads. Knowl.-Based Syst. 2016, 99, 146–156. [Google Scholar] [CrossRef]
  47. Sharma, L.; Tripathy, R.; Dandapat, S. Multiscale energy and eigenspace approach to detection and localization of myocardial infarction. IEEE Trans. Biomed. Eng. 2015, 62, 1827–1837. [Google Scholar] [CrossRef]
  48. Naz, M.; Shah, J.H.; Khan, M.A.; Sharif, M.; Raza, M.; Damaševičius, R. From ECG signals to images: A transformation based approach for deep learning. PeerJ Comput. Sci. 2021, 7, e386. [Google Scholar] [CrossRef]
Figure 1. General overview diagram of the method.
Figure 2. Distribution of PTB-XL database data by classes.
Figure 3. Distribution of PTB-XL database data by subclasses.
Figure 4. Block diagram of the proposed R-wave detection algorithm.
Figure 5. Example input ECG signal (original), non-zero coefficients, the signal after reconstruction.
Figure 6. Atoms with non-zero coefficients, for example, signal decomposed using 6 non-zero coefficients.
Figure 7. Operation of the OMP algorithm—30 non-zero coefficients.
Figure 8. Atoms with non-zero coefficients, for example, signal decomposed using 30 non-zero coefficients.
Figure 9. Example atoms of the DL dictionary.
Figure 10. Example atoms of the KSVD dictionary.
Figure 11. Histogram of the number of segments comprising the QRS complex.
Figure 12. Aggregation method—Single.
Figure 13. Aggregation method—Mean.
Figure 14. Aggregation method—Max.
Figure 15. Aggregation method—Voting.
Figure 16. Steps of implementation of research related to classification.
Figure 17. Confusion matrix of the best model in classification for 2 classes.
Figure 18. Confusion matrix of the best model in classification for 5 classes.
Figure 19. Confusion matrix of the best model in classification for 15 classes.
Table 1. Summary of average metrics scores for each dictionary, class size = 5.
Position | Dictionary | Model-Method | ACC | Precision | Recall | F1 | BACC
1 | Gabor-1000-20 | LGBM-Mean | 0.736 | 0.692 | 0.613 | 0.634 | 0.613
3 | KSVD-250-5 | LGBM-Mean | 0.734 | 0.688 | 0.621 | 0.642 | 0.621
4 | KSVD-500-10 | LGBM-Mean | 0.733 | 0.691 | 0.613 | 0.635 | 0.613
5 | Gabor-125-20 | LGBM-Mean | 0.733 | 0.685 | 0.619 | 0.639 | 0.619
7 | KSVD-125-5 | LGBM-Mean | 0.731 | 0.691 | 0.614 | 0.636 | 0.613
9 | Gabor-250-20 | LGBM-Mean | 0.730 | 0.688 | 0.607 | 0.629 | 0.607
25 | DL-62-5 | LGBM-Mean | 0.726 | 0.677 | 0.601 | 0.623 | 0.601
31 | Gabor-500-10 | LGBM-Mean | 0.725 | 0.685 | 0.604 | 0.625 | 0.603
32 | KSVD-1000-10 | LGBM-Mean | 0.724 | 0.671 | 0.604 | 0.624 | 0.604
37 | KSVD-62-10 | LGBM-Mean | 0.724 | 0.681 | 0.603 | 0.625 | 0.602
54 | DL-125-5 | LGBM-Mean | 0.721 | 0.685 | 0.593 | 0.617 | 0.593
55 | Gabor-62-40 | LGBM-Mean | 0.721 | 0.680 | 0.617 | 0.638 | 0.616
64 | DL-250-5 | LGBM-Mean | 0.719 | 0.673 | 0.587 | 0.608 | 0.586
99 | DL-500-5 | LGBM-Mean | 0.714 | 0.673 | 0.573 | 0.593 | 0.572
127 | DL-1000-5 | LGBM-Mean | 0.708 | 0.653 | 0.568 | 0.587 | 0.568
Table 2. Summary of average metrics scores for each aggregation method and classifier model, class size = 5.
Position | Dictionary | Model-Method | ACC | Precision | Recall | F1 | BACC
1 | Gabor-1000-20 | LGBM-Mean | 0.736 | 0.692 | 0.613 | 0.634 | 0.613
12 | Gabor-125-20 | XGBoost-Mean | 0.729 | 0.676 | 0.613 | 0.632 | 0.613
39 | KSVD-125-10 | SVC-Mean | 0.724 | 0.673 | 0.611 | 0.632 | 0.611
60 | KSVD-250-5 | LGBM-Max | 0.720 | 0.668 | 0.624 | 0.641 | 0.624
93 | KSVD-500-5 | XGBoost-Max | 0.715 | 0.664 | 0.614 | 0.633 | 0.614
115 | KSVD-500-10 | SVC-Max | 0.709 | 0.667 | 0.573 | 0.593 | 0.573
150 | Gabor-125-20 | RandomForest-Mean | 0.703 | 0.658 | 0.555 | 0.574 | 0.554
159 | KSVD-250-20 | MLP-Mean | 0.703 | 0.635 | 0.613 | 0.622 | 0.613
187 | Gabor-125-20 | LGBM-Single | 0.699 | 0.645 | 0.588 | 0.607 | 0.588
222 | Gabor-125-20 | XGBoost-Single | 0.696 | 0.638 | 0.578 | 0.597 | 0.578
265 | DL-1000-10 | MLP-Max | 0.691 | 0.616 | 0.563 | 0.578 | 0.563
373 | KSVD-125-5 | SVC-Single | 0.681 | 0.631 | 0.551 | 0.573 | 0.551
393 | Gabor-125-10 | RandomForest-Single | 0.678 | 0.636 | 0.539 | 0.562 | 0.539
459 | KSVD-250-5 | RandomForest-Max | 0.671 | 0.651 | 0.522 | 0.549 | 0.522
564 | KSVD-62-5 | MLP-Single | 0.656 | 0.566 | 0.556 | 0.560 | 0.556
696 | Gabor-500-40 | KNeighbors-Mean | 0.632 | 0.597 | 0.468 | 0.482 | 0.468
698 | DL-1000-20 | GaussianNB-Mean | 0.631 | 0.519 | 0.495 | 0.502 | 0.494
718 | Gabor-125-5 | AdaBoost-Mean | 0.628 | 0.534 | 0.527 | 0.521 | 0.526
825 | KSVD-250-5 | KNeighbors-Max | 0.610 | 0.629 | 0.430 | 0.440 | 0.430
865 | DL-1000-10 | GaussianNB-Max | 0.603 | 0.498 | 0.465 | 0.473 | 0.465
943 | Gabor-125-10 | AdaBoost-Max | 0.586 | 0.489 | 0.499 | 0.478 | 0.499
950 | Gabor-125-10 | AdaBoost-Single | 0.585 | 0.485 | 0.486 | 0.472 | 0.486
971 | Gabor-125-5 | DecisionTree-Mean | 0.582 | 0.474 | 0.476 | 0.474 | 0.476
977 | Gabor-125-10 | KNeighbors-Single | 0.581 | 0.551 | 0.414 | 0.428 | 0.414
1086 | Gabor-125-20 | DecisionTree-Single | 0.560 | 0.458 | 0.458 | 0.457 | 0.458
1150 | DL-250-20 | GaussianNB-Single | 0.548 | 0.449 | 0.417 | 0.411 | 0.416
1176 | KSVD-1000-5 | DecisionTree-Max | 0.543 | 0.447 | 0.440 | 0.443 | 0.439
Table 3. Summary of average metrics scores for each data model, class size = 2.
Position | Dictionary | Model-Method | ACC | Precision | Recall | F1 | BACC
1 | Gabor-125-20 | XGBoost-Voting | 0.897 | 0.893 | 0.897 | 0.895 | 0.897
3 | Gabor-125-20 | LGBM-Voting | 0.896 | 0.892 | 0.895 | 0.893 | 0.895
4 | Gabor-125-20 | LGBM-Mean | 0.895 | 0.891 | 0.896 | 0.893 | 0.895
12 | Gabor-125-20 | XGBoost-Mean | 0.892 | 0.888 | 0.892 | 0.889 | 0.892
Table 4. Summary of average metrics scores for each data model, class size = 5.
Position | Dictionary | Model-Method | ACC | Precision | Recall | F1 | BACC
1 | Gabor-125-20 | XGBoost-Voting | 0.744 | 0.704 | 0.644 | 0.665 | 0.644
4 | Gabor-125-20 | LGBM-Voting | 0.741 | 0.698 | 0.645 | 0.664 | 0.645
27 | KSVD-250-5 | LGBM-Mean | 0.734 | 0.688 | 0.621 | 0.642 | 0.621
48 | Gabor-125-20 | XGBoost-Mean | 0.729 | 0.676 | 0.613 | 0.632 | 0.613
Table 5. Summary of average metrics scores for each data model, class size = 15.
Position | Dictionary | Model-Method | ACC | Precision | Recall | F1 | BACC
1 | Gabor-250-20 | LGBM-Voting | 0.671 | 0.477 | 0.397 | 0.396 | 0.397
4 | Gabor-250-20 | XGBoost-Voting | 0.668 | 0.451 | 0.390 | 0.387 | 0.390
26 | KSVD-125-5 | LGBM-Mean | 0.658 | 0.450 | 0.379 | 0.382 | 0.379
40 | KSVD-500-10 | XGBoost-Mean | 0.653 | 0.452 | 0.380 | 0.382 | 0.380
Table 6. Summary of average metrics scores for each data model, class size = 2.
Position | Dictionary | Input Data-Model | ACC | Precision | Recall | F1 | BACC
1 | Gabor-125-20 | signal + coef + meta-XGBoost | 0.906 | 0.903 | 0.906 | 0.904 | 0.906
5 | N/A | signal-XGBoost | 0.904 | 0.900 | 0.905 | 0.902 | 0.904
6 | N/A | signal + meta-XGBoost | 0.904 | 0.900 | 0.904 | 0.902 | 0.904
8 | Gabor-125-40 | signal + coef-XGBoost | 0.904 | 0.900 | 0.904 | 0.902 | 0.904
12 | Gabor-250-20 | signal + coef + meta-LGBM | 0.903 | 0.899 | 0.903 | 0.901 | 0.903
19 | Gabor-250-20 | signal + coef-LGBM | 0.902 | 0.898 | 0.902 | 0.900 | 0.902
25 | N/A | signal + meta-LGBM | 0.901 | 0.897 | 0.901 | 0.899 | 0.901
32 | N/A | signal-LGBM | 0.900 | 0.896 | 0.900 | 0.898 | 0.900
36 | Gabor-125-20 | coef + meta-XGBoost | 0.899 | 0.895 | 0.899 | 0.896 | 0.899
38 | Gabor-125-20 | coef + meta-LGBM | 0.898 | 0.894 | 0.898 | 0.896 | 0.898
40 | Gabor-125-20 | coef-XGBoost | 0.897 | 0.893 | 0.897 | 0.895 | 0.897
44 | Gabor-125-20 | coef-LGBM | 0.896 | 0.892 | 0.895 | 0.893 | 0.895
69 | N/A | meta-LGBM | 0.731 | 0.726 | 0.712 | 0.716 | 0.712
70 | N/A | meta-XGBoost | 0.729 | 0.724 | 0.710 | 0.714 | 0.710
Table 7. Summary of average metrics scores for each data model, class size = 5.
Position | Dictionary | Input Data-Model | ACC | Precision | Recall | F1 | BACC
1 | Gabor-125-20 | signal + coef-XGBoost | 0.777 | 0.744 | 0.689 | 0.709 | 0.688
2 | Gabor-250-20 | signal + coef + meta-LGBM | 0.777 | 0.745 | 0.690 | 0.711 | 0.690
6 | N/A | signal + meta-LGBM | 0.775 | 0.744 | 0.687 | 0.709 | 0.687
7 | KSVD-125-20 | signal + coef + meta-XGBoost | 0.775 | 0.745 | 0.683 | 0.706 | 0.683
8 | Gabor-250-20 | signal + coef-LGBM | 0.775 | 0.739 | 0.684 | 0.705 | 0.684
15 | N/A | signal + meta-XGBoost | 0.774 | 0.743 | 0.685 | 0.707 | 0.685
18 | N/A | signal-XGBoost | 0.774 | 0.741 | 0.680 | 0.702 | 0.679
23 | N/A | signal-LGBM | 0.773 | 0.741 | 0.684 | 0.706 | 0.684
37 | Gabor-250-20 | coef + meta-LGBM | 0.747 | 0.712 | 0.645 | 0.669 | 0.645
38 | Gabor-125-20 | coef + meta-XGBoost | 0.746 | 0.701 | 0.649 | 0.668 | 0.649
40 | Gabor-125-20 | coef-XGBoost | 0.744 | 0.704 | 0.644 | 0.665 | 0.643
48 | Gabor-125-20 | coef-LGBM | 0.741 | 0.698 | 0.645 | 0.664 | 0.645
69 | N/A | meta-LGBM | 0.464 | 0.337 | 0.288 | 0.278 | 0.288
70 | N/A | meta-XGBoost | 0.464 | 0.354 | 0.293 | 0.285 | 0.293
Table 8. Summary of average metrics scores for each data model, class size = 15.
Position | Dictionary | Input Data-Model | ACC | Precision | Recall | F1 | BACC
1 | Gabor-250-20 | signal + coef + meta-LGBM | 0.712 | 0.564 | 0.451 | 0.457 | 0.451
2 | Gabor-250-20 | signal + coef-LGBM | 0.711 | 0.548 | 0.447 | 0.453 | 0.447
3 | N/A | signal + meta-XGBoost | 0.710 | 0.575 | 0.449 | 0.458 | 0.449
4 | N/A | signal-XGBoost | 0.709 | 0.594 | 0.448 | 0.456 | 0.448
5 | Gabor-250-40 | signal + coef + meta-XGBoost | 0.709 | 0.571 | 0.446 | 0.454 | 0.447
6 | Gabor-250-20 | signal + coef-XGBoost | 0.708 | 0.590 | 0.445 | 0.452 | 0.445
9 | N/A | signal-LGBM | 0.708 | 0.553 | 0.444 | 0.451 | 0.444
22 | N/A | signal + meta-LGBM | 0.706 | 0.530 | 0.443 | 0.449 | 0.443
37 | Gabor-250-40 | coef + meta-XGBoost | 0.673 | 0.536 | 0.402 | 0.405 | 0.402
39 | Gabor-250-20 | coef + meta-LGBM | 0.672 | 0.512 | 0.401 | 0.403 | 0.401
41 | Gabor-250-20 | coef-LGBM | 0.671 | 0.477 | 0.397 | 0.396 | 0.397
48 | Gabor-250-20 | coef-XGBoost | 0.668 | 0.451 | 0.390 | 0.387 | 0.390
69 | N/A | meta-XGBoost | 0.424 | 0.131 | 0.093 | 0.086 | 0.093
70 | N/A | meta-LGBM | 0.422 | 0.130 | 0.092 | 0.086 | 0.092
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
