Article

Electrocardiographic Fragmented Activity (II): A Machine Learning Approach to Detection

by Francisco-Manuel Melgarejo-Meseguer 1,2,3, Francisco-Javier Gimeno-Blanes 4,*, María-Eladia Salar-Alcaraz 1,3, Juan-Ramón Gimeno-Blanes 1,3, Juan Martínez-Sánchez 1,3, Arcadi García-Alberola 1,2,3 and José Luis Rojo-Álvarez 5,6

1 Unidad de Arritmias, Hospital Clínico Universitario Virgen de la Arrixaca, 30120 El Palmar, Spain
2 Departamento de Medicina Interna, Universidad de Murcia, 30001 Murcia, Spain
3 Instituto Murciano de Investigación Biosanitaria Virgen de la Arrixaca (IMIB), 30120 El Palmar, Spain
4 Departamento de Ingeniería de Comunicaciones, Universidad Miguel Hernández, 03202 Elche, Spain
5 Departamento de Teoría de la Señal y Comunicaciones y Sistemas Telemáticos y Computación, Universidad Rey Juan Carlos, 28943 Fuenlabrada, Spain
6 Center for Computational Simulation, Universidad Politécnica de Madrid, 28223 Pozuelo de Alarcón, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(17), 3565; https://doi.org/10.3390/app9173565
Submission received: 15 July 2019 / Revised: 19 August 2019 / Accepted: 27 August 2019 / Published: 31 August 2019

Abstract

Hypertrophic cardiomyopathy is, given its prevalence, a comparatively common disease associated with the risk of suffering sudden cardiac death, heart failure, and stroke. This illness is characterized by the excessive deposition of collagen among healthy myocardium cells. This situation, medically known as fibrosis, creates effective conduction obstacles in the electrical path of the myocardium and, when severe enough, it can appear as additional peaks or notches in the QRS, clinically termed fragmentation. Nowadays, fragmentation detection is performed by visual inspection, but the fragmented QRS can be confused with the noise present in the electrocardiogram (ECG). On the other hand, fibrosis detection is performed by magnetic resonance imaging with late gadolinium enhancement, whose main drawback is its cost in terms of time and money. In this work, we propose two automatic algorithms, one for fragmented QRS detection and another for fibrosis detection. For this purpose, we used four different databases: the subrogated database described in the companion paper, plus three additional ones, one composed of more accurate subrogated ECG signals and two composed of records from real, affected subjects as labeled by expert clinicians. The first real-world database contains fragmented QRS records and the second contains records with fibrosis; both were recorded at Hospital Clínico Universitario Virgen de la Arrixaca (Spain). To analyze the scope of these datasets in depth, we benchmarked several classifiers, such as neural networks, support vector machines (SVM), decision trees, and Gaussian naïve Bayes (NB). For the fragmentation dataset, the best results were 0.94 sensitivity, 0.88 specificity, 0.89 positive predictive value, 0.93 negative predictive value, and 0.91 accuracy, obtained with the SVM with Gaussian kernel. For the fibrosis databases, more limited accuracy was reached, with 0.47 sensitivity, 0.91 specificity, 0.82 positive predictive value, 0.66 negative predictive value, and 0.70 accuracy, obtained with the Gaussian NB. Nevertheless, this is the first time that fibrosis detection has been attempted automatically from ECG postprocessing, paving the way towards improved algorithms and methods. We can therefore conclude that the proposed techniques could offer clinicians a valuable support tool for both fragmentation and fibrosis diagnosis.

1. Introduction

The heart is the principal element of the circulatory system and is divided into four chambers: the right and left atria, at the upper part of the heart, and the right and left ventricles, at the lower part. The right side of the heart receives deoxygenated blood from the whole body and carries it to the lungs to be oxygenated, whereas the left side receives the oxygenated blood from the lungs and sends it to the whole body. This process is driven by the heartbeat, which originates in the sinoatrial node, a small mass of tissue located in the right atrium. This node acts as a pacemaker, creating the basic electrical impulse that travels to the atrioventricular node, a mass of tissue located between the atria and the ventricles. In this node, the impulse is delayed, allowing blood to flow from the atria to the ventricles. Finally, the impulse is conducted to the His bundle, where it divides into two paths in order to reach both ventricles simultaneously [1]. As a representation of the real electrical activity and operational behavior of the heart, the electrocardiogram (ECG) has proven to be the most widely used tool for fast diagnosis of heart function. This test consists of several electrodes attached to the patient's chest and limbs, which record different projections of the cardiac electrical activity. It allows clinicians to easily detect a wide range of illnesses, such as rhythm abnormalities like tachycardia or bradycardia, electrical conduction abnormalities like bundle branch blocks, or markers of sudden cardiac death, among others [2,3].
In this work, we address fibrosis, which is the appearance of non-conductive (fibrous) tissue patches among the normal myocardium tissue. These patches act as obstacles to the normal electrical conduction of the myocardium and increase the risk of suffering different kinds of arrhythmias [4], including those that cause sudden cardiac death. On the other hand, recent research [5,6,7] suggests that the presence of fragmentation, an ECG feature manifesting as a number of extra peaks and deflections in the QRS complex, is related to the presence of myocardial fibrosis. Nowadays, fragmentation is detected by visual inspection of the ECG, a method with two main drawbacks: several different definitions of this condition coexist in the literature, and the inter-observer error is high, because this characteristic can be confused with noise or artifacts in the ECG. Fibrosis, in turn, is detected by magnetic resonance imaging with late gadolinium enhancement (MRI-LGE), which is an expensive test in terms of time and money. Therefore, this work has two main objectives: the development of an algorithm allowing the systematic detection of fragmentation in the ECG, and the evaluation of the detection power of these very same techniques to identify the visually unnoticeable effects of fibrosis on digital records.
In the companion paper [8], we scrutinized the behavior of two frequently used multivariate transforms, principal component analysis (PCA) and independent component analysis (ICA), over a fragmented subrogated model of the ECG. That work proved that these transforms can enhance the presence of fragmentation waves in the computed components. In the present work, we extend the use of these transformations to compute features that can feed a classifier for suitable fragmentation and fibrosis detection.
This paper is structured as follows: Section 2 reviews the literature available on fragmentation and fibrosis detection in digital ECG records; Section 3 describes the four databases used in this work, as well as the different proposed methods; Section 4 presents the results of each proposed method over the previously described databases; finally, Section 5 presents the conclusions together with suggested future work in this area.

2. Background

Very limited literature has been published on this topic, especially regarding the automatic detection of singularities such as fragmentation, and almost no paper on the systematic automatic detection of fibrosis using the ECG was found in our search of precedents. In particular, and to the best of our knowledge, there are just four main works available on fragmentation detection. In Reference [9], the authors proposed a method based on the discrete wavelet transform (DWT). The algorithm first presents a signal accommodation stage consisting of denoising and detrending. Then, the DWT is computed using the Haar wavelet and the resulting coefficients are interpolated. Finally, fragmentation is detected and classified according to the values of the coefficients close to the zero-crossing points. This method was tested over 31 records from the PTB database from Physionet, achieving 0.90 sensitivity and 0.90 specificity.
In Reference [10], the authors presented an algorithm based on intrinsic time-scale decomposition (ITD). First, the algorithm computes the first four ITD components of the ECG. Second, the ECG is delineated according to the second and third ITD components. Then, the fragmentation index is computed by averaging, over the QRS, the ratio between the wave speed and the absolute amplitude of the half wave, both obtained from the first ITD component. Finally, the algorithm was tested over the records of the PTB Physionet database, limited to the cases meeting the following criteria: QRS duration lower than 120 ms, and noise level, computed as the standard deviation of the first 80 ms of the record, less than 0.02 mV. Results showed a 0.96 area under the receiver operating characteristic curve.
In Reference [11], the authors proposed an algorithm based on the stationary wavelet transform (SWT). In this case, the ECG is detrended using median filters. Then, fiducial point detection is performed based on zero-crossing techniques. Finally, fragmented QRS are detected by computing the SWT with the Haar wavelet for each beat, finding the zero-crossings of the transformed signal, and applying certain morphology-based rules. Results over 51 patients from the University Hospital of Southampton showed 0.90 accuracy.
In Reference [12], the authors proposed a method to detect fragmentation. Their algorithm first presents a pre-processing stage where the signal is band-pass filtered and the baseline wander is removed. Afterwards, the beats are detected and delineated using variational mode decomposition (VMD). Then, ten features are computed from the signal and from the VMD output, namely: the average slopes of the rising and falling flanks of the QRS; the values of the linear fit of the averaged slopes; the peaks and main frequencies of the 3rd, 4th, and 5th VMD components; and the number of peaks of the QRS. Finally, a number of classifiers were tested using these features over a private database of 616 records labeled by 5 experts. The final algorithm exhibited 0.86 sensitivity, 0.89 specificity, and 0.88 accuracy.
With regard to fibrosis, and as mentioned earlier, to our knowledge there is no published paper devoted to automatic detection of fibrosis. Hence, this would be a new research line pioneered by the present paper.
In this work, we used linear and non-linear classifiers to detect both fragmentation and fibrosis. We selected classifiers that have previously shown good performance in similar biosignal processing fields: the support vector machine (SVM), used for vasovagal syncope detection [13]; the K-nearest neighbours (KNN), applied in beat classification tasks [14]; the multilayer perceptron (MLP), used for arrhythmia classification in implantable cardioverter defibrillators [15]; the naïve Bayes (NB), also applied in arrhythmia classification tasks [16]; and the decision trees (DT), used for lung cancer detection [17].

3. Materials and Methods

This section is divided into three main parts describing the databases, the classifiers, and the feature spaces. The first subsection describes in detail the four databases used: the subrogated fragmented database (Sfrag-DB), the subrogated wide-fragmented database (SWfrag-DB), the fragmented database (FHCM-DB), and the fibrosis database (HCM-DB). The second subsection summarizes the principles of the classifiers applied in this research, namely, SVM, MLP, KNN, DT, and Gaussian NB. The last subsection describes a systematic approach to the different possible input spaces and the signal processing applied at this stage, such as the previous normalization or the construction of the features themselves.

3.1. Databases

As mentioned above, in this work we used four different databases, namely, Sfrag-DB, SWfrag-DB, FHCM-DB, and HCM-DB. Each of them includes both affected and control recordings so that the selected learning techniques can be applied and benchmarked.
The Sfrag-DB was created using control records (from healthy subjects) and modifying them by the synthetic incorporation of fragmented waves. The added component was generated according to
$F(t) = A \cdot \sin\left(\dfrac{\pi n t}{w}\right)$
where A is a random number between 1% and 30% of the main peak amplitude, t is the time vector with values from 0 to 2w, w is a random number between 4 and 24 ms representing the fragmentation duration, and n is the number of semi-cycles of the sinusoid. The database was developed with this wide range of parameter values so as to cover virtually any conceivable composition of the fragmentation: (i) from almost invisible amplitudes to ones exceeding what is physiologically plausible; (ii) from low frequencies to very high ones; (iii) from a fraction of a wavelength to several of them. This sinusoid was applied to the first 150 of 418 control ECG recorded from students of Universidad Católica San Antonio (Murcia, Spain). These ECG were recorded with an ELI 350 from MortaraTM, a device with 500 Hz sampling frequency and 5 μV resolution. The demographic data of this ECG population were a men-to-women ratio of 314/104 and age of 23.1 ± 4.3 years. The fragmented positive-to-negative ratio was 150/150. Examples of a control and a fragmented record from this database can be seen in Figure 1a,b.
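As an illustration, the following Python sketch generates one such fragmentation wave under the parameter ranges stated above; the function name is ours, and the range of the number of semi-cycles n, which the text does not bound explicitly, is an assumption.

```python
import numpy as np

def fragmentation_wave(peak_amp, fs=500, rng=None):
    """Minimal sketch of the Sfrag-DB wave F(t) = A * sin(pi * n * t / w).

    peak_amp: amplitude of the main QRS peak (same units as the ECG).
    fs: sampling frequency in Hz (500 Hz in all databases used here).
    """
    rng = rng or np.random.default_rng()
    A = peak_amp * rng.uniform(0.01, 0.30)   # A: 1%-30% of the main peak
    w = rng.uniform(0.004, 0.024)            # w: fragmentation duration, 4-24 ms
    n = rng.integers(1, 5)                   # n: semi-cycles (assumed range 1-4)
    t = np.arange(0.0, 2 * w, 1.0 / fs)      # t: time vector from 0 to 2w
    return A * np.sin(np.pi * n * t / w)

# The wave would then be added onto the QRS of a control beat, e.g.:
# beat[qrs_start:qrs_start + len(f)] += f, with f = fragmentation_wave(peak)
```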
The SWfrag-DB was created to simulate the effect of wide misconduction areas in the heart. Hence, we used the same control database as before, but this time synthetically fragmented with the following sinusoid,
$F(t) = A \cdot \sin(2 \pi f t)$
where A is a random number between 1% and 30% of the main peak amplitude, t is the time vector with values from 0 to 2w, w is a random number between 20 and 40 ms representing the fragmentation duration, and f is a random number from 45 to 75 representing the frequency of the sinusoid in Hertz. An example of a record from this database can be seen in Figure 1c.
The FHCM-DB was created using 80 standard 12-lead ECG records from patients with a positive HCM diagnosis. For the development of this database, the presence of fragmentation was identified by two independent clinical reviewers from Hospital Clínico Universitario Virgen de la Arrixaca (Murcia, Spain). This batch was selected from a larger set of 225 cases. The demographic data of the whole set were 47.0 ± 15.9 years and a men-to-women ratio of 152/73. These ECG were recorded with a GE-MAC 5000 from General ElectricTM at 500 Hz sampling rate and 4.88 μV resolution. The fragmented positive-to-negative ratio was 42/38. An example of a record from this database can be seen in Figure 1d.
The HCM-DB comprises the fibrosis cases and is a subset of 300 records from a larger database of 1750 analyzed HCM patients, including patients positively diagnosed with fibrosis using MRI-LGE. These records were collected, analyzed, and diagnosed by the expert clinicians in our group, from Hospital Clínico Universitario Virgen de la Arrixaca (Murcia, Spain). The recording device was a PageWriter TC30 from PhilipsTM at 500 Hz sampling rate and 5 μV resolution. The demographic data of the whole database were 55.3 ± 16.4 years and a men-to-women ratio of 651/1099. The fibrosis positive-to-negative ratio was 150/150. An example of a record from this database can be seen in Figure 1e.

3.2. Classifiers

In this work, we used five different classifiers and evaluated the capability of each of them to detect fragmentation or fibrosis. The implemented classifiers were the SVM, MLP, KNN, DT, and NB, and all of them were developed and tested according to the recommendations in Reference [18].
Before explaining each classifier, some shared mathematical concepts need to be summarized. First, our observations are defined in terms of their features, numeric values that characterize the observations, and they are labelled according to the class to which they belong. Moreover, hereafter we work with binary problems. Therefore, from a mathematical point of view, our dataset can be defined as
$S = \{(\mathbf{x}_i, y_i);\;\; i = 1, 2, \ldots, N;\;\; \mathbf{x}_i \in \mathbb{R}^n;\;\; y_i \in \{-1, 1\}\}$
where $\mathbf{x}_i$ ($y_i$) is the feature vector (label) of the i-th observation. Second, all of these classifiers provide a final classification function, computed from the training observations $\mathbf{x}_i$, that determines the label of a new unknown observation, denoted $\mathbf{x}$. Third, most of these classifiers follow a training-test scheme and must be trained with a subset of observations to set the internal parameters that define their classification function. Since each observation is defined by its features, the input space is often also called the feature space.
The SVM, proposed in Reference [19], is a classifier with at least three interesting properties: a tractable optimization formulation, tractable complexity control, and flexible non-linear parametrization. The basic idea behind the SVM is the computation of a hyperplane that splits the input space according to the observed classes. For the reader's convenience, some basic cases are explained next. Let us consider a linearly separable binary classification problem in which the classification function is given by
$H: \mathbf{w}^T \mathbf{x} + b = 0, \qquad C(\mathbf{x}) = \mathrm{sign}(\mathbf{w}^T \mathbf{x} + b)$
where H is the separating hyperplane, $\mathbf{w}$ is the normal vector to this plane, and b is the bias or independent term. According to this equation, there are infinitely many separating hyperplanes, therefore we must define the optimal one. It is advantageously defined as the hyperplane that maximizes the distance to the closest observations of both classes, which are named the support vectors. After working through the optimization, the equations for $\mathbf{w}$ and b can be written as
$\mathbf{w} = \sum_{i=1}^{N} \alpha_i y_i \mathbf{x}_i, \qquad b = y_k - \mathbf{w}^T \mathbf{x}_k \;\text{ for }\; \alpha_k \neq 0, \qquad C(\mathbf{x}) = \mathrm{sign}\left(\sum_{i=1}^{N} \alpha_i y_i \mathbf{x}_i^T \mathbf{x} + b\right)$
where $\alpha_i$ is the i-th Lagrange multiplier, which is greater than zero whenever the i-th observation is a support vector. As can be seen, the classification function $C(\mathbf{x})$ is computed from the inner products between the support vectors, i.e., those observations with $\alpha_k \neq 0$, and the observation $\mathbf{x}$ that we want to classify. This solution is known as hard-margin classification, and an example of this kind of problem can be seen in Figure 2a, where the highlighted points are the support vectors, the solid line represents the optimal separating hyperplane, and the dotted lines represent the optimized margin.
As can be seen, the SVM provides the classifier equation as a function of the inner products of the training observations. Note that, in the real world, the previous solution cannot be computed as cleanly as expected, due to classification errors and noisy observations. To deal with this, a new element must be added to the formulation. This new approach considers slack variables; the resulting solution is known as soft-margin, and the new equations for $\mathbf{w}$ and b can be expressed as
$\mathbf{w} = \sum_{i=1}^{N} \alpha_i y_i \mathbf{x}_i, \qquad b = y_k (1 - \zeta_k) - \mathbf{w}^T \mathbf{x}_k \;\text{ for }\; \alpha_k \neq 0, \qquad C(\mathbf{x}) = \mathrm{sign}\left(\sum_{i=1}^{N} \alpha_i y_i \mathbf{x}_i^T \mathbf{x} + b\right)$
where $\zeta_k$ is the slack variable associated with the k-th support vector. As we have seen, the hard- and soft-margin approaches lead to a similar classification function, which depends only on the inner products between the support vectors and the observation to be classified. An example of this kind of problem can be seen in Figure 2b, where the highlighted points are the support vectors, the solid black line represents the optimal separating hyperplane, the solid colored lines represent the slack variables, and the dotted lines represent the optimized margin.
The SVM is a powerful tool for linear problems, yet a classical question is how it deals with non-linear ones. As seen above, the SVM can solve linearly separable problems, hence the way to handle non-linear problems is to map the feature space into a higher-dimensional space where the problem becomes linearly separable; this mapping is performed by a function $\phi(\mathbf{x})$. The problem is then to find a function $\phi(\mathbf{x})$ that satisfies the following equation,
$K(\mathbf{x}_i, \mathbf{x}) = \phi(\mathbf{x}_i)^T \phi(\mathbf{x})$
where K is the kernel function, which represents the inner product in the higher-dimensional space. Since finding a $\phi(\cdot)$ that satisfies the previous equation can be difficult, we can instead apply Mercer's theorem, which allows any symmetric positive semi-definite function to be used as a kernel. An example of the use of a kernel function to solve a non-linear problem is shown in Figure 2c, where the highlighted points are the support vectors, the solid line represents the optimal separating hyperplane, and the dotted lines represent the optimized margin. The kernel functions used in this work are described hereafter. First, the linear kernel is the inner product, computed according to
$K(\mathbf{x}_i, \mathbf{x}) = \mathbf{x}_i^T \mathbf{x}$
where $\mathbf{x}_i$ is the i-th observation of the training set and $\mathbf{x}$ is a new observation. Similarly, the Gaussian kernel maps the input space and performs the product in the feature space according to
$K(\mathbf{x}_i, \mathbf{x}) = e^{-\frac{\|\mathbf{x}_i - \mathbf{x}\|^2}{2\sigma^2}}$
where $\mathbf{x}_i$ is the i-th observation of the training set, $\mathbf{x}$ is a new observation, and $\sigma$ is the width of the Gaussian, given as an SVM input parameter.
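As a minimal sketch of how these two kernels can be used, assuming the scikit-learn implementations recommended in Reference [18] and toy data standing in for the real feature matrices (note that scikit-learn parametrizes the Gaussian kernel through gamma rather than σ):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, NuSVC

# Toy stand-in for the real feature matrices described in Section 3.3.
X, y = make_classification(n_samples=300, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# C-SVM with the linear kernel K(xi, x) = xi^T x.
linear_svm = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)

# nu-SVM with the Gaussian kernel K(xi, x) = exp(-||xi - x||^2 / (2 sigma^2));
# scikit-learn uses exp(-gamma * ||xi - x||^2), hence gamma = 1 / (2 sigma^2).
sigma = 1.0
gaussian_svm = NuSVC(nu=0.5, kernel="rbf", gamma=1.0 / (2 * sigma**2)).fit(X_tr, y_tr)

print(linear_svm.score(X_te, y_te), gaussian_svm.score(X_te, y_te))
```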
The second classifier is the MLP [20]. This model is inspired by biological neural systems and is structured as several interconnected perceptrons, or neurons, arranged in layers. Each neuron is composed of four parts: input weights, which assign importance to the inputs; a bias, which acts as a threshold; an adder, which sums all the weighted inputs and the bias; and an activation function, which produces the output of the neuron. Mathematically, the output of a single neuron can be expressed as
$t(\mathbf{x}_i) = g\left(\sum_{j=1}^{P} w_j x_{ij} + w_0\right)$
where $g(\cdot)$ is the activation function, P is the number of inputs, $w_j$ are the weights associated with the inputs, $x_{ij}$ is the j-th feature of the i-th observation, and $w_0$ is the bias. The most common activation functions are the linear function, the step function, and the sigmoid function, defined respectively as
$g(a) = a, \qquad g(a) = \begin{cases} 0 & \text{if } a < 0 \\ 1 & \text{if } a \geq 0 \end{cases}, \qquad g(a) = \dfrac{1}{1 + e^{-a}}$
As said before, these neural networks are formed by aggregating neurons into layers. For a neural network composed of L layers with $P_l$ neurons per layer, the classification function can be written as
$C(\mathbf{x}_i) = \mathrm{sign}\left(g\left(\sum_{j=1}^{P_{L-1}} w_j^L \, o_j^{L-1} + w_0^L\right)\right), \qquad o^l = g\left(\sum_{j=1}^{P_{l-1}} w_j^l \, o_j^{l-1} + w_0^l\right)$
where g is the activation function, $w_j^L$ is the weight associated with the j-th input of the output layer, and $o^l$, which is a function of $\mathbf{x}_i$, is the output of the l-th layer. An example of an MLP neural network can be seen in Figure 3.
To use the MLP as a classifier, a training stage is needed, in which the bias and weights of every perceptron in the classifier are adjusted. The algorithm used to perform this training stage consists of several steps: (i) all the weights and biases are initialized; (ii) the outputs are computed for each observation of the training subset; (iii) the squared error is computed for these observations; (iv) the generalized delta rule is used to modify the weights and biases; (v) steps (ii)-(iv) are repeated until all the training observations have been used; (vi) the total error of the training iteration is computed; (vii) if the total error is less than or equal to a threshold, the training is finished; otherwise, steps (ii)-(vi) are repeated until the threshold is satisfied or the maximum number of iterations is reached.
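A minimal sketch of this training scheme, assuming scikit-learn's MLPClassifier [18], where tol plays the role of the error threshold and max_iter bounds the number of iterations; the layer sizes are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=40, random_state=0)

# Gradient-based training stops when the loss improvement falls below tol
# (the error threshold) or when max_iter iterations have been performed.
mlp = MLPClassifier(hidden_layer_sizes=(20, 10),  # two hidden layers (illustrative)
                    activation="logistic",        # sigmoid g(a) = 1 / (1 + e^-a)
                    solver="sgd",                 # gradient (delta-rule style) updates
                    tol=1e-4,
                    max_iter=500,
                    random_state=0)
mlp.fit(X, y)
print(mlp.score(X, y))
```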
Another option is the NB classifier [20], which is based on Bayes' theorem; the word naïve refers to the assumption of statistical independence among the variables. The NB classifies using the probability of belonging to each class. From a mathematical point of view, the probability that an observation modelled by its feature vector $\mathbf{x}_i$ belongs to class c can be written as
$P(c \mid \mathbf{x}_i) = \dfrac{P(\mathbf{x}_i \mid c) \cdot P(c)}{P(\mathbf{x}_i)}$
The previous equation is hard to evaluate directly, but by assuming statistical independence among the features, we can rewrite it as
$P(c \mid \mathbf{x}_i) \propto P(x_{i1} \mid c) \cdot P(x_{i2} \mid c) \cdot P(x_{i3} \mid c) \cdots P(x_{in} \mid c) \cdot P(c)$
where P(c) is the prior probability of the class, a constant that can be computed as $1/\#\mathrm{classes}$ or as the relative frequency of the class in the training data, and $P(x_{ij} \mid c)$ can be modelled as a Gaussian function, according to
$P(x_{ij} \mid c) = \dfrac{1}{\sqrt{2\pi\sigma_c^2}} \, e^{-\frac{(x_{ij} - \mu_c)^2}{2\sigma_c^2}}$
where $\sigma_c^2$ is the variance of the j-th feature within class c, computed from the training data, $\mu_c$ is the mean of the j-th feature within class c, also computed from the training data, and $x_{ij}$ is the value of the j-th feature of the test observation. Finally, the decision rule can be written as
$C(\mathbf{x}_i) = \underset{k}{\arg\max} \left( P(c_k) \prod_{j=1}^{n} P(x_{ij} \mid c_k) \right)$
where $P(c_k)$ is the prior probability of the k-th class and $x_{ij}$ is the j-th feature of the i-th observation.
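A minimal sketch, assuming scikit-learn's GaussianNB [18], which estimates the per-class priors and the per-class, per-feature Gaussian parameters from the training data and applies the argmax rule above:

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=40, random_state=0)

# Fitting estimates the priors P(c_k) and the per-class Gaussian parameters
# (mu_c, sigma_c^2) per feature; prediction applies the argmax decision rule.
nb = GaussianNB().fit(X, y)
print(nb.predict(X[:5]))         # class labels C(x_i)
print(nb.predict_proba(X[:5]))   # posterior probabilities P(c_k | x_i)
```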
Another widely used classifier is the KNN [21], which belongs to the so-called lazy learning classifiers: no explicit model is built from the whole training dataset; instead, the decision is computed directly from the subset of training observations closest to each test observation. The classification process follows these steps: first, the training dataset is mapped to the feature space; second, when a new observation is to be classified, the KNN computes its distance to every training point; finally, the classification is made taking into account the labels of the k nearest points. Several distance definitions can be used, but the Euclidean distance is the most common, expressed mathematically as
$d(\mathbf{x}_i, \mathbf{x}_j) = \sqrt{\sum_{k=1}^{D} (x_{ik} - x_{jk})^2}$
where D is the number of dimensions of the observations and $\mathbf{x}_i$ and $\mathbf{x}_j$ are the points whose distance we need to know. Moreover, there is a collection of algorithms to label an observation according to its neighbours; one of the most used performs an inverse-distance-weighted poll among the k nearest neighbours, that is,
$\mathrm{Vote}_c(\mathbf{x}_i) = \sum_{k=1}^{K} \dfrac{1}{d(\mathbf{x}_k, \mathbf{x}_i)} \, \alpha_k$
where $\mathrm{Vote}_c(\mathbf{x}_i)$ is the voting value of the c-th class for observation $\mathbf{x}_i$, $\mathbf{x}_i$ is the observation that we want to classify, $\mathbf{x}_k$ is the k-th nearest neighbour, and $\alpha_k$ is a constant that is 1 when $\mathbf{x}_k$ belongs to the c-th class and zero otherwise. The final classification assigns the label that reaches the highest value of $\mathrm{Vote}_c(\mathbf{x}_i)$.
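A minimal sketch, assuming scikit-learn's KNeighborsClassifier [18]; setting weights="distance" reproduces the inverse-distance voting rule just described:

```python
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=40, random_state=0)

# Each of the K nearest neighbours contributes 1/d(x_k, x_i) to its own class;
# the predicted label is the class with the highest accumulated vote.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean", weights="distance")
knn.fit(X, y)
print(knn.predict(X[:5]))
```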
The DT are classification algorithms that recursively split the input space [20]. A DT is composed of three different kinds of nodes, namely, the root node (which has no inputs), the leaf nodes (which have one input and no outputs), and the internal nodes (which have one input and several outputs). Each internal node divides the input space into two or more semi-spaces according to a discrete function of the features. Many algorithms for building a binary DT are based on Hunt's algorithm, a simple procedure with the following steps: first, if all the observations in a node belong to the same class, the node is marked as a leaf; second, if not, the node is marked as internal, the observations are divided using attribute test conditions, and the process iterates until all the observations are classified. Modern DT-building algorithms are very efficient and implement stopping criteria to prevent overfitting.
As said before, we need attribute test conditions, which define how the observations in a node are split according to the values of their features, as shown in Figure 4. In order to select the best split, we need a variable that accounts for the impurity, a measure of the mixture of classes present in a node, of both the parent and the child nodes. This variable is known as the gain, $\Delta$, and it is computed as
$\Delta = I(\mathrm{parent}) - \sum_{j=1}^{K} \dfrac{|S_{v_j}|}{|S|} \, I(v_j)$
where $I(\mathrm{parent})$ is the impurity measured at the parent node, K is the number of child nodes, $|S_{v_j}|$ is the number of observations in child node $v_j$, $|S|$ is the number of observations in the parent node, and $I(v_j)$ is the impurity of child node $v_j$. The impurity can be computed in different ways; in this work, we used the Gini index,
$\mathrm{Gini}(t) = 1 - \sum_{i=0}^{C-1} [p(i \mid t)]^2$
where C is the number of classes in the subset, i indexes the classes, and t is the selected node.
The main drawback of DT is their tendency to grow too much. Hence, growth-limiting techniques must be applied, such as pruning algorithms that remove the branches carrying less information, or setting maximum values for the parameters that control the growth, such as the maximum depth or the minimum number of observations per leaf.
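A minimal sketch, assuming scikit-learn's DecisionTreeClassifier [18], with the Gini impurity and the two growth-control parameters just mentioned:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=40, random_state=0)

# criterion="gini" selects splits by the Gini gain defined above; max_depth
# and min_samples_leaf bound the growth of the tree to limit overfitting.
dt = DecisionTreeClassifier(criterion="gini", max_depth=5, min_samples_leaf=10,
                            random_state=0)
dt.fit(X, y)
print(dt.get_depth())
```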

3.3. Processing and Features Spaces

The data processing comprises several steps over our ECG data: (i) low-order band-pass filtering from 0.5 to 100 Hz, in order to reduce out-of-band noise without endangering the characteristics of fragmentation (fibrosis); (ii) notch filtering to reduce power-line interference; (iii) baseline wander removal, based on cubic spline interpolation [22]; (iv) a QRS-detection stage, based on a tailored version of the Pan-Tompkins algorithm developed by our group [23]; (v) a beat-template stage, where beat templates are created by averaging the well-correlated QRS complexes, which reduces the total amount of noise present in the ECG without noticeable distortion [24].
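A minimal sketch of the first two steps, assuming SciPy filters; the helper name and the filter orders are ours, and the notch is set at the 50 Hz European mains frequency. Steps (iii)-(v) follow the cited references [22,23,24]:

```python
from scipy.signal import butter, filtfilt, iirnotch

def prefilter_lead(ecg, fs=500, mains=50.0):
    """Band-pass (0.5-100 Hz) and power-line notch filtering of one ECG lead."""
    b, a = butter(2, [0.5, 100.0], btype="bandpass", fs=fs)   # step (i)
    ecg = filtfilt(b, a, ecg)                                 # zero-phase filtering
    bn, an = iirnotch(mains, Q=30.0, fs=fs)                   # step (ii)
    return filtfilt(bn, an, ecg)
```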
Once the signals have been properly conditioned and key variables such as the fiducial points have been carefully isolated, the environment is set for the feature space analysis. The feature-space calculation is divided into three further steps: (i) a transformation stage, where the signal is processed according to the multivariate transformations presented in the companion paper [8]; (ii) a signal selection stage, where a window of 140 ms or 700 ms around the ECG main peak is selected and normalization is applied if required; and (iii) feature computation, carried out in two possible ways, namely, statistical features and signal-sample-based features.
First, in order to broadly analyze the signal, statistics (stats) were computed in four separate settings, namely, over the whole signal, over the 25 Hz low-pass filtered signal, over the 25 to 75 Hz band-pass filtered signal, and over the 75 Hz high-pass filtered signal. For each of them we computed: (i) the average; (ii) the standard deviation, which represents the power of the signal; (iii) the skewness of the windowed signal, which accounts for the position of the maximum of the windowed signal distribution; (iv) the kurtosis, which represents the shape of the windowed signal distribution; and (v) the number of maxima present in the windowed signal. All of these features were computed according to Reference [25]. On the other hand, the signal-sample-based features were three, namely, the aggregation of the ECG samples across all components (Sum), the aggregation of the squared values of the ECG samples across all components (PowSum), and the concatenation of the samples of each ECG component into one single vector (Concat).
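A minimal sketch of the per-band statistical features, assuming SciPy; the band edges follow the text, while the filter order and the maxima-counting approach are our assumptions:

```python
import numpy as np
from scipy.signal import argrelmax, butter, filtfilt
from scipy.stats import kurtosis, skew

def band_statistics(x, fs=500):
    """Mean, std, skewness, kurtosis and number of maxima for the whole
    signal and its <25 Hz, 25-75 Hz and >75 Hz filtered versions."""
    def five_stats(v):
        return [v.mean(), v.std(), skew(v), kurtosis(v), len(argrelmax(v)[0])]

    b1, a1 = butter(2, 25.0, btype="lowpass", fs=fs)
    b2, a2 = butter(2, [25.0, 75.0], btype="bandpass", fs=fs)
    b3, a3 = butter(2, 75.0, btype="highpass", fs=fs)
    bands = [x, filtfilt(b1, a1, x), filtfilt(b2, a2, x), filtfilt(b3, a3, x)]
    return np.concatenate([five_stats(v) for v in bands])  # 20 features per lead
```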

4. Results

The presentation of our results is divided into four subsections. The first subsection is devoted to fragmentation detection based on linear models, describing the experiments with linear classifiers over the two fragmented databases. The second subsection, focused on feature relevance, shows the statistical relevance of the best linear classifier and introduces a new subrogated model that takes these results into account. The third subsection presents the benchmarking of linear classifiers on the fibrosis database, additionally addressing the statistical relevance of the best performing features. Finally, the fourth subsection scrutinizes non-linear detection, presenting in detail the results of non-linear classifiers on the fibrosis and fragmented databases.
In all cases, we benchmarked the classifiers using the merit figures usually applied for clinical purposes, namely, sensitivity (Sen), specificity (Spe), positive predictive value (PPV), negative predictive value (NPV), and accuracy (Acc). The interpretation of these parameters is extensively described in the literature [26], and they are calculated as
$Sen = \dfrac{TP}{TP+FN}, \quad Spe = \dfrac{TN}{TN+FP}, \quad PPV = \dfrac{TP}{TP+FP}, \quad NPV = \dfrac{TN}{TN+FN}, \quad Acc = \dfrac{TP+TN}{TP+TN+FN+FP}$
where TP (TN) is the number of records marked as pathological (non-pathological) both by the clinician and by the classifier, and FP (FN) is the number of records marked as non-pathological (pathological) by the clinicians but as pathological (non-pathological) by the classifier. As mentioned in Section 3, segment preprocessing was applied prior to classification. Segments were initially set to 700 ms around the main peak; this segment is referred to as the non-normalized beat before normalization and the normalized beat after it. With the same criteria, segments of 140 ms around the main peak are hereafter called the non-normalized QRS when normalization has not been applied and the normalized QRS once it has been statistically normalized.
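A minimal sketch of these merit figures, assuming binary labels and scikit-learn's confusion_matrix [18]:

```python
from sklearn.metrics import confusion_matrix

def merit_figures(y_true, y_pred):
    """Sen, Spe, PPV, NPV and Acc from the binary confusion matrix."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {"Sen": tp / (tp + fn),
            "Spe": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn),
            "Acc": (tp + tn) / (tp + tn + fn + fp)}

# Example: merit_figures([1, 0, 1, 1], [1, 0, 0, 1])
# -> {'Sen': 0.67, 'Spe': 1.0, 'PPV': 1.0, 'NPV': 0.5, 'Acc': 0.75}
```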

4.1. Fragmentation Detection Based on Linear Models

The main goal of this experiment is to determine which linear classifier best detects fragmented activity in the ECG. The scheme of the tested methods is as follows. A linear classifier is first selected from the two main implementations of the SVM, namely, C-SVM and ν-SVM (the interested reader can see Reference [18] for details); the linear SVM was selected for its good behavior in high-dimensional spaces [27]. Then, the ECG segments of interest are computed as mentioned above. Finally, the input space for classification is computed according to the methods described in Section 3.

4.2. Features Relevance and New Fragmented Subrogated Model

In this section, we tested the statistical relevance of the features used in the best fragmentation detection method described in the previous section. To do so, we performed a bootstrap resampling analysis, which allows us to estimate the probability density function of a parameter by computing it over resampled subsets of the population [28], setting the number of resamplings B to 100. When the confidence interval of an SVM weight does not overlap zero, the associated feature is identified as relevant for fragmentation detection. On the other hand, for the methods that use the signal itself as SVM input, we searched for the frequency bands related to fragmentation; to do so, we treated the SVM as a transversal linear filter whose weights are the filter coefficients.
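A minimal sketch of this bootstrap relevance test, assuming toy data and scikit-learn's linear ν-SVM as in the earlier sketches:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Refit a linear nu-SVM on B = 100 resampled training sets and collect the
# weight vectors; a feature is relevant if its 95% CI does not overlap zero.
B, rng = 100, np.random.default_rng(0)
weights = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, len(y), size=len(y))      # resampling with replacement
    weights[b] = NuSVC(kernel="linear", nu=0.5).fit(X[idx], y[idx]).coef_[0]

lo, hi = np.percentile(weights, [2.5, 97.5], axis=0)
print(np.flatnonzero((lo > 0) | (hi < 0)))          # indices of relevant features
```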
The first presented classifier is the ν-SVM, combined with the principal components of the normalized QRS from the independent leads computed over the Sfrag-DB. Figure 5a shows the 95% confidence interval for each SVM weight associated with the features described in Section 3; each panel corresponds to one principal component. As can be seen in the first panel, corresponding to the first principal component, the relevant features were 11, 12, 13, and 14, which correspond to the mean, standard deviation, kurtosis, and skewness of the band-pass filtered component, respectively. In the second panel, corresponding to the second principal component, the relevant features were the skewness of the component; the skewness and the number of extrema of the band-pass filtered component; and the mean, skewness, and number of extrema of the high-pass filtered component. In the third panel, corresponding to the third principal component, the relevant feature was the mean of the high-pass filtered component. In the fifth panel, corresponding to the fifth principal component, the relevant features were the number of extrema of the band-pass filtered component and the mean of the high-pass filtered component. In the sixth panel, corresponding to the sixth principal component, the relevant feature was the number of extrema of the high-pass filtered component. In the seventh panel, corresponding to the seventh principal component, the relevant features were the kurtosis, skewness, and number of extrema of the low-pass filtered component; the mean, standard deviation, kurtosis, and skewness of the band-pass and high-pass filtered components; and the number of extrema of the band-pass filtered component. In the last panel, corresponding to the eighth principal component, the relevant features were the mean, standard deviation, kurtosis, skewness, and number of extrema of the band-pass and high-pass filtered components. As can be observed, the most relevant principal components are the last two, which is coherent with the results of the companion paper [8], where we showed that using the detailed components, those with lower variance, enhances fragmentation detection.
In Figure 5b we can observe the confidence intervals of the ν-SVM weights when it is fed with the summation of regionalized principal component power from FHCM-DB. The behavior of the coefficients is nearly periodic, with a frequency around 10 Hz, as seen in the upper panel around feature 15 and in the spectrum representation in the lower panel. The fragmentation waves in these records are visible to a clinician; for this reason, we hypothesize that the appearance of these frequency bands is related to the minimum size of the fibrotic mass in the myocardium that originates fragmentation waves, since for smaller fibrotic masses the fragmentation waves become invisible to the clinician. Figure 5c shows the weights of the ν-SVM fed with the concatenation of the non-normalized QRS from the eight independent leads of FHCM-DB. The confidence intervals of the SVM weights are shown in columns 1 and 3, and the frequency behavior associated with these weights in columns 2 and 4. In this case, the periodic behavior is clearer than in the previous experiment, where the frequency behavior over Sfrag-DB was presented, and the main frequency is around 60 Hz.
According to the previous results, we developed a new subrogated database, SWfrag-DB, which takes these results into account in order to improve the fit between our model and real-world fragmentation. In the next experiment, we show the behavior of the proposed detection methods over this new database. Figure 6a shows the Acc for the ν-SVM applied over SWfrag-DB with each signal selection. The best results for this database are achieved using the SVM with the statistics computed over the principal components of the non-normalized QRS and with the statistics computed over the last three principal components of the normalized QRS; in both cases, the achieved Acc is 0.817. These values can be seen in Panels (1,1) and (2,1). The best results are obtained when the statistics are used as SVM input; on the other hand, the worst scenarios correspond to using the aggregation, the aggregation of power, or the concatenation of components. Figure 6b shows the Acc for the C-SVM applied over SWfrag-DB with each signal selection. As can be observed, the best result for this database is achieved using as SVM features the statistics computed over the principal components of the non-normalized QRS, with 0.808 Acc; this value can be seen in Panel (1,1). Again, the best results are obtained when the statistics are used as SVM features, whereas the worst cases appear when the signal aggregation, the aggregation of power, or the concatenation of components is used as input.
These results are similar to those obtained for Sfrag-DB and are coherent with the results presented in Reference [8], where the fragmented wave information was shown to be isolated in the detailed components. Therefore, we must use the information from all the components.
Table 1 shows the best results for each signal selection and classifier over SWfrag-DB. The Acc for every combination is above 0.70. The maximum Acc achieved was 0.82; this value was reached using the ν-SVM fed with the statistics computed over the three last principal components of the non-normalized QRS, and using the ν-SVM fed with the statistics computed over the principal components of the normalized QRS. In general, the use of the QRS exhibits good performance, and the worst output appeared when ICA was computed over non-normalized beats. These results reinforce the conclusions of Reference [8], where we argued that PCA is better than ICA for fragmentation detection because it is more stable in terms of component output, which is relevant for obtaining good results from the SVM.

4.3. Fibrosis Detection Based on Linear Models and Statistical Relevance

In this subsection, we cover the results of the experiments applied over HCM-DB. As mentioned previously, and according to clinical criteria, it is not possible to visually identify this condition in the ECG, and MRI-LGE is currently required for diagnosis. Hence, given that fibrosis is physiopathologically related to misconduction in the heart, we tried to apply the same algorithmic approach. The scheme of the tested methods was as follows. A linear classifier was first selected from both SVM algorithms; then, the ECG segment of interest was selected from among the normalized beat, normalized QRS, non-normalized beat, and non-normalized QRS; finally, the input space of the classifiers was computed according to Section 3.
Figure 7a shows the Acc for the ν-SVM applied over HCM-DB with each signal selection. The best result for this database is achieved using as SVM input the statistics computed over the QRS taken from the 12 leads, reaching Acc = 0.68, see Panel (2,1). Figure 7b shows the Acc for the C-SVM applied over HCM-DB with each signal selection. As can be observed, the best result for this database is achieved using as SVM features the concatenation of the non-normalized QRS from the 8 independent leads, achieving Acc = 0.683, see Panel (2,3). Table 2 shows the combinations that achieved the best results for each signal selection and classifier. As can be observed, the minimum Acc among the combinations was this time much lower, reaching only 0.64. The maximum Acc was 0.68, achieved by the ν-SVM fed with the statistics computed over the non-normalized QRS of the 12 leads, and by the concatenation of the non-normalized QRS of the 8 independent leads. The use of the non-normalized QRS exhibits good performance, but the results are much lower than in the fragmentation analysis. These results are relevant because they prove the existence of fibrosis markers in the standard ECG.
The goal of the second part of this experiment is to describe the statistical relevance of the features of the best fibrosis detection methods described in the previous experiment. Hence, we performed a bootstrap analysis with B equal to 100, after which we extracted the relevant features by selecting those whose weight confidence interval did not overlap zero.
Figure 8 shows the confidence intervals of the features used in the linear model with the best Acc over the HCM-DB. In this case, no feature has a confidence interval that avoids overlapping zero; hence, none of the computed features is statistically significant in our linear model. This indicates that the relation between the used features and fibrosis is not linear, and therefore we need to explore non-linear classifiers to enhance fibrosis detection.

4.4. Fragmentation and Fibrosis Detection Based on Non-Linear Models

In this subsection, we experiment with the same databases and processing techniques, but now using non-linear classifiers. The benchmarked classifiers were the C-SVM with Gaussian kernel, the ν-SVM with Gaussian kernel, the MLP, the KNN, the DT, and the Gaussian NB.
Table 3a shows the best algorithm of each class tested over the Sfrag-DB. As can be seen, the best algorithm was the NB using the statistics computed over the principal components of the non-normalized QRS, reaching Acc = 0.83. The accuracy depended on the classifier used; sorted from best to worst performers, they are: the NB; the C-SVM and DT; the MLP and ν-SVM; and the KNN. These results indicate statistical independence among the features used for fragmentation detection. Table 3b shows the best algorithm of each class tested over the SWfrag-DB. The best algorithm was again the NB using the statistics computed over the principal components of the non-normalized QRS, which achieved Acc = 0.83. Sorted by Acc, the classifiers are: the NB; the C-SVM; the ν-SVM, MLP, and DT; and the KNN. In these cases, the most used features were the statistics computed over the principal components. These results again support the statistical independence among the features used for wide-fragmentation detection. Table 3c shows the best algorithm of each class tested over the FHCM-DB. The best algorithm is the C-SVM fed with the aggregation of the normalized QRS from the eight independent leads, which achieved Acc = 0.91. In this case, all the classifiers reached a significant positive Acc. Sorted by Acc, they are: the C- and ν-SVM; the KNN; the MLP; the DT; and the NB. Unlike the subrogated databases used before, Sfrag-DB and SWfrag-DB, in the case of real fragmented records the results exhibit a non-linear dependency between the used features and the existence of fragmentation.
The main goal of the second part of this experiment is to determine whether non-linear classifiers exhibit better results than linear models in fibrosis detection. The classifiers tested were the C-SVM with Gaussian kernel, the ν-SVM with Gaussian kernel, the MLP, the KNN, the DT, and the Gaussian NB.
Table 4 shows the behavior of the non-linear classifiers over HCM-DB. The best performance is achieved by the NB fed with the statistics computed from the eight independent leads, which presents Acc = 0.70. In this case, the overall performance is low compared with the other databases; moreover, the statistics computed over the non-normalized signals present the best performance. According to their accuracy, the classifiers can be divided into three groups: the NB, which achieves the best results; the C-SVM, KNN, ν-SVM, and MLP; and finally the DT, achieving the worst results. These results suggest statistical independence among the features used for fibrosis detection.
As seen in Table 5, the non-linear methods outperformed the linear methods in the detection of both fragmentation and fibrosis.

5. Discussion

The main goals of this study are two: first, the development of an algorithm allowing clinicians to automatically detect fragmentation in the twelve-lead ECG; second, the creation of an algorithm allowing clinicians to detect fibrosis early from the twelve-lead ECG. In accordance with the results obtained in Reference [8], where multivariate transforms such as PCA and ICA were used to enhance the presence of fragmentation in the ECG, we computed several features that model this situation. In the case of fibrosis, since both conditions are similar, we followed the same strategy. The developed algorithms are based on linear and non-linear classifiers, namely, the linear SVM, the SVM with Gaussian kernels, the KNN, the MLP, the DT, and the Gaussian NB. The main advantage of linear methods is the interpretability of their results, but in general their performance is lower than that of non-linear methods. On the other hand, non-linear methods improve on the results of linear methods, but they lose the interpretability of their results.
Looking at the results for Sfrag-DB, the subrogated model that widely extends the range of frequencies of the synthetic fragmentation, the differences between linear and non-linear models are quite relevant. Results showed the NB as the best performing method, in terms of Acc, when applied over the statistics computed on the PCA output over a narrow window around the QRS and using only the eight independent leads. This method achieved 0.79 Sen, 0.77 Spe, 0.96 PPV, 0.80 NPV, and 0.87 Acc. It is relevant to mention that, although the figures are notable, we are working with a synthetic formulation of the fragmentation wave; this model can help to characterize the behavior of the proposed algorithms over well-known signals before applying them to real cases.
Moving on to the second database, SWfrag-DB, a subrogated model that articulates a more restrictive range of frequencies to better fit real fragmented signals, the results did not move far from the previous ones. In the best scenario, the results were 0.76 Sen, 0.90 Spe, 0.89 PPV, 0.77 NPV, and 0.83 Acc. As with Sfrag-DB, and according to the best classifier, the NB, the results point to statistical independence among the features selected for fragmentation detection.
The third database was FHCM-DB, which contains the real fragmented records. The results showed much better figures than the subrogated models; this may be because the set of free parameters in the synthetic model appears too extensive compared to the effective real case. The best method in this case was the C-SVM with Gaussian kernel over the aggregation of the normalized QRS from the eight independent leads. The final figures were 0.94 Sen, 0.88 Spe, 0.89 PPV, 0.93 NPV, and 0.91 Acc. The obtained values compare favorably with the results published in the literature, slightly improving on all previously reported results, and they prove a non-linear dependency between the selected features and the presence of fragmentation in the records.
A new and challenging situation arose when the clinicians proposed to evaluate these very same techniques over the HCM-DB, which contains records from patients affected by fibrosis. As mentioned earlier in this paper, very few references on this area are found in the signal processing literature. This lack of references is explained by the fact that fibrosis is rarely visible in the ECG, and diagnosis always requires an MRI-LGE; thus, academic researchers have not targeted this condition when working on ECG processing. We hypothesize in this work that, although fibrosis might not be visible in the ECG, electrophysiological components related to the misconducting cells could nevertheless be present in the signal. The results obtained are relevant, not because of the merit figures, which are much lower than in the case of fragmentation, but as a first clinical reference towards better diagnosis. The best proposed method combines the NB classifier with the statistics computed from the real QRS of the eight independent leads, and the merit figures obtained were 0.47 Sen, 0.91 Spe, 0.82 PPV, 0.66 NPV, and 0.70 Acc. Even though the results were modest in terms of Sen, the presented algorithm achieved a high Spe. This is especially relevant since, at present, no algorithm exists that allows clinicians to evaluate the presence of fibrosis based on the ECG.
As a general conclusion, we can state that the algorithms and techniques presented in this paper open a wide range of possible applications, such as the development of risk assessment tools for the early diagnosis of misconduction-related conditions, namely fibrosis, fragmentation, and so on.
We also think that this paper opens a new opportunity in the ECG processing field to analyze and eventually develop improved algorithms for fibrosis detection that enhance the Sen presented in this work. Additionally, other transformation techniques not included in this paper, such as empirical mode decomposition or wavelets, could also be evaluated to improve the results presented here, which could end up creating a clinically validated score for misconduction-related conditions.

Author Contributions

Conceptualization, F.-J.G.-B., A.G.-A. and J.-L.R.-Á.; Data curation, M.-E.S.-A., J.-R.G.-B. and J.M.-S.; Formal analysis, F.-J.G.-B., A.G.-A. and J.-L.R.-Á.; Investigation, F.-M.M.-M.; Software, F.-M.M.-M.; Supervision, F.-J.G.-B., A.G.-A. and J.-L.R.-Á.; Writing—original draft, F.-M.M.-M., F.-J.G.-B., M.-E.S.-A., J.-R.G.-B., J.M.-S., A.G.-A. and J.-L.R.-Á.; Writing—review & editing, F.-M.M.-M., F.-J.G.-B., M.-E.S.-A., J.-R.G.-B., J.M.-S., A.G.-A. and J.-L.R.-Á.

Funding

This work was supported by Research Grants FINALE (TEC2016-75161-C2-1-R and TEC2016-75161-C2-2-R) from Ministerio de Economía, Industria y Competitividad – Agencia Estatal de Investigación, cofunded by FEDER funds, and by KERMES (TEC2016-81900-REDT) from the Spanish Government.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Example of each kind of record used in this work: (a) control record from the subrogated fragmented database (Sfrag-DB); (b) subrogated fragmented record from Sfrag-DB; (c) subrogated wide-fragmented record from SWfrag-DB; (d) fragmented record from FHCM-DB; and (e) record affected by fibrosis from HCM-DB.
Figure 2. Examples of different support vector machine (SVM) problems. Highlighted observations are the support vectors, the solid line is the separating hyperplane, and the dashed lines represent the optimized margin. (a) Linear SVM applied to a linearly separable problem. (b) Linear SVM applied to a linearly non-separable problem. (c) Non-linearly separable problem solved using a kernelized SVM.
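To make the margin and kernel notions in Figure 2 concrete, here is a minimal scikit-learn sketch on synthetic 2-D data; the RBF kernel, the `nu` value, and the toy dataset are illustrative assumptions, not the settings tuned in this paper.

```python
from sklearn.svm import SVC, NuSVC
from sklearn.datasets import make_moons

# Non-linearly separable toy problem, as in panel (c)
X, y = make_moons(n_samples=200, noise=0.15, random_state=0)

linear_svm = SVC(kernel="linear", C=1.0).fit(X, y)  # panels (a)-(b)
kernel_svm = NuSVC(kernel="rbf", nu=0.2).fit(X, y)  # panel (c)

print("linear accuracy:", linear_svm.score(X, y))
print("RBF accuracy:", kernel_svm.score(X, y))
print("support vectors (RBF):", kernel_svm.support_vectors_.shape[0])
```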
Figure 3. Example of an MLP, composed of three layers: an input layer, where the observation features are weighted, summed, and mapped by an activation function; a hidden layer, where the outputs of the input layer are weighted, summed, and mapped by an activation function; and an output layer, where the outputs of the hidden layer are weighted, summed, and mapped by an activation function in order to classify the observation.
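The layer-by-layer computation described in the caption can be written out directly; this is a minimal sketch of one forward pass through a 10-5-1 MLP, where the random weights, the layer sizes, and the logistic activation are hypothetical choices for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.standard_normal(10)  # one observation with 10 features

# Hypothetical weights and biases for a 10-5-1 MLP
W_hid, b_hid = rng.standard_normal((5, 10)), rng.standard_normal(5)
W_out, b_out = rng.standard_normal((1, 5)), rng.standard_normal(1)

h = sigmoid(W_hid @ x + b_hid)       # hidden layer: weight, sum, activate
y_hat = sigmoid(W_out @ h + b_out)   # output layer: weight, sum, activate
print("class-1 probability:", float(y_hat))
```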
Figure 4. Examples of attribute test conditions. (a) Non-binary attribute test condition. (b) Binary attribute test condition.
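A compact way to see binary attribute tests in practice is a shallow scikit-learn decision tree; the synthetic data, the depth limit, and the feature names below are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Each internal node applies a binary test "feature <= threshold",
# as in panel (b) of Figure 4
print(export_text(tree, feature_names=[f"f{i}" for i in range(4)]))
```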
Figure 5. Statistical relevance of SVM coefficients. (a) 95% confidence intervals of the ν-SVM weights for each statistic computed over the principal components of the normalized QRS for Sfrag-DB. Each panel shows the statistics for one principal component; the statistics are described in Section 3. (b) ν-SVM weights for FHCM-DB. Top: 95% confidence intervals of the SVM weights for the power summation of regionalized principal components. Bottom: Fourier transform of the coefficients, viewed as a transversal filter. (c) ν-SVM weights for FHCM-DB, computed from the concatenation of non-normalized QRS complexes extracted from eight independent leads. The first and third columns show the 95% confidence interval of the SVM weight for each independent lead; the second and fourth columns show the impulse response obtained by using the weights as filter coefficients. Red lines correspond to weights whose confidence interval does not overlap zero.
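Confidence intervals like those in Figure 5 can be estimated by bootstrap resampling of the training set; the following is a minimal sketch, assuming a feature matrix `X` and labels `y`, where the number of resamples and the linear ν-SVM settings are illustrative choices rather than the configuration used in this paper.

```python
import numpy as np
from sklearn.svm import NuSVC
from sklearn.utils import resample
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=8, random_state=0)

B = 500  # number of bootstrap resamples (illustrative)
weights = np.empty((B, X.shape[1]))
for b in range(B):
    Xb, yb = resample(X, y, random_state=b)  # sample with replacement
    weights[b] = NuSVC(kernel="linear", nu=0.3).fit(Xb, yb).coef_.ravel()

lo, hi = np.percentile(weights, [2.5, 97.5], axis=0)  # 95% CI per weight
significant = (lo > 0) | (hi < 0)  # CI does not overlap zero
print("weights with CI excluding zero:", np.flatnonzero(significant))
```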
Figure 6. Accuracy of the linear SVM applied over SWfrag-DB (see Figure 5 for details). (a) ν-SVM implementation. (b) C-SVM implementation.
Figure 7. Accuracy of the linear SVM applied over HCM-DB (see Figure 5 for details). (a) ν-SVM implementation. (b) C-SVM implementation.
Figure 8. 95% confidence intervals of the ν-SVM weights for each statistic computed over the non-normalized QRS from the 12 leads for HCM-DB. Each panel shows the statistics for one principal component. Blue lines correspond to weights whose confidence interval overlaps zero.
Table 1. Summary of linear classifiers over SWfrag-DB.

| Classifier | Input Space | Signal Selection | Sen | Spe | PPV | NPV | Acc |
|---|---|---|---|---|---|---|---|
| NuSVM | Statistics + 3PCA | Non-normalized Beat | 0.70 | 0.75 | 0.76 | 0.69 | 0.73 |
| NuSVM | Statistics + 3PCA | Normalized Beat | 0.70 | 0.81 | 0.80 | 0.71 | 0.75 |
| NuSVM | Statistics + 3PCA | Non-normalized QRS | 0.76 | 0.88 | 0.87 | 0.77 | 0.82 |
| NuSVM | Statistics + PCA | Normalized QRS | 0.75 | 0.90 | 0.89 | 0.76 | 0.82 |
| CSVM | Statistics + ICA | Non-normalized Beat | 0.78 | 0.63 | 0.70 | 0.72 | 0.71 |
| CSVM | Statistics + PCA | Normalized Beat | 0.65 | 0.88 | 0.85 | 0.69 | 0.76 |
| CSVM | Statistics + ICA | Non-normalized QRS | 0.76 | 0.79 | 0.80 | 0.75 | 0.78 |
| CSVM | Statistics + PCA | Normalized QRS | 0.70 | 0.86 | 0.85 | 0.72 | 0.78 |
Table 2. Summary of linear classifiers over the fibrosis database.

| Classifier | Input Space | Signal Selection | Sen | Spe | PPV | NPV | Acc |
|---|---|---|---|---|---|---|---|
| NuSVM | Statistics + 8-Ld | Non-normalized Beat | 0.55 | 0.73 | 0.67 | 0.63 | 0.64 |
| NuSVM | Concat + 8-Ld | Normalized Beat | 0.74 | 0.58 | 0.63 | 0.70 | 0.66 |
| NuSVM | Statistics + 12-Ld | Non-normalized QRS | 0.65 | 0.71 | 0.67 | 0.69 | 0.68 |
| NuSVM | Statistics + 12-Ld | Normalized QRS | 0.70 | 0.65 | 0.69 | 0.71 | 0.68 |
| CSVM | Sum + 8-Ld | Non-normalized Beat | 0.59 | 0.75 | 0.69 | 0.65 | 0.67 |
| CSVM | Concat + 8-Ld | Normalized Beat | 0.67 | 0.62 | 0.63 | 0.66 | 0.64 |
| CSVM | Concat + 8-Ld | Non-normalized QRS | 0.61 | 0.75 | 0.69 | 0.68 | 0.68 |
| CSVM | Sum + 8-Ld | Normalized QRS | 0.54 | 0.71 | 0.63 | 0.63 | 0.63 |
Table 3. Accuracy for the best classification method of each proposed non-linear classifier applied over the fragmentation databases.

(a) Sfrag-DB

| Classifier | Signal Selection | Input Space | Accuracy |
|---|---|---|---|
| C-SVM | Normalized QRS | Stats + PCA | 0.79 |
| Nu-SVM | Non-normalized Beat | Stats + ICA | 0.78 |
| KNN | Non-normalized QRS | Stats + PCA | 0.65 |
| MLP | Non-normalized QRS | Stats + ICA | 0.78 |
| DT | Normalized QRS | Stats + 8 Ld | 0.81 |
| NB | Non-normalized QRS | Stats + PCA | 0.83 |

(b) SWfrag-DB

| Classifier | Signal Selection | Input Space | Accuracy |
|---|---|---|---|
| C-SVM | Non-normalized QRS | Stats + 3PCA | 0.77 |
| Nu-SVM | Non-normalized QRS | Stats + 3PCA | 0.78 |
| KNN | Normalized QRS | Stats + PCA | 0.72 |
| MLP | Non-normalized QRS | Stats + ICA | 0.78 |
| DT | Non-normalized QRS | Stats + PCA | 0.78 |
| NB | Non-normalized QRS | Stats + PCA | 0.83 |

(c) FHCM-DB

| Classifier | Signal Selection | Input Space | Accuracy |
|---|---|---|---|
| C-SVM | Normalized QRS | Sum + 8 Ld | 0.91 |
| Nu-SVM | Normalized QRS | Sum + 8 Ld | 0.91 |
| KNN | Non-normalized QRS | Stats + 8 Ld | 0.79 |
| MLP | Normalized Beat | Stats + 8 Ld | 0.78 |
| DT | Non-normalized QRS | Stats + 12 Ld | 0.79 |
| NB | Normalized QRS | Stats + 8 Ld | 0.79 |
Table 4. Accuracy values for the best pairs of signal selection and input space for each classifier over the HCM database.

| Classifier | Signal Selection | Input Space | Accuracy |
|---|---|---|---|
| C-SVM | Normalized QRS | Sum + 8 Ld | 0.68 |
| ν-SVM | Normalized QRS | Sum + 8 Ld | 0.63 |
| KNN | Non-normalized QRS | Stats + 8 Ld | 0.65 |
| MLP | Non-normalized QRS | Stats + 12 Ld | 0.63 |
| DT | Non-normalized QRS | Stats + 8 Ld | 0.61 |
| NB | Non-normalized QRS | Stats + 8 Ld | 0.70 |
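The kind of benchmark summarized in Tables 3 and 4 can be reproduced with a simple loop over candidate classifiers; this is a minimal sketch, assuming a precomputed feature matrix, using cross-validated accuracy and leaving all hyperparameters at illustrative defaults rather than the tuned values of this study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC, NuSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Stand-in for a real (observations x features) matrix and its labels
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

classifiers = {
    "C-SVM": SVC(kernel="rbf"),
    "Nu-SVM": NuSVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "NB": GaussianNB(),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2f}")
```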
Table 5. Summarized results for the best linear and non-linear classifiers for each database.

| Database | Classifier | Signal Selection | Input Space | Sen | Spe | PPV | NPV | Acc |
|---|---|---|---|---|---|---|---|---|
| Sfrag | Linear ν-SVM | Normalized QRS | Stats + PCA | 0.730 | 0.772 | 0.780 | 0.721 | 0.750 |
| Sfrag | NB | Non-normalized QRS | Stats + PCA | 0.778 | 0.965 | 0.961 | 0.797 | 0.867 |
| SWfrag | Linear ν-SVM | Normalized QRS | Stats + PCA | 0.698 | 0.860 | 0.846 | 0.721 | 0.775 |
| SWfrag | NB | Non-normalized QRS | Stats + PCA | 0.762 | 0.895 | 0.889 | 0.773 | 0.825 |
| FHCM | Linear ν-SVM | Normalized Beat | PowSum + RegPCA | 0.733 | 0.941 | 0.917 | 0.800 | 0.844 |
| FHCM | RBF C-SVM | Normalized QRS | Sum + 8 Ld | 0.941 | 0.875 | 0.889 | 0.933 | 0.909 |
| HCM | Linear ν-SVM | Non-normalized QRS | Stats + 12 Ld | 0.649 | 0.714 | 0.673 | 0.692 | 0.683 |
| HCM | NB | Non-normalized QRS | Stats + 8 Ld | 0.474 | 0.905 | 0.818 | 0.655 | 0.700 |
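For reference, the five figures of merit reported in these tables (Sen, Spe, PPV, NPV, Acc) follow directly from the confusion matrix; the sketch below computes them for a pair of hypothetical label vectors, which stand in for real annotations and predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])  # hypothetical labels
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])  # hypothetical predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sen = tp / (tp + fn)  # sensitivity (recall)
spe = tn / (tn + fp)  # specificity
ppv = tp / (tp + fp)  # positive predictive value
npv = tn / (tn + fn)  # negative predictive value
acc = (tp + tn) / (tp + tn + fp + fn)
print(f"Sen={sen:.2f} Spe={spe:.2f} PPV={ppv:.2f} NPV={npv:.2f} Acc={acc:.2f}")
```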
