Article

On the Classification of ECG and EEG Signals with Various Degrees of Dimensionality Reduction

Monica Fira, Hariton-Nicolae Costin and Liviu Goraș
1 Institute of Computer Science, Romanian Academy, 700481 Iasi, Romania
2 Faculty of Medical Bioengineering, Grigore T. Popa University of Medicine and Pharmacy of Iasi, 700115 Iasi, Romania
3 Faculty of Electronics, Telecommunications & Information Technology, Gheorghe Asachi Technical University of Iasi, 700050 Iasi, Romania
* Author to whom correspondence should be addressed.
Biosensors 2021, 11(5), 161; https://doi.org/10.3390/bios11050161
Submission received: 27 March 2021 / Revised: 7 May 2021 / Accepted: 14 May 2021 / Published: 19 May 2021
(This article belongs to the Special Issue Intelligent Biosignal Processing in Wearable and Implantable Sensors)

Abstract

Classification performances for several classes of electrocardiographic (ECG) and electroencephalographic (EEG) signals subjected to various degrees of dimensionality reduction are investigated. Results obtained with various classification methods are given and discussed. Three dimensionality reduction techniques are considered: Laplacian eigenmaps (LE), locality preserving projections (LPP) and compressed sensing (CS). The first two are related to manifold learning, while the third addresses signal acquisition and reconstruction from random projections under the assumption of signal sparsity. Our aim is to evaluate the benefits and drawbacks of these methods and to establish to what extent the reduced representations remain useful. The effect of dimensionality reduction was assessed by means of the classification rates obtained for the processed biosignals in the new spaces. In addition, the classification accuracies obtained on the initial input data were compared with the corresponding accuracies in the new spaces using different classifiers.

1. Introduction

Manifold learning [1] is a family of dimensionality reduction methods based on the fact that, for many classes of high-dimensional signals, the essential information lies in much lower-dimensional spaces/manifolds. This happens because the process generating the data has few degrees of freedom, so the transformed data can be confined to a low-dimensional subspace. Thus, even though such data cannot be visualized in the initial space, when embedded in two or three dimensions they can be easily displayed and reveal, when possible, some inherent structure. Therefore, in order to visualize the data, their dimension has to be reduced to one, two or three [2].
One possibility to obtain dimensionality reduction, as well as compression, is to project the data onto a reduced number of random signals. However, when using random projections, some significant structure of the data may be lost, since real signals are only approximately sparse and thus cannot be recovered with high accuracy [3].
With respect to geometry preservation, manifold learning techniques can be divided into two classes:
(a) techniques that preserve the local structure: locally linear embedding (LLE), Laplacian eigenmaps (LE), manifold charting (MC) and Hessian locally linear embedding (HLLE); and
(b) techniques that preserve the global structure: isometric mapping (ISOMAP) and diffusion maps.
Several linear manifold learning methods are principal component analysis (PCA), locality preserving projections (LPP) and multidimensional scaling (MDS), while nonlinear ones include Isomap, Hessian eigenmaps, Laplacian eigenmaps, locally linear embedding and diffusion maps. From another point of view, linear dimensionality reduction algorithms such as PCA, independent component analysis (ICA), linear discriminant analysis (LDA) and many others rely on particular criteria to define an "interesting" linear projection of the data [4,5], at the price of possibly missing its nonlinear structure. This is why nonlinear methods are often more powerful. Such algorithms generally consist of the following three steps [6]:
  • a nearest-neighbor search;
  • the definition of distances or affinities between elements;
  • the solution of a generalized eigenproblem that embeds the initial space into a lower-dimensional one.
The two main ingredients for dimensionality reduction are feature selection and feature extraction.
As mentioned above, we discuss three methods for dimensionality reduction: two "standard" ones and a third, CS, which is not specifically designed for this purpose but is nonetheless interesting and useful, as will be shown.
In order to compare the methods, we rely on the fact that a good dimensionality reduction should allow classification rates that remain close to (although usually somewhat lower than) those obtained on the original data.
For testing we used electrocardiographic (ECG) and electroencephalographic (EEG) signals downloaded from Internet databases and compared the outcomes obtained with LE, LPP and CS using several standard classifiers, in order to characterize the compromise between dimensionality reduction and classification performance.
In this paper we analyze how well classifiers perform on signals with various degrees of dimensionality reduction. Thus, we present relevant information for choosing a method according to (a) the adopted degree of dimensionality reduction; (b) requirements such as very low dimensionality (down to 2 or 3 dimensions); and (c) the need for reconstruction. The advantages of each method are summarized in Section 4.

2. Materials and Methods

2.1. Laplacian Eigenmaps—LE

Two related techniques are reported in the literature, in the sense that each consists of three stages, the first two being common. The difference lies in the final stage: one algorithm preserves the local data arrangement, whereas the other finds the optimal directions onto which to project the data into a low-dimensional space so as to preserve the data neighborhoods. These two techniques are Laplacian eigenmaps (LE) and locality preserving projections (LPP). Moreover, for training data, kernel LPP is equivalent to LE.
The basic assumption of the two methods is that the data lie on, or close to, a nonlinear subspace (manifold), and they aim at discovering a low-dimensional model by retaining local characteristics. In LE the local properties are based on preserving the distances between close neighbors.
The initial step of the LE algorithm [7] is to construct an adjacency graph G in which each data point xi is linked to its k nearest neighbors. Two elements are important here, namely, the number of neighbors and the weights of the graph edges, which convey information about the distances between points.
The graph G is constructed so that the weight wij is high if the points are close and small if the nodes are far apart. These weights are computed for all pairs of points xi and xj of the initial space; however, for points outside the k-neighborhood of a given point xm, the weights are set to zero. In addition to the simplest weight assignment rule (one for neighboring points and zero otherwise), a more refined rule is to use the Gaussian kernel [7,8,9]. After the weights have been computed, the low-dimensional representations on the manifold are obtained by minimizing the cost function
\Phi(Y) = \sum_{ij} \| y_i - y_j \|^2 w_{ij},
where large weights wij strongly penalize points that are mapped far apart, so that items that are close in the initial space are represented as close as possible in the new low-dimensional space.
Briefly, the LE algorithm [9] can be sketched in three main steps, namely:
(i.)
Nearest-neighbor search and adjacency graph construction
Choose a number of neighbors k or a distance ε > 0 such that the vicinity of each data point is established: for a k-neighborhood, nodes i and j are linked by an edge if i is among the k nearest neighbors of j or j is among the k nearest neighbors of i. Alternatively, nodes i and j are linked by an edge if \| x_i - x_j \|^2 < \epsilon, where the Euclidean norm is used.
(ii.)
Weighted adjacency matrix (Choosing the weights)
The weights wij of the symmetric (n × n) vicinity matrix are computed as:
w_{ij} = w(x_i, x_j) = \begin{cases} \exp\left( -\dfrac{\| x_i - x_j \|^2}{2\sigma^2} \right), & \text{if } x_j \in N_i; \\ 0, & \text{otherwise,} \end{cases}
according to the graph G that is assumed to be connected.
(iii.)
Eigenmaps
In this stage, the eigenvalues and eigenvectors are calculated for the generalized eigenvector problem,
Lf = λDf,
where D = (dij) is an (n × n) diagonal matrix with
d_{ii} = \sum_{j \in N_i} w_{ij},
and L = D − W is the Laplacian matrix, which may be regarded as an operator on functions defined on the nodes of G.
Finally, the eigenvector f0 corresponding to the eigenvalue 0 is discarded. The next m eigenvectors, associated with the next m eigenvalues in increasing order, are used for embedding into an m-dimensional Euclidean space:
xi → (f1(i), …, fm(i)),
where f0, f1, … are the eigenvectors of the generalized eigenproblem Lf = λDf, ordered by increasing eigenvalue.
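For illustration, a minimal Python sketch of these three steps is given below. It is only an outline under stated assumptions: the neighborhood size k, the kernel width σ and the target dimension m are illustrative values rather than the exact settings of our experiments, and the helper relies on standard NumPy/SciPy/scikit-learn routines instead of our original MATLAB implementation.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, m=3, k=10, sigma=1.0):
    """Sketch of LE: X is an (n_samples, n_features) data matrix."""
    # (i) k-nearest-neighbor adjacency graph, symmetrized (undirected graph G)
    dist = kneighbors_graph(X, n_neighbors=k, mode="distance").toarray()
    dist = np.maximum(dist, dist.T)
    # (ii) Gaussian-kernel weights on existing edges, zero elsewhere
    W = np.where(dist > 0, np.exp(-dist**2 / (2.0 * sigma**2)), 0.0)
    # (iii) generalized eigenproblem L f = lambda D f
    D = np.diag(W.sum(axis=1))
    L = D - W
    eigvals, eigvecs = eigh(L, D)            # eigenvalues in increasing order
    # discard the trivial eigenvector (eigenvalue ~ 0) and keep the next m
    return eigvecs[:, 1:m + 1]

# e.g., embedding the 301-sample ECG beats into a 3-dimensional space:
# Y = laplacian_eigenmaps(ecg_beats, m=3)
```

Note that the embedding is defined only for the points used to build the graph, so for new test signals the computation has to be repeated, a drawback discussed in Section 4.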

2.2. Locality Preserving Projections—LPP

The locality preserving projections (LPP) method is based on the same variational criterion as the LE method and has similar locality preserving properties: the training data are used to learn a projection, and the test samples are then embedded into the low-dimensional space [10].
Therefore, the first two stages of the LPP algorithm are the same as those of LE, while the final stage consists in computing the eigenvectors and eigenvalues of the generalized eigenvector problem:
XLX^T a = λXDX^T a,
in which X is the training data matrix and L, D have the same meaning as before.
Denoting by a0, …, al−1 the column vectors corresponding to the solutions of this eigenproblem, ordered by increasing eigenvalue λ0 < … < λl−1, the mapping is defined as:
x_i \rightarrow y_i = A^T x_i, \quad A = (a_0, a_1, \ldots, a_{l-1}),
in which yi is l-dimensional and A is an (n × l) matrix.
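A minimal sketch of this final LPP stage is given below; it assumes the weight matrix W has already been built with the graph construction from Section 2.1, and the small regularization term is an implementation convenience of this sketch, not part of the original algorithm.

```python
import numpy as np
from scipy.linalg import eigh

def lpp_projection(X, W, l=25):
    """Sketch of the last LPP stage.
    X: (n_features, n_samples) training data matrix,
    W: (n, n) graph weight matrix built as in the LE sketch above."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    left = X @ L @ X.T                       # X L X^T
    right = X @ D @ X.T                      # X D X^T
    right += 1e-6 * np.eye(right.shape[0])   # regularization for numerical stability
    eigvals, eigvecs = eigh(left, right)     # eigenvalues in increasing order
    return eigvecs[:, :l]                    # A = (a_0, ..., a_{l-1}), (n_features x l)

# a new sample x is embedded without any recomputation: y = A.T @ x
```

Unlike LE, the learned matrix A projects new samples directly, which is the practical advantage of LPP noted in Section 4.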

2.3. Compressed Sensing—CS

Compressed sensing is an acquisition technique that requires fewer samples than the Nyquist rate under the hypothesis of signal sparsity [11]. Thus, a signal x can be represented by the projections:
y = \Phi x,
where x \in R^N, y \in R^M is the projection vector and \Phi \in R^{M \times N} is the compressed sensing matrix, whose entries are i.i.d. (independent and identically distributed) random values. In this paper we use the low-dimensional projection vector y for signal classification [12] and not for signal reconstruction.
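A minimal sketch of such a measurement operator is shown below; the dimensions N = 301 and M = 25 correspond to one of the ECG settings used later, while the Gaussian i.i.d. matrix and its scaling are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 301, 25                                    # original / projected dimensions
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # i.i.d. Gaussian sensing matrix

def project(x):
    """Compressed-sensing style measurement y = Phi @ x."""
    return Phi @ x

# a whole data set X of shape (n_samples, N) is projected at once as Y = X @ Phi.T
```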

2.4. Classifier Types

Since many classification methods have been reported in the literature, it is difficult to decide which algorithm is superior to the others. The choice depends on the type of application in which the classifier is incorporated, but also on the specifics of the data used in the application. For example, if the classes are linearly separable, linear classifiers such as logistic regression and Fisher's linear discriminant can outperform complex models such as support vector machines (SVM) and artificial neural networks (ANN), and vice versa [13,14,15].
For the classification of ECG and EEG segments in the original space and in the reduced spaces, several classes of classifiers were used, namely: decision trees, discriminant analysis, naive Bayes, SVM, nearest neighbor and ensembles. Most of these classes have subclasses that were also used. Short descriptions of the main classifiers are given below.
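The experiments reported in Section 3 used the MATLAB implementations of these classifier families; the sketch below shows a rough scikit-learn equivalent of a few of them, only to make the evaluation protocol concrete. The hyperparameters are illustrative and do not reproduce the exact MATLAB presets.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# rough counterparts of the classifier families listed above
classifiers = {
    "Decision Tree":       DecisionTreeClassifier(max_depth=20),
    "Linear Discriminant": LinearDiscriminantAnalysis(),
    "Naive Bayes":         GaussianNB(),
    "Cubic SVM":           SVC(kernel="poly", degree=3),
    "Fine KNN":            KNeighborsClassifier(n_neighbors=1),
    "Bagged Trees":        BaggingClassifier(DecisionTreeClassifier(), n_estimators=30),
}

def compare_classifiers(X, y):
    """5-fold cross-validated accuracy for each classifier family."""
    return {name: cross_val_score(clf, X, y, cv=5).mean()
            for name, clf in classifiers.items()}
```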

2.4.1. Decision Trees

Given data with attributes annotated with classes, a decision tree produces a sequence of rules that can be used to classify new data. It uses an if-then rule set which is mutually exclusive and exhaustive for classification. The rules are learned sequentially from the training data, one at a time. Each time a rule is learned, the tuples covered by the rule are removed. This process continues on the training set until a stopping condition is met.
Advantages: decision trees are easy to understand and to visualize, the data require little preparation, and the method can handle both numerical and categorical data.
Drawback: the method can yield trees that do not generalize well and can be unstable, i.e., small fluctuations in the data may lead to the generation of a completely different tree.

2.4.2. Discriminant Analysis

This is a common first classification method to try since it is fast, accurate and simple to understand. Discriminant analysis is appropriate for large datasets.
The technique assumes that the data of each class follow a Gaussian distribution. In the training stage, the fitting function estimates the parameters of a Gaussian law for every class.

2.4.3. Naive Bayes

This technique is derived from Bayes' theorem and is based on the hypothesis of independence between every pair of attributes. Naive Bayes classifiers perform reasonably well in many real-world circumstances and applications, such as spam filtering, document classification and person recognition. Naive Bayes is simple to apply and favorable outcomes have been obtained in the vast majority of situations. Additionally, it scales easily to large datasets because its training time is linear in the data, rather than requiring the expensive iterative procedures of many other types of classifiers.
Advantages: it usually needs only a small amount of training data to estimate the necessary parameters, and naive Bayes decision making is very fast compared with more complex techniques.
Drawbacks: the main problem with this classifier is the so-called "zero probability problem": if the conditional probability of a certain attribute value is zero, the classifier cannot produce a correct decision. This problem is usually solved by means of a Laplacian estimator.
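A small numerical sketch of the Laplacian (add-one) estimator is given below; the counts are invented purely for illustration.

```python
def laplace_smoothed(count, class_total, n_values, alpha=1.0):
    """Add-alpha (Laplace) estimate of P(attribute value | class); a zero count
    no longer forces the whole naive Bayes product to zero."""
    return (count + alpha) / (class_total + alpha * n_values)

# an attribute value never observed in a class of 50 training samples,
# with 4 possible attribute values:
print(laplace_smoothed(0, 50, 4))   # 1/54 ~ 0.0185 instead of 0
```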

2.4.4. Support Vector Machine—SVM

Support vector machine classifiers treat the training data as points separated into classes by a gap that is, ideally, as wide as possible. New data points are then mapped into the same space and assigned to a class depending on which side of the gap they fall.
In this way an SVM finds the most appropriate hyperplane separating the data points of the two classes, in the sense that this hyperplane has the largest margin between them. In other words, the SVM finds the maximal thickness of the slab parallel to the hyperplane that contains no interior data points [14].
Advantages: the classifier is effective in high-dimensional spaces and uses only a subset of the training data (the support vectors) in the decision function, which makes it memory efficient.
Drawback: the SVM method does not directly provide probability estimates; these are usually obtained with an expensive five-fold cross-validation.
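The margin interpretation can be made concrete with a short sketch on toy data (the two Gaussian blobs below are illustrative, not our ECG/EEG data): for a linear SVM, the width of the empty slab around the separating hyperplane equals 2/||w||.

```python
import numpy as np
from sklearn.svm import SVC

# toy, linearly separable data: two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
w = clf.coef_[0]
margin_width = 2.0 / np.linalg.norm(w)   # thickness of the empty slab around the hyperplane
print(margin_width)
```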

2.4.5. Nearest Neighbor

Neighbor-based classification is a type of lazy learning, as it does not attempt to build a general internal model but simply stores instances of the training data. Classification is computed from a simple majority vote of the k nearest neighbors of each point. The asymptotic error rate is bounded by twice that of the ideal Bayes classifier.
Benefits: the method is easy to apply, robust to noisy training data, and effective if the training set is large.
Drawback: the main issues are the need to choose k and the high computational cost, since the distance from each query point to all the training data must be computed.

2.4.6. Ensembles of Classifiers

An ensemble classifier combines a collection of classifiers in order to achieve better classification performance than any single classifier. The principle behind the ensemble model is that a group of weak learners can be combined to build a strong learner. The resulting performance depends on the choice of the algorithm. Two common techniques for building decision tree ensembles are bagging and boosting.
Bagging (bootstrap aggregation) is applied when the goal is to decrease the variance of a decision tree. The main idea is to create several data subsets from the training sample, chosen randomly with replacement. Each subset is then used to train a decision tree, so we end up with an ensemble of distinct models. The average of the predictions of the different trees is used, which is more robust than a single decision tree.
Boosting is another method of building a combination of classifiers. Here, learners are trained sequentially, with early learners fitting simple models to the data, after which the errors are analyzed. Consecutive trees (on random samples) are then fitted and, at each step, the goal is to reduce the net error of the previous trees.
Another type of ensemble is the ensemble of nearest neighbor classifiers, in which each member of the ensemble uses only a random feature subset and the decisions of these multiple classifiers are combined for the final decision.
Starting from boosted trees, boosting being the most popular decision tree ensemble, random under-sampling boosting (RUSBoost) has been introduced. RUSBoost is particularly effective at classifying imbalanced data, i.e., data in which some classes have many more training samples than others. The method uses N, the number of members of the class with the fewest members in the training data, as the basic unit for sampling: classes with more members are under-sampled by keeping only N data points from each. If there are K classes, during the training stage RUSBoost therefore uses a reduced data set with N data points from each of the K classes, and then performs the re-weighting and ensemble construction of adaptive boosting for multiclass classification [15].
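As an illustration of the idea (not of the exact RUSBoost implementation, which repeats the under-sampling at every boosting iteration), the sketch below builds bagged and boosted trees with scikit-learn and shows a simple random under-sampling step that balances the classes before boosting; all parameter values are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier

bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)                # bagging
boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=50)   # boosting

def random_under_sample(X, y, seed=0):
    """Keep N samples per class, N being the size of the smallest class,
    so that the boosted ensemble sees balanced data."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()
    idx = np.concatenate([rng.choice(np.flatnonzero(y == c), n, replace=False)
                          for c in classes])
    return X[idx], y[idx]

# RUSBoost-like training: under-sample, then boost
# Xb, yb = random_under_sample(X_train, y_train)
# boosted.fit(Xb, yb)
```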

3. Experimental Results and Discussions

3.1. ECG Signals

To analyze the feasibility of dimensionality reduction using the LE, LPP and CS methods, we tested them on 44 ECG records from the MIT-BIH Arrhythmia database, including Holter data (i.e., from wearable acquisition devices), collected at a sampling frequency of 360 Hz with a precision of 11 bits/sample [16]. Taking into account the annotations in the database, seven pathological classes and the class of normal beats were identified. The pathological classes included in this study are atrial premature beat (A), left bundle branch block beat (L), right bundle branch block beat (R), premature ventricular contraction (V), fusion of ventricular and normal beat (F), paced beat (/) and fusion of paced and normal beat (f), besides the class of normal beats (N).
For segmenting the ECG signals we applied the segmentation method presented in a previous paper, namely, segmentation with centered R wave [17]. The method begins with the precise detection of the R wave, which has the maximum amplitude in the ECG. The ECG signals are then split into heartbeat cycles: an ECG cycle starts in the middle of one RR interval and ends in the middle of the following RR interval. The R wave is placed in the center of the cycle by resampling the signal on both sides of R, yielding cycles with the R waveform centered. Thereby, all ECG cycles are defined by 301 samples, with the R wave located at the 150th sample. Figure 1 shows examples of segmented ECG signals belonging to each of the eight pattern categories.
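A sketch of this R-centered segmentation is given below; it assumes the R-peak positions (r_peaks) have already been obtained from a QRS detector, and it uses FFT-based resampling, whereas the exact resampling used in [17] may differ.

```python
import numpy as np
from scipy.signal import resample

def center_beats(ecg, r_peaks, length=301):
    """Cut each beat between the midpoints of adjacent RR intervals and resample
    the two halves so that the R wave lands on sample length // 2 (here 150)."""
    half = length // 2
    beats = []
    for i in range(1, len(r_peaks) - 1):
        start = (r_peaks[i - 1] + r_peaks[i]) // 2              # middle of the previous RR interval
        stop = (r_peaks[i] + r_peaks[i + 1]) // 2               # middle of the next RR interval
        left = resample(ecg[start:r_peaks[i]], half)            # pre-R segment -> 150 samples
        right = resample(ecg[r_peaks[i]:stop], length - half)   # R and post-R -> 151 samples
        beats.append(np.concatenate([left, right]))
    return np.array(beats)                                      # (n_beats, 301), R at sample 150
```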
The constructed database is a collection of 5608 ECG patterns, with 701 patterns for each of the eight considered classes (seven pathological ones and the normal one).
Comparing the ECG behavior in the initial and reduced spaces first requires classifying the R-wave-centered ECG signals in the original space. The work was carried out in the MATLAB® environment (MathWorks, Natick, MA, USA) and we used the following classifiers, each with different versions for tuning their key settings: decision trees (fine, medium and coarse), linear and quadratic discriminant, naive and kernel naive Bayes, support vector machines (linear, quadratic, cubic and Gaussian), k-nearest neighbors (fine, medium, coarse, cosine, cubic and weighted KNN), besides several kinds of classifier ensembles (boosted and bagged trees, discriminant and KNN subspace, and RUSBoosted trees).
Figure 2 and the first column of Table 1 show the classification accuracies for the R-wave-centered ECG signals in the initial space (raw data only). One can observe that good outcomes (over 90% classification accuracy) are obtained with the SVM classifiers (cubic, quadratic and medium Gaussian SVM), fine KNN, and ensemble subspace KNN.
The decision boundaries obtained with the KNN classifier are much more complex than those of the decision trees, which explains the excellent classification obtained with fine KNN. The poor outcomes obtained with naive Bayes, as opposed to KNN, may be explained as follows: the fundamental distinction between the two is that KNN is a discriminative classifier, whereas naive Bayes is a generative one. The fine KNN classifier behaves better because it is optimized locally, so its good results were expected. With an ensemble subspace KNN even better outcomes may be obtained.
In our approach the best accuracy, 95.2%, is achieved with cubic SVM. This figure is valuable because the eight classes studied are not easily distinguishable and are even intertwined.
Table 1 and Figure 3 give the classification outcomes (a) in the original space with 301 samples and (b) for ECG signals with dimensionality reduced by the LE, LPP and CS methods to 2, 3 and 25 dimensions, respectively. We computed the classification accuracies for the 2- and 3-dimensional cases because signals with these dimensionalities can be easily illustrated graphically, which is very helpful for understanding the spatial grouping of the data. The graphic representation is very useful when many classes must be handled and nothing is known about their spatial arrangement. We also calculated the classification rate for a reduction to 25 dimensions, as we considered that a reduction from 301 to 25 dimensions is reasonable both in terms of dimensionality reduction and in terms of classification accuracy.
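The evaluation protocol can be summarized by the sketch below, which uses library stand-ins (SpectralEmbedding for LE and GaussianRandomProjection for CS-like random projections) and a cubic SVM; LPP has no scikit-learn counterpart and would reuse the sketch from Section 2.2. Target dimensions and classifier settings are illustrative.

```python
from sklearn.manifold import SpectralEmbedding
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def score_reduced_spaces(X, y, dims=(2, 3, 25)):
    """Embed the beats into d dimensions and report the cross-validated
    accuracy of a cubic SVM in each reduced space."""
    results = {}
    for d in dims:
        reducers = {
            "LE": SpectralEmbedding(n_components=d, n_neighbors=10),  # Laplacian eigenmaps
            "CS": GaussianRandomProjection(n_components=d),           # random projections
        }
        for name, reducer in reducers.items():
            Xd = reducer.fit_transform(X)
            results[(name, d)] = cross_val_score(SVC(kernel="poly", degree=3),
                                                 Xd, y, cv=5).mean()
    return results
```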
Figure 4 and Table 2 show the results for various space dimensions for the compressed sensing (CS) method. Very poor outcomes are obtained with the coarse decision tree, in the original space as well as in all reduced spaces. Outcomes similar to those of the original space are achieved starting with more than 10 dimensions in the projected space. Additionally, the best outcomes are obtained with SVM classifiers; depending on the degree of dimensionality reduction, they are achieved with cubic SVM or with fine Gaussian SVM, which reach excellent classification rates, close to those of the medium Gaussian SVM. In conclusion, for dimensionality reduction with the CS method, the SVM algorithm is the best suited.
In the original 301-dimensional space the classification accuracy is 95.2%. For reduction to 10 and 25 dimensions, accuracies of 91.7% and 93.4% were obtained, respectively. An interesting aspect that can be remarked in Table 2 (underlined numbers in the original) is that, for dimensionality reduction to 20 or 25, some classifiers give slightly better results than in the initial space. A possible explanation is that dimensionality reduction diminishes the complexity of the classification problem, and thus the classification rate increases.
Figure 5 and Table 3 show the results obtained with LE, both for the initial and for the reduced ECG signals. In the original space the best outcomes are attained with the cubic SVM classifier. In contrast, for very small dimensions (between 2 and 5) of the projected space, the LE algorithm gives very weak outcomes with this classifier. For such very small manifolds, the best results are obtained with the weighted KNN classifier, which can be explained by the preservation of the vicinities at the local level. Likewise, excellent outcomes for very small spaces are obtained with the fine Gaussian SVM classifier. Thus, for these small spaces, the classification of the test data depends strongly on the quality of the classifier; in other words, the classifier has to be able to draw very precise decision boundaries between very close data points. This is the case of the fine Gaussian SVM, whose kernel scale is set to sqrt(number of features)/4.
Nevertheless, for very small spaces such as 2 and 3 dimensions, the Laplacian eigenmaps technique leads to very good classification results (81.5% and 84.5% accuracy, respectively) with the weighted KNN classifier. It should be remembered that the current classification problem is a difficult one, as there are eight categories of ECG signals. We may state that a classification rate only about 10% below that of the original space, for a reduction in size from 301 to 2, is a remarkable result. The exceptional benefit of shrinking to 2 or 3 dimensions is that the input data can be easily visualized graphically, allowing some insight into their spatial arrangement. For dimensionality reductions above 10, it can be observed that some classifiers (results underlined in Table 3 in the original) achieve higher classification accuracy than in the initial space, which is reminiscent of a feature selection algorithm.
Figure 6 and Table 4 show the results of dimensionality reduction with the LPP algorithm. As can be seen, the results are very similar to those achieved with the Laplacian eigenmaps technique, except for very low dimensions (2, 3 and 4), where the classification accuracies are much lower (54%, 70.1% and 77.3%, respectively). For dimensions larger than 5, the classification accuracies are similar to those attained with the Laplacian eigenmaps technique. For dimensions above 20, classification accuracies very close to those in the original space are reached; for example, for 20- and 25-dimensional spaces, accuracies above 95% are achieved with the ensemble subspace KNN classifier.
It has been observed again (underlined numbers in Table 4) that for dimensionality reduction over 10, in some cases improved results have been obtained.
Figure 7 presents the ECG signals with dimensionality reduced to 3D by the three techniques (each color corresponds to a different class) [18]; the great advantage of being able to visualize the data graphically is obvious.
It can be observed that LE leads to better data clustering/spatial separation than the other two methods, for which, even though the data are clustered, overlapping occurs. This is why, for dimensionality reduction to 3D, the classification rate is better for LE than for LPP and CS.
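The 3D views in Figure 7 can be reproduced, in principle, with a plotting helper of the following kind (a sketch; Y stands for any of the three 3-dimensional embeddings and labels for the eight beat classes):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_embedding_3d(Y, labels, title="ECG beats embedded in 3D"):
    """Scatter an (n_samples, 3) embedding, one color per class, as in Figure 7."""
    labels = np.asarray(labels)
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    for c in np.unique(labels):
        pts = Y[labels == c]
        ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=5, label=str(c))
    ax.set_title(title)
    ax.legend()
    plt.show()
```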

3.2. EEG Signals

For testing the dimensionality reduction methods, the EEG signals collected by Hoffmann and collaborators in their laboratory were used; a small database is freely available on the Internet at [19]. It includes EEG signals recorded with a 32-channel configuration and arranged in 942 vectors to be classified, each lasting 1 s [20,21]. The classification task is to detect the P300 waveform from a single EEG trial; this has been used to build a P300-based spelling device for a brain-computer interface (BCI). We used configurations with 23, 8 and 4 channels of the original EEGs for the preprocessing and classification tasks. The P300 spelling paradigm [22] that was used is described below.
One of the first examples of a BCI is the algorithm proposed by Farwell and Donchin [22], which relies on the unconscious decision-making processes expressed via the P300 in order to control a computer. Another example, described in [23], refers to real-time training of a voted perceptron for the classification of EEG data, also for a BCI application.
Returning to the experiments proposed in [22], a (6 × 6) matrix containing the letters of the alphabet and the numbers 1-9 (as in Figure 8) was shown to the subjects on a computer display. The rows and columns of the matrix were flashed in random order for 100 ms, with a 100 ms pause between flashes, i.e., after 12 flashes each row and each column had been intensified once. Two datasets were acquired from every subject. During the first session the subjects were asked to spell the French words "lac", "nuage", "montagne" and "soleil", while for the second recording they had to spell the words "fromage", "chocolat", "pain" and "vin" [21].
As reported in [20], the EEG signals were recorded from channels FP1, FP2, AF3, AF4, F7, F3, FZ, F4, F8, FC1, FC5, FC6, FC2, T7, C3, CZ, C4, T8, CP1, CP5, CP6, CP2, P7, P3, PZ, P4, P8, PO3, PO4, O1, OZ, O2 with a Biosemi Active 2 system (NEUROSPEC AG, Stans, Switzerland) at 2048 Hz. The signals were then referenced to the average of channels O1, OZ, O2, low-pass filtered in the (0…9) Hz band with a 7th-order Butterworth filter, and resampled at 128 Hz. The reference channels and channels T7 and T8 were not used for further EEG processing, as they did not bring significant information for P300 detection. A more detailed description of the experimental work, i.e., EEG acquisition, preprocessing and artifact rejection, is presented in [21].
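A sketch of this preprocessing chain is given below; ref_idx denotes the indices of channels O1, OZ, O2 in the recording, and the zero-phase filtering and polyphase resampling used here are implementation choices of the sketch, not necessarily those of [20,21].

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

def preprocess_eeg(eeg, ref_idx, fs=2048, fs_new=128, cutoff=9.0):
    """eeg: (n_channels, n_samples) raw recording; ref_idx: indices of O1, OZ, O2."""
    # re-reference to the average of the chosen channels
    eeg = eeg - eeg[ref_idx].mean(axis=0, keepdims=True)
    # 7th-order Butterworth low-pass at about 9 Hz (zero-phase filtering in this sketch)
    b, a = butter(7, cutoff / (fs / 2.0), btype="low")
    eeg = filtfilt(b, a, eeg, axis=1)
    # resample from 2048 Hz to 128 Hz (factor 16)
    return resample_poly(eeg, up=1, down=fs // fs_new, axis=1)

# 1 s epochs (128 samples per channel) are then cut around each stimulus onset
# and the retained channels are concatenated into the vectors to be classified
```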
Figure 9 shows the electrode configurations with 4, 8 and 23 channels.
Figure 10 shows the classification results for the different channel configurations. It can be observed that, in general, the 8-channel version gives the best classification results for the original EEG signals. Good results are generally obtained with linear, quadratic and cubic SVM, but the best results are obtained with the medium Gaussian SVM in the 8-channel configuration.
Because the 8-electrode configuration generally offers the best results, in the following we present the results of this configuration for dimensionality reduction with the three analyzed methods. It should be mentioned that the initial EEG signals are segmented according to the applied stimulus into segments of 128 samples, i.e., we consider the space of the initial EEG signals to be 128-dimensional.
Figure 11 and Table 5 show the results for dimensionality reduction with the CS algorithm. There are classifiers for which better results are obtained in a space reduced to 15 dimensions than in the initial space. This is the case of the linear discriminant classifier, for which the classification rate is 77.2% in the original space and 84.6% in a space reduced to 15 dimensions. Additionally, quadratic discriminant and logistic regression offer improved results in all reduced spaces compared with the initial space, and for discriminant subspace ensembles the results in the reduced spaces are generally superior to those in the initial space. These cases, in which better results are obtained in spaces of reduced dimensionality than in the initial space, are an indication that the initial signals actually live in a space of much smaller dimensionality: it is much easier to classify data of small dimension than the same data represented in an artificially large space.
Figure 12 shows the results obtained with the LE algorithm for reducing the dimensionality of the EEG signals in the 8-channel configuration. It can be seen in Table 6 that, as in the case of the CS algorithm, the linear and quadratic discriminant and logistic regression classifiers offer improved classification rates. Additionally, discriminant subspace ensembles and KNN subspace ensembles classify better in the spaces reduced with the LE algorithm. The major difference from the CS method is that, for very small spaces of dimensionality 3 and 5, the results are much better for LE than for CS; hence the utility of the LE algorithm for representing data in 2- and 3-dimensional spaces for better visualization and understanding of the spatial and geometric arrangement of the data.
Figure 13 shows the results obtained with the LPP algorithm for reducing the dimensionality of the EEG signals in the 8-channel configuration. It can be observed in Table 7 that, for all classifiers, the best results are obtained in the initial space. These poor results are obtained both when LPP is applied on each channel separately and the reduced signals are then concatenated, and when the initial EEG signals of the 8 channels are concatenated first and the LPP method is then applied for dimensionality reduction.
Figure 14 shows the EEG signals with dimensionality reduced to 3D by the three techniques. Signals containing the P300 wave are plotted in blue and the others in red. It can be observed that for CS and LPP the two classes overlap, which explains the modest classification results in the 3D case. With LE we obtain a better clustering of the two classes, with the non-P300 signals (red) lying on the left and the P300 ones (blue) on the right. This is why LE leads to better results for 3D than LPP and CS.

4. Conclusions

The aim of the paper was to offer a general view of how well classifiers perform on signals with various degrees of dimensionality reduction.
Regarding the ECG signals, we stress the fact that they were preprocessed by aligning the R wave. Our best results were obtained with SVM and KNN, while for low dimensions (2 or 3) the best outcomes were achieved with LE, with the drawback that the computations have to be repeated for any new signal. Additionally, it was found that, in the case of CS, for more than 10 dimensions the classification rate is close to that obtained in the original space. Similar classification rates were achieved for dimensionality reductions beyond 10 with LPP, whose advantage is that no new calculations are necessary for new test signals. Regarding CS, it is the most computationally advantageous, LE and LPP being much more computationally expensive.
For the EEG signals, the CS and LE algorithms led to results similar to those obtained for the ECG signals. The major difference arises for the LPP algorithm, which gives much weaker results when reducing the dimensionality of the EEG signals. To explain this, we propose two hypotheses. The first is that the LPP algorithm cannot find universally optimal projections for all 8 channels. The second is that the EEG data are located on a manifold whose local and global structure LPP fails to capture simultaneously, a situation encountered, for example, in the case of the Swiss roll manifold.
The main conclusions of this work concern the way dimensionality reduction and classification algorithms can be combined in order to obtain reasonable classification results, even for (very) low dimensions, both for ECG and for a class of EEG signals. The choice of the degree of dimensionality reduction depends on the motivation of the analysis. Thus, if we intend to reconstruct the initial signal, we will adopt CS; if we want visual insight in 2 or 3 dimensions, we will choose LE; while if we want to reduce dimensionality by about ten to twelve times and classify in the reduced space without recomputation for new signals, we will use LPP. However, LPP does not seem to fit the global structure of the EEG signals well, so that between LPP and LE the latter is better.
We assume these methods and outcomes might be extended, within certain limits, to other types of signals as well, yet this idea should be applied with care.

Author Contributions

Conceptualization, M.F., H.-N.C. and L.G.; methodology, M.F.; software, M.F.; validation, M.F., H.-N.C. and L.G.; formal analysis, M.F.; investigation, M.F.; resources, M.F.; data curation, M.F.; writing—original draft preparation, M.F., H.-N.C. and L.G.; writing—review and editing, M.F., H.-N.C. and L.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in [physionet] at [10.1109/51.932724 and 10.1161/01.cir.101.23.e215], reference number [16] and [epfl], reference number [19]. The webpage of the MIT-BIH Arrhythmia Database is “https://www.physionet.org/content/mitdb/1.0.0/” (accessed on 17 May 2021) and “http://mmspg.epfl.ch/cms/page-58322.html” (accessed on 22 May 2017).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mordohai, P.; Medioni, G. Dimensionality Estimation, Manifold Learning and Function Approximation using Tensor Voting. J. Mach. Learn. Res. 2010, 11, 411–450. [Google Scholar]
  2. Boehmke, B.; Greenwell, B.M. Dimension Reduction. In Hands-On Machine Learning with R; Chapman & Hall, CRC Press: Boca Raton, FL, USA, 2019; pp. 343–396. [Google Scholar]
  3. Bingham, E.; Mannila, H. Random projection in dimensionality reduction: Applications to image and text data. In Proceedings of the 7-th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 26–29 August 2001; pp. 245–250. [Google Scholar]
  4. Lee, J.A.; Verleysen, M. Nonlinear Dimensionality Reduction; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  5. Bengio, Y.; Paiement, J.; Vincent, P.; Delalleau, O.; Le Roux, N.; Ouimet, M. Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering. Adv. Neural Inf. Process. Syst. 2004, 16, 177–186. [Google Scholar]
  6. Fodor, I. A Survey of Dimension Reduction Techniques; Technical Report; Center for Applied Scientific Computing, Lawrence Livermore National: Livermore, CA, USA, 2002.
  7. Belkin, M.; Niyogi, P. Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering. Adv. Neural Inf. Process. Syst. 2001, 14, 586–691. [Google Scholar]
  8. Belkin, M. Problems of Learning on Manifolds. Ph.D. Thesis, Department of Mathematics, The University of Chicago, Chicago, IL, USA, August 2003. [Google Scholar]
  9. Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 2003, 15, 1373–1396. [Google Scholar] [CrossRef] [Green Version]
  10. He, X.; Niyogi, P. Locality preserving projections. In Proceedings of the Conference on Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–13 December 2003. [Google Scholar]
  11. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  12. Candès, E.J.; Wakin, M.B. An Introduction to Compressive Sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  13. Duda, R.; Hart, P. Pattern Recognition and Scene Analysis; Wiley Interscience: Hoboken, NJ, USA, 1973. [Google Scholar]
  14. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  15. Alpaydin, E. Introduction to Machine Learning, 4th ed.; MIT Press: Cambridge, MA, USA, 2020. [Google Scholar]
  16. MIT-BIH. Arrhythmia Database. Available online: http://www.physionet.org/physiobank/database/mitdb/ (accessed on 8 January 2021).
  17. Fira, M.; Goraș, L.; Cleju, N.; Barabașa, C. On the classification of compressed sensed signals. In Proceedings of the International Symposium on Signals, Circuits and Systems (ISSCS) 2011, Iasi, Romania, 30 June 2011. [Google Scholar]
  18. Fira, M.; Goraș, L. On Some Methods for Dimensionality Reduction of ECG Signals. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 326–607. [Google Scholar] [CrossRef]
  19. EPFL. Available online: http://mmspg.epfl.ch/cms/page-58322.html (accessed on 22 May 2017).
  20. Hoffmann, U.; Vesin, J.M.; Ebrahimi, T.; Diserens, K. An efficient P300-based brain-computer interface for disabled subjects. J. Neurosci. Methods 2008, 167, 115–125. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Hoffmann, U.; Garcia, G.; Vesin, J.-M.; Diserens, K.; Ebrahimi, T. A Boosting Approach to P300 Detection with Application to Brain-Computer Interfaces. In Proceedings of the IEEE EMBS Conference on Neural Engineering, Arlington, VA, USA, 16–20 March 2005. [Google Scholar]
  22. Farwell, L.A.; Donchin, E. Talking off the top of your head: A mental prosthesis utilizing event-related brain potentials. Electroencephalogr. Clin. Neurophysiol. 1988, 70, 510–523. [Google Scholar] [CrossRef]
  23. Martišius, I.; Šidlauskas, K.; Damaševičius, R. Real-Time Training of Voted Perceptron for Classification of EEG Data. Int. J. Artif. Intell. 2013, 10, 207–217. [Google Scholar]
Figure 1. ECG patterns of the eight pattern classes used.
Figure 2. Classification rate in the original ECG space (centered 301 samples segments).
Figure 3. Classification results with CS, LE, LPP methods for 2, 3 and 25 dimensions, respectively.
Figure 4. Classification results with CS method for dimensionality reduction.
Figure 5. Classification results with LE method for dimensionality reduction.
Figure 6. Classification results with LPP method for dimensionality reduction.
Figure 7. ECG data mapped into a 3-dimensional space with LE, LPP and CS techniques.
Figure 8. Classical P300 spelling paradigm described by Farwell–Donchin (1988).
Figure 9. The electrode configurations with 4, 8 and 23 channels.
Figure 10. Classification results with original EEG signals for configurations with 4, 8 and 23 channels.
Figure 11. Results for the dimensionality reduction with CS algorithm for configurations with 8 channels.
Figure 12. Results for the dimensionality reduction with LE algorithm for configurations with 8 channels.
Figure 13. Results for dimensionality reduction with LPP algorithm for configurations with 8 channels.
Figure 14. EEG data mapped into a 3-dimensional space with LE, LPP and CS techniques.
Table 1. Classification accuracies with CS, LE, LPP algorithms for 2, 3 and 25 dimensions respectively.
Classifier | ECG Original Centered | CS 2 | CS 3 | CS 25 | LE 2 | LE 3 | LE 25 | LPP 2 | LPP 3 | LPP 25
Fine Trees | 83.44 | 49.41 | 55.34 | 79.81 | 76.25 | 77.32 | 86.73 | 54.00 | 66.65 | 81.15
Medium Trees | 71.32 | 45.35 | 48.00 | 69.23 | 71.53 | 68.85 | 79.62 | 52.34 | 60.43 | 67.91
Coarse Trees | 42.83 | 32.21 | 34.41 | 40.32 | 45.64 | 45.64 | 50.67 | 40.85 | 41.54 | 49.75
Linear Discriminant | 76.32 | 24.23 | 33.72 | 73.94 | 34.77 | 38.81 | 77.44 | 30.42 | 35.41 | 73.64
Quadratic Discriminant | 70.00 | 34.00 | 47.53 | 89.77 | 47.34 | 54.54 | 84.22 | 44.41 | 56.24 | 91.51
Naive Bayes | 47.63 | 33.43 | 38.93 | 52.22 | 37.64 | 38.34 | 74.36 | 42.51 | 49.37 | 77.21
Kernel Naive Bayes | 62.53 | 45.94 | 48.8 | 71.85 | 70.34 | 69.95 | 81.74 | 52.54 | 62.26 | 82.64
Linear SVM | 87.34 | 29.52 | 38.9 | 85.14 | 49.08 | 61.37 | 85.62 | 37.52 | 47.72 | 85.92
Quadratic SVM | 95.11 | 44.54 | 54.3 | 94.54 | 43.95 | 59.92 | 90.54 | 44.52 | 64.64 | 94.24
Cubic SVM | 95.24 | 42.72 | 53.00 | 94.50 | 26.10 | 33.00 | 91.20 | 27.10 | 47.92 | 94.24
Fine Gaussian SVM | 87.47 | 51.80 | 62.90 | 87.91 | 75.36 | 78.75 | 90.69 | 54.40 | 70.10 | 61.14
Medium Gaussian SVM | 92.91 | 49.84 | 58.74 | 93.00 | 67.92 | 69.88 | 87.12 | 53.44 | 67.84 | 94.14
Coarse Gaussian SVM | 79.47 | 32.85 | 43.65 | 80.97 | 54.36 | 55.41 | 80.92 | 44.45 | 57.82 | 83.82
Fine KNN | 93.42 | 39.14 | 55.14 | 93.71 | 79.92 | 83.36 | 89.84 | 45.11 | 63.90 | 93.74
Medium KNN | 90.27 | 48.72 | 60.82 | 90.82 | 80.76 | 83.92 | 89.65 | 52.42 | 68.00 | 91.32
Coarse KNN | 77.62 | 50.47 | 57.71 | 77.44 | 74.00 | 75.35 | 80.12 | 53.63 | 65.74 | 78.34
Cosine KNN | 90.54 | 29.64 | 47.15 | 90.74 | 61.25 | 81.42 | 89.55 | 32.80 | 54.62 | 92.76
Cubic KNN | 90.22 | 48.81 | 60.81 | 90.81 | 80.88 | 83.95 | 89.72 | 52.38 | 68.34 | 90.77
Weighted KNN | 91.47 | 43.60 | 59.44 | 92.34 | 81.52 | 84.82 | 90.32 | 48.51 | 67.42 | 92.35
Ensemble Boosted Trees | 78.34 | 45.97 | 49.45 | 76.81 | 72.65 | 70.19 | 82.49 | 53.55 | 61.36 | 77.67
Ensemble Bagged Trees | 91.81 | 43.94 | 59.45 | 90.4 | 80.00 | 83.91 | 90.91 | 48.86 | 68.31 | 91.84
Ensemble Subspace Discriminant | 76.24 | 24.31 | 29.14 | 70.3 | 35 | 38.95 | 76.93 | 30.22 | 34.32 | 73.05
Ensemble Subspace KNN | 94.71 | 23.34 | 44.00 | 94.04 | 51.24 | 80.82 | 89.98 | 24.14 | 56.10 | 95.34
Ensemble RUSBoosted Trees | 71.54 | 45.34 | 47.94 | 69.31 | 71.54 | 68.84 | 79.64 | 52.84 | 60.67 | 67.97
Table 2. Classification results with CS method for dimensionality reduction.
Classifier | ECG Original Centered | CS 2 | CS 3 | CS 4 | CS 5 | CS 7 | CS 9 | CS 10 | CS 15 | CS 20 | CS 25
Fine Tree | 83.4 | 49.4 | 55.3 | 58.1 | 68.6 | 72.3 | 71.5 | 72.4 | 75.7 | 77.3 | 79.8
Medium Tree | 71.3 | 45.3 | 48.0 | 49.3 | 54 | 52.8 | 51.6 | 52.3 | 52.7 | 60.6 | 69.2
Coarse Tree | 42.8 | 32.2 | 34.4 | 34.2 | 36.5 | 35.2 | 36.2 | 36.7 | 35.9 | 38.0 | 40.3
Linear Discriminant | 76.3 | 24.2 | 33.7 | 35.2 | 41.4 | 47.3 | 55.3 | 60.0 | 69.2 | 71.6 | 73.9
Quadratic Discriminant | 70.0 | 34.0 | 47.5 | 50.3 | 63.2 | 74.1 | 77.8 | 82.0 | 87.6 | 89.1 | 89.7
Naive Bayes | 47.6 | 33.4 | 38.9 | 40.8 | 47.2 | 48.6 | 47.8 | 49.1 | 50.3 | 50.9 | 52.2
Kernel Naive Bayes | 62.5 | 45.9 | 48.8 | 51.7 | 62.4 | 66.1 | 68.0 | 68.1 | 70.5 | 70.5 | 71.8
Linear SVM | 87.3 | 29.5 | 38.9 | 41.6 | 54.2 | 63.2 | 71.3 | 75.9 | 82.8 | 84.4 | 85.1
Quadratic SVM | 95.1 | 44.5 | 54.3 | 61.7 | 74.7 | 85.2 | 88.9 | 90.8 | 93.3 | 94.2 | 94.5
Cubic SVM | 95.2 | 42.7 | 53.0 | 62.2 | 75.9 | 86.6 | 90.1 | 91.7 | 93.4 | 94.7 | 94.5
Fine Gaussian SVM | 87.4 | 51.8 | 62.9 | 69.5 | 82.0 | 86.4 | 87.8 | 88.5 | 88.0 | 87.6 | 87.9
Medium Gaussian SVM | 92.9 | 49.8 | 58.7 | 65.4 | 78.0 | 85.4 | 87.3 | 88.6 | 91.2 | 92.0 | 93.0
Coarse Gaussian SVM | 79.4 | 32.8 | 43.6 | 45.2 | 62.1 | 67.2 | 69.5 | 71.8 | 77.5 | 79.5 | 80.9
Fine KNN | 93.4 | 39.1 | 55.1 | 64.4 | 80.7 | 87.6 | 89.4 | 91.0 | 92.4 | 93.5 | 93.7
Medium KNN | 90.2 | 48.7 | 60.8 | 67.5 | 80.6 | 86.5 | 87.8 | 88.4 | 89.6 | 90.3 | 90.8
Coarse KNN | 77.6 | 50.4 | 57.7 | 61.5 | 69.2 | 73.8 | 74.9 | 75.5 | 76.3 | 76.6 | 77.4
Cosine KNN | 90.5 | 29.6 | 47.1 | 58.2 | 73.8 | 83.2 | 85.9 | 86.7 | 88.3 | 89.7 | 90.7
Cubic KNN | 90.2 | 48.8 | 60.8 | 67.7 | 80.3 | 86.4 | 87.7 | 88.5 | 89.8 | 90.5 | 90.8
Weighted KNN | 91.4 | 43.6 | 59.4 | 68.2 | 81.9 | 88.1 | 89.3 | 90.1 | 91.5 | 92.1 | 92.3
Ensemble Boosted Trees | 78.3 | 45.9 | 49.4 | 52.2 | 61.8 | 66.1 | 67.5 | 70.6 | 69.5 | 73.8 | 76.8
Ensemble Bagged Trees | 91.8 | 43.9 | 59.4 | 65.6 | 80.3 | 85.2 | 87.1 | 88.2 | 89.7 | 90.2 | 90.4
Ensemble Subspace Discriminant | 76.2 | 24.3 | 29.1 | 31.5 | 40.0 | 43.9 | 45.6 | 47.0 | 61.1 | 64.4 | 70.3
Ensemble Subspace KNN | 94.7 | 23.3 | 44.0 | 49.5 | 74.2 | 86.0 | 89.0 | 90.3 | 92.4 | 93.6 | 94.0
Ensemble RUSBoosted Trees | 71.5 | 45.3 | 47.9 | 49.4 | 53.9 | 53.8 | 52.0 | 52.5 | 52.8 | 60.6 | 69.3
Table 3. Classification results with LE method for dimensionality reduction.
Classifier | ECG Original Centered | LE 2 | LE 3 | LE 4 | LE 5 | LE 7 | LE 9 | LE 10 | LE 15 | LE 20 | LE 25
Fine Tree | 83.4 | 76.2 | 77.3 | 80.4 | 80.4 | 82.9 | 83.7 | 82.8 | 85.8 | 86.5 | 86.7
Medium Tree | 71.3 | 71.5 | 68.8 | 72.7 | 72.4 | 74.9 | 75 | 75.1 | 78.9 | 80.1 | 79.6
Coarse Tree | 42.8 | 45.6 | 45.6 | 52.5 | 52.5 | 50.9 | 51.2 | 51.3 | 51.8 | 51.6 | 50.6
Linear Discriminant | 76.3 | 34.7 | 38.8 | 34.7 | 40.3 | 57.8 | 61.1 | 60.3 | 72.1 | 76.2 | 77.4
Quadratic Discriminant | 70 | 47.3 | 54.5 | 58.3 | 60.1 | 69 | 72.2 | 73 | 78.1 | 82.1 | 84.2
Naive Bayes | 47.6 | 37.6 | 38.3 | 39.8 | 39.5 | 57 | 57.1 | 60.9 | 71.4 | 73.7 | 74.3
Kernel Naive Bayes | 62.5 | 70.3 | 69.9 | 70.8 | 71.5 | 74.9 | 73.6 | 74 | 77.3 | 79.5 | 81.7
Linear SVM | 87.3 | 49 | 61.3 | 67.3 | 70.2 | 75.3 | 76.9 | 77.5 | 79.1 | 83.7 | 85.6
Quadratic SVM | 95.1 | 43.9 | 59.9 | 76.2 | 79 | 86.1 | 87.6 | 87.3 | 87.7 | 89 | 90.5
Cubic SVM | 95.2 | 26.1 | 33 | 52.5 | 64.2 | 87.9 | 90.1 | 89.7 | 89.6 | 90.4 | 91.2
Fine Gaussian SVM | 87.4 | 75.3 | 78.7 | 81.1 | 82 | 85.2 | 85.9 | 86.5 | 88.6 | 90.4 | 90.6
Medium Gaussian SVM | 92.9 | 67.9 | 69.8 | 73.4 | 75.4 | 78.3 | 78.6 | 79.5 | 82.8 | 86.6 | 87.1
Coarse Gaussian SVM | 79.4 | 54.3 | 55.4 | 61.2 | 66.2 | 69.2 | 72.1 | 72.5 | 76.6 | 80.1 | 80.9
Fine KNN | 93.4 | 79.9 | 83.3 | 85.7 | 86.2 | 86.2 | 87.2 | 87.1 | 88.1 | 88.9 | 89.8
Medium KNN | 90.2 | 80.7 | 83.9 | 85 | 85.5 | 86.8 | 87 | 86.3 | 87.4 | 88.9 | 89.6
Coarse KNN | 77.6 | 74 | 75.3 | 75.3 | 77.1 | 79 | 78.6 | 78.5 | 78.3 | 80.6 | 80.1
Cosine KNN | 90.5 | 61.2 | 81.4 | 83.8 | 85.9 | 86.9 | 86.7 | 86.9 | 87.6 | 88.9 | 89.5
Cubic KNN | 90.2 | 80.8 | 83.9 | 84.7 | 85.5 | 86.8 | 86.8 | 86.1 | 87.4 | 89 | 89.7
Weighted KNN | 91.4 | 81.5 | 84.8 | 86.6 | 86.9 | 87.4 | 88.1 | 87.8 | 89.1 | 89.9 | 90.3
Ensemble Boosted Trees | 78.3 | 72.6 | 70.1 | 75.5 | 76 | 78.3 | 79.2 | 79.9 | 81.4 | 82.2 | 82.4
Ensemble Bagged Trees | 91.8 | 80 | 83.9 | 86.2 | 86.6 | 88.2 | 88.6 | 88.7 | 89.9 | 90.9 | 90.9
Ensemble Subspace Discriminant | 76.2 | 35 | 38.9 | 34.7 | 40.2 | 59.2 | 61.9 | 60.5 | 72.2 | 75.9 | 76.9
Ensemble Subspace KNN | 94.7 | 51.2 | 80.8 | 83.2 | 86.1 | 86.9 | 87.6 | 87.8 | 88.7 | 89.6 | 89.9
Ensemble RUSBoosted Trees | 71.5 | 71.5 | 68.8 | 72.7 | 72.4 | 74.9 | 75 | 75.1 | 79 | 80.1 | 79.6
Table 4. Classification results with LPP method for dimensionality reduction.
Classifier | ECG Original Centered | LPP 2 | LPP 3 | LPP 4 | LPP 5 | LPP 7 | LPP 9 | LPP 10 | LPP 15 | LPP 20 | LPP 25
Fine Tree | 83.4 | 54 | 66.6 | 73 | 75.6 | 77.2 | 77.8 | 77.5 | 81.5 | 81.3 | 81.1
Medium Tree | 71.3 | 52.3 | 60.4 | 65.9 | 66.5 | 66.8 | 66.9 | 67 | 68 | 68.1 | 67.9
Coarse Tree | 42.8 | 40.8 | 41.5 | 46.7 | 46.6 | 46.9 | 49.7 | 49.9 | 49.7 | 49.7 | 49.7
Linear Discriminant | 76.3 | 30.4 | 35.4 | 35.5 | 37.8 | 47.5 | 63.2 | 65.3 | 71.2 | 72.6 | 73.6
Quadratic Discriminant | 70 | 44.4 | 56.2 | 65.1 | 67.6 | 76.2 | 82.3 | 83.4 | 89.1 | 90.5 | 91.5
Naive Bayes | 47.6 | 42.5 | 49.3 | 58.3 | 58.1 | 63.5 | 71.5 | 72.5 | 76.5 | 77.5 | 77.2
Kernel Naive Bayes | 62.5 | 52.5 | 62.2 | 65.6 | 70.6 | 73.6 | 77 | 77.7 | 81.3 | 82.6 | 82.6
Linear SVM | 87.3 | 37.5 | 47.7 | 53.6 | 58.9 | 70.4 | 76.9 | 78.1 | 83.5 | 84.8 | 85.9
Quadratic SVM | 95.1 | 44.5 | 64.6 | 73.5 | 77.6 | 86.4 | 90.2 | 90.9 | 93.7 | 94.1 | 94.2
Cubic SVM | 95.2 | 27.1 | 47.9 | 74.3 | 81.2 | 88.1 | 91.2 | 91.8 | 94.3 | 94.5 | 94.2
Fine Gaussian SVM | 87.4 | 54.4 | 70.1 | 77.3 | 81.2 | 84.8 | 84.4 | 82.9 | 75.8 | 65.2 | 61.1
Medium Gaussian SVM | 92.9 | 53.4 | 67.8 | 75.4 | 79.2 | 86.7 | 90.2 | 90.4 | 93.5 | 93.8 | 94.1
Coarse Gaussian SVM | 79.4 | 44.4 | 57.8 | 65.8 | 68.9 | 73.4 | 77.4 | 78 | 82.1 | 83 | 83.8
Fine KNN | 93.4 | 45.1 | 63.9 | 73.9 | 80 | 87.3 | 91.4 | 91.5 | 93.3 | 93.8 | 93.7
Medium KNN | 90.2 | 52.4 | 68 | 77 | 80.8 | 87 | 89.9 | 89.9 | 91.9 | 92.1 | 91.3
Coarse KNN | 77.6 | 53.6 | 65.7 | 70.6 | 72.2 | 77.3 | 80 | 80.3 | 81 | 79.3 | 78.3
Cosine KNN | 90.5 | 32.8 | 54.6 | 70.7 | 76.4 | 84.1 | 88.4 | 88.9 | 92.2 | 92.7 | 92.7
Cubic KNN | 90.2 | 52.3 | 68.3 | 76.8 | 80.6 | 86.8 | 89.2 | 89.3 | 91.6 | 91.1 | 90.7
Weighted KNN | 91.4 | 48.5 | 67.4 | 77.3 | 82.3 | 87.9 | 91 | 91.1 | 93 | 92.9 | 92.3
Ensemble Boosted Trees | 78.3 | 53.5 | 61.3 | 68.1 | 70 | 72 | 75.8 | 76.5 | 77.6 | 77.3 | 77.6
Ensemble Bagged Trees | 91.8 | 48.8 | 68.3 | 77.2 | 81.9 | 87.3 | 89.1 | 89.9 | 91.2 | 90.8 | 91.8
Ensemble Subspace Discriminant | 76.2 | 30.2 | 34.3 | 37 | 37.7 | 46.3 | 62 | 63.2 | 70.3 | 70.9 | 73
Ensemble Subspace KNN | 94.7 | 24.1 | 56.1 | 62.6 | 76.3 | 86.4 | 91.2 | 91.6 | 94.5 | 95.4 | 95.3
Ensemble RUSBoosted Trees | 71.5 | 52.8 | 60.6 | 66 | 66.5 | 66.8 | 66.8 | 67.1 | 68 | 68.1 | 67.9
Table 5. Classification results with CS method for configurations with 8 channels.
Classifier | EEG Original 8 Channels | CS 3 | CS 5 | CS 10 | CS 15
Fine Tree | 73.8 | 55.1 | 61.4 | 64.8 | 69.5
Medium Tree | 75.5 | 59.8 | 60.8 | 68.4 | 73.1
Coarse Tree | 75.8 | 59.6 | 59.4 | 65.7 | 70.5
Linear Discriminant | 77.2 | 68.3 | 74 | 79.9 | 84.6
Quadratic Discriminant | 63.4 | 66.5 | 68 | 72.6 | 71.2
Logistic Regression | 50.5 | 67.8 | 73.2 | 80.6 | 83.7
Naive Bayes | 81.7 | 66.3 | 68.5 | 72.4 | 75.8
Kernel Naive Bayes | 79.8 | 64.1 | 68.5 | 72 | 74.9
Linear SVM | 84.1 | 68.3 | 73.4 | 80.9 | 84
Quadratic SVM | 84.4 | 69 | 72.4 | 81.1 | 85.1
Cubic SVM | 83.7 | 64.4 | 70.8 | 80.6 | 83.8
Fine Gaussian SVM | 50.5 | 50.7 | 50.5 | 50.5 | 50.5
Medium Gaussian SVM | 85.4 | 69.3 | 73.6 | 80.8 | 83.9
Coarse Gaussian SVM | 82.1 | 68.7 | 72.1 | 76.9 | 79.6
Fine KNN | 69.2 | 56.4 | 59.9 | 63.8 | 65.2
Medium KNN | 77.8 | 61.8 | 65.4 | 69 | 74.7
Coarse KNN | 78.7 | 66.8 | 69.9 | 73.9 | 78
Cosine KNN | 78.5 | 63.3 | 67.4 | 70 | 74.1
Cubic KNN | 75.9 | 60.8 | 66.3 | 69.9 | 74.3
Weighted KNN | 77.9 | 62.6 | 66.8 | 69.4 | 74.2
Ensemble Boosted Trees | 82.3 | 64.5 | 68.9 | 74.9 | 80
Ensemble Bagged Trees | 77.5 | 65.6 | 67.7 | 70 | 72.8
Ensemble Subspace Discriminant | 71.8 | 68.3 | 73.4 | 81 | 85
Ensemble Subspace KNN | 71.1 | 62.3 | 64 | 69.1 | 69.7
Ensemble RUSBoosted Trees | 77 | 59.1 | 64.1 | 69 | 74.4
Table 6. Classification results with LE algorithm for configurations with 8 channels.
Classifier | EEG Original 8 Channels | LE 3 | LE 5 | LE 10 | LE 15
Fine Tree | 73.8 | 71.1 | 72 | 70.3 | 69.6
Medium Tree | 75.5 | 75.1 | 75.3 | 71.8 | 72.3
Coarse Tree | 75.8 | 75.1 | 74.3 | 74.1 | 75.2
Linear Discriminant | 77.2 | 79.1 | 81.6 | 83.2 | 81.1
Quadratic Discriminant | 63.4 | 77.8 | 76.9 | 77.9 | 77.2
Logistic Regression | 50.5 | 78.7 | 81.4 | 81.6 | 78.8
Naive Bayes | 81.7 | 76.5 | 76.6 | 77 | 77.1
Kernel Naive Bayes | 79.8 | 75.5 | 77.1 | 76.1 | 76.3
Linear SVM | 84.1 | 79.2 | 80.8 | 82.8 | 80.8
Quadratic SVM | 84.4 | 78.2 | 79.1 | 81.7 | 81.1
Cubic SVM | 83.7 | 72.9 | 77.7 | 79.5 | 80.4
Fine Gaussian SVM | 50.5 | 50.7 | 50.5 | 50.5 | 50.5
Medium Gaussian SVM | 85.4 | 79.2 | 80.3 | 81.1 | 81
Coarse Gaussian SVM | 82.1 | 79.2 | 80 | 81.4 | 79
Fine KNN | 69.2 | 66.1 | 69.1 | 67.6 | 68.2
Medium KNN | 77.8 | 73.1 | 74.4 | 75.6 | 76.1
Coarse KNN | 78.7 | 77.7 | 77.8 | 79.1 | 78.8
Cosine KNN | 78.5 | 74.4 | 74.8 | 75.5 | 76.4
Cubic KNN | 75.9 | 72.7 | 73.5 | 74.5 | 73.8
Weighted KNN | 77.9 | 73.5 | 74.3 | 76.5 | 76.8
Ensemble Boosted Trees | 82.3 | 77.7 | 78.3 | 78.3 | 78
Ensemble Bagged Trees | 77.5 | 76.8 | 74.4 | 72.9 | 76
Ensemble Subspace Discriminant | 71.8 | 79 | 80 | 82.5 | 81.7
Ensemble Subspace KNN | 71.1 | 73 | 75.2 | 74.8 | 73
Ensemble RUSBoosted Trees | 77 | 75.4 | 74.9 | 72.6 | 73.6
Table 7. Classification results with LPP algorithm for configurations with 8 channels.
Classifier | EEG Original 8 Channels | LPP 3 | LPP 5 | LPP 10 | LPP 15
Fine Tree | 73.8 | 53.2 | 50.8 | 50.7 | 49.8
Medium Tree | 75.5 | 53.8 | 49.8 | 51.2 | 52.2
Coarse Tree | 75.8 | 50.4 | 48.6 | 50.3 | 55.6
Linear Discriminant | 77.2 | 56.3 | 51.9 | 54.9 | 56.6
Quadratic Discriminant | 63.4 | 55 | 50.7 | 53.1 | 52.1
Logistic Regression | 50.5 | 56.3 | 52 | 54.8 | 57.5
Naive Bayes | 81.7 | 53.2 | 54.2 | 51.3 | 57
Kernel Naive Bayes | 79.8 | 53.8 | 51.2 | 50.2 | 55.6
Linear SVM | 84.1 | 55.7 | 49.5 | 54 | 59.4
Quadratic SVM | 84.4 | 56.2 | 52.5 | 52.7 | 58.8
Cubic SVM | 83.7 | 52 | 54 | 52.1 | 54.9
Fine Gaussian SVM | 50.5 | 51.8 | 50.5 | 53.5 | 54.5
Medium Gaussian SVM | 85.4 | 52.5 | 50 | 51 | 55.1
Coarse Gaussian SVM | 82.1 | 52.9 | 49.2 | 52.9 | 58.8
Fine KNN | 69.2 | 49.8 | 48.9 | 52.1 | 53.1
Medium KNN | 77.8 | 51.3 | 50.3 | 49.7 | 54.2
Coarse KNN | 78.7 | 51.7 | 48.9 | 50.8 | 53.7
Cosine KNN | 78.5 | 49.6 | 48.5 | 52.7 | 56.4
Cubic KNN | 75.9 | 49.4 | 49.7 | 50.6 | 52.7
Weighted KNN | 77.9 | 51.3 | 49.9 | 51.8 | 57.3
Ensemble Boosted Trees | 82.3 | 51 | 48.3 | 51.7 | 54.9
Ensemble Bagged Trees | 77.5 | 51.3 | 47.9 | 50.8 | 52.8
Ensemble Subspace Discriminant | 71.8 | 55 | 51 | 53.5 | 58.4
Ensemble Subspace KNN | 71.1 | 53 | 48.5 | 51.3 | 53.2
Ensemble RUSBoosted Trees | 77 | 54 | 48.8 | 51.9 | 52.1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
