An EigenECG Network Approach Based on PCANet for Personal Identification from ECG Signal

We herein propose an EigenECG Network (EECGNet) based on the principal component analysis network (PCANet) for personal identification from electrocardiogram (ECG) signals in human biosignal data. The EECGNet consists of three stages. In the first stage, ECG signals are preprocessed by normalization and spike removal, and the R peak points in the preprocessed ECG signals are detected. Subsequently, the ECG signals are transformed into two-dimensional images to be used as the input to the EECGNet. Further, we perform patch-mean removal and the PCA algorithm, as in PCANet, on the transformed two-dimensional images. The second stage is almost the same as the first, in that the mean removal and PCA process are performed again in the cascaded network. In the final stage, binary quantization, block sliding, and histogram computation are performed. Thus, EECGNet obtains features from the visual content without the use of back-propagation. We constructed a Chosun University (CU)-ECG database using an ECG sensor implemented by ourselves. Further, we used the well-known MIT-BIH (Beth Israel Hospital) ECG database. The experimental results clearly reveal the good performance and effectiveness of the proposed method compared with conventional algorithms such as PCA, the auto-encoder (AE), the extreme learning machine (ELM), and the ensemble extreme learning machine (EELM).


Introduction
ECG is the process of recording the electrical activity of the heart over a period of time using electrodes placed on the skin. These electrodes detect small electrical changes on the skin that arise from the heart muscle's electrophysiological pattern of depolarizing and repolarizing during each heartbeat. The ECG signal reflects an electric excitation that spreads through the heart muscle cells. Under the influence of this excitation, the heart muscle cells contract, which causes a mechanical effect in the form of the cyclic contraction of the heart atria and ventricles. As an effect of heart muscle contraction, blood circulates through the human organs. The propagation of electric excitation in the heart muscle forms a depolarization wave of the bioelectric potentials of the neighboring heart cells. The propagation of the depolarization wave is caused by a quick movement of positive sodium ions (Na+). After the depolarization wave passes, the heart muscle cells return to their rest state, recovering their resting negative potential; this state is called the repolarization phase. The depolarization and repolarization phenomena of the heart muscle cells are caused by the movement of ions, and this is the essence of the heart's electric activity. The movement of ions in the heart muscle cells is an electric current, which generates an electromagnetic field around the heart, and the electric potential can be measured at any point of this field [1,2]. In summary, the heart's electrical supply functions automatically through the slow spontaneous depolarization of the sinus node. The initial motivation of our study was to apply a simple deep-learning network to ECG. PCANet is an algorithm that is primarily used in face recognition, and facial data are inherently two-dimensional. However, to apply ECG to EECGNet, the one-dimensional ECG signal must be converted to 2D.
Changing the dimensionality of the input is also seen in many recent approaches. In addition, deep learning has recently been applied to many fields, but it is difficult to analyze the reason for its performance; that is, one cannot tell why the performance is good. EECGNet, by contrast, is easy to understand and analyze because it is among the most intuitive and simple deep-learning networks. Thus, we demonstrate the novelty and effectiveness of EECGNet through comparison with deep-learning algorithms on ECG.
The contributions of this paper are the acquisition of a database (DB) and various experiments with a proposed preprocessing method. The data were preprocessed, and various experiments were conducted and analyzed using EECGNet on ECG. We classify ECG signals through EECGNet and compare its performance with PCA, ELM, and EELM. We also present a new approach to using ECG signals. The ECG is originally composed of one-dimensional signals; one-dimensional ECG is processed by the proposed preprocessing method, and the resulting feature is converted into two dimensions. The features extracted by PCANet from ECG are exploratory, and the ECG recognition rate is improved through the subsequent modeling. In short, the merit of this paper is demonstrating the feasibility of ECG-based personal recognition through EECGNet; the validity of the preprocessing process, the geometric transformation of the data, and the algorithm are verified using two kinds of data. In addition, the CU-ECG database is a database self-produced by CU. This paper is organized as follows. Section 2 describes the concept of PCANet. The core of PCANet is PCA with learning; hence, the "Net" of weights is trained for each layer individually. Regarding speed, PCANet is slower than PCA because of the training, but PCANet performs better than PCA. In Section 3, we present the structure of the EigenECG Network (EECGNet), data processing, and preprocessing of ECG signals. Further, we explore the features of EECGNet using ECG and analyze the flow of EECGNet. Section 4 describes the construction environment and sensor information of the CU-ECG database and summarizes the MIT-BIH ECG database and the CU-ECG database, which was developed with a sensor to acquire ECG data. For the experiment, 1D ECG signals are transformed into 2D ones because EECGNet uses 2D and three-dimensional (3D) signals as inputs.
We also use comparative indicators to analyze the differences between the data and compare their performances using PCA, the Auto-Encoder (AE), ELM, and EELM. The parameters of EECGNet, such as the patch size, number of filters, block size, and overlap ratio, are adjusted to improve its performance. We verified the influence of these parameters on the performance of PCANet by modifying the key parameters of the experiment. Finally, Section 5 presents the conclusion of this paper.

PCANet
In this section, we review PCA and the related PCANet literature [35] before applying PCANet. The architecture of the basic PCANet is shown in Figure 1. It can be divided into three stages that include 10 steps.

PCA
PCA is a linear subspace projection technique used to downsample high-dimensional datasets while minimizing the reprojection error. The primary steps in the process are summarized as follows. First, the covariance matrix of the input data is calculated; subsequently, the eigenvalues and eigenvectors of this covariance matrix are computed. Next, the eigenvalues provide a measure of the significance of the corresponding eigenvectors with respect to the variation in the original data. Finally, the eigenvectors that account for the desired level of variation are selected. The eigenvectors are the coordinate axes of the new feature space. The variables in the original dataset are projected onto the most significant eigenvectors to express the original data solely in terms of the chosen vectors. These transformed variables are called the principal components (PCs). The pseudocode of the PCA algorithm is presented in Algorithm 1.

Algorithm 1 PCA
Input: a D-dimensional training set X = {x1, x2, . . . , xN} and the new (lower) dimensionality d (with d ≤ D).
Compute the sample mean μ and the covariance matrix Cov(x).
Find the spectral decomposition of Cov(x), obtaining the eigenvectors v1, v2, . . . , vD and their corresponding eigenvalues λ1, λ2, . . . , λD. Note that the eigenvalues are sorted, such that λ1 ≥ λ2 ≥ . . . ≥ λD.
Let V = [v1, v2, . . . , vd] collect the d leading eigenvectors. For any x ∈ R^D, its new lower-dimensional representation is y = V^T(x − μ) ∈ R^d, and the original x can be approximated as x ≈ μ + Vy.
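Algorithm 1 can be sketched in NumPy as follows (a minimal sketch; the function names are our own):

```python
import numpy as np

def pca_fit_transform(X, d):
    """Project the D-dimensional rows of X onto the top-d principal components.

    X: (N, D) training set; d: target dimensionality (d <= D).
    Returns (Y, V, mu): projections, eigenvector basis, and data mean.
    """
    mu = X.mean(axis=0)                      # sample mean
    Xc = X - mu                              # center the data
    cov = np.cov(Xc, rowvar=False)           # D x D covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # spectral decomposition
    order = np.argsort(eigvals)[::-1]        # sort eigenvalues descending
    V = eigvecs[:, order[:d]]                # d leading eigenvectors (D, d)
    Y = Xc @ V                               # lower-dimensional representation
    return Y, V, mu

def pca_reconstruct(Y, V, mu):
    """Approximate the original data as mu + V y."""
    return Y @ V.T + mu

# Example: 100 samples in 5 dimensions reduced to 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y, V, mu = pca_fit_transform(X, 2)
X_hat = pca_reconstruct(Y, V, mu)
```

With d = D, the reconstruction is exact, since V then spans the whole space.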

PCANet
In this section, we first review PCANet [31,35], whose architecture is shown in Figure 1 and can be divided into three stages that include 10 steps. Figure 2 shows the detailed flow of the process by which PCANet extracts features from the training dataset. The detailed description of the workflow is as follows: (1) The original input consists of training images, such as ECG images; after Step (4), however, the input is the output of the previous PCA stage. (2) The main task of the PCA filter bank is to extract the PCANet filters. (3) The eigenvectors are the output of the PCA filter bank. (4) The original images are convolved with the PCANet filters and passed to the next step. (5) V is convolved with the PCANet filters to produce the output image for the next step. (6) The output image after the two PCA stages is the PCANet output. (7) The output images are binarized and block-wise histograms are calculated; here, we create a weight map and proceed with the binary quantization and weighted combination of the elements of the input data. (8) Ftrain is the final feature extracted by PCANet.
Suppose that we have N input training images {Ii, i = 1, 2, . . . , N}, Ii ∈ R^(m×n), and that the patch size (or 2D filter size) of all stages is k1 × k2, where k1 and k2 are odd integers satisfying 1 ≤ k1 ≤ m, 1 ≤ k2 ≤ n. Further, the number of filters in stage i is Li; i.e., L1 for the first stage and L2 for the second stage. In the following, we describe the structure of PCANet in detail. Let the N input images {Ii, i = 1, 2, . . . , N} be concatenated as I = [I1, I2, . . . , IN].

First Stage of PCANet
As shown in Figure 1, the first stage of PCANet includes the following: Step 1: The first patch sliding process.
The images are padded to I′i ∈ R^((m + k1 − 1)×(n + k2 − 1)) before the sliding operation; out-of-range input pixels are assumed to be zero. This ensures that all weights in the filters reach the entire image. We use a patch of size k1 × k2 to slide over each pixel of the ith padded image I′i, and subsequently reshape each k1 × k2 matrix into a column vector, followed by concatenation to obtain a matrix Xi = [xi,1, xi,2, . . . , xi,mn], where xi,j denotes the jth vectorized patch in Ii. Therefore, for all the input training images {Ii, i = 1, 2, . . . , N}, we obtain the matrix X = [X1, X2, . . . , XN]. Figure 3 shows an example of the first patch sliding process. The data of the square matrix are vectorized according to the patch size; for example, if the patch size is 3 × 3, a square matrix is set as 3 × 3. In Figure 2, element ① (A1, A2, ..., A9) is converted into a vector. The elements are stacked from element ① to element ㊾; subsequently, the mean removal step is performed using these data.
Step 2: The first mean removal process. In this step, we subtract the patch mean from each patch and obtain X̄i = [x̄i,1, x̄i,2, . . . , x̄i,mn], where x̄i,j is a mean-removed vector. For each input training image Ii ∈ R^(m×n), we thus obtain a substituted matrix, and concatenating over all images yields X̄ = [X̄1, X̄2, . . . , X̄N] ∈ R^(k1k2×Nmn). Step 3: The first PCA process. In this step, the eigenvalues and eigenvectors of X̄ are calculated from Equation (5) using the PCA algorithm, which in fact minimizes the reconstruction error in the Frobenius norm: min over V ∈ R^(k1k2×L1) of ||X̄ − VV^T X̄||_F^2, subject to V^T V = I_L1, where I_L1 is an identity matrix of size L1 × L1 and T denotes transposition. The PCA filters are expressed as W_l^1 = mat_(k1,k2)(q_l(X̄X̄^T)) ∈ R^(k1×k2), l = 1, 2, . . . , L1, where mat_(k1,k2)(v) is a function that maps a vector v ∈ R^(k1k2) to a matrix in R^(k1×k2), and q_l(X̄X̄^T) denotes the lth principal eigenvector of X̄X̄^T. The output of the first stage of PCANet is I_i^l = Ii * W_l^1, where * denotes 2D convolution, and the boundary of Ii is zero-padded before convolving with W_l^1 such that I_i^l is the same size as Ii.
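To make Steps 1–3 concrete, the following NumPy sketch extracts patches, removes the patch mean, and learns the PCA filters (a simplified sketch; the function names, the loop-based patch extraction, and the toy images are our own):

```python
import numpy as np

def extract_patches(img, k1, k2):
    """Slide a k1 x k2 patch over every pixel of a zero-padded image and
    return each vectorized patch as a column (Step 1)."""
    m, n = img.shape
    padded = np.pad(img, ((k1 // 2,) * 2, (k2 // 2,) * 2))  # zero padding
    cols = np.empty((k1 * k2, m * n))
    idx = 0
    for i in range(m):
        for j in range(n):
            cols[:, idx] = padded[i:i + k1, j:j + k2].ravel()
            idx += 1
    return cols

def pca_filters(images, k1, k2, L1):
    """Learn L1 PCA filters from the mean-removed patches (Steps 2-3)."""
    X = np.hstack([extract_patches(im, k1, k2) for im in images])
    X = X - X.mean(axis=0)                       # patch-mean removal
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)   # k1k2 x k1k2 scatter matrix
    order = np.argsort(eigvals)[::-1][:L1]       # L1 leading eigenvectors
    return [eigvecs[:, l].reshape(k1, k2) for l in order]

# Toy example: learn eight 7 x 7 filters from four random 28 x 28 images.
rng = np.random.default_rng(1)
imgs = [rng.normal(size=(28, 28)) for _ in range(4)]
W = pca_filters(imgs, 7, 7, 8)
```

Because the filters are eigenvectors of a symmetric matrix, they come out orthonormal, which is what makes the subsequent convolutions decorrelate the patch responses.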

Second Stage of PCANet
Almost repeating the same process as the first stage, as shown in Figure 1, the second stage of PCANet also includes three steps: Step 4: The second patch sliding process.
Similar to Step 1, we use a patch of size k1 × k2 to slide over each pixel of the first-stage output image I_i^l, l = 1, 2, . . . , L1, and obtain a matrix Y_i^l ∈ R^(k1k2×mn). We concatenate the matrices of all the L1 filters and obtain Y = [Y^1, Y^2, . . . , Y^L1] ∈ R^(k1k2×L1Nmn). Step 5: The second mean removal process.
Step 6: The second PCA process.

Output Stage of PCANet
Step 7: Binary quantization. In this step, we binarize the outputs of the second stage of PCANet and obtain P_i^(l,ℓ) = H(I_i^l * W_ℓ^2), where H(·) is a Heaviside step function whose value is 1 for positive entries and 0 otherwise. Step 8: Weight and sum.
Around each pixel, we view the vector of L2 binary bits as a decimal number. This converts the binary images P_i^(l,ℓ) back into integer-valued images T_i^l = Σ_(ℓ=1)^(L2) 2^(ℓ−1) P_i^(l,ℓ), whose pixels take integer values in the range [0, 2^L2 − 1].
Step 9: The block sliding process. We use a block of size h1 × h2 to slide over each of the L1 images T_i^l, l = 1, . . . , L1, with overlap ratio R, and subsequently reshape each h1 × h2 matrix into a column vector, which is then concatenated to obtain a matrix Z_i^l = [z_i,l,1, z_i,l,2, . . . , z_i,l,B], where z_i,l,j denotes the jth vectorized block in T_i^l. B is the number of blocks obtained when sliding a block of size h1 × h2 over each T_i^l with overlap ratio R, and is expressed as B = round((m − h1)/stride1 + 1) × round((n − h2)/stride2 + 1), where stride1 and stride2 are the vertical and horizontal steps, respectively, and round(·) means round off. As shown in this expression, the number of blocks B increases as the overlap ratio R increases (since the strides shrink). For the L1 images, we concatenate Z_i^l to obtain the matrix Z_i = [Z_i^1, Z_i^2, . . . , Z_i^L1]. Step 10: Histogram. We compute the histogram (with 2^L2 bins) of the decimal values in each column of Z_i and concatenate all the histograms into one vector, f_i = [Hist(Z_i^1), . . . , Hist(Z_i^L1)], where Hist(·) denotes the histogram operation. This vector is the feature of the input image Ii. The feature vector is subsequently sent to a classifier, for example, the Support Vector Machine (SVM) [36]. Figure 1 shows a diagram of PCANet. In the first stage of PCANet, mean removal and the PCA algorithm are applied. The second stage is the same as the first.
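Steps 7–10 can be sketched as follows (a minimal NumPy sketch using non-overlapping blocks, i.e., R = 0; the helper names and toy data are our own):

```python
import numpy as np

def heaviside(x):
    """H(x): 1 for positive entries, 0 otherwise (Step 7)."""
    return (x > 0).astype(np.int64)

def binary_hash(maps):
    """View the L2 binary bits around each pixel as one decimal number
    (Step 8). `maps` is a list of L2 equally sized filter responses."""
    out = np.zeros_like(maps[0], dtype=np.int64)
    for bit, fmap in enumerate(maps):
        out += (2 ** bit) * heaviside(fmap)
    return out

def block_histograms(decimal_img, h1, h2, L2):
    """Slide non-overlapping h1 x h2 blocks and compute a histogram with
    2**L2 bins per block (Steps 9-10)."""
    m, n = decimal_img.shape
    feats = []
    for i in range(0, m - h1 + 1, h1):
        for j in range(0, n - h2 + 1, h2):
            block = decimal_img[i:i + h1, j:j + h2]
            hist, _ = np.histogram(block, bins=2 ** L2, range=(0, 2 ** L2))
            feats.append(hist)
    return np.concatenate(feats)

# Toy example: L2 = 3 filter responses on an 8 x 8 image.
rng = np.random.default_rng(2)
L2 = 3
maps = [rng.normal(size=(8, 8)) for _ in range(L2)]
T = binary_hash(maps)            # integer image with values in 0 .. 2**L2 - 1
f = block_histograms(T, 4, 4, L2)
```

With an overlap ratio R > 0, the loop strides would shrink accordingly, producing more (overlapping) blocks and a longer feature vector, as the text notes.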
The third stage includes binary quantization and histogram computation. Table 1 shows the core parameters of PCANet.

Parameter: Definition
N: the number of data.
k1 × k2: the patch size; k1 and k2 are odd integers satisfying 1 ≤ k1 ≤ m, 1 ≤ k2 ≤ n.
L1, L2: the number of filters of the two stages.
R: the overlap ratio of blocks; R ∈ {0 : 0.1 : 0.9}, which means R varies from 0 to 0.9 with an interval of 0.1.

Comparison of PCA with EigenECGs Network (EECGNet)
The most significant difference between PCA and EECGNet is the dependency on training. EECGNet contains weights and requires training, whereas PCA does not. Further, when PCA and EECGNet are applied as feature extractors, PCA primarily performs dimensionality reduction, whereas EECGNet is not aimed at dimensionality reduction. The other difference is structural. Structurally, EECGNet contains Stage 1, Stage 2, and hashing histograms in the output layers, whereas PCA involves the covariance matrix, eigenvectors, and eigenvalues; these concepts are also included in EECGNet. In addition, PCA can analyze performance only according to the number of eigenvalues, whereas EECGNet contains various parameters (filter size, block size, patch size, and the overlap ratio of a block). Therefore, PCA compares and analyzes performance according to the number of eigenvalues, but EECGNet can analyze the influence of its various parameters and adjust them robustly for specific data. EECGNet is most affected by the patch size and filter size, and it is not significantly affected by the block size.

ECG Biometrics Based on EECGNet
In this section, we present the ECG-based preprocessing and the EECGNet analysis. In the preprocessing part, we show the sampling method based on Q-point detection in the original ECG signal. The EECGNet part presents the ECG features and analyses extracted by EECGNet.

Preprocessing
ECG recordings are typically contaminated by different types of noise and artifacts. In the preprocessing step, the goals are to reduce such noise and artifacts, to determine the fiducial points (P, Q, R, S, and T), and to remove amplitude and offset effects so that the signals from different patients can be compared. Typical types of noise are briefly described and grouped into categories in [37]. The preprocessing of the ECG significantly affects signal analysis and classification. Therefore, we applied the following preprocessing procedure (Steps 1-5). Figure 4 shows the preprocessing using the CU-ECG database.
Step 1: Convolution is performed on the original signal with an average filter of size 500, and the average convoluted signal is subtracted from the original signal.
Step 2: Convolution is performed with an average filter of size 10 for the regular signal.
Step 3: The largest value in the signal is detected.
Step 4: An average of 400 frames are extracted based on the peaks of both sides.
Step 5: The ECG average signal of one lead and two leads are connected.
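Steps 1–4 above can be sketched as follows (a simplified NumPy sketch on a synthetic signal; the helper name, the centering of the extracted frame, and the test signal are our own assumptions):

```python
import numpy as np

def preprocess(sig, frame=400):
    """Baseline removal, smoothing, peak detection, and segment
    extraction, following Steps 1-4 of the text."""
    # Step 1: subtract a size-500 moving average to remove baseline wander.
    baseline = np.convolve(sig, np.ones(500) / 500, mode="same")
    detrended = sig - baseline
    # Step 2: smooth with a size-10 moving-average filter.
    smooth = np.convolve(detrended, np.ones(10) / 10, mode="same")
    # Step 3: detect the largest value in the signal (taken as the peak).
    peak = int(np.argmax(smooth))
    # Step 4: extract `frame` samples around the detected peak.
    lo = max(peak - frame // 2, 0)
    return smooth[lo:lo + frame]

# Synthetic example: a narrow spike riding on a slow drift.
t = np.linspace(0, 1, 2000)
sig = 0.5 * t + np.exp(-((t - 0.5) ** 2) / 1e-4)
segment = preprocess(sig)
```

On this synthetic input, the moving-average subtraction removes the linear drift and the extracted 400-sample segment is centered on the spike.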

EECGNet-Based ECG Biometrics
ECG authentication, which has been highlighted in the field of biometric signal authentication, is studied to design a simple and secure authentication system through a band-type watch. ECG consists of 1D data, but the inputs of EECGNet are 2D and 3D. Therefore, ECG data of size 1 × 784 must be changed to a 2D size; we perform matrix resizing to transform 1D into 2D. First, we obtain a rectangular matrix of size k1 × k2 through patch-mean removal from the 2D ECG data. Subsequently, the covariance matrix, eigenvalues, and eigenvectors are obtained using PCA. This is the PCA process, and the next is the PCA output process. In the PCA output process, the image is zero-padded to size (m + k1 − 1) × (n + k2 − 1); subsequently, patch-mean removal is performed again. Next, the value of the patch-mean removal is projected onto the eigenvectors; the convolution output can be obtained at this stage. Subsequently, the rectangular matrix is obtained again through patch-mean removal, and the processes of PCA and PCA output are performed again. Subsequently, when the combination of weights is performed, the size m1 × m2 is returned to the original image size, and the sparse feature is constructed from the computed histogram. The Kronecker product, a tensor product of X and Y, is used in the PCANet algorithm. Finally, the extracted features are used as inputs to the SVM classifier and the final recognition rate is obtained. Figure 5 shows the structure of EECGNet. Figure 6 shows the convolution output of the MIT ECG database. Figure 7 shows a structural comparison of PCA with EECGNet. The pseudocode of the EECGNet algorithm is presented in Algorithm 2.
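The 1D-to-2D conversion described above can be sketched as follows (a minimal sketch; the 28 × 28 target shape is our assumption, implied by the 1 × 784 input size):

```python
import numpy as np

def to_2d(beat, shape=(28, 28)):
    """Reshape a 1 x 784 preprocessed ECG beat into a 2D image so it can
    be fed to EECGNet, which expects 2D (or 3D) inputs."""
    beat = np.asarray(beat, dtype=float).ravel()
    assert beat.size == shape[0] * shape[1], "beat length must match shape"
    return beat.reshape(shape)

beat = np.arange(784, dtype=float)   # placeholder for a real ECG beat
img = to_2d(beat)                    # 28 x 28 image for EECGNet
```

Row-major reshaping simply wraps consecutive samples into rows, so temporal neighbors stay adjacent within each row of the resulting image.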

Algorithm 2 EECGNet
Input: training data {Xi} (i = 1, . . . , N), Xi ∈ R^(m×n); the numbers of filters L1 and L2; the patch size k1 × k2.
1. Initialization of variables:
The size of the patch ← select the size of the patch in EECGNet
The number of filters ← select the number of filters in EECGNet
The size of a block ← select the size of a block in EECGNet
Sparsity regularization ← select the number of layers in EECGNet
2. First stage: patch sliding, patch-mean removal, and PCA filtering (Steps 1-3).
3. Second stage: repeat patch sliding, patch-mean removal, and PCA filtering on the first-stage outputs (Steps 4-6).
4. Output layer - Binary hashing: compute the decimal-valued images (Steps 7-8).
5. Output layer - Block-wise histograms: slide blocks and concatenate the histograms into the feature vector (Steps 9-10).
Output: feature vectors fed to the SVM classifier.

Experimental Results
This section lists the database acquisition process and environment, evaluates the data, and examines the similarities. In addition, we present the performance evaluation and the effectiveness of the EECGNet.

ECG
The ECG measurement method uses three types of leads: one lead (right hand, left hand), two leads (right hand, left foot), and three leads (left hand, left foot). In this paper, we use the simplest method, one lead, and the signal appears as a PQRST wave owing to the contraction of the ventricle. The P-wave is the first deflection of the ECG; it results from the depolarization of the atria. Atrial repolarization occurs during ventricular depolarization and is obscured. The QRS complex corresponds to ventricular depolarization. The T-wave represents ventricular repolarization, i.e., the restoration of the resting membrane potential. In about one-quarter of the population, a U-wave can be seen after the T-wave; it usually has the same polarity as the preceding T-wave. It has been suggested that the U-wave is caused by after-potentials that are probably generated by mechanical-electric feedback. Inverted U-waves can appear in the presence of left ventricular hypertrophy or ischemia. The PQ segment corresponds to electrical impulses transmitted through the S-A node, the bundle of His and its branches, and the Purkinje fibers, and is usually isoelectric. The PQ interval expresses the time elapsed from atrial depolarization to the onset of ventricular depolarization. The ST-T interval coincides with the slow and rapid repolarization of the ventricular muscle. The QT interval corresponds to the duration of the ventricular action potential and repolarization. The TP interval is the period during which the atria and ventricles are in diastole. The RR interval represents one cardiac cycle and is used to calculate the heart rate [1,2,38]. Figure 8 shows the ECG waves, segments, and intervals.

CU-ECG Database
The CU-ECG database consists of data collected at CU in Korea. The data were acquired from 100 persons; each measurement lasted 10 s, and a total of 60 measurements were taken over three days. The participants were measured while sitting on a chair in a relaxed state, and the data sampling rate was 500,000 Hz. Only Lead1 ECG was acquired, and the electrodes were wet electrodes. We used a processor, amplifier, bandpass filter, band-stop filter, and low-pass filter for the primary board and sensor. The processor is an Atmega8, and the analog-to-digital (AD) converter has a 10-bit resolution. The communication is USB to serial. The gain of the amplifier was 1000, and the signal was measured using a 5-V positive power source. Figure 9 shows the internal block diagram of the ECG measuring device.
To determine the suitability of the sensor, we compared the input and output signals using an SECG 4.0 machine, which is manufactured in Gyeonggi, South Korea [39]. In addition, it was confirmed that the signal was output according to the Association for the Advancement of Medical Instrumentation (AAMI) EC11 standard. The SECG 4.0 is an ECG performance tester designed for compliance tests and is compliant with IEC 60601-2-25, IEC 60601-2-27, IEC 60601-2-47, and AAMI EC11 [39]. The experimental sequence is to send a signal from the computer to the generator and connect the toolbox to the mainboard ECG system. Next, an output port is created in the system and measured with an oscilloscope. Figure 10 shows the ECG performance tester for compliance tests with its signal.

MIT-BIH ECG Database
The MIT-BIH ECG database includes 48 parts that contain two-channel ECG recordings. Those parts were recorded between 1975 and 1979 at Boston's Beth Israel Hospital (now the Beth Israel Deaconess Medical Center). The MIT-BIH ECG database was obtained from 47 persons, including 25 men and 22 women. Further, the men were 32-89 years of age, and the women were 23-89 years of age (two records are collected from the same male participant among all the records). This database was acquired in approximately 30 min. The sampling rate is 360 samples per second, and the resolution for digitization is 11-bit over a 10 mV range. Twenty data are constructed per class. The size of the training data is 940 × 1600, and both one lead and two leads are used. Figure 11 shows the training and testing feature data of 1D ECG using preprocessing.

Data Evaluation and Similarity Measurement
The compression ratio (CR) is defined as the ratio of the original signal size to the compressed signal size. The CR provides information about the degree by which the compression algorithm removes the redundant data. A higher CR requires fewer bits to store or transmit the data, which can be defined as where 0 is the total number of bits required to represent the original data, and is the total number of bits required to represent the compressed data. The percent mean square difference (PRD) measures the error between the original and reconstructed signals.
The percentage root mean square difference normalized (PRDN) is a normalized version of the PRD, which is independent of the signal mean value ̅ .

Data Evaluation and Similarity Measurement
The compression ratio (CR) is defined as the ratio of the original signal size to the compressed signal size. The CR provides information about the degree by which the compression algorithm removes the redundant data. A higher CR requires fewer bits to store or transmit the data, which can be defined as where B 0 is the total number of bits required to represent the original data, and B c is the total number of bits required to represent the compressed data. The percent mean square difference (PRD) measures the error between the original and reconstructed signals.
The percentage root mean square difference normalized (PRDN) is a normalized version of the PRD, which is independent of the signal mean value X.
The root mean square error (RMS) provides a measure of error in the reconstructed signal with respect to the original signal.
The signal-to-noise ratio (SNR) is the measure of the degree of noise energy introduced by compression in decibels (dB). Figure 12 shows the comparison of the MIT-BIH ECG database with wavelet decomposition and the CU-ECG database without wavelet decomposition. The root mean square error (RMS) provides a measure of error in the reconstructed signal with respect to the original signal.
The signal-to-noise ratio (SNR) is the measure of the degree of noise energy introduced by compression in decibels (dB). Figure 12 shows the comparison of the MIT-BIH ECG database with wavelet decomposition and the CU-ECG database without wavelet decomposition.

Performance Evaluation
The accuracy performance of the SVM is used as the ratio of the correct classification to the number of total classified samples. The accuracy can be formulized as follows: TP is the number of correct predictions for the positive samples, TN is the number of correct predictions for the negative samples, FN is the number of incorrect predictions for the positive samples, and FP is the number of incorrect predictions for the negative samples. Next, to extract the optimal performance of the EECGNet, various parameters were changed. The modified parameters correspond to h, k, l, and R. We set each parameter as ℎ 1 = ℎ 2 , 1 = 2 , 1 = 2 and extracted the performance accordingly. We adjusted each parameter for the performance of the experiment and analyzed the effects of the parameters. The analysis shows that the higher is the block size, the better is the recognition rate. The filter number generally shows a good performance between 4 and 8. Next, concerning the database, we performed ECG personal authentication using CEECGNet. The size of both the training data and the testing data is 784 × 8550 on the CU-ECG database. There are 17100 sizes, with 180 ECG signals per class, consisting of 95 classes. The training and testing

Performance Evaluation
The accuracy performance of the SVM is used as the ratio of the correct classification to the number of total classified samples. The accuracy can be formulized as follows: TP is the number of correct predictions for the positive samples, TN is the number of correct predictions for the negative samples, FN is the number of incorrect predictions for the positive samples, and FP is the number of incorrect predictions for the negative samples. Next, to extract the optimal performance of the EECGNet, various parameters were changed. The modified parameters correspond to h, k, l, and R. We set each parameter as h 1 = h 2 , k 1 = k 2 , L 1 = L 2 and extracted the performance accordingly. We adjusted each parameter for the performance of the experiment and analyzed the effects of the parameters. The analysis shows that the higher is the block size, the better is the recognition rate. The filter number generally shows a good performance between 4 and 8. Next, concerning the database, we performed ECG personal authentication using CEECGNet. The size of both the training data and the testing data is 784 × 8550 on the CU-ECG database. There are 17,100 sizes, with 180 ECG signals per class, consisting of 95 classes. The training and testing data are divided by 0.5, resulting in a size of 8550. Training and testing data were selected as shown in Figure 13. Figure 13 shows how to divide training and testing data.  Table 2 shows the performance of EECGNet using the CU-ECG database. Table 3 shows the performance of EECGNet using the MIT-BIH ECG database. Table 4 shows a comparison among EECGNet, PCA, AE, ELM, and EELM. Figure 14 shows the performance of the MIT-BIH ECG database using one lead. Figure 15 shows the performance of ELM using the MIT-BIH ECG database and CU-ECG database. Figure 16 shows the identification performance for the MIT-BIH ECG database using one lead.   Table 2 shows the performance of EECGNet using the CU-ECG database. 
Table 3 shows the performance of EECGNet using the MIT-BIH ECG database. Table 4 shows a comparison among EECGNet, PCA, AE, ELM, and EELM. Figure 14 shows the performance of the MIT-BIH ECG database using one lead. Figure 15 shows the performance of ELM using the MIT-BIH ECG database and CU-ECG database. Figure 16 shows the identification performance for the MIT-BIH ECG database using one lead.  Table 2 shows the performance of EECGNet using the CU-ECG database. Table 3 shows the performance of EECGNet using the MIT-BIH ECG database. Table 4 shows a comparison among EECGNet, PCA, AE, ELM, and EELM. Figure 14 shows the performance of the MIT-BIH ECG database using one lead. Figure 15 shows the performance of ELM using the MIT-BIH ECG database and CU-ECG database. Figure 16 shows the identification performance for the MIT-BIH ECG database using one lead.

Results and Discussion
We recruited 100 subjects and and acquired three days of data per person. We propose an EigenECG Network (EECGNet) based on the principal component analysis network (PCANet) for the personal identification of electrocardiogram (ECG) from human biosignal data. In this paper, we propose the classification of ECG for identification systems using EECGNet from the human biosignal data. ECG authentication, which has been highlighted in the field of biometric signal authentication, was studied to design a simple and secure authentication system through a band-type clock. We designed the EECGNet-based SVM classifier for the ECG authentication system. The design results show good performance when compared with other algorithms, and the validity of EECGNet was confirmed. The validity of EECGNet was confirmed to be 98.2% and the proposed method showed good performance when compared with conventional algorithms such as PCA, auto-encoder, ELM, and EELM.
In addition, we analyzed the results in terms of parameters of PCANet. To illustrate with visual data, Figure 17 is shown. As the number of L, which is a parameter of PCANet, increases, the performance of the recognition rate also increases. R did not significantly affect performance. Figure 17 fixes R and h, and increases k and L from 1 to 9. When L is 9, k is increased by 1, and when k is 9, h is increased by 1. It can be seen that 10 cycles are created by looking at the whole shape. This means that every time L becomes 9 and k becomes 9, a cycle occurs once every 50 times; as a result, the performance is affected by the parameters of k and L. In addition, when the lowest performance of 86% was shown, L was 3 and k was 9. That is, when L becomes k × k, the performance is low.
On the other hand, we compared performance with existing algorithms. PCANet shows higher performance than PCA derived PCA. PCA showed stable performance between 30 and 40 eigenvectors. In addition, it shows better performance than AE, which is the basic model of deep running. The advantage of PCANet is that it is a simple deep-learning, but it is higher in performance than the existing deep learning. ELM is also a model that does not learn. That is, there is no backpropagation. ELM was not as high as PCANet's performance despite its fast and popular algorithms. In the active function of the ELM, the sigmoid function, the relu function, and the sin function showed more than 85% performance. For the experiment, we designed the ensemble model using the ELM activation function, but the performance of PCANet was high.
When analyzing DB, the performance of CU-ECG database was better than MIT ECG database. First, participants in the ECG CU database did not have heart disease, and, second, the CU-ECG database was the bigger than the MIT-BIH database. ECG is sensitive to heartbeat and heart disease. Therefore, the MIT-BIH database affects classification because of irregular data. In addition, the size of the data affected the learning of PCANet. The database of MIT-BIH is relatively small.
On the other hand, we used the MIT-BIH ECG database in the field of recognition. The reason the MIT-BIH ECG database was used in the recognition field is that 47 patients did not judge the same class despite having the same disease (arrhythmia). Even in a few disease trials, arrhythmia, which has a significant effect on ECG, was not a major problem in ECG recognition, and the results are shown in Table 3. Figure 17 shows performance of MIT-BIH database according to number of parameters.

Conclusions
We herein propose the classification of ECG for identification systems using EECGNet from the human biosignal data. ECG authentication, which has been highlighted in the field of biometric signal authentication, was studied to design a simple and secure authentication system through a band-type clock. We designed the EECGNet-based SVM classifier for the ECG authentication system. The results show good performance when compared with other algorithms, and the validity of EECGNet was confirmed. The validity of EECGNet was confirmedto be 98.2% and the proposed method showed good performance when compared with conventional algorithms such as PCA, auto-encoder, ELM, and EELM. We verified the influence of these parameters on the performance of PCANet by modifying the key parameters of the experiment. Subsequently, we constructed an ECG dataset to perform the experiment and to verify whether the EECGNet can be used for the identification. In particular, EECGNet could influence the ECG certification system because of the advantage of the faster verification time. In the future, we plan to investigate EECGNet in three dimensions, as well as study the use of 1D ECG data and the structure change in the PCANet algorithm.