Article

Sex Recognition through ECG Signals aiming toward Smartphone Authentication

Jose-Luis Cabra Lopez, Carlos Parra, Libardo Gomez and Luis Trujillo
1 Department of Telecommunications, Faculty of Engineering, Fundación Universitaria Compensar, Bogota 111311, Colombia
2 Department of Electronics, Faculty of Engineering, Pontificia Universidad Javeriana, Bogota 110231, Colombia
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(13), 6573; https://doi.org/10.3390/app12136573
Submission received: 14 May 2022 / Revised: 19 June 2022 / Accepted: 23 June 2022 / Published: 29 June 2022
(This article belongs to the Special Issue AI, Machine Learning and Deep Learning in Signal Processing)

Abstract

Physiological signals are strongly related to a person's state of health and carry information about the human body. From the ECG, for example, it is possible to infer cardiac disease, emotions, personal identity, and a person's sex, among other attributes. This paper studies the heartbeat from a soft-biometric perspective to be applied to smartphone unlocking services. We employ the user's heartbeat to classify the individual by sex (male, female) using Deep Learning, reaching an accuracy of 94.4% ± 2.0%. This result was obtained with an RGB representation that combines the time-frequency transforms of the pseudo-orthogonal X, Y, and Z bipolar signals. Evaluating the contribution of each bipolar lead, we found that the XYZ combination provides the best category distinction using GoogLeNet. The 24-h Holter database of the study contains 202 subjects, 49.5% of whom are female. We propose an architecture for managing this signal that allows the network to be trained with few samples. Because the ECG is a hidden trait, it does not share vulnerabilities such as public trait exposure, sensitivity to light and noise, or observability that affect fingerprint, facial, voice, or password verification methods. ECG may therefore fill those gaps en route to a cooperative authentication ecosystem.

1. Introduction

ECG signals are mostly used for monitoring a patient’s health status. This signal can provide information about arrhythmia and coronary artery disease, among other conditions. However, researchers have found that it is possible to extract from the heartbeat specific information such as emotions [1], age estimation [2], person identification [3], and sex [4].
Sex recognition has been studied using body [5] and face [6] images, gait [7], voice [8], twittering [9], and keystroke dynamics [10], among others.
Cardiovascular diseases are the most prominent cause of death in the world [11]. According to the Centers for Disease Control and Prevention, around 659,000 people die from heart disease each year in the USA, representing about 25% of deaths in the country [12]. Men and women suffer their first heart attack at approximately 65 and 72 years old, respectively [13]. Some studies have shown that men are twice as susceptible to having a heart attack as women [14,15], though the survival rate after a heart attack is lower in women [13]. This may be attributed to the fact that a woman's heart tends to be smaller and can pump 10% less blood, and potentially to women experiencing the heart attack at an older age [16]. Women's hearts tend to have shorter PR and QRS intervals [17], longer QT duration, and distinct T waveform dynamics compared to men [18].
We propose the ECG sex classification study as the first step and approach to fighting cardiovascular diseases because the discrimination could help to emphasize specific heart behaviors given the person’s anatomical features. Although sex recognition has been covered by cardiologists, this area has turned to machine learning (ML) algorithms since 2014 [3,19], and reciprocally, ECG has proven its biometric characteristics through ML, too [20].
In fact, biometric multimodal studies have pointed out that including soft-biometric features can enrich and facilitate discrimination performance [21], which has mostly been proven in camera-related systems. Soft biometrics refers to the extraction of shared, or non-unique, traits such as ethnicity, sex, and age that help to describe human subjects but are not strong enough to distinguish between two people [22]. Following this scheme, we propose to treat the ECG sex property as a soft-biometric input for future inclusion in ECG-related verification engines, complementing their classification space. Moreover, we intend to strengthen this technology so that it can be adopted as a smartphone authentication tool in the near future. Recent tests show that smartphone fingerprint and face authentication can be hacked [23,24,25,26], evidencing that biometric authentication is not infallible [27]. Therefore, user privacy protection should be treated as a joint effort across authentication methods, embracing any trait that can cover the gaps left by the others. Consequently, it is necessary to be aware of the strengths and weaknesses of each authentication tool when the user's context changes, because each biometric method inherits its sensor's constraints. ECG biometrics is not affected by light-saturated or noisy environments, in contrast to touch-screen, camera, or microphone sensors. In addition, because the face is publicly exposed and fingerprints are left on everyday objects, copy or reproduction attacks are more probable for face and fingerprint; the ECG, instead, is a hidden measure [3]. The ECG also carries an inherent liveness property, making it a prospective ubiquitous tool for continuous or transparent authentication mechanisms. Although this area is still maturing, dealing with issues such as signal acquisition time, computational complexity, and user comfort, new related products evidence the first market inclusions, as described in Section 5.
Fiducial-based approaches measure the temporal and amplitude proportions within and between the P, Q, R, S, and T peaks (Figure 1) and have been a classical and useful tool for studying the ECG. However, they do not cover the whole signal waveform; for that reason, it is preferable to choose transformations or feature-extraction options that cover the complete heartbeat. On the other hand, when aiming at a portable device, variables such as comfort and fast setup call for as few electrodes as possible; nevertheless, the best accuracy scores in sex recognition mostly rely on a 12-lead configuration with deep learning as the classifier. Similarly, a person's posture during the day directly affects the heartbeat waveform; even so, resting-position samples predominate in sex-recognition studies. Moving to the smartphone authentication space, users demand a fast unlocking process, whereas ten seconds is the common acquisition window in the related work. Finally, unlike static biometric traits such as the face or fingerprint, the heart exhibits dynamic behavior, so the classifier needs to adapt to the nature of this phenomenon.
The contributions of this work are listed below.
  • We implement a novel system that uses the heart rate as a feature for classifier selection, reaching a general accuracy of 94.4% ± 2.0% with peaks above 96% in some bins, collecting only four heartbeats per sample.
  • Our approach does not use the 12-lead configuration that is common in related studies. Instead, we use a pseudo-orthogonal lead configuration with three signals.
  • We propose an RR-based discrimination to feed the deep Convolutional Neural Network (CNN) classifier. It requires fewer samples to train the classifier while achieving results comparable to related work.
  • Taking advantage of the whole signal waveform, we provide an RGB strategy for representing the orthogonal lead signals in one sample through a wavelet transform.
  • Our experiment can classify the person’s sex without controlling the person’s stance.
The structure of this work is as follows. Section 2 reviews related work on sex classification, providing the latest results in this area. Materials and methods are covered in Section 3: Section 3.1 describes the database features and Section 3.2 the proposed methodology. Section 4 presents the architecture that implements the procedure of our work. The experimental results are presented in Section 5, and we close with the discussion (Section 6) and conclusions (Section 7).

2. Related Work

This section intends to provide a scope of the work on sex recognition in ECG during the last few years, starting with an overview of Deep Learning in ECG, covering some examples of applications such as arrhythmias, sleep apnea, pregnancy, R peak detection, QRS complex recognition, cardiac rhythm, and human identification.
Most of the work on deep learning in ECG is related to arrhythmia detection, with the intention of avoiding human error and supporting the processing of large databases for better diagnosis. İzci et al., using a CNN and the MIT-BIH database, differentiate normal beats from left bundle branch block (LBBB), right bundle branch block (RBBB), premature ventricular contraction (PVC), and premature atrial contraction (PAC) arrhythmias with 98.07% accuracy [29]. Essa et al. [30], also addressing arrhythmia detection, proposed a bagging model of CNN-LSTM (long short-term memory) and RRHOS-LSTM (RR interval and higher-order statistics) classifiers, each trained with different sub-samples, both connected to an ANN meta-classifier and verified with a final CNN-LSTM model, achieving an overall performance of 95.81%. Ahmad et al. [31] proposed the Multimodal Image Fusion (MIF) and Multimodal Feature Fusion (MFF) frameworks for arrhythmia and myocardial infarction; each receives the ECG signal transformed into Gramian Angular Field, Recurrence Plot, and Markov Transition Field images. MIF fuses its image inputs into a new image for CNN classification, while MFF fuses the features extracted from the penultimate CNN layer to train a Support Vector Machine (SVM) for heartbeat classification. The model achieved accuracies of 99.7% and 99.2% for arrhythmia and myocardial infarction, respectively. Niu et al. [32], from an inter-patient perspective, transform the signal waveform and dynamics into a symbolic space on the MIT-BIH arrhythmia dataset to detect supraventricular ectopic beats (SVEB) and ventricular ectopic beats (VEB) using a multi-perspective convolutional neural network, with an overall accuracy of 96.4%. Given the extent of this field, Ebrahimi et al. [33] composed a review focused on the application of deep learning to arrhythmia detection.
Regarding sleep apnea detection, Bahrami et al. [34] performed a comprehensive comparison between 14 conventional ML algorithms and deep learning approaches such as CNNs, recurrent networks, and hybrid CNNs, with hybrid deep models giving the best detection: an accuracy, sensitivity, and specificity of 88.13%, 84.26%, and 92.27%, respectively. For pregnancy ECG separation, Lee et al. [35] seek to distinguish the fetal ECG from the dominant abdominal maternal ECG, looking for early detection of cardiac risks; they decompose the joint signal using the W-Net deep learning architecture based on QRS complexes, reaching a recall of 98.23% and a precision of 99.3% on the abdominal and direct fECG database (ADFECGDB). A complementary approach to be tested in future analyses, proposed by Li et al., consists of signal separation by IMF decomposition to recognize specific maternal and prenatal components with an LSTM-RNN [36,37]. For R-peak detection, the model of Gajare et al. is trained with wavelet scalograms from the ECG Heartbeat Categorization Dataset on Kaggle and evaluated on the MIT-BIH Arrhythmia Dataset, achieving an accuracy of 98.32% [38]. In QRS detection, Cai et al. [39] handle arrhythmia samples in the presence of noise with CNN and CRNN models, reaching an F1 score of 0.99 across four different databases and thus supporting a generalizable model. Pokaprakarn et al. [40] classify ECG rhythm from spectrogram and waveform inputs of noisy 5-second sequences and verify the model with an external database, achieving an F1 score of 0.89 for discriminating the Normal Sinus Rhythm (NSR), Atrial Fibrillation (AF), Supraventricular Tachyarrhythmia (SVTA), Bigeminy (B), and Trigeminy (T) classes. Finally, ECG biometrics is an emerging area. For example, Abdeldayem et al. developed a CNN for person identification that extracts attributes via spectral correlation, cross-validating nine different databases and reaching 95.6% accuracy [41]. Given the expansion of this topic, we recommend the survey by Uwaechia et al. [42], which covers deep supervised, deep semi-supervised, and deep unsupervised learning methods for person identification.
Regarding ECG sex recognition, Strodthoff et al. provided PTB-XL, a database from the Physikalisch-Technische Bundesanstalt (PTB) that contains 71 different annotations and 5 diagnostic super-classes. PTB-XL follows the standard communications protocol for computer-assisted electrocardiography (SCP-ECG) and is intended as a reference point for future machine learning research on different ECG classification tasks [43]. The authors also classify a person's sex with transfer learning and inception models, obtaining an accuracy of 84.9% with a pre-trained ResNet model. Their work additionally addresses age prediction, diagnosis likelihood, and person stratification.
Siegersma et al. studied mortality risk in relation to sex misclassification from the ECG, reporting a risk 1.4 times higher for people whose ECG samples were misclassified than for those properly classified [44]. They also implemented a Deep Neural Network (DNN) for sex classification that, at a 95% confidence level, achieved an accuracy of 89% for internal validation and of [80%, 82%] and [80%, 83%] for external validation on two different databases.
On the other hand, Diamant et al., building on SimCLR [45], implemented a system that automatically acquires information about the ECG signal in a pre-training stage known as PCLR, trained with around 3.2 million ECGs using contrastive learning and without the requirement to fine-tune the network [46]. They transform each sample into a linear representation, and contrastive learning enables a self-supervised scheme for unannotated data that highlights regions of similarity or distinction so that they can be reused in new tasks. The system targets age prediction, sex classification, and the detection of left ventricular hypertrophy (LVH) and atrial fibrillation (AF) through four stages: a selection module, an encoder, a projection head, and a contrastive loss function. For sex classification, with more than 5000 samples, it achieves an F1 score of 86%.
It is also noteworthy that the contributions of Attia et al. [2], Siegersma et al. [4], and Lyle et al. [47] reached accuracies of 90.4%, 92.2%, and [91.3%, 86.3%], respectively. In Table 1, more indicators can be found regarding the mentioned work in this section.
The advantage of our work compared with the related work is a competitive accuracy score obtained under the following conditions: (i) fewer electrodes on the person, which means a simpler setup and greater user comfort; (ii) fewer signals to study; (iii) a shorter signal acquisition window; (iv) a proposal for signal integration; and (v) the nature of our database, since our work classifies sex without taking the person's posture into account, while the related work mostly uses patients in the resting position.

3. Materials and Methods

3.1. Database Features

Our data belong to the Medical Center of the University of Rochester, under the reference code E-HOL-03-0202-003 [48]. The dataset includes 202 healthy subjects recorded with a pseudo-orthogonal lead configuration (X, Y, and Z) on a 24-h Holter. Only heartbeats labeled as Normal are used in this research. Complementing the official statistics, we detected four things: (1) all labels, such as R-peak locations, start after minute 5; (2) the mean recording duration is 19.7 ± 6.1 h; (3) THEW's website reports two sex labels as unknown, which is incorrect; the University of Rochester confirmed 102 males and 100 females; (4) we decided to discard volunteer #1043 due to quality issues.
The pseudo-orthogonal placement captures the heart signal along three orthogonal vectors, using six electrodes to acquire the bipolar signals X, Y, and Z, as shown in Figure 2. The X signal is measured at the V6 position against its equivalent point on the right side. The Y signal runs from the left anterior axillary line at the lower rib to the upper sternum. Finally, the Z signal is measured at the V3 position, on the chest and on the person's back.

3.2. Methodology

Looking to cover the heartbeat's dynamism, we adopt a histogram strategy to observe the differences between male and female heartbeats based on their rhythm. The RR interval commonly oscillates between 0.6 and 1.2 s [50], which defines the border values of the histogram. With a bin width of 0.06 s, the total number of bins is ten; this value is the difference between the maximum and minimum RR values divided by the bin width. Figure 3, built from 9 million samples per sex, shows that both categories (male and female) have similar RR tendencies, separated into 10 bins with seconds as the unit.
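As an illustration of this binning step, the following minimal Python sketch assigns RR intervals (in seconds) to the ten bins described above; the function names are ours and the R-peak indices in the example are hypothetical.

```python
import numpy as np

# Bin definition from the text: RR values between 0.6 s and 1.2 s,
# bin width 0.06 s -> (1.2 - 0.6) / 0.06 = 10 bins.
RR_MIN, RR_MAX, BIN_WIDTH = 0.6, 1.2, 0.06
BIN_EDGES = np.arange(RR_MIN, RR_MAX + BIN_WIDTH, BIN_WIDTH)  # 11 edges, 10 bins

def rr_intervals(r_peak_samples, fs):
    """RR intervals in seconds from R-peak sample indices."""
    return np.diff(np.asarray(r_peak_samples)) / fs

def assign_bins(rr_seconds):
    """Return a 0-based bin index (0..9) per RR interval; -1 if out of range."""
    idx = np.digitize(rr_seconds, BIN_EDGES) - 1
    idx[(rr_seconds < RR_MIN) | (rr_seconds >= RR_MAX)] = -1
    return idx

# Example with hypothetical R-peak annotations sampled at 200 Hz (the database rate).
rr = rr_intervals([0, 170, 345, 510, 700], fs=200)
print(assign_bins(rr))  # bin index for each RR interval
```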
Taking advantage of the previous distribution, we propose to route each bin to a different classification model (Figure 4) in order to better differentiate the sets of heartbeats. We assume that similar RR values produce similar waveforms, which is intended to help the classifier's performance.
To evaluate each signal input, we develop the same experimental structure seven times, enabling different ECG leads each time. As a result, the set of experiments follows the sequence X, Y, Z, XY, XZ, YZ, XYZ, as enumerated in the sketch below.
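The seven lead subsets can be generated programmatically in the same order; the snippet is purely illustrative.

```python
from itertools import combinations

LEADS = ("X", "Y", "Z")

# Non-empty subsets of the pseudo-orthogonal leads, singles first, then pairs,
# then the full triple -> the experiment sequence used in this work.
experiments = ["".join(c) for r in range(1, 4) for c in combinations(LEADS, r)]
print(experiments)  # ['X', 'Y', 'Z', 'XY', 'XZ', 'YZ', 'XYZ']
```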

4. Architecture

An overview of the scheme proposed in this work consists of the six blocks described in Figure 5. (i) The database module contains the set of ECG data to analyze, with the attributes described in Section 3.1. (ii) The noise-filtering component applies both high-pass and stop-band filtering to the X, Y, and Z signals. (iii) The heartbeat selector block groups the heartbeats according to the ten bins of Figure 3 and randomly selects two samples per bin, each composed of four beats. (iv) The signal transformation module takes each four-beat sample, composed of the X, Y, and Z signals, and turns each lead into the time-frequency domain to build an RGB image. (v) The sample storage element collects each image, saving it in the male or female folder, as appropriate. (vi) The classification section generates the computational model; for this, it splits and processes the stored samples for training and testing.
Delving into the scheme of Figure 5, the architecture described in Figure 6 is composed of two general stages: signal preprocessing and data classification. The following procedure is applied to each person in the database.
The preprocessing stage contains five steps, with the X, Y, and Z signals involved in each of them. First, the complete recording goes through a 0.7 Hz high-pass filter and a 50/60 Hz notch filter to remove noise such as baseline wander and powerline interference. Then, taking advantage of the database annotations in the Holter recording (Section 3.1), we collect the vector indices of R peaks whose RR values fall in the working bin (Figure 3). Next, per RR bin, we randomly take two of them, each extended with its next three R peaks, to complete one sample.
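A minimal sketch of this filtering and sample-selection step is shown below, assuming SciPy and the 200 Hz sampling rate reported in Table 1; the filter order and notch quality factor are our assumptions, since the text only specifies the cutoff and notch frequencies.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 200  # Hz, sampling rate of the Holter recordings (Table 1)

def preprocess_lead(x, fs=FS, powerline_hz=60.0):
    """High-pass at 0.7 Hz (baseline wander) plus a notch at the powerline
    frequency. The filter order (2) and notch quality factor (30) are
    illustrative choices, not values stated in the paper."""
    b_hp, a_hp = butter(N=2, Wn=0.7, btype="highpass", fs=fs)
    x = filtfilt(b_hp, a_hp, x)
    b_n, a_n = iirnotch(w0=powerline_hz, Q=30.0, fs=fs)
    return filtfilt(b_n, a_n, x)

def pick_samples(r_peaks, bin_members, rng, n_samples=2):
    """Randomly choose `n_samples` R peaks whose RR interval falls in the
    working bin and extend each with its next three R peaks (four beats)."""
    chosen = rng.choice(bin_members, size=n_samples, replace=False)
    return [r_peaks[i:i + 4] for i in chosen]

# Example usage with a synthetic signal and hypothetical annotations:
rng = np.random.default_rng(0)
clean = preprocess_lead(rng.standard_normal(FS * 10))          # 10 s of fake lead data
samples = pick_samples(list(range(0, 2000, 170)), bin_members=[2, 4, 6, 8], rng=rng)
```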
In the next step, the X, Y, and Z samples are transformed into the wavelet space. This transformation exploits the entire ECG waveform, jointly highlighting time and frequency information in a two-dimensional space without data loss. We implemented a continuous wavelet filter bank based on an analytic Morse wavelet with a time-bandwidth product of 60, a symmetry parameter of 3, and 12 voices per octave.
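The sketch below approximates this transformation in Python. Since PyWavelets does not provide the analytic Morse filter bank described above, a complex Morlet CWT is used as a stand-in, and the scale range is our choice; it illustrates the step rather than reproducing our exact implementation.

```python
import numpy as np
import pywt

FS = 200  # Hz

def tf_magnitude(segment, fs=FS, voices_per_octave=12, octaves=7):
    """Time-frequency magnitude of one four-beat lead segment.

    A complex Morlet CWT stands in for the analytic Morse filter bank
    described in the text; the number of octaves is an assumption."""
    scales = 2.0 ** (np.arange(1, octaves * voices_per_octave + 1) / voices_per_octave)
    coeffs, _ = pywt.cwt(segment, scales, "cmor1.5-1.0", sampling_period=1.0 / fs)
    return np.abs(coeffs)  # shape: (n_scales, n_samples)
```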
Each transformation magnitude matrix (Wvt-X, Wvt-Y, Wvt-Z) is used as one channel of an RGB image: the matrices Wvt-X, Wvt-Y, and Wvt-Z correspond to the red, green, and blue channels, in that order; when the X, Y, or Z lead is disabled, a zero matrix is used in its place. Finally, each image is stored in a different folder depending on the person's sex.
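The channel assembly and storage can be sketched as follows; the per-channel min-max normalization, the 224 × 224 output size (GoogLeNet's input resolution), and the folder layout are our assumptions.

```python
import numpy as np
from PIL import Image

def to_rgb_image(wvt_x, wvt_y, wvt_z, size=(224, 224)):
    """Map |CWT| of leads X, Y, Z to the R, G, B channels; a zero matrix
    replaces any disabled lead. Normalization and resizing are assumptions."""
    mats = [wvt_x, wvt_y, wvt_z]
    ref = next(m for m in mats if m is not None)   # at least one lead is enabled
    channels = []
    for wvt in mats:
        if wvt is None:                            # disabled lead -> zero channel
            wvt = np.zeros_like(ref)
        lo, hi = float(wvt.min()), float(wvt.max())
        channels.append((255 * (wvt - lo) / (hi - lo + 1e-12)).astype(np.uint8))
    return Image.fromarray(np.stack(channels, axis=-1), mode="RGB").resize(size)

# Hypothetical storage layout: one folder per RR bin and per class, e.g.
# to_rgb_image(wx, wy, wz).save("dataset/bin_04/female/p1002_s01.png")
```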
The classification is performed with a 70–30% data split. We use the pre-trained GoogLeNet network, a 144-layer convolutional neural network provided by Google, with a dropout of 60%, a minibatch size of 15 samples, and a maximum of 20 epochs.
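A transfer-learning sketch of this classification stage is shown below, using torchvision's pre-trained GoogLeNet (the text does not state the framework used). The 70/30 split, 60% dropout, minibatch of 15, and 20 epochs come from the text; the optimizer, learning rate, image normalization, and dataset path are our assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

# One ImageFolder per RR bin, with "female"/"male" subfolders (hypothetical path).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("dataset/bin_04", transform=tfm)

n_train = int(0.7 * len(data))                          # 70-30% split
train_set, test_set = random_split(data, [n_train, len(data) - n_train])
train_loader = DataLoader(train_set, batch_size=15, shuffle=True)  # minibatch 15
test_loader = DataLoader(test_set, batch_size=15)

model = models.googlenet(weights="DEFAULT")             # pre-trained GoogLeNet
model.aux_logits = False                                # ignore auxiliary heads
model.dropout = nn.Dropout(p=0.6)                       # 60% dropout
model.fc = nn.Linear(model.fc.in_features, 2)           # male / female

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer/LR

for epoch in range(20):                                 # maximum of 20 epochs
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()

model.eval()
with torch.no_grad():
    correct = sum((model(x).argmax(1) == y).sum().item() for x, y in test_loader)
print(f"Test accuracy: {correct / len(test_set):.3f}")
```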

5. Results

We report seven different and independent experiments following the architecture of Figure 6, starting with the single leads (X, Y, Z) and then the possible combinations (XY, XZ, YZ, XYZ). The results are separated into two parts: single-lead and mixed-lead analysis.
From the accuracy perspective, the X lead reaches 87.3% ± 4.4%, the Y lead 84.6% ± 4.0%, and the Z lead 86.6% ± 4.0%. Figure 7 consolidates the result set, with the biggest score per bin marked with a red dot. Under the single-lead configuration, the X lead provides the best performance, topping 7 of the 10 bins, compared to the Y and Z leads with 1 and 2 bins, respectively.
Taking only the best result per bin, the overall sensitivity climbs to 89.6% ± 4.5%, specificity to 86.2% ± 8.1%, precision to 86.8% ± 6.3%, and accuracy to 87.9% ± 4.1%. Figure 8 consolidates the metrics derived from the confusion matrices of the classification.
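For clarity, the metrics reported throughout this section derive from the binary confusion matrix as sketched below; the counts in the example are made up, and the choice of positive class is ours.

```python
def binary_metrics(tp, fn, fp, tn):
    """Confusion-matrix derivations used in Section 5 (one class, e.g. female,
    is taken as positive; that choice is ours for illustration)."""
    sensitivity = tp / (tp + fn)                 # true-positive rate / recall
    specificity = tn / (tn + fp)                 # true-negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, precision, accuracy

# Illustrative counts only (not taken from the paper):
print(binary_metrics(tp=90, fn=10, fp=14, tn=86))
```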
On the other hand, the XYZ lead combination achieved an accuracy of 94.4% ± 2.0%, an improvement of 7.1 percentage points over the single X lead. The XY combination achieved an accuracy of 90.3% ± 3.6%, XZ 92.8% ± 2.2%, and YZ 89.3% ± 2.8%. In 9 of the 10 bins, the XYZ scheme was the best (Figure 9).
With the best values per bin, the classification reaches a sensitivity of 94.4% ± 3.6%, specificity of 94.5% ± 3.8%, precision of 94.4% ± 3.4%, and accuracy of 94.4% ± 2.0%. The results in Figure 8 and Figure 10 are subtracted from each other and compared in Figure 11. As a result, the sensitivity improved by [12.7%, 3.5%, 9.7%, 3.6%, 12.0%, 5.6%, 5.5%] in [B1, B3, B4, B6, B7, B9, B10] but decreased by [1.8%, 2.5%, 0.4%] in [B2, B5, B8]. The specificity improved by [7.3%, 17.9%, 14.4%, 5.9%, 9.4%, 18.8%, 8.2%, 4.0%] in [B1, B2, B3, B4, B5, B6, B8, B9] but decreased by [2.1%, 0.9%] in [B7, B10]. The precision improved by [7.2%, 14.4%, 12.0%, 6.5%, 9.4%, 15.5%, 8.0%, 4.8%] in [B1, B2, B3, B4, B5, B6, B8, B9] but decreased by [1.3%, 0.8%] in [B7, B10]. The accuracy improved in all segments, by [9.9%, 8.2%, 9.0%, 7.8%, 3.5%, 11.3%, 4.8%, 4.1%, 4.7%, 1.9%] in [B1, B2, B3, B4, B5, B6, B7, B8, B9, B10].
In summary, [B1, B3, B4, B6, B9] improved in all their derived values. The cases of B2 and B5 are similar: both lost 1.8% and 2.5% in sensitivity, ending at 91.0% and 86.3%, respectively, but both gained significantly in specificity and precision, with values of B2: [93.4%, 93.1%] and B5: [98.9%, 98.8%]. On the other hand, B7 lost 2.1% and 1.3% in specificity and precision, both ending at 92.5%, compensated by a 12% increase in sensitivity, reaching 98%. B8 reached 94.3% sensitivity, a decrease of 0.4%, but gained 8.2%, 8.0%, and 4.1% in specificity, precision, and accuracy, with final values of 94.7%, 93.2%, and 96.6%. Finally, B10 gained 5.5% and 1.9% in sensitivity and accuracy, reaching 96.5% and 95.5%, and lost 0.9% and 0.8% in specificity and precision, with final values of 94.7% and 93.7%. The general classification metrics of the experiment are presented in Table 2.

Toward ECG Sex Recognition for Smartphone Authentication?

The previous sections have presented our architecture proposal and results, obtaining a significant classification score for ECG sex recognition and consolidating the extraction of the soft-biometric sex attribute from the ECG. Nonetheless, a valid question remains: can ECG acquisition be exploited and deployed in the market? The short answer is yes. Several companies have developed products oriented toward portable ECG analytics. All of them use the Lead-I configuration and offer comfortable signal acquisition suited to their final application. However, none of them focuses on sex recognition; they instead target identification, driving-risk monitoring, and health monitoring. Below we briefly describe selected products from AliveCor, CardioID, and NYMI (Figure 12).
For heart monitoring, KardiaMobile from AliveCor is a comfortable smartphone-attachable device that helps detect six different types of arrhythmia. Its functionality can be expanded with the KardiaCare app for cloud services such as periodic cardiologist review and secure storage, among others. The device has FDA approval and has been applied to user verification [51], and in 2020 AliveCor received USD 65 million in investment from tier-1 companies such as Qualcomm, among others [52].
CardioID, also in the heart-monitoring trade, has a deep research background and partners such as CEiiA and the multinational Bosch. Its heartbeat traceability is applied in the transportation business, embedding the leads in the truck's steering wheel to track driver identity, detect drowsiness or fatigue, and monitor cardiac pathologies [53]. In this way, when a heart abnormality is detected, an alert is raised, all linked to their central platforms. CardioID belongs to the VALU3S project (Verification and Validation of Automated Systems' Safety and Security), sponsored by the European Union, with partners from ten countries including Portugal, France, Germany, the Czech Republic, Ireland, Italy, Spain, Sweden, and Turkey. Importantly, the CardioID acquisition technique can be applied to any other object and context, including a smartphone.
NYMI, a startup from the University of Toronto and part of the Innominds consortium, has developed a fully integrated workplace wearable; its ecosystem includes 100+ well-ranked companies such as Microsoft and Honeywell, among others [54]. The NYMI band validates a user within an organization for access control. It has on-body detection supported by two biometric measures: heartbeat and fingerprint. The validation acts as a user identifier for company facilities, object manipulation, and computer session login through NFC. Depending on the user's ACL, the company authorizes specific options that are triggered only in the user's presence, for example, machine ignition or distance measurement through the band's Bluetooth radio.
Progressively, comfortable ECG acquisition has become less complex. AliveCor, CardioID, and NYMI are spearheading examples of innovative ways to capture ECG signals in their products. Evaluating the implementation of the ECG sex classification algorithm on such devices, as a soft-biometric input for medical decisions or user authentication, would prepare the market for future implementations.

6. Discussion

From the perspective of our experiment, the average classification accuracy obtained was 94.4%, with peak values greater than 96% in several bins, using the Rochester database and GoogLeNet as the classifier, although these results are influenced by the signal quality and the attributes of the database population.
The method proposed in this work combines the time-frequency representations of the X, Y, and Z signals into one RGB image. The literature close to this work commonly uses 12 leads and a 10-s time window. In contrast, our approach uses a pseudo-orthogonal configuration and a shorter signal capture (approx. 4 s), allowing fewer electrodes and less data. In addition, we propose to treat the heart rate dynamism as an atomic variable before any classification approach, respecting its physiological nature. Furthermore, this experiment does not control the person's position, in contrast to related investigations that use only the resting position.
Our work and the current research in this area confirm that sex classification from ECG signals is feasible. Consequently, we observe that the sex property of this signal is a hidden soft-biometric feature that could enrich the feature vector of an ECG identification system or of health-related applications, expanding this area of study.
A direction for further research is to analyze how the classification accuracy of the architecture changes with the number of heartbeats collected. In addition, our proposal should be evaluated on a new database to cross-validate our model. We invite future investigators to include the proposed architecture in their related studies and to include sex differentiation in their models. As future work, we propose extending the current model to a smartphone authentication context.

7. Conclusions

This work proposes a novel architecture for ECG sex recognition based on a pseudo-orthogonal electrode configuration and a pre-trained convolutional neural network, reaching an accuracy of 94.4% ± 2.0%. This result supports the design of our future work, focused on using the sex soft-biometric attribute and evaluating its contribution to ECG-based user authentication.

Author Contributions

Conceptualization, J.-L.C.L., C.P., L.G. and L.T.; methodology, J.-L.C.L. and C.P.; software, J.-L.C.L.; validation, J.-L.C.L. and C.P.; formal analysis, J.-L.C.L. and C.P.; investigation, J.-L.C.L.; resources, J.-L.C.L. and L.T.; data curation, J.-L.C.L.; writing—original draft preparation, J.-L.C.L.; writing—review and editing, J.-L.C.L. and C.P.; visualization, J.-L.C.L.; supervision, C.P.; project administration, J.-L.C.L.; funding acquisition, L.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Fundacion Universitaria Compensar, the Colombian Ministry for the Information and Communications Technology (Ministerio de Tecnologías de la Información y las Comunicaciones—MinTIC) and the Colombian Administrative Department of Science, Technology and Innovation (Departamento Administrativo de Ciencia, Tecnología e Innovación—Colciencias) through the Fondo Nacional de Financiamiento para la Ciencia, la Tecnología y la Innovación Francisco José de Caldas (Project ID: FP44842-502-2015).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Medical Center of the University of Rochester provided the data involved in this paper. Please email the authors for guidance.

Acknowledgments

The authors would like to acknowledge mainly the Fundación Universitaria Compensar, including its Telecommunications Engineering Department, Research Department, English Area, and Bilingual Department, for providing time and support to complete this paper, as well as the cooperation of all partners within the Centro de Excelencia y Apropiación en Internet de las Cosas (CEA-IoT) project. The authors would also like to thank all the institutions that supported this work: the Colombian Ministry for the Information and Communications Technology (Ministerio de Tecnologías de la Información y las Comunicaciones—MinTIC) and the Colombian Administrative Department of Science, Technology and Innovation (Departamento Administrativo de Ciencia, Tecnología e Innovación—Colciencias) through the Fondo Nacional de Financiamiento para la Ciencia, la Tecnología y la Innovación Francisco José de Caldas (Project ID: FP44842-502-2015). An additional acknowledgment goes to Elena Wei and Rachelle Bracha Fishbin.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADFECGDB: abdominal and direct fECG database
AF: Atrial fibrillation
ECG: Electrocardiogram
CNN: Convolutional Neural Network
DNN: Deep Neural Network
TF: Transformation
HPF: High Pass Filter
LBBB: Left Bundle Branch Block
KNN: k-nearest neighbors algorithm
LSTM: Long short-term memory
LVH: Left ventricular hypertrophy
MFF: Multimodal Feature Fusion
MIF: Multimodal Image Fusion
NSR: Normal Sinus Rhythm
PAC: Premature atrial contractions
PCLR: Patient Contrastive Learning of Representations
PTB-XL: Physikalisch-Technische Bundesanstalt XL database
PVC: Premature ventricular contractions
RBBB: Right Bundle Branch Block
SBP: Stop Band Filter
SCP-ECG: Standard communications protocol for computer-assisted electrocardiography
SimCLR: Simple Framework for Contrastive Learning
SPAR: Symmetric Projection Attractor Reconstruction
SVEB: Supraventricular ectopic beat
SVTA: Supraventricular Tachyarrhythmia
VEB: Ventricular ectopic beat

References

  1. Hsu, Y.L.; Wang, J.S.; Chiang, W.C.; Hung, C.H. Automatic ECG-Based Emotion Recognition in Music Listening. IEEE Trans. Affect. Comput. 2020, 11, 85–99. [Google Scholar] [CrossRef]
  2. Attia, Z.I.; Friedman, P.A.; Noseworthy, P.A.; Lopez-Jimenez, F.; Ladewig, D.J.; Satam, G.; Pellikka, P.A.; Munger, T.M.; Asirvatham, S.J.; Scott, C.G.; et al. Age and Sex Estimation Using Artificial Intelligence From Standard 12-Lead ECGs. Circ. Arrhythmia Electrophysiol. 2019, 12, e007284. [Google Scholar] [CrossRef]
  3. Cabra, J.L.; Mendez, D.; Trujillo, L.C. Wide Machine Learning Algorithms Evaluation Applied to ECG Authentication and Gender Recognition. In Proceedings of the 2nd International Conference on Biometric Engineering and Applications (ICBEA), Amsterdam, The Netherlands, 16–18 May 2018; pp. 58–64. [Google Scholar]
  4. Siegersma, K.; Van De Leur, R.; Onland-Moret, N.C.; Van Es, R.; Den Ruijter, H.M. Misclassification of sex by deep neural networks reveals novel ECG characteristics that explain a higher risk of mortality in women and in men. Eur. Heart J. 2021, 42, 3162. [Google Scholar] [CrossRef]
  5. Nguyen, D.T.; Kim, K.W.; Hong, H.G.; Koo, J.H.; Kim, M.C.; Park, K.R. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction. Sensors 2017, 17, 637. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Ghildiyal, A.; Sharma, S.; Verma, I.; Marhatta, U. Age and Gender Predictions using Artificial Intelligence Algorithm. In Proceedings of the 3rd International Conference on Intelligent Sustainable Systems (ICISS’20), Thoothukudi, India, 3–5 December 2020; pp. 371–375. [Google Scholar]
  7. Lee, M.; Lee, J.H.; Kim, D.H. Gender recognition using optimal gait feature based on recursive feature elimination in normal walking. Expert Syst. Appl. 2022, 189, 116040. [Google Scholar] [CrossRef]
  8. Alkhawaldeh, R.S. DGR: Gender Recognition of Human Speech Using One-Dimensional Conventional Neural Network. Sci. Program. 2019, 2019, 7213717. [Google Scholar] [CrossRef]
  9. Ikae, C.; Savoy, J. Gender identification on Twitter. J. Assoc. Inf. Sci. Technol. 2021, 73, 58–69. [Google Scholar] [CrossRef]
  10. Tsimperidis, I.; Yucel, C.; Katos, V. Age and Gender as Cyber Attribution Features in Keystroke Dynamic-Based User Classification Processes. Electronics 2021, 10, 835. [Google Scholar] [CrossRef]
  11. Bayer. Las Enfermedades Cardiovasculares son la Primera Causa de Muerte en Colombia y el Mundo. 2020. Available online: https://www.bayer.com/es/co/las-enfermedades-cardiovasculares-son-la-primera-causa-de-muerte-en-colombia-y-el-mundo (accessed on 28 April 2022).
  12. Centers for Disease Control and Prevention. Heart Disease Facts. 2022. Available online: https://www.cdc.gov/heartdisease/facts.htm (accessed on 28 April 2022).
  13. Publishing, H.H. The Heart Attack Gender Gap. 2016. Available online: https://www.health.harvard.edu/heart-health/the-heart-attack-gender-gap (accessed on 28 April 2022).
  14. Health, H. Throughout Life, Heart Attacks are Twice as Common in Men than Women–Harvard Health. 2016. Available online: https://www.health.harvard.edu/heart-health/throughout-life-heart-attacks-are-twice-as-common-in-men-than-women (accessed on 28 April 2022).
  15. Albrektsen, G.; Heuch, I.; Løchen, M.L.; Thelle, D.S.; Wilsgaard, T.; Njølstad, I.; Bønaa, K.H. Lifelong Gender Gap in Risk of Incident Myocardial Infarction: The Tromsø Study. JAMA Intern. Med. 2016, 176, 1673–1679. [Google Scholar] [CrossRef]
  16. Cho, L. Women or Men—Who Has a Higher Risk of Heart Attack? 2020. Available online: https://health.clevelandclinic.org/women-men-higher-risk-heart-attack/ (accessed on 28 April 2022).
  17. Mieszczanska, H.; Pietrasik, G.; Piotrowicz, K.; McNitt, S.; Moss, A.J.; Zareba, W. Gender Related Differences in Electrocardiographic Parameters and Their Association with Cardiac Events in Patients After Myocardial Infarction. Am. J. Cardiol. 2008, 101, 20–24. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Nakagawa, M.; Ooie, T.; Ou, B.; Ichinose, M.; Yonemochi, H.; Saikawa, T. Gender differences in the dynamics of terminal T wave intervals. Pacing Clin. Electrophysiol. 2004, 27, 769–774. [Google Scholar] [CrossRef] [PubMed]
  19. Ergin, S.; Uysal, A.K.; Gunal, E.S.; Gunal, S.; Gulmezoglu, M.B. ECG based biometric authentication using ensemble of features. In Proceedings of the 9th Iberian Conference on Information Systems and Technologies (CISTI’14), Barcelona, Spain, 18–21 June 2014; pp. 1274–1279. [Google Scholar]
  20. Pinto, J.R.; Cardoso, J.S.; Lourenço, A. Evolution, Current Challenges, and Future Possibilities in ECG Biometrics. IEEE Access 2018, 6, 34746–34776. [Google Scholar] [CrossRef]
  21. Jain, A.K.; Nandakumar, K.; Lu, X.; Park, U. Integrating Faces, Fingerprints, and Soft Biometric Traits for User Recognition. In Proceedings of the 2nd Biometric Authentication ECCV International Workshop (BioAW’04), Prague, Czech Republic, 15 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 259–269. [Google Scholar]
  22. Ceci, M.; Japkowicz, N.; Liu, J.; Papadopoulos, G.A.; Raś, Z.W. Foundations of Intelligent Systems. In Proceedings of the 24th International Symposium, ISMIS 2018, Limassol, Cyprus, 29–31 October 2018; Springer: Cham, Switzerland, 2018; pp. 120–129. [Google Scholar]
  23. Roy, A.; Memon, N.; Ross, A. MasterPrint: Exploring the Vulnerability of Partial Fingerprint-Based Authentication Systems. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2013–2025. [Google Scholar] [CrossRef]
  24. Winder, D. Hackers Claim ‘Any’ Smartphone Fingerprint Lock Can be Broken in 20 Minutes. Available online: https://www.forbes.com/sites/daveywinder/2019/11/02/smartphone-security-alert-as-hackers-claim-any-fingerprint-lock-broken-in-20-minutes/?sh=2ca0734f6853 (accessed on 15 June 2022).
  25. Mott, N. Hacking Fingerprints Is Actually Pretty Easy—And Cheap. 2021. Available online: https://www.pcmag.com/news/hacking-fingerprints-is-actually-pretty-easy-and-cheap (accessed on 15 June 2022).
  26. Winder, D. Apple’s iPhone FaceID Hacked In Less Than 120 Seconds. 2019. Available online: https://www.forbes.com/sites/daveywinder/2019/08/10/apples-iphone-faceid-hacked-in-less-than-120-seconds/?sh=349627621bc3 (accessed on 15 June 2022).
  27. Pato, J.; Millett, L. Biometric Recognition: Challenges and Opportunities, 1st ed.; The National Academies Press: Washington, DC, USA, 2010; p. 165. [Google Scholar]
  28. Atkielski, A. Schematic Diagram of Normal Sinus Rhythm for a Human Heart as Seen on ECG. 2009. Available online: https://commons.wikimedia.org/wiki/File:SinusRhythmLabels-es.svg (accessed on 25 June 2022).
  29. İzci, E.; Değirmenci, M.; Özdemir, M.A.; Akan, A. ECG Arrhythmia Detection with Deep Learning. In Proceedings of the 28th IEEE Signal Processing and Communications Applications (SIU), Gaziantep, Turkey, 5–7 October 2020; pp. 1541–1544. [Google Scholar]
  30. Essa, E.; Xie, X. An Ensemble of Deep Learning-Based Multi-Model for ECG Heartbeats Arrhythmia Classification. IEEE Access 2021, 9, 103452–103464. [Google Scholar] [CrossRef]
  31. Ahmad, Z.; Tabassum, A.; Guan, L.; Khan, N.M. ECG Heartbeat Classification Using Multimodal Fusion. IEEE Access 2021, 9, 100615–100626. [Google Scholar] [CrossRef]
  32. Niu, J.; Tang, Y.; Sun, Z.; Zhang, W. Inter-Patient ECG Classification With Symbolic Representations and Multi-Perspective Convolutional Neural Networks. IEEE J. Biomed. Health Inform. 2020, 24, 1321–1332. [Google Scholar] [CrossRef]
  33. Ebrahimi, Z.; Loni, M.; Daneshtalab, M.; Gharehbaghi, A. A review on deep learning methods for ECG arrhythmia classification. EXpert Syst. Appl. X 2020, 7, 100033. [Google Scholar] [CrossRef]
  34. Bahrami, M.; Forouzanfar, M. Sleep Apnea Detection From Single-Lead ECG: A Comprehensive Analysis of Machine Learning and Deep Learning Algorithms. IEEE Trans. Instrum. Meas. 2022, 71, 1–11. [Google Scholar] [CrossRef]
  35. Lee, K.J.; Lee, B. End-to-End Deep Learning Architecture for Separating Maternal and Fetal ECGs Using W-Net. IEEE Access 2022, 10, 39782–39788. [Google Scholar] [CrossRef]
  36. Li, H.; Deng, J.; Feng, P.; Pu, C.; Arachchige, D.D.K.; Cheng, Q. Short-Term Nacelle Orientation Forecasting Using Bilinear Transformation and ICEEMDAN Framework. Front. Energy Res. 2021, 9, 780928. [Google Scholar] [CrossRef]
  37. Li, H.; Deng, J.; Yuan, S.; Feng, P.; Arachchige, D.D.K. Monitoring and Identifying Wind Turbine Generator Bearing Faults Using Deep Belief Network and EWMA Control Charts. Front. Energy Res. 2021, 9, 799039. [Google Scholar] [CrossRef]
  38. Gajare, A.; Dey, H. MATLAB-based ECG R-peak Detection and Signal Classification using Deep Learning Approach. In Proceedings of the 3rd IEEE Bombay Section Signature Conference (IBSSC), Gwalior, India, 18–20 November 2021; pp. 157–162. [Google Scholar]
  39. Cai, W.; Hu, D. QRS Complex Detection Using Novel Deep Learning Neural Networks. IEEE Access 2020, 8, 97082–97089. [Google Scholar] [CrossRef]
  40. Pokaprakarn, T.; Kitzmiller, R.R.; Moorman, R.; Lake, D.E.; Krishnamurthy, A.K.; Kosorok, M.R. Sequence to Sequence ECG Cardiac Rhythm Classification Using Convolutional Recurrent Neural Networks. IEEE J. Biomed. Health Inform. 2022, 26, 572–580. [Google Scholar] [CrossRef] [PubMed]
  41. Abdeldayem, S.S.; Bourlai, T. A Novel Approach for ECG-Based Human Identification Using Spectral Correlation and Deep Learning. IEEE Trans. Biom. Behav. Identity Sci. 2020, 2, 1–14. [Google Scholar] [CrossRef]
  42. Uwaechia, A.N.; Ramli, D.A. A Comprehensive Survey on ECG Signals as New Biometric Modality for Human Authentication: Recent Advances and Future Challenges. IEEE Access 2021, 9, 2169–3536. [Google Scholar] [CrossRef]
  43. Strodthoff, N.; Wagner, P.; Schaeffter, T.; Samek, W. Deep Learning for ECG Analysis: Benchmarks and Insights from PTB-XL. IEEE J. Biomed. Health Inform. 2021, 25, 1519–1528. [Google Scholar] [CrossRef]
  44. Siegersma, K.R.; van de Leur, R.R.; Onland-Moret, N.C.; Leon, D.A.; Diez-Benavente, E.; Rozendaal, L.; Bots, M.L.; Coronel, R.; Appelman, Y.; Hofstra, L.; et al. Deep neural networks reveal novel sex-specific electrocardiographic features relevant for mortality risk. Eur. Heart J.-Digit. Health 2022, 3, 1–10. [Google Scholar] [CrossRef]
  45. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A Simple Framework for Contrastive Learning of Visual Representations. In Proceedings of the 37th International Conference on Machine Learning (ICML’20), Vienna, Austria, 12–18 July 2020; pp. 1597–1607. [Google Scholar]
  46. Diamant, N.; Reinertsen, E.; Song, S.; Aguirre, A.D.; Stultz, C.M.; Batra, P. Patient contrastive learning: A performant, expressive, and practical approach to electrocardiogram modeling. PLoS Comput. Biol. 2022, 18, e1009862. [Google Scholar] [CrossRef]
  47. Lyle, J.V.; Nandi, M.; Aston, P.J. Symmetric Projection Attractor Reconstruction: Sex Differences in the ECG. Front. Cardiovasc. Med. 2021, 8, 1–17. [Google Scholar] [CrossRef]
  48. Telemetric and ECG Holter Warehouse Project. E-HOL-03-0202-003. Available online: http://thew-project.org/Database/E-HOL-03-0202-003.html (accessed on 18 June 2022).
  49. brgfx. Vector de Cuerpo Humano Creado por Brgfx - www.freepik.es. 2021. Available online: https://www.freepik.es/vectores/cuerpo-humano (accessed on 18 June 2022).
  50. Padsalgikar, A. Plastics in Medical Devices for Cardiovascular Applications. A Volume in Plastics Design Library, 1st ed.; Elsevier: Kidlington, UK, 2017; p. 115. [Google Scholar]
  51. Arteaga-Falconi, J.S.; Osman, H.A.; Saddik, A.E. ECG Authentication for Mobile Devices. IEEE Trans. Instrum. Meas. 2016, 65, 591–600. [Google Scholar] [CrossRef]
  52. Wiggers, K. AliveCor Raises $65 Million to Detect Heart Problems with AI. 2020. Available online: https://venturebeat.com/2020/11/16/alivecor-raises-65-million-to-detect-heart-problems-with-ai/ (accessed on 28 October 2021).
  53. CardioID. Every Heart Has a Beat, But the Way We Use it is Unique! 2016. Available online: https://www.cardio-id.com/ (accessed on 28 October 2021).
  54. Nymi. Nymi Workplace Wearables. 2021. Available online: https://www.nymi.com/nymi-band (accessed on 28 October 2021).
Figure 1. Fiducial PQRST. Based on [28].
Figure 2. Pseudo-orthogonal lead configuration. Based on [49].
Figure 3. Heart rate histogram by sex, 10 bins.
Figure 4. Heartbeat study methodology.
Figure 5. System block diagram.
Figure 6. Research architecture.
Figure 7. Single lead accuracy results (the red dot marks the maximum value per set).
Figure 8. Single lead confusion matrix derivation results.
Figure 9. Combined leads accuracy results (the red dot marks the maximum value per set).
Figure 10. Combined leads classification results.
Figure 11. Combined leads vs. single lead: score comparison.
Figure 12. AliveCor, CardioID, and NYMI product examples.
Table 1. Related work about ECG sex identification with ML algorithms.

| Ref. | Acc. [%] | Lead | Sample Length [s] | Tech. | Fs [Hz] | Position | Male–Female [%] | Tr. / Ts. Samples | Year |
|---|---|---|---|---|---|---|---|---|---|
| [2] | 90.4 | 12 | 10 | CNN | 500 | Supine | 52–48 | ∼500 k / ∼275 k | 2019 |
| [4] | 92.2 | 12 | N/A | DNN | N/A | N/A | 50.5–49.5 | ∼131 k / ∼68.5 k | 2021 |
| [47] | DB1: 91.3; DB2: 86.3 | 12 | 10 | SPAR & KNN | DB1: 1000; DB2: 500 | Resting | DB1: 60–40; DB2: 46–54 | DB1: N = 0.104 k; DB2: N = 8.9 k | 2021 |
| [43] | 84.9 | 12 | 10 | CNN xresnet1d101 | 100 | Resting | 52–48 | N = ∼22 k, 10-fold: 8 / 2 | 2021 |
| [44] | valid. int.: 89; ext.: 81 & 82 | 12 | 10 | DNN | 250 or 500 | Resting | Tr.: N/A; int. valid.: N/A; ext. valid.: 42.6–57.4 | Tr.: ∼132 k; int. valid.: 68.5 k; ext. valid.: 7.7 k | 2022 |
| [46] | F1-score: 87 | 12 | 10 | PCLR & contrastive learning | 250 or 500 | Resting | N/A | N = ∼3229 k, 90% / 10% | 2022 |
| Own | 94.4 | 6 | ∼3.93 (avg) | CNN | 200 | Random | 51–49 | ∼3 k / ∼1.3 k | 2022 |
Table 2. Experiment classification metrics. (In the original, maximum values per item are shown in bold.)

| CMD | Lead | B1 | B2 | B3 | B4 | B5 | B6 | B7 | B8 | B9 | B10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Acc. | X | 0.809 | 0.841 | 0.853 | 0.864 | 0.892 | 0.813 | 0.904 | 0.927 | 0.915 | 0.936 |
| Acc. | Y | 0.743 | 0.802 | 0.811 | 0.811 | 0.852 | 0.851 | 0.890 | 0.892 | 0.883 | 0.924 |
| Acc. | Z | 0.803 | 0.826 | 0.830 | 0.869 | 0.889 | 0.821 | 0.866 | 0.926 | 0.910 | 0.924 |
| Acc. | XY | 0.870 | 0.893 | 0.871 | 0.889 | 0.896 | 0.920 | 0.856 | 0.943 | 0.955 | 0.937 |
| Acc. | YZ | 0.771 | 0.878 | 0.876 | 0.925 | 0.895 | 0.891 | 0.878 | 0.932 | 0.939 | 0.947 |
| Acc. | XZ | 0.777 | 0.864 | 0.856 | 0.875 | 0.878 | 0.903 | 0.924 | 0.916 | 0.890 | 0.894 |
| Acc. | XYZ | 0.908 | 0.922 | 0.943 | 0.947 | 0.926 | 0.964 | 0.952 | 0.966 | 0.962 | 0.951 |
| AUC | X | 0.892 | 0.937 | 0.940 | 0.943 | 0.957 | 0.955 | 0.970 | 0.972 | 0.966 | 0.978 |
| AUC | Y | 0.848 | 0.877 | 0.887 | 0.896 | 0.925 | 0.936 | 0.960 | 0.957 | 0.950 | 0.976 |
| AUC | Z | 0.914 | 0.921 | 0.935 | 0.939 | 0.947 | 0.953 | 0.966 | 0.984 | 0.977 | 0.978 |
| AUC | XY | 0.939 | 0.959 | 0.966 | 0.973 | 0.977 | 0.975 | 0.944 | 0.985 | 0.993 | 0.988 |
| AUC | YZ | 0.917 | 0.948 | 0.963 | 0.975 | 0.976 | 0.983 | 0.986 | 0.983 | 0.984 | 0.988 |
| AUC | XZ | 0.923 | 0.939 | 0.939 | 0.951 | 0.960 | 0.956 | 0.978 | 0.964 | 0.955 | 0.950 |
| AUC | XYZ | 0.973 | 0.981 | 0.986 | 0.988 | 0.989 | 0.992 | 0.991 | 0.995 | 0.994 | 0.990 |
| Prec. | X | 0.798 | 0.787 | 0.808 | 0.855 | 0.893 | 0.737 | 0.938 | 0.929 | 0.926 | 0.943 |
| Prec. | Y | 0.822 | 0.798 | 0.788 | 0.785 | 0.862 | 0.797 | 0.859 | 0.860 | 0.866 | 0.955 |
| Prec. | Z | 0.756 | 0.769 | 0.767 | 0.900 | 0.861 | 0.742 | 0.944 | 0.893 | 0.946 | 0.881 |
| Prec. | XY | 0.842 | 0.930 | 0.949 | 0.933 | 0.962 | 0.902 | 0.960 | 0.936 | 0.971 | 0.958 |
| Prec. | YZ | 0.705 | 0.873 | 0.816 | 0.896 | 0.842 | 0.828 | 0.805 | 0.934 | 0.949 | 0.957 |
| Prec. | XZ | 0.912 | 0.887 | 0.857 | 0.822 | 0.839 | 0.938 | 0.914 | 0.887 | 0.907 | 0.871 |
| Prec. | XYZ | 0.870 | 0.931 | 0.928 | 0.965 | 0.988 | 0.952 | 0.925 | 0.973 | 0.974 | 0.933 |
| Sens. | X | 0.832 | 0.928 | 0.925 | 0.876 | 0.888 | 0.968 | 0.859 | 0.917 | 0.889 | 0.910 |
| Sens. | Y | 0.626 | 0.803 | 0.850 | 0.855 | 0.836 | 0.940 | 0.928 | 0.925 | 0.887 | 0.869 |
| Sens. | Z | 0.895 | 0.925 | 0.946 | 0.829 | 0.926 | 0.979 | 0.771 | 0.960 | 0.857 | 0.958 |
| Sens. | XY | 0.913 | 0.848 | 0.782 | 0.836 | 0.824 | 0.943 | 0.736 | 0.948 | 0.930 | 0.896 |
| Sens. | YZ | 0.937 | 0.882 | 0.970 | 0.961 | 0.972 | 0.986 | 0.990 | 0.922 | 0.919 | 0.921 |
| Sens. | XZ | 0.619 | 0.831 | 0.852 | 0.955 | 0.935 | 0.863 | 0.932 | 0.947 | 0.854 | 0.894 |
| Sens. | XYZ | 0.959 | 0.910 | 0.960 | 0.926 | 0.863 | 0.976 | 0.980 | 0.957 | 0.945 | 0.958 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
