Applied Sciences
  • Article
  • Open Access

20 March 2024

Multi-Session Electrocardiogram–Electromyogram Database for User Recognition

IT Research Institute, Chosun University, Gwangju 61452, Republic of Korea
* Authors to whom correspondence should be addressed.
This article belongs to the Special Issue Deep Networks for Biosignals

Abstract

Current advancements in biosignal-based user recognition technology are paving the way for a next-generation solution that addresses the limitations of face- and fingerprint-based user recognition methods. However, existing biosignal benchmark databases (DBs) for user recognition often suffer from limitations, such as data collection from a small number of subjects in a single session, hindering comprehensive analysis of biosignal variability. This study introduces CSU_MBDB1 and CSU_MBDB2, databases containing electrocardiogram (ECG) and electromyogram (EMG) signals from diverse experimental subjects recorded across multiple sessions. These in-house DBs comprise ECG and EMG data recorded in multiple sessions from 36 and 58 subjects, respectively, with a time interval of more than one day between sessions. During the experiments, subjects performed a total of six gestures while comfortably seated at a desk. CSU_MBDB1 and CSU_MBDB2 share three identical gestures, providing expandable data for various applications. When the two DBs are combined, ECGs and EMGs from 94 subjects can be used, which is the largest number among the multi-biosignal benchmark DBs built over multiple sessions. To assess the usability of the constructed DBs, a user recognition experiment was conducted, resulting in an accuracy of 66.39% for ten subjects. It is important to emphasize that we focused on demonstrating the applicability of the constructed DBs using a basic neural network without signal denoising. While this approach sacrifices some accuracy, it leaves substantial room for performance enhancement through optimized algorithms. Applying signal denoising to the constructed DBs and designing a more sophisticated neural network would undoubtedly contribute to improving the recognition accuracy. Consequently, the constructed DBs hold promise for user recognition, offering a valuable resource for future investigations. Additionally, the DBs can be used in research analyzing the nonlinear characteristics of ECG and EMG.

1. Introduction

In today’s advanced society, user recognition technologies are increasingly important for safeguarding personal information. Among these technologies, biosignal-based user recognition stands out as a solution to the shortcomings of conventional methods, such as facial, fingerprint, and iris recognition, which are susceptible to replication. This technology is actively researched as the next-generation approach to user recognition []. Biosignals are measurements of the microcurrents generated by human physiological activity and include electrocardiogram (ECG), electromyogram (EMG), and electroencephalogram (EEG) signals. Because biosignals reflect the unique physiological characteristics of an individual, they remain imperceptible to the naked eye from the outside. Leveraging the advantages of being difficult to forge and inherently variable, biosignals address the challenges of conventional user recognition methods [].
To measure such biosignals, a sensor must be attached to the body, as shown in Figure 1 []. ECG is a biosignal originating from the heartbeat, producing a signal composed of PQRST waves. ECG signals can be acquired from both hands and both feet following the standard 12-lead method, as shown in Figure 1a. EEG is a biosignal generated by brain activity and can be measured at specific locations using the international 10–20 system, positioned according to front–back and left–right distances on the skull, as shown in Figure 1b. EMG is a signal that measures the microcurrents generated when a muscle moves, and it can be acquired by attaching a sensor to a muscle, as shown in Figure 1c. Because biosignals are measured by attaching sensors to the body, subjects may experience discomfort and reluctance during database (DB) construction. To conduct user recognition research using biosignals, a substantial DB (comprising a large number of subjects and repetitions) is essential. However, the open access benchmark biosignal DBs used in previous studies have typically featured a limited number of subjects and repetitions. Furthermore, although biosignals fluctuate over time, the data in most benchmark biosignal DBs are recorded in a single session (usually one day or less), which limits the analysis of this variability.
Figure 1. Common biosignal measurement methods. (a) ECG measurement method. (b) EEG measurement method. (c) EMG measurement method.
To solve this issue, we present two large Chosun University Biosignal Databases (CSU_BIODBs) designed for user recognition in this study. These databases, referred to as CSU_MBDB1 and CSU_MBDB2, contain concurrently acquired ECG and EMG signals. CSU_MBDB1 comprises ECG and EMG data recorded over multiple sessions (2 days or more) while 36 subjects performed six hand gestures at intervals exceeding one day. Similarly, CSU_MBDB2 includes ECG and EMG data recorded across multiple sessions while 58 subjects executed six hand gestures at intervals exceeding one day. To evaluate the effectiveness of the constructed DBs, we conducted a user recognition experiment.
The paper’s structure is outlined as follows: Section 2 analyzes open access benchmark DBs used in conventional biosignal-based user recognition research. Section 3 presents CSU_MBDB1 and CSU_MBDB2, two extensive multi-session electrocardiogram–electromyogram DBs introduced in this study. Section 4 analyzes the usability of the constructed benchmark biosignal DBs, and Section 5 concludes the paper.

3. Measurement Method for the Multi-Session Biosignal Benchmark DBs

The multi-biosignal DB for user recognition research was constructed by simultaneously measuring ECG and EMG from the subjects. While each subject performed a specific hand gesture, the EMG signal from the muscle was recorded using two channels and the ECG signal generated by the heart was recorded using one channel. Because the constructed multi-biosignal DB contains simultaneously recorded signals, they can be used either together as multi-biosignal data or individually. The benchmark multi-biosignal DB introduced in this study comprises two datasets (CSU_MBDB1 and CSU_MBDB2). CSU_MBDB1 includes ECG and EMG data recorded as 36 subjects performed six hand gestures, while CSU_MBDB2 encompasses ECG and EMG data recorded as 58 subjects performed six hand gestures, three of which are shared with CSU_MBDB1.

3.1. Multi-Biosignal Measurement Method

In the case of CSU_MBDB1, signals were recorded during the performance of six hand gestures (Table 4) by each participant. These gestures include the following: (1) clenching the fist, (2) pressing the index finger with the thumb while clenching the fist, (3) simultaneously flexing the index, middle, and ring fingers, (4) flexing the wrist, (5) extending the wrist outward, and (6) rotating the wrist 90 degrees to the left. The construction of CSU_MBDB1 involved the active participation of 60 subjects.
Table 4. Six hand gestures from CSU_MBDB1.
Each hand gesture followed a rest period–gesture period–rest period sequence, repeated 10 times within a single session. The way each gesture was performed was standardized to ensure consistency across all subjects. Figure 2 shows an example of the rest period–gesture period–rest period sequence during a single execution of a gesture. ECG and EMG were measured across two sessions, with a minimum one-week interval between the sessions to facilitate the analysis of biosignal variability.
Figure 2. Examples of DB recording method (an example of the biosignal recording procedure for the fist-clenching gesture).
We used the Biopac MP160 as the biosignal acquisition equipment, measuring ECG with one channel and EMG with two channels. When ECG and EMG were acquired simultaneously, interference occurred between the ECG and EMG sensors. Therefore, we attached the EMG sensors to the positions where the muscles are activated while performing the hand gestures, varied the ECG sensor location as shown in Figure 3, and checked for interference between the sensors. The final ECG and EMG sensor positions were those at which interference between the sensors was minimized. The ECG sensors were positioned under the left and right biceps brachii, with the ground (GND) electrode attached above the left biceps brachii. For EMG, Ch1 was attached to the flexor carpi radialis, Ch2 to the extensor indicis proprius, and GND to the outside of the upper arm. Figure 4 shows the ECG and EMG sensor positions. The signal bandwidth was set to 0.5–35 Hz, the sampling rate to 2000 Hz, and the ADC resolution to 16 bits.
Figure 3. Signal measurement position experiment for electrode positions.
Figure 4. Electrode positions for CSU_MBDB1.
For CSU_MBDB2, signals were recorded as each subject performed the six hand gestures outlined in Table 5: (1) clenching the fist, (2) flexing the wrist, (3) extending the wrist upward, (4) rotating the wrist 90 degrees to the left, (5) rotating the wrist 90 degrees to the right, and (6) raising a cell phone by 90 degrees. The DB was designed for expansion, since CSU_MBDB2 shares three hand gestures with CSU_MBDB1 (CSU_MBDB1 gesture no. 1 and CSU_MBDB2 gesture no. 1; CSU_MBDB1 gesture no. 4 and CSU_MBDB2 gesture no. 2; CSU_MBDB1 gesture no. 6 and CSU_MBDB2 gesture no. 4). Each hand gesture was repeated 10 times in one session, following the same rest period–gesture period–rest period procedure as in CSU_MBDB1. Signals were recorded across two sessions with an interval of at least one week. One hundred subjects participated in the construction of CSU_MBDB2. EMG Ch2 was attached to the extensor carpi radialis longus of the right arm; EMG Ch1, the ECG sensors, the signal bandwidth, and the sampling rate were the same as in CSU_MBDB1.
Table 5. Six hand gestures from CSU_MBDB2.

3.2. Multi-Biosignal DB Verification and Segmentation

To minimize variability in sensor placement during the measurement protocol, the hand gestures were performed while the EMG sensors were being attached in order to confirm the locations where the muscles were activated. Additionally, to maintain consistent force and duration, the DB construction procedure was explained to the subjects, who practiced it beforehand. Information such as the duration and amplitude of the newly measured ECG and EMG (Day 2) from each subject was reviewed by comparing it with the signals from the earlier session (Day 1). During this review, incomplete signals caused by technical problems were identified. Various types of incomplete signals were observed, including (1) Bluetooth communication instability between the measurement PC and the Biopac MP160 during the gesture period, (2) interference between the EMG and ECG sensors during the gesture period, (3) resistance in the connection cable between the sensor and the measurement device during the gesture period, and (4) destruction of the signal waveform due to body movement during the rest period. In CSU_MBDB1, data from 24 subjects are incomplete, meaning that data from 36 subjects can be used; in CSU_MBDB2, data from 42 subjects are incomplete, so data from 58 subjects can be used. Therefore, the final CSU_MBDB1 consists of the ECG and EMG signals obtained from 36 subjects, while CSU_MBDB2 consists of those obtained from 58 subjects.
The ECG and EMG data underwent visual inspection, and segmentation was carried out to ensure the preservation of the PQRST waveform in the ECG after each gesture was performed. Figure 5 shows the segmented ECG and EMG waveforms. Only segments with a length of at least 0.5 s, considering the data before and after acquisition, were retained. The introduced benchmark DBs, CSU_MBDB1 and CSU_MBDB2, consist of raw signals together with R-peak positions and division points that facilitate the segmentation process, as shown in Figure 5.
Figure 5. Examples of dividing the ECG and EMG waveforms in CSU_MBDB1 and CSU_MBDB2. (a) Examples of EMG_ch1 waveforms. (b) Examples of EMG_ch2 waveforms. (c) Examples of ECG waveforms.
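The segmentation rule can be summarized in a few lines of code. The sketch below is illustrative only: the variable names (raw_signal, division_points) are hypothetical, and the released DBs provide raw signals with R-peak and division-point annotations rather than this code.

```python
# Illustrative sketch of the segmentation rule: cut the raw recording at
# consecutive division points and keep only segments of at least 0.5 s.
FS = 2000                   # Hz, sampling rate of CSU_MBDB1/2
MIN_LEN = int(0.5 * FS)     # 0.5 s minimum segment length in samples

def segment(raw_signal, division_points):
    """Return segments between consecutive division points, dropping short ones."""
    segments = []
    for start, end in zip(division_points[:-1], division_points[1:]):
        if end - start >= MIN_LEN:
            segments.append(raw_signal[start:end])
    return segments
```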
Table 6 shows a summary description of the constructed CSU_MBDB1 and CSU_MBDB2. In contrast to Table 3, which summarizes existing multi-biosignal DBs, CSU_MBDB1 and CSU_MBDB2 were designed as multi-session data with a substantial number of subjects. An advantageous feature of CSU_MBDB1 and CSU_MBDB2 is their expandability, as they share three identical gestures, resulting in multi-session databases with a total of 94 subjects. This expansion allows for more subjects compared to the benchmark DBs of ECG (Table 1) and EMG (Table 2), excluding CSU_ECG and CSU_sEMG. The databases can be actively used for research on biosignal variability and user recognition.
Table 6. Description of CSU_MBDB1 and CSU_MBDB2.

4. User Recognition Method, Results, and Discussion

A user recognition experiment was performed to confirm the usability of CSU_MBDB1 and CSU_MBDB2, the benchmark DBs constructed in this study. The experiment employed a previously designed network [], depicted in Figure 6. This network consists of two sub-networks, each featuring six convolutional layers and two max-pooling layers. Within the convolutional layers, features are extracted using 8, 16, and 32 filters of size [1 × 3]. Batch normalization is performed after each convolutional layer. Each pooling layer uses a filter of size [1 × 2] with padding 0 and stride [1, 2] to reduce the feature dimension. Sub-network 1 takes a [1 × 5000] ECG signal as input, while sub-network 2 takes a [1 × 5000] EMG signal. The two sub-networks share the weights of the convolutional layers during training. The features computed by sub-network 1 and sub-network 2 are concatenated and fed to the fully connected layers, which consist of 2048, 256, and 36 (the number of classes) nodes to recognize users. The experiments were conducted with a batch size of 256, 100 epochs, a learning rate of 0.001, and the rectified linear unit (ReLU) activation function.
Figure 6. Networks used for user recognition.
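As a concrete illustration, the following PyTorch sketch builds a dual-stream network of the kind described above. Only the filter counts (8/16/32), the [1 × 3] kernels, the [1 × 2] pooling with stride 2, the [1 × 5000] inputs, the shared convolutional weights, and the 2048–256–36 fully connected head come from the paper; the exact layer ordering, padding, and choice of optimizer are assumptions, so this is a sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SharedBranch(nn.Module):
    """Six Conv1d+BatchNorm+ReLU layers with two max-pooling stages.
    The same instance is applied to the ECG and EMG inputs, so the
    convolutional weights are shared between the two sub-networks."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.BatchNorm1d(8), nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=3, padding=1), nn.BatchNorm1d(8), nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),                       # 5000 -> 2500
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.BatchNorm1d(16), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.BatchNorm1d(16), nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),                       # 2500 -> 1250
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.BatchNorm1d(32), nn.ReLU(),
        )

    def forward(self, x):                  # x: [batch, 1, 5000]
        return torch.flatten(self.features(x), start_dim=1)

class DualStreamRecognizer(nn.Module):
    def __init__(self, num_classes=36):
        super().__init__()
        self.branch = SharedBranch()       # shared by both signal streams
        self.classifier = nn.Sequential(
            nn.Linear(2 * 32 * 1250, 2048), nn.ReLU(),
            nn.Linear(2048, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, ecg, emg):
        feats = torch.cat([self.branch(ecg), self.branch(emg)], dim=1)
        return self.classifier(feats)

model = DualStreamRecognizer(num_classes=36)
# Learning rate 0.001, batch size 256, and 100 epochs follow the paper;
# the Adam optimizer is an assumption.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```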
In the experiments, the multi-session signals are divided into Each Data and Cross Data. Each Data are used to examine the variability of the biosignals. As shown in Figure 7, this protocol uses signals recorded in different sessions for training and testing: the ECG and EMG data recorded on Day 1 are used for training, and the ECG and EMG data recorded on Day 2 are used for testing. In contrast, Cross Data are used to analyze the validity of the constructed DB. As shown in Figure 7, this protocol pools the signals recorded in both sessions and divides them in a 70:30 ratio into training and test data, respectively; that is, 70% of the ECG and EMG signals recorded on Days 1 and 2 are used as training data and the remaining 30% as test data. The experiments were conducted with 10 and 36 subjects from CSU_MBDB1 and with 10 and 58 subjects from CSU_MBDB2. To compare which hand gestures are suitable for user recognition across the two constructed databases, the same number of subjects (10) was used.
Figure 7. Structures of Each Data and Cross Data.
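The two evaluation protocols can be summarized in code. The sketch below is illustrative only; the array names (day1_x, day1_y, day2_x, day2_y) are hypothetical placeholders for the per-session segments and subject labels, and the stratified split is an assumption about how the 70:30 Cross Data partition was drawn.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def each_data_split(day1_x, day1_y, day2_x, day2_y):
    """'Each Data': train on the Day 1 session, test on the Day 2 session."""
    return (day1_x, day1_y), (day2_x, day2_y)

def cross_data_split(day1_x, day1_y, day2_x, day2_y, seed=0):
    """'Cross Data': pool both sessions and split 70:30 into train and test."""
    x = np.concatenate([day1_x, day2_x])
    y = np.concatenate([day1_y, day2_y])
    x_train, x_test, y_train, y_test = train_test_split(
        x, y, test_size=0.30, stratify=y, random_state=seed)
    return (x_train, y_train), (x_test, y_test)
```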
Table 7 shows the experimental results of Each Data and Cross Data using CSU_MBDB1. The experiment involved ECG and EMG data from 10 and 36 subjects. The Each Data results showed an accuracy of 45.11% for the 10-subject dataset and 35.28% for the 36-subject dataset. In the Cross Data experiment, the accuracy reached 63.42% for the 10-subject dataset and 45.37% for the 36-subject dataset. The experimental results indicate that the user recognition accuracy of the Cross Data method surpassed that of the Each Data method. The higher performance of the Cross Data method, which uses biosignal data from both sessions, highlights the significance of variability in ECG and EMG signals and underscores the necessity of constructing a large DB spanning multiple sessions to analyze this variability effectively.
Table 7. User recognition result using CSU_MBDB1.
Table 8 shows the experimental results of Each Data and Cross Data using CSU_MBDB2. The experiment involved ECG and EMG data from 10 and 58 subjects. In the Each Data experiment, the accuracy was 48.44% for the 10-subject dataset and 27.71% for the 58-subject dataset, while the Cross Data experiment yielded an accuracy of 66.39% for the 10-subject dataset and 49.42% for the 58-subject dataset. Similar to CSU_MBDB1, the results showed that the user recognition accuracy of the Cross Data method in CSU_MBDB2 surpassed that of the Each Data method. In addition, it was confirmed that the three hand gestures unique to CSU_MBDB2 were more suitable for user recognition than the three unique to CSU_MBDB1 (excluding the three hand gestures shared by both DBs).
Table 8. User recognition result using CSU_MBDB2.
Figure 8 shows the data distribution of CNN features, following a previous study [], for the ECG and EMG of 10 CSU_MBDB2 subjects. In Figure 8, the biosignals of subjects 1, 2, 3, and 7 are well clustered. However, the feature distributions of four subjects (subjects 5, 8, 9, and 10) overlap considerably when using the CNN features from the network in Figure 6. These results stem from the variability of ECG and EMG recorded across multiple sessions and indicate the need for research on feature extraction techniques for user recognition using CSU_MBDB1 and CSU_MBDB2.
Figure 8. Three-dimensional plot of CNN features from 10 subjects (CSU_MBDB2).
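For reference, a visualization of this kind could be produced as sketched below. The paper does not state how the CNN features were reduced to three dimensions, so the use of the 256-dimensional penultimate-layer activations and PCA here are assumptions, and plot_cnn_features is a hypothetical helper.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_cnn_features(features, labels):
    """features: [n_samples, 256] penultimate-layer activations; labels: subject IDs."""
    coords = PCA(n_components=3).fit_transform(features)   # reduce to 3-D for plotting
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    for subject in np.unique(labels):
        idx = labels == subject
        ax.scatter(coords[idx, 0], coords[idx, 1], coords[idx, 2],
                   label=f"subject {subject}")
    ax.legend()
    plt.show()
```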
Table 9 shows the experimental results of the DB extended using the three shared hand gestures (CSU_MBDB1 gesture no. 1 and CSU_MBDB2 gesture no. 1; CSU_MBDB1 gesture no. 4 and CSU_MBDB2 gesture no. 2; CSU_MBDB1 gesture no. 6 and CSU_MBDB2 gesture no. 4) from CSU_MBDB1 and CSU_MBDB2. The experiment used the ECG and EMG of 94 subjects as Cross Data, and the 94 subjects were recognized with an accuracy of 27.05%. In addition, to show the development potential of the constructed DB, an experiment was conducted using filtering (noise removal), which is widely used in existing research. ECG noise was removed using a band-pass filter (BPF) with a pass band of 0.5–40 Hz, and EMG noise was removed using a BPF with a pass band of 5–500 Hz. The experimental results showed an accuracy of 30.94%, an improvement of 3.89 percentage points over the unfiltered (noisy) signals.
Table 9. User recognition result using CSU_MBDB1 and CSU_MBDB2.
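The denoising step above can be reproduced with a standard zero-phase band-pass filter, as in the sketch below. The pass bands (0.5–40 Hz for ECG, 5–500 Hz for EMG) and the 2000 Hz sampling rate come from the paper; the Butterworth filter family, its order, and the placeholder signal arrays are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000  # Hz, sampling rate of the constructed DBs

def bandpass(signal, low_hz, high_hz, fs=FS, order=4):
    """Apply a zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

# Placeholder arrays standing in for one 2.5 s segment (5000 samples at 2000 Hz).
ecg_raw = np.random.randn(5000)
emg_raw = np.random.randn(5000)
ecg_clean = bandpass(ecg_raw, 0.5, 40)   # ECG pass band: 0.5–40 Hz
emg_clean = bandpass(emg_raw, 5, 500)    # EMG pass band: 5–500 Hz
```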
The user recognition experiment using the constructed biosignal DBs showed a relatively low accuracy of 66.39%. Notably, the ECG signals displayed a significant amount of noise, as shown in Figure 9 (red box), due to the simultaneous measurement of ECG and EMG during gesture performance. It is important to note that the user recognition experiments in this study were conducted on raw signals with either no noise removal or only relatively simple noise removal, and that the network employed was a simple model; both factors contributed to the observed low accuracy. The presented CSU_MBDB1 and CSU_MBDB2 also have limitations in that ECG and EMG were measured exclusively in healthy subjects, making them unsuitable as DBs for patient user recognition. Furthermore, because the DBs were constructed with a focus on user recognition, ECG (1 ch) and EMG (2 ch) were measured with a limited number of channels, which restricts the analysis of richer biosignal information (e.g., spatio-temporal analysis of HD-sEMG). Nevertheless, by expanding the DB using the three hand gestures shared by CSU_MBDB1 and CSU_MBDB2, we showed that user recognition experiments are possible with ECG and EMG from 94 subjects acquired in multiple sessions and that performance improves when noise removal (BPF) is applied. The two constructed DBs (CSU_MBDB1 and CSU_MBDB2) have the advantage of providing ECG and EMG acquired in multiple sessions, enabling analysis of the nonlinear characteristics of biosignals that arise over time.
Figure 9. Examples of noise occurring in CSU_MBDBs.
Table 10 shows the consistency determination values calculated to analyze the efficiency and comfort of the constructed DBs, following existing research []. If the consistency determination is higher than 0.5, the gesture is appropriate for user recognition; if it is lower than 0.2, the gesture needs improvement. In the experiment, all gestures from CSU_MBDB1 and CSU_MBDB2 showed values close to 0.5, confirming their suitability for user recognition. Furthermore, when comparing the consistency determination of the three hand gestures from CSU_MBDB1 (gesture no. 1, gesture no. 4, and gesture no. 6) with those from CSU_MBDB2 (gesture no. 1, gesture no. 2, and gesture no. 4), CSU_MBDB1 yielded an average of 0.4447, while CSU_MBDB2 produced an average of 0.4606. Therefore, it was confirmed once again that the three hand gestures from CSU_MBDB2 are more suitable for user recognition than the three hand gestures from CSU_MBDB1.
Table 10. Consistency determination of CSU_MBDB1 and CSU_MBDB2.

5. Conclusions

Various benchmark biosignal DBs have been established for user recognition research using biosignals. However, these DBs have limitations in that signal variability could not be adequately analyzed, either because they were constructed from a limited number of subjects or because the biosignals were recorded in a single session. This study addresses this gap by introducing CSU_MBDB1 and CSU_MBDB2, constructed by recording biosignals from numerous subjects across multiple sessions. The participant counts for constructing the DBs were 60 and 100, respectively, with sensors attached to both the left and right arms to record ECG and EMG signals during the execution of six hand gestures. Owing to technical problems in the measurement protocol, the final DBs comprise data from 36 and 58 subjects. Since these two benchmark biosignal DBs share three identical hand gestures, they can be expanded into a combined DB covering 94 subjects. To analyze the usability of the constructed DBs, we conducted a user recognition experiment using a neural network designed in a previous study. The resulting user recognition accuracy was 66.39% (Cross Data, 10 subjects from CSU_MBDB2). This relatively low accuracy is attributed to the use of a simple neural network without noise removal from the signals. The introduced DBs (CSU_MBDB1 and CSU_MBDB2) hold potential as benchmark DBs for user recognition research, given that their ECG and EMG signals were recorded in multiple sessions. By expanding the DB using the three shared gestures of the multi-session DBs, we showed that user recognition experiments are possible with simultaneously acquired ECG and EMG from 94 subjects. Additionally, it was confirmed that performance improved when using preprocessed (denoised) signals rather than raw (noisy) signals. User recognition accuracy can be improved further by applying noise removal techniques to the constructed multi-session biosignal DBs and designing an optimal neural network. Lastly, because ECG and EMG were measured in multiple sessions, the nonlinear characteristics of biosignals that arise over time can be analyzed, enabling user recognition research that is robust to this variability. Future research will include variability analysis of the multi-session biosignal DBs and techniques for removing noise from biosignals to further improve user recognition accuracy.

Author Contributions

Conceptualization, J.S.K. and E.B.; methodology, J.S.K., J.L., Y.-H.B., K.-C.K., E.B. and S.P.; software, J.S.K., J.L. and E.B.; validation, J.S.K., C.H.S., J.M.K., E.B. and S.P.; formal analysis, J.S.K.; investigation, J.S.K., C.H.S., J.M.K. and E.B.; writing—original draft preparation, J.S.K. and E.B.; writing—review and editing, J.S.K., J.J., H.-S.C., K.-C.K., Y.T.K., E.B. and S.P.; supervision, S.P.; project administration, S.P.; funding acquisition, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Basic Science Research Program through the National Research Foundation of Korea, funded by the Ministry of Education (No. NRF-2017R1A6A1A03015496).

Institutional Review Board Statement

All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Chosun University Institutional Review Board (2-1041055-AB-N-01-2023-21).

Data Availability Statement

CSU_MBDB1 and CSU_MBDB2 constitute a repository of freely accessible biosignal data, administered by the IT Institute. The data supporting the findings presented in this paper can be accessed at http://www.chosun.ac.kr/riit (accessed on 30 January 2024). To request the database, please contact the DB manager and refer to the bulletin board for any additional posted information.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Caputo, D.; Verderame, L.; Ranieri, A.; Merlo, A.; Caviglione, L. Fine-hearing Google Home: Why silence will not protect your privacy. J. Wirel. Mob. Netw. Ubiquitous Comput. Dependable Appl. 2020, 11, 35–53. [Google Scholar]
  2. Kim, J.S.; Kim, M.G.; Pan, S. Two-step biometrics using electromyogram signal based on convolutional neural network-long short-term memory networks. Appl. Sci. 2021, 11, 6824. [Google Scholar] [CrossRef]
  3. Kim, J.S. A Study on Personal Recognition Using Electromyogram Based on Multi-Stream Siamese Fusion Network Combining Auxiliary Classifier. Ph.D. Thesis, Chosun University, Gwangju, Republic of Korea, 2022. [Google Scholar]
  4. Goldberger, A.L.; Amaral, L.A.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 2000, 101, 215–220. [Google Scholar] [CrossRef]
  5. PhysioNet. Available online: https://physionet.org/ (accessed on 2 January 2024).
  6. Mark, R.G.; Schluter, P.S.; Moody, G.B.; Devlin, P.H.; Chernoff, D. An annotated ECG database for evaluating arrhythmia detectors. IEEE. Trans. Biomed. Eng. 1982, 29, 600. [Google Scholar]
  7. Moody, G.B.; Mark, R.G. The MIT-BIH arrhythmia database on CD-ROM and software for use with it. Comput. Cardiol. 1990, 17, 185–188. [Google Scholar]
  8. Albrecht, P.S.T. S-T Segment Characterization for Long-Term Automated ECG Analysis. Master’s Thesis, MIT, Cambridge, MA, USA, 1983. [Google Scholar]
  9. Laguna, P.; Mark, R.G.; Goldberger, A.L.; Moody, G.B. A database for evaluation of algorithms for measurement of QT and other waveform intervals in the ECG. Comput. Cardiol. 1997, 24, 673–676. [Google Scholar]
  10. Jezewski, J.; Matonia, A.; Kupka, T.; Roj, D.; Czabanski, R. Determination of the fetal heart rate from abdominal signal: Evaluation of beat-to-beat accuracy in relation to the direct fetal electrocardiogram. Biomed. Eng./Biomed. Tech. 2012, 57, 383–394. [Google Scholar] [CrossRef] [PubMed]
  11. Bousseljot, R.; Kreiseler, D.; Schnabel, A. Nutzung der EKG-Signaldatenbank CARDIODAT der PTB über das Internet. Biomed. Eng./Biomed. Tech. 1995, 40, 317–318. [Google Scholar]
  12. Lugovaya, T.S. Biometric Human Identification Based on Electrocardiogram. Master’s Thesis, Electrotechnical University, Saint-Petersburg, Russia, 2005. [Google Scholar]
  13. Choi, G.H.; Ko, H.; Pedrycz, W.; Pan, S.B. Post-exercise electrocardiogram identification system using normalized tachycardia based on P, T, wave. In Proceedings of the Information Technology, Electronics and Mobile Communication Conference, Vancouver, BC, Canada, 17–19 October 2019. [Google Scholar]
  14. Sapsanis, C.; Georgoulas, G.; Tzes, A.; Lymberopoulos, D. Improving EMG based classification of basic hand movements using EMD. In Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society, Osaka, Japan, 3–7 July 2013. [Google Scholar]
  15. UCI Machine Learning Repository. Available online: http://archive.ics.uci.edu/ml/datasets/sEMG+for+Basic+Hand+movements (accessed on 14 December 2023).
  16. Atzori, M.; Gijsberts, A.; Kuzborskij, I.; Elsig, S.; Hager, A.G.M.; Deriaz, O.; Castellini, C.; Müller, H.; Caputo, B. Characterization of a benchmark database for myoelectric movement classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 73–83. [Google Scholar] [CrossRef] [PubMed]
  17. Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Müller, H. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Sci. Data 2014, 1, 140053. [Google Scholar] [CrossRef] [PubMed]
  18. Pizzolato, S.; Tagliapietra, L.; Cognolato, M.; Reggiani, M.; Müller, H.; Atzori, M. Comparison of six electromyography acquisition setups on hand movement classification tasks. PLoS ONE 2017, 12, 0186132. [Google Scholar] [CrossRef]
  19. Amma, C.; Krings, T.; Boer, J.; Schultz, T. Advancing muscle-computer interfaces with high-density electromyography. In Proceedings of the Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015. [Google Scholar]
  20. Figshare. Available online: https://figshare.com/articles/dataset/Data_from_Gesture_Recognition_by_Instantaneous_Surface_EMG_Images_CapgMyo-DBa/7210397 (accessed on 30 January 2024).
  21. Coker, J.; Chen, H.; Schall, M.C., Jr.; Zabala, M. EMG and joint angle-based machine learning to predict future joint angles at the knee. Sensors 2021, 21, 3622. [Google Scholar] [CrossRef] [PubMed]
  22. Fang, Y.; Zhang, X.; Zhou, D.; Liu, H. Improve inter-day hand gesture recognition via convolutional neural network based feature fusion. Int. J. Humanoid Robot. 2020, 18, 2050025. [Google Scholar] [CrossRef]
  23. Pradhan, A.; He, J.; Jiang, N. Open access dataset for electromyography based multi-code biometric authentication. arXiv 2022, arXiv:2201.01051. [Google Scholar]
  24. Kim, J.S.; Song, C.H.; Bak, E.S.; Pan, S.B. Multi-session surface-electromyogram signal database for personal identification. Sustainability 2022, 14, 5739. [Google Scholar] [CrossRef]
  25. Cognolato, M.; Gijsberts, A.; Gregori, V.; Saetta, G.; Giacomino, K.; Hager, A.G.M.; Gigli, A.; Faccio, D.; Tiengo, C.; Bassetto, F.; et al. Gaze, visual, myoelectric, and inertial data of grasps for intelligent prosthetics. Sci. Data 2020, 7, 43. [Google Scholar] [CrossRef] [PubMed]
  26. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A database for emotion analysis using physiological signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31. [Google Scholar] [CrossRef]
  27. Katsigiannis, S.; Ramzan, N. DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. IEEE J. Biomed. Health Inform. 2018, 22, 98–107. [Google Scholar] [CrossRef]
  28. Healey, J.A.; Picard, R.W. Detecting stress during real-world driving tasks using physiological sensors. IEEE Trans. Intell. Transp. Syst. 2005, 6, 156–166. [Google Scholar] [CrossRef]
  29. Soleymani, M.; Lichtenauer, J.; Pun, T.; Pantic, M. A multimodal database for affect recognition and implicit tagging. IEEE Trans. Affect. Comput. 2012, 3, 42–55. [Google Scholar] [CrossRef]
  30. Kim, J.S.; Pan, S. Weight sharing-based user recognition using multi-biosignal. In Proceedings of the KIIT Conference, Jeju, Republic of Korea, 1–2 June 2023. [Google Scholar]
  31. Belgacem, N.; Fournier, R.; Nait-Ali, A.; Bereksi-Reguig, F. A novel biometric authentication approach using ECG and EMG signals. J. Med. Eng. Technol. 2015, 39, 226–238. [Google Scholar] [CrossRef] [PubMed]
  32. Zhu, Y.; Tang, G.; Liu, W. How post 90’s gesture interact with automobile skylight. Int. J. Hum. Comput. Interact. 2021, 38, 395–405. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
